Search Results

Search found 9990 results on 400 pages for 'sampler state'.


  • Custom initrd init script: how to create /dev/initctl

    - by Posco Grubb
    I have a virtual machine (the VMM is Xen 3.3) equipped with two IDE HDDs (/dev/hda and /dev/hdb). The root file system is on /dev/hda1, where Scientific Linux 5.4 is installed. /dev/hdb contains an empty ext2 file system. I want to protect the root file system from writes by the VM by using aufs (Another UnionFS) to layer a writable file system on top of it; the changes to / will be written to the file system on /dev/hdb. (Furthermore, outside the VM, the file backing /dev/hda will also be set to read-only permissions, so the VMM should prevent the VM from modifying it at that level, too.) The purpose of this setup: to be able to corrupt a virtual machine using software-implemented fault injection but preserve the file system image, in order to quickly reboot the VM to a fault-free state.

    How do I get an initrd init script to do the necessary mounts to create the union file system? I've tried 2 approaches:

    1. Modifying the nash script that mkinitrd creates, but I don't know what setuproot and switchroot do and how to make them use my aufs as the new root. Apparently, nobody else here knows either. (EDIT: I take that back.)

    2. Building a LiveCD (using linux-live-6.3.0) and then modifying the Bash /linuxrc script from the generated initrd. I got the mounts correct, but the final /sbin/init complains about /dev/initctl. Specifically, my /linuxrc mounts the aufs at /union, and the last few lines of /linuxrc effectively do the following:

        cd /union
        mkdir -p mnt/live
        pivot_root . mnt/live
        exec sbin/chroot . sbin/init </dev/console >/dev/console 2>&1

    When init starts, it outputs something like "init: /dev/initctl: No such file or directory". What is supposed to create this FIFO? I found no such filename in the original linuxrc and liblinuxlive scripts. I tried creating it via "mkfifo /dev/initctl", but then init complained about a timeout opening or writing to the FIFO. Would appreciate any help or pointers. Thanks.
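
    For what it's worth, here is a minimal sketch of the union-mount portion of such a linuxrc, assuming /dev/hda1 and /dev/hdb as described above (branch syntax per the aufs documentation; the mount points are illustrative):

        # read-only lower branch and writable upper branch
        mkdir -p /ro /rw /union
        mount -o ro /dev/hda1 /ro
        mount -t ext2 /dev/hdb /rw
        # union: writes land on /rw, reads fall through to /ro
        mount -t aufs -o br=/rw=rw:/ro=ro none /union
        cd /union
        mkdir -p mnt/live
        pivot_root . mnt/live
        exec sbin/chroot . sbin/init </dev/console >/dev/console 2>&1

    One note on the /dev/initctl symptom: on sysvinit, that FIFO is created by init itself when it runs as PID 1. If the final init is not PID 1, it instead behaves like telinit and tries to open /dev/initctl to talk to a PID 1 that isn't listening, which would match both the "No such file or directory" error and the timeout seen after mkfifo.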


  • Netdom to restore machine secret

    - by icelava
    I have a number of virtual machines that have not been switched on for over a month, and some others which have been rolled back to an older state. They are members of a domain and have expired their machine secrets, and are thus unable to authenticate with the domain any longer:

        Event Type:     Warning
        Event Source:   LSASRV
        Event Category: SPNEGO (Negotiator)
        Event ID:       40960
        Date:           14/05/2009
        Time:           10:24:54 AM
        User:           N/A
        Computer:       TFS2008WDATA
        Description:    The Security System detected an authentication error for the server ldap/iceland.icelava.home. The failure code from authentication protocol Kerberos was "The attempted logon is invalid. This is either due to a bad username or authentication information. (0xc000006d)".
        Data:           0000: c000006d

    An identical warning (Event ID 40960) is logged for the server cifs/iceland.icelava.home, followed by:

        Event Type:     Error
        Event Source:   NETLOGON
        Event Category: None
        Event ID:       3210
        Date:           14/05/2009
        Time:           10:24:54 AM
        User:           N/A
        Computer:       TFS2008WDATA
        Description:    This computer could not authenticate with \\iceland.icelava.home, a Windows domain controller for domain ICELAVA, and therefore this computer might deny logon requests. This inability to authenticate might be caused by another computer on the same network using the same name or the password for this computer account is not recognized. If this message appears again, contact your system administrator.
        Data:           0000: c0000022

    So I try to use netdom to re-register the machine back into the domain:

        C:\Documents and Settings\Administrator>netdom reset tfs2008wdata /domain:icelava /UserO:enterpriseadmin /PasswordO:mypassword
        Logon Failure: The target account name is incorrect.

        The command failed to complete successfully.

    ...but I have not been successful. I wonder what else needs to be done?
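
    A variant often suggested for exactly this "expired machine secret" situation is netdom resetpwd, run on the affected machine itself against a named DC. The server and credentials below simply mirror the names in the post and are illustrative:

        netdom resetpwd /s:iceland.icelava.home /ud:ICELAVA\enterpriseadmin /pd:*

    The "target account name is incorrect" failure from netdom reset often points at a stale DNS record or a duplicate/stale computer account rather than the password itself, so checking both before retrying may save some time.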


  • Preinstalled Windows 8 and Linux UEFI dual boot on a laptop

    - by itchy355
    I am trying to set up Windows 8 and Arch Linux on a new Sony Vaio E14 that came with Windows 8 preinstalled. So far:

    - installed W8 on my new SSD (swapped in for the original HDD) using Recovery Media
    - shrunk the W8 partition, deleted the recovery partition, disabled swap
    - confirmed W8 boots just fine

    On to Arch:

    - disabled Secure Boot in the BIOS; confirmed W8 still boots just fine
    - booted Arch off the CD and installed everything to the 4th and 5th partitions
    - set up rEFInd for an EFISTUB kernel boot

    After that it got worse: I was unable to boot anything other than Windows 8 (although I was glad that it at least kept working). Tried:

    - creating EFI\refind\ and putting the .efi there (as per the Arch manual)
    - overwriting EFI\boot\bootx64.efi
    - overwriting EFI\Microsoft\Boot\bootmgr.efi
    - overwriting EFI\Microsoft\Boot\bootmgfw.efi --- YAY, rEFInd showed up!

    So far, so good. I kept the whole W8 Boot\ directory in EFI\windows8 and set up a boot menu entry for it, and it booted just fine. But upon restart everything was wrong: 'Operating system not found' instead of any bootloader (rEFInd or W8). I booted back into Arch using the live CD, to find that the EFI partition had an erroneous FAT table. fsck.vfat fixed it, and I found that EFI\Microsoft\Boot was back to its original state (all rEFInd files deleted and replaced with the W8 bootloaders). I overwrote them again and got back to rEFInd showing up correctly and Arch being perfectly bootable.

    After that I tried only renaming EFI\Microsoft\Boot\bootmgfw.efi to bootmgfw.001.efi (then copying rEFInd's .efi to bootmgfw.efi and keeping EVERY OTHER file as it was), but with exactly the same result. I also tried marking the GPT EFI partition as read-only; same result.

    Now I'm kind of out of luck. Arch boots fine, and so does W8, but it destroys the EFI partition in the process. Thanks for any ideas; Googling brought me this far and I can't find anything better.

    PS -- Windows 8 MAYBE destroys the partition upon shutdown: when I order a shutdown in W8, it takes unusually long (about half a minute instead of ~5 seconds). So in theory I could work around this by hard-resetting the laptop instead of a normal shutdown, but that's just not nice.
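
    An alternative sometimes suggested when something keeps restoring EFI\Microsoft\Boot is to leave Microsoft's files entirely alone and instead repoint the Windows boot-manager entry at rEFInd from inside Windows, in an elevated prompt (the path assumes rEFInd's default install location):

        bcdedit /set {bootmgr} path \EFI\refind\refind_x64.efi

    Since nothing under EFI\Microsoft is modified, there is nothing for Windows to "repair" on shutdown; whether it also survives this particular Vaio's firmware is something only testing will tell.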


  • XP OEM licensing when reinstalling Windows XP

    - by mindas
    My wife has managed to buy the Dell laptop she was using at her ex-employer, which just went bust. The problem with it is the OS (Windows XP), which takes ages to boot and is generally disproportionately slow for the hardware of the machine. So my aim is to sacrifice a day and reinstall it. The problem I am slightly worried about is the licensing/registration/activation hell. Apart from the sticker (with the WinXP license key), the laptop has no other paperwork proving this license is legitimate. I believe this was originally an OEM license. Unfortunately, I don't have the installation CD.

    This computer also has MS Office installed (which I would like to retain), but none of the MS Office apps will launch, due to some obscure error complaining about a lack of free disk space (which the computer has plenty of). I have absolutely no clue what kind of license this MS Office was, and because the company has gone into administration, there is no way of getting this information nor installable media. I believe that by buying the hardware I have also acquired the software, which I can use as I see fit. Correct me if I'm wrong.

    That said, my question would be: what is the easiest way of reinstalling the XP? By easiest I mean avoiding spending my time proving to Microsoft support that I've got the right to use the software (insert your "computer says noooo" joke here) while still being able to get to a fresh, virgin, activated, legal state of the XP. I used to work as a sysadmin many years ago, so I am not afraid of any technical difficulties. The same question applies to MS Office.

    I imagine the process would consist of backing up all the data, pulling some bits from the registry, and using that on the fresh install. As for the reinstall, I'd expect to use some sort of OEM Windows repair CD from Dell, right? Are those freely available? My other box (HP) has such a thing and it can't be used on any other brand. I'm sure somebody has had to go through this licensing hell and could share his/her tips. Thanks in advance.
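
    On the "pulling some bits from the registry" idea: the installed XP product key lives, encoded, in the DigitalProductId value, so exporting that branch before wiping at least preserves it (decoding it back into a 25-character key requires a third-party key-viewer tool):

        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion" /v DigitalProductId
        reg export "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion" C:\curver-backup.reg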


  • Windows 7 laptop constantly dropping internet connection

    - by Corbin Holbert
    I have scoured the web for an answer and failed to find one. I am using a Toshiba Satellite laptop running Windows 7 64-bit, connected via WiFi. Now, I am no beginner with the Internet or anything related to computers, as I have grown up teaching everyone around me how to use them, and went to college for IT. Everything on my network works flawlessly at all times, except for this evil laptop. The worst part is that I fixed this once before, a few years back, and recently had to replace the hard drive and re-install the OS, but cannot for the life of me remember what I did to make this problem go away.

    I am in my browser, connected to the Internet. I click a link. Suddenly: no Internet access. All I do is click on the WiFi connection in the task bar and disconnect and reconnect immediately; the Internet is back the moment I hit "connect". I have read that many people had the same issues as I am having, but they all had triggers or other network problems. I have no trigger (this happens literally five to six times per minute no matter what I am doing) and I have no problems with my router, modem, or any other devices or computers on the network. As I am a web designer and like to test my work live at every turn, this is going to result in this laptop being in pieces if I can't get it fixed soon. If more info is needed, let me know and I will provide it. Thanks for any help offered!

    EDIT: Network card: Realtek RTL8188CE Wireless LAN 802.11n PCI-E NIC. The network state reads as "No Internet Access" when the problem first occurs; then, magically, I have Internet access for about ten seconds once I disconnect and reconnect. I have turned off IPv6, I have turned off the power-saving options for the network adapter, and there are no viruses. Any new ideas? Also, I had to disconnect and reconnect four times just to get to this edit screen, and will likely have to do the same just to post it; it's that bad.
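
    Since the driver and the TCP stack are the usual suspects for this symptom, a reasonable first pass from an elevated prompt is a stack reset plus a look at what driver version is actually loaded (an updated Realtek driver for the RTL8188CE is frequently reported as the other half of the fix):

        netsh wlan show drivers
        netsh winsock reset
        netsh int ip reset
        netsh int tcp set global autotuninglevel=disabled

    The last command is a diagnostic toggle, not a permanent recommendation; it can be restored with autotuninglevel=normal.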


  • How can I repair my USB drive?

    - by yurko
    The USB drive is in a read-only state and I can't repair it. First of all I tried to erase it using dd:

        root@yurko-laptop:/home/yurko-laptop# ls -l /dev/disk/by-id | grep usb
        lrwxrwxrwx 1 root root  9 ??? 18 23:45 usb-Generic_Flash_Disk_C173828A-0:0 -> ../../sdb
        lrwxrwxrwx 1 root root 10 ??? 18 23:45 usb-Generic_Flash_Disk_C173828A-0:0-part1 -> ../../sdb1
        root@yurko-laptop:/home/yurko-laptop# dd if=/dev/zero of=/dev/sdb
        dd: writing to `/dev/sdb': No space left on device
        8257537+0 records in
        8257536+0 records out
        4227858432 bytes (4.2 GB) copied, 942.633 s, 4.5 MB/s

    After that I wanted to create a new filesystem using fdisk:

        root@yurko-laptop:/home/yurko-laptop# fdisk /dev/sdb
        You will not be able to write the partition table.
        WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
        switch off the mode (command 'c') and change display units to sectors (command 'u').

        Command (m for help): p

        Disk /dev/sdb: 4227 MB, 4227858432 bytes
        4 heads, 63 sectors/track, 32768 cylinders
        Units = cylinders of 252 * 512 = 129024 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1              18       32768     4126596    b  W95 FAT32

    fdisk showed that the partition still exists and that I can't write the partition table. I tried to delete the existing partition:

        Command (m for help): d
        Selected partition 1

        Command (m for help): w
        Unable to write /dev/sdb

    Why am I not able to write the partition table? Does it mean that some hardware failure has occurred? And is it possible to repair the current USB drive? I've tried to use hdparm, and it showed that the readonly flag is on:

        root@yurko-laptop:/home/yurko-laptop# hdparm /dev/sdb

        /dev/sdb:
        SG_IO: bad/missing sense data, sb[]:  f0 00 05 00 00 00 00 0a 00 00 00 00 26 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
         multcount     =  0 (off)
         readonly      =  1 (on)
         readahead     = 256 (on)
         geometry      = 1016/131/62, sectors = 8257536, start = 0
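
    For completeness, the software-side attempt to clear that flag would be hdparm, though on inexpensive flash drives a persistent read-only state usually means the controller has locked itself after detecting NAND failure, in which case no software command will bring it back:

        hdparm -r0 /dev/sdb    # try to clear the read-only flag
        hdparm /dev/sdb        # check whether readonly is back to 0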


  • Cannot exit X server, restart, shutdown or drop to tty when VGA monitor active

    - by terdon
    I have a strange problem. If I connect an external VGA monitor to my laptop, exiting the X environment in any way crashes the computer. For example, say I am working with my two monitors (the laptop's and one connected to my VGA port) active. Hitting Ctrl+Alt+F-key should take me down to a tty. What actually happens is that the VGA screen goes blank, as you would expect, but the laptop screen, although still on, shows nothing. I know the screen is on because it is slightly more illuminated than when it is off. When in this state, I can do nothing to regain access to the machine. I have tried:

    - Ctrl+F-key (and even Ctrl+Alt+F-key, just in case) combinations; none seem to have any effect
    - Ctrl+Alt+Del: nothing
    - Magic SysRq key: nothing
    - Blindly typing my username and password and trying to reboot/shutdown or restart GDM or MDM: nothing

    The only thing that works is a hard reset. The exact same behavior occurs when killing the X server through Ctrl+Alt+Backspace, rebooting, or shutting down. There is no difference whether I reboot/shutdown/log out using the WM's graphical menu or the shutdown or reboot commands. It is also not WM-dependent: I have the same problem using Cinnamon, Gnome 3, MATE and xfce4. It is, however, VGA-dependent. I have tried connecting another VGA monitor and have the same problem; I do not, however, have this problem if a screen is connected to the DisplayPort. It is, therefore, a VGA-specific issue. To make things even stranger, this only occurs when both screens are active. If either the laptop screen or the VGA monitor is inactive, the problem goes away.

    Finally, this problem arose when I installed the latest Linux Mint Debian (LMDE). It did not occur with the previous release of LMDE. I am not sure what has changed, since I used the same kernel version in both releases (I had upgraded the kernel while on the previous release) and, I think, the same nvidia drivers. Oh, and yes, I have updated the nvidia driver.

    Hardware: Dell M4500 laptop; CPU: Intel Core i7; RAM: 8GB; Graphics: nVidia GT216 [Quadro FX 880M]
    Software: LMDE, kernel 3.2.0-2-amd64; Xorg 1.11.4; nVidia kernel module 295.20-1+3.2.9-1
    Possibly relevant files: /var/log/Xorg.0.log, ~/.xsession-errors

    Does anyone have any ideas how to fix this? Thanks in advance for any help.
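
    While hunting the driver bug, one workaround worth testing is dropping to a single active output before any VT switch or logout. The output names below are illustrative; they vary by driver, and xrandr -q lists the real ones:

        xrandr -q                      # find the actual output names first
        xrandr --output VGA-0 --off    # go single-head before leaving X
        xrandr --output VGA-0 --auto   # re-enable the VGA head afterwards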


  • SSD Fresh Does Not Start

    - by Jim Fell
    I recently installed a new 60GB SSD as my primary hard drive and re-installed Windows 7 Professional 64-bit. I then installed SSD Fresh from Abelssoft to optimize Windows to run on the SSD. It seemed to install okay, but when I try to run the utility, its splash screen appears briefly before it quietly closes. No errors are displayed; the utility just fails to launch. I have run SSD Fresh on another SSD-equipped Windows 7 Pro x64 computer in the past without any problems. Does anyone know what might be preventing the program from running?

    I tried shutting down the Spybot Resident and disabling the firewall and virus scanner, with no luck. I also tried running the tool as administrator; I even tried reinstalling it, running the installer as administrator. No luck. Every time I try to launch the program, the Event Viewer logs this same set of errors:

        Error 4/2/2012 11:35:44 PM Application Error 1000 (100)
        Error 4/2/2012 11:35:43 PM .NET Runtime 1026 None
        Error 4/2/2012 11:35:39 PM SideBySide 59 None
        (this SideBySide 59 entry repeats roughly thirty times)

    For those who are interested, here is my system configuration:

    - ASRock M3A770DE AM3 AMD 770 ATX AMD Motherboard
    - AMD Athlon II X3 455 Rana 3.3GHz Socket AM3 95W Triple-Core Desktop Processor ADX455WFGMBOX
    - G.SKILL Value Series 8GB (2 x 4GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10600) Desktop Memory Model F3-10600CL9D-8GBNT
    - Mushkin Enhanced Chronos Deluxe MKNSSDCR60GB-DX 2.5" 60GB SATA III Synchronous MLC Internal Solid State Drive (SSD) (primary/boot HD)
    - Western Digital Caviar Blue RFHWD1600AAJS 160GB 7200 RPM SATA 3.0Gb/s 3.5" Internal Hard Drive, bare drive (secondary HD)
    - Sony Optiarc CD/DVD Burner Black SATA Model AD-7261S-0B, LightScribe support
    - RAIDMAX RX-850AE 850W ATX12V v2.3 / EPS12V SLI Certified CrossFire Ready 80 PLUS GOLD Certified Modular Active PFC Power Supply
    - ASUS HD7850-DC2-2GD5 Radeon HD 7850 2GB 256-bit GDDR5 PCI Express 3.0 x16 HDCP Ready CrossFireX Support Video Card
    - Asus ML228H 21.5" Full HD LED BackLight LED Monitor Slim Design (x3)
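
    SideBySide event 59 generally means the loader could not find a required assembly, very often a missing Visual C++ runtime. Windows 7 ships sxstrace, which names the exact assembly, so a reasonable next step from an elevated prompt is:

        sxstrace Trace -logfile:sxs.etl
        (reproduce the failed launch, then stop the trace)
        sxstrace Parse -logfile:sxs.etl -outfile:sxs.txt

    The parsed sxs.txt should identify which redistributable SSD Fresh expects.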


  • Cloudify: bootstrap-localcloud: operation failed?

    - by quanta
    OS: Gentoo, CentOS. Version: 2.1.0.

    Following the quick start guide, I got the error below when running bootstrap-localcloud:

        cloudify@default> bootstrap-localcloud
        STARTING CLOUDIFY MANAGEMENT
        2012-05-30 14:55:50,396 WARNING [org.cloudifysource.shell.commands.AbstractGSCommand] - ;
        Caused by: org.cloudifysource.shell.commands.CLIException:
        Error while starting agent.
        Please make sure that another agent is not already running.
        Operation failed.

    What port does Cloudify use to check that the agent is running? PS: it works fine when running on Windows.

    UPDATE: Wed May 30 22:37:30 ICT 2012

    Reply to @tamirkorem and @Itai Frenkel: I'm pretty sure, because this is the first time I've run that command on the 2 servers. More clearly, here's the output:

        cloudify@default> teardown-localcloud
        Teardown will uninstall all of the deployed services. Do you want to continue [y/n]?
        2012-05-30 22:43:33,145 WARNING [org.cloudifysource.shell.commands.AbstractGSCommand] - Teardown failed. Failed to fetch the currently deployed applications list. For force teardown use the -force flag. Operation failed.

        cloudify@default> teardown-localcloud -force
        Teardown will uninstall all of the deployed services. Do you want to continue [y/n]?
        Failed to fetch the currently deployed applications list. Continuing teardown-localcloud.
        .2012-05-30 22:46:39,040 WARNING [org.cloudifysource.shell.commands.AbstractGSCommand] - Teardown aborted, an agent was not found on the local machine. Operation failed.

    and this is the detailed result:

        cloudify@default> bootstrap-localcloud --verbose
        NIC Address=127.0.0.1
        Lookup Locators=127.0.0.1:4172
        Lookup Groups=localcloud
        Starting agent and management processes:
        gs-agent.sh gsa.global.lus 0 gsa.lus 0 gsa.gsc 0 gsa.global.gsm 0 gsa.gsm_lus 1 gsa.global.esm 0 gsa.esm 1 >/dev/null 2>&1
        STARTING CLOUDIFY MANAGEMENT
        2012-05-30 22:36:12,870 WARNING [org.cloudifysource.shell.commands.AbstractGSCommand] - ; Caused by: org.cloudifysource.shell.commands.CLIException: Error while starting agent. Please make sure that another agent is not already running.
        Command executed: /usr/local/src/gigaspaces-cloudify-2.1.0-ga/bin/gs-agent.sh gsa.global.lus 0 gsa.lus 0 gsa.gsc 0 gsa.global.gsm 0 gsa.gsm_lus 1 gsa.global.esm 0 gsa.esm 1 >/dev/null 2>&1

    Reply to @Eliran Malka: there is no such process listening on port 4172:

        # netstat --protocol=inet -nlp
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address       Foreign Address     State   PID/Program name
        tcp        0      0 127.0.0.1:9050      0.0.0.0:*           LISTEN  2363/tor
        tcp        0      0 127.0.0.1:3306      0.0.0.0:*           LISTEN  2331/mysqld
        tcp        0      0 127.0.0.1:631       0.0.0.0:*           LISTEN  2293/cupsd
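
    Given the "another agent may already be running" message, one sanity check is to look for leftover GigaSpaces processes or listeners before bootstrapping again (4172 is the locator port taken from the verbose output above):

        netstat -tlnp | grep 4172
        ps aux | grep -i [g]s-agent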

    Read the article

  • Tracking down rogue disk usage

    - by Amadan
    I found several other questions regarding the theory behind my problem (e.g. this, this), but I don't know how to apply the answers to my machine. # du -hsx / 11000283 / # df -kT / Filesystem Type 1K-blocks Used Available Use% Mounted on /dev/mapper/csisv13-root ext4 516032952 361387456 128432532 74% / There is a big difference between 11G (du) and 345G (df). Where are the remaining 334G? It's not in deleted files. There was only one, it was short, and I truncated it just in case. This is what remains: # lsof -a +L1 / COMMAND PID USER FD TYPE DEVICE SIZE/OFF NLINK NODE NAME zabbix_ag 4902 zabbix 1w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4902 zabbix 2w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4906 zabbix 1w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4906 zabbix 2w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4907 zabbix 1w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4907 zabbix 2w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4908 zabbix 1w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4908 zabbix 2w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4909 zabbix 1w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4909 zabbix 2w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4910 zabbix 1w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) zabbix_ag 4910 zabbix 2w REG 252,0 0 0 28836028 /var/log/zabbix-agent/zabbix_agentd.log.1 (deleted) I rebooted to see if fsck does anything. 
But, from /var/log/boot.log, it seems there are no issues: /dev/mapper/server-root: clean, 3936097/32768000 files, 125368568/131064832 blocks Thinking maybe someone overzealously reserved root space, I checked the master record: # tune2fs -l /dev/mapper/server-root tune2fs 1.42 (29-Nov-2011) Filesystem volume name: <none> Last mounted on: / Filesystem UUID: 86430ade-cea7-46ce-979c-41769a41ecbe Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize Filesystem flags: signed_directory_hash Default mount options: user_xattr acl Filesystem state: clean Errors behavior: Continue Filesystem OS type: Linux Inode count: 32768000 Block count: 131064832 Reserved block count: 6553241 Free blocks: 5696264 Free inodes: 28831903 First block: 0 Block size: 4096 Fragment size: 4096 Reserved GDT blocks: 992 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 8192 Inode blocks per group: 512 Flex block group size: 16 Filesystem created: Fri Feb 1 13:44:04 2013 Last mount time: Tue Aug 19 16:56:13 2014 Last write time: Fri Feb 1 13:51:28 2013 Mount count: 9 Maximum mount count: -1 Last checked: Fri Feb 1 13:44:04 2013 Check interval: 0 (<none>) Lifetime writes: 1215 GB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 28 Desired extra isize: 28 Journal inode: 8 First orphan inode: 28836028 Default directory hash: half_md4 Directory Hash Seed: bca55ff5-f530-48d1-8347-25c004f66d43 Journal backup: inode blocks The system is: # uname -a Linux server 3.2.0-67-generic #101-Ubuntu SMP Tue Jul 15 17:46:11 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux # cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=12.04 DISTRIB_CODENAME=precise DISTRIB_DESCRIPTION="Ubuntu 12.04.2 LTS" Does anyone have any tips on what exactly to do to find and hopefully reclaim the missing space?
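
    Besides deleted-but-open files, the other classic cause of a du/df gap is data hidden underneath an active mount point, which du cannot see but df counts. A bind mount exposes the underlying root filesystem with nothing mounted over it, so this quick check is worth running:

        mkdir /mnt/rootonly
        mount --bind / /mnt/rootonly
        du -hsx /mnt/rootonly    # compare against df's 345G figure
        umount /mnt/rootonly

    It is also worth noting that the tune2fs output above reports "First orphan inode: 28836028", the same inode as the deleted zabbix log in the lsof listing, so running e2fsck from a rescue environment to process the orphan list is another avenue.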


  • Blank displays after inactivity, mouse cursor not showing up

    - by Mike Christiansen
    I have a Windows 7 Enterprise x86 machine that is exhibiting some strange problems. After some inactivity (how long it takes varies), when I come back to my computer, both of my monitors are black. The monitors are on, with a black display. Moving the mouse and pressing keys on the keyboard do not work. This happens at least once a day.

    When I first encountered the issue, I performed a hard shutdown and restarted the computer. About 10% of the time when I did this, the resolutions on my monitors were messed up. The native resolution on the primary monitor is 1600x900, and the native on my secondary is 1440x900. None of the widescreen resolutions would be present in the display properties; I had 800x600, 1024x768, and 1280x1024. As a workaround, I've found that if I plug in my secondary monitor as the primary monitor and leave the secondary unplugged, then Windows will start in 1024x768 on the secondary. Then I can use Windows 7's "Detect" feature and it finds the 1440x900 resolution and sets it. Then I have to connect the primary monitor to the secondary port and set it as the primary. Then I can switch the two cables, putting the primary into the primary port and the secondary into the secondary, and use the "Detect" feature again, and it finds the correct resolutions.

    Some time after performing the above, I discovered that when the screens were blanked, I could simply press Ctrl+Alt+Delete, type in my password, and everything comes up fine -- except my mouse cursor. All applications and features work as intended, except there is no mouse cursor (the mouse actually works, though...). I have the "Show location of pointer when I press CTRL key" option selected, so I can press CTRL and use my mouse as normal -- but I have no actual cursor. When the computer is in this state, UAC is also not functional: whenever a UAC prompt appears, the screens go black (the original symptoms) and I have to press Escape to exit UAC.

    Because of the above symptoms (the UAC black-screen thing, etc.) I believe winlogon.exe may be the culprit. However, I have no idea how to fix it, and I am unable to restart winlogon.exe due to the very problems I am having with it (the UAC black screen). Looking for any ideas...

    More information:
    - Windows 7 x86 Enterprise
    - Domain environment (I have a secondary account with administrator rights)
    - Dell Optiplex 960
    - Cannot perform "optional" Windows updates due to an activation issue (I am testing a Windows 7 image, and an activation infrastructure has not been created yet; however, this issue was happening back when I could still run Windows Update, and Windows was up to date at the time)
    - Updated the video card driver (as an attempt at fixing the resolution issue)
    - Disabled all power-saving options

    Please let me know what else you need to help me solve this issue! Thanks in advance, Mike
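
    Given the suspicion that the winlogon/UAC path is damaged, a system-file integrity pass is a cheap first test from an elevated prompt (the findstr line is the standard way to pull the repair summary out of the CBS log):

        sfc /scannow
        findstr /c:"[SR]" %windir%\Logs\CBS\CBS.log > "%userprofile%\Desktop\sfcdetails.txt"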


  • Email can't be sent to my domain

    - by Jack W-H
    Hi Folks,

    Basically I have my domain howcode.com, bought at DomainMonster.com. I have set it all up to point to the MediaTemple nameservers, and everything works - mostly - fine. I have registered an email address [email protected]. The setup is, I presume, working correctly: I can successfully send emails with the account, and I presume I could receive them - but the problem is, nobody can send anything to me. Emailing from a regular, non-Googlemail account appears to go through without errors, but the message never arrives in the inbox. When you email from a GoogleMail address, though, an error message is instantly returned saying this:

        Delivery to the following recipient failed permanently: [email protected]
        Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 550 550 relay not permitted (state 14).

    (The bounce then quotes the headers and body of the original test message.)

    I thought this may be something to do with my MX DNS settings. They are set up like so:

        MX name: howcode.com   TTL: 43200   Type: MX   Data: 10 mail.howcode.com.

    The A-record for mail.howcode.com is set up like this:

        Name: mail.howcode.com   TTL: 43200   Type: A   Data: 205.186.187.129

    Is this what's going wrong with the issue? Thanks very much, Jack
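
    A "550 relay not permitted" from the receiving side usually means the server answering for mail.howcode.com does not consider itself the final destination for howcode.com - that is, the DNS may be fine but the mail service itself isn't configured for the domain. Checking what the world actually resolves is a quick way to split the problem:

        dig MX howcode.com +short
        dig A mail.howcode.com +short

    If those return the expected records, the next place to look is whether the howcode.com mail domain is actually enabled on the MediaTemple mail server at 205.186.187.129.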


  • Proxy / Squid 2.7 / Debian Wheezy 6.7 / lots of TCP Timed-out

    - by Maroon Ibrahim
    i'm facing a lot of TCP timed-out on a busy cache server and here below my sysctl.conf configuration as well as an output of "netstat -st" Kernel 3.2.0-4-amd64 #1 SMP Debian 3.2.57-3 x86_64 GNU/Linux Any advice or help would be highly appreciated #################### Sysctl.conf cat /etc/sysctl.conf net.ipv4.tcp_tw_reuse = 1 net.ipv4.tcp_tw_recycle = 1 fs.file-max = 65536 net.ipv4.tcp_low_latency = 1 net.core.wmem_max = 8388608 net.core.rmem_max = 8388608 net.ipv4.ip_local_port_range = 1024 65000 fs.aio-max-nr = 131072 net.ipv4.tcp_fin_timeout = 10 net.ipv4.tcp_keepalive_time = 60 net.ipv4.tcp_keepalive_intvl = 10 net.ipv4.tcp_keepalive_probes = 3 kernel.threads-max = 131072 kernel.msgmax = 32768 kernel.msgmni = 64 kernel.msgmnb = 65536 kernel.shmmax = 68719476736 kernel.shmall = 4294967296 net.ipv4.ip_forward = 1 net.ipv4.tcp_timestamps = 0 net.ipv4.conf.all.accept_redirects = 0 net.ipv4.tcp_window_scaling = 1 net.ipv4.tcp_sack = 0 net.ipv4.tcp_syncookies = 1 net.ipv4.ip_dynaddr = 1 vm.swappiness = 0 vm.drop_caches = 3 net.ipv4.tcp_moderate_rcvbuf = 1 net.ipv4.tcp_no_metrics_save = 1 net.ipv4.tcp_ecn = 0 net.ipv4.tcp_max_orphans = 131072 net.ipv4.tcp_orphan_retries = 1 net.ipv4.conf.default.rp_filter = 0 net.ipv4.conf.default.accept_source_route = 0 net.ipv4.tcp_max_syn_backlog = 32768 net.core.netdev_max_backlog = 131072 net.ipv4.tcp_mem = 6085248 16227328 67108864 net.ipv4.tcp_wmem = 4096 131072 33554432 net.ipv4.tcp_rmem = 4096 174760 33554432 net.core.rmem_default = 33554432 net.core.rmem_max = 33554432 net.core.wmem_default = 33554432 net.core.wmem_max = 33554432 net.core.somaxconn = 10000 # ################ Netstat results /# netstat -st IcmpMsg: InType0: 2 InType3: 233754 InType8: 56251 InType11: 23192 OutType0: 56251 OutType3: 437 OutType8: 4 Tcp: 20680741 active connections openings 63642431 passive connection openings 1126690 failed connection attempts 2093143 connection resets received 13059 connections established 2649651696 segments received 2195445642 segments send out 183401499 segments retransmited 38299 bad segments received. 14648899 resets sent UdpLite: TcpExt: 507 SYN cookies sent 178 SYN cookies received 1376771 invalid SYN cookies received 1014577 resets received for embryonic SYN_RECV sockets 4530970 packets pruned from receive queue because of socket buffer overrun 7233 packets pruned from receive queue 688 packets dropped from out-of-order queue because of socket buffer overrun 12445 ICMP packets dropped because they were out-of-window 446 ICMP packets dropped because socket was locked 33812202 TCP sockets finished time wait in fast timer 622 TCP sockets finished time wait in slow timer 573656 packets rejects in established connections because of timestamp 133357718 delayed acks sent 23593 delayed acks further delayed because of locked socket Quick ack mode was activated 21288857 times 839 times the listen queue of a socket overflowed 839 SYNs to LISTEN sockets dropped 41 packets directly queued to recvmsg prequeue. 
79166 bytes directly in process context from backlog 24 bytes directly received in process context from prequeue 2713742130 packet headers predicted 84 packets header predicted and directly queued to user 1925423249 acknowledgments not containing data payload received 877898013 predicted acknowledgments 16449673 times recovered from packet loss due to fast retransmit 17687820 times recovered from packet loss by selective acknowledgements 5047 bad SACK blocks received Detected reordering 11 times using FACK Detected reordering 1778091 times using SACK Detected reordering 97955 times using reno fast retransmit Detected reordering 280414 times using time stamp 839369 congestion windows fully recovered without slow start 4173098 congestion windows partially recovered using Hoe heuristic 305254 congestion windows recovered without slow start by DSACK 933682 congestion windows recovered without slow start after partial ack 77828 TCP data loss events TCPLostRetransmit: 5066 2618430 timeouts after reno fast retransmit 2927294 timeouts after SACK recovery 3059394 timeouts in loss state 75953830 fast retransmits 11929429 forward retransmits 51963833 retransmits in slow start 19418337 other TCP timeouts 2330398 classic Reno fast retransmits failed 2177787 SACK retransmits failed 742371590 packets collapsed in receive queue due to low socket buffer 13595689 DSACKs sent for old packets 50523 DSACKs sent for out of order packets 4658236 DSACKs received 175441 DSACKs for out of order packets received 880664 connections reset due to unexpected data 346356 connections reset due to early user close 2364841 connections aborted due to timeout TCPSACKDiscard: 1590 TCPDSACKIgnoredOld: 241849 TCPDSACKIgnoredNoUndo: 1636687 TCPSpuriousRTOs: 766073 TCPSackShifted: 74562088 TCPSackMerged: 169015212 TCPSackShiftFallback: 78391303 TCPBacklogDrop: 29 TCPReqQFullDoCookies: 507 TCPChallengeACK: 424921 TCPSYNChallenge: 170388 IpExt: InBcastPkts: 351510 InOctets: -609466797 OutOctets: -1057794685 InBcastOctets: 75631402 #
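
    Two things may be worth checking here. First, watching the counters move is more telling than a single snapshot:

        watch -d -n 1 'netstat -st | egrep -i "timeout|retrans|reset"'

    Second, net.ipv4.tcp_tw_recycle = 1 is widely reported to silently drop connections from clients behind NAT, and with net.ipv4.tcp_timestamps = 0 (as set above) it does nothing useful anyway, so disabling it is a low-risk experiment:

        sysctl -w net.ipv4.tcp_tw_recycle=0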


  • Backing Up vs. Redundancy

    - by TK Kocheran
    I'm currently in stage 2 of 3 of building my home workstation. What this means is that my RAID-0 array of solid state disks will be backed up nightly to a RAID-5 or RAID-6 array of traditional spinning hard disks. However, it recently dawned on me that redundancy is not backup. The main reason for setting up a RAID array with redundancy was to protect myself in the event of a drive failure to serve as an effective backup solution. Wait. What if a bolt of lightning finds a way to travel into my house, through my surge-protector, into my power supply and physically destroys all of my hard disks and SSDs? Well, in that case, I guess I'd be fine because I generally keep most important files (music, pictures, videos) stored in multiple places like on my laptop, my wife's laptop, and an encrypted USB hard drive. Wait. What if a giant hedgehog meteor attacks my house from space traveling at mach 3 and all machines and hard disks are blown to smithereens. Well, I guess I could find a way to do ridiculously slow and cumbersome rsyncs or backups to Amazon's Glacier. Wait. What if there's a nuclear apocalypse... and at this point I start laughing hysterically. At what point does backing up become irrelevant? I completely understand situation one (mechanical drive failure), situation two (workstation compromised or destroyed somehow), possibly even situation three (all machines and disks destroyed), but situation four? There's no questioning the need for backups. None. However, there are three questions I'd really like addressed: To what level should one backup? I definitely understand the merits of physical disk redundancy. I also believe in keeping important files on multiple machines and thinning out the possibility of losing all of my files. Online backups make sense, but they beg the following question. What should I be backing up remotely and how often? It's no problem storage-wise to back up important files (music, pictures, videos) and even configuration and temporal data for all of the machines in my network (all Linux based)... albeit locally. Transferring to the cloud is another story. Worst-case scenario, if I lost all of my configuration for my individual computers, the reality is that I probably lost the machines too. The cloud is a long way away from here; I can run backups over CAT-6 here and see 100MB/s easily, but I'm afraid that I'm only going to see 2MB/s at best when transferring up to the cloud.
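
    As a concrete baseline for the "important files on multiple machines" tier, a scheduled rsync of the irreplaceable set is cheap to automate (the host and paths here are purely illustrative):

        rsync -az --delete ~/Music ~/Pictures ~/Videos backuphost:/backups/workstation/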


  • HP MSR 30-10a Router - Route Traffic over Default Route

    - by SteadH
    We have a brand new HP MSR 30-10a Router. We have a fairly simple routing situation - we have two IP blocks, one which has a route out. We need things on the first block to go through the router, and out. I have an old Cisco 2801 router doing the job right now. For our example - IP Block 1: 50.203.110.232/29, Router interface on this block is 50.203.110.237, route out is 50.203.110.233. IP Block 2: 50.202.219.1/27, Router interface on this block at 50.202.219.20. I have a static route created for: 0.0.0.0 0.0.0.0 50.203.110.233 The router seems to understand this. When on the CLI via serial cable, I can ping 8.8.8.8 and hear responses from Google DNS. Woo hoo! The issue arrives when any client sits on the IP Block 2 side. I configured my client with a static IP of 50.202.219.15/27, default gateway 50.202.219.20. I can ping myself. I can ping the near side of the router (50.202.219.20), and I can ping the far side of the router (50.203.110.237. I cannot ping anything else in IP block 1, nor can I ping 8.8.8.8. Here is my configuration file: <HP>display current-configuration # version 5.20.106, Release 2507, Standard # sysname HP # domain default enable system # dar p2p signature-file flash:/p2p_default.mtd # port-security enable # undo ip http enable # password-recovery enable # vlan 1 # domain system access-limit disable state active idle-cut disable self-service-url disable # user-group system group-attribute allow-guest # local-user admin password cipher $c$3$40gC1cxf/wIJNa1ufFPJsjKAof+QP5aV authorization-attribute level 3 service-type telnet # cwmp undo cwmp enable # interface Aux0 async mode flow link-protocol ppp # interface Cellular0/0 async mode protocol link-protocol ppp # interface Ethernet0/0 port link-mode route ip address 50.203.110.237 255.255.255.248 # interface Ethernet0/1 port link-mode route ip address 50.202.219.20 255.255.255.224 # interface NULL0 # ip route-static 0.0.0.0 0.0.0.0 50.203.110.233 permanent # load xml-configuration # load tr069-configuration # user-interface tty 12 user-interface aux 0 user-interface vty 0 4 authentication-mode scheme # My guess right now is there is some sort of "permission" needed to use the default route. The manuals haven't turned up a lot in this area that don't make the situation much more complicated (but maybe it needs to be more complicated?) Background: we use HP switches, and I love the CLI. I bought HP thinking the command line interface would be similar, or at least speak the same language. Whoops! I'd be happy to provide more information or perform any additional tests. Thanks in advance! Update 1: The manual mentions routing rules. I hadn't previously added these (since our Cisco 2801 seems to route anything by default). I added: ip ip-prefix 1 permit 0.0.0.0 0 less-equal 32 alas, still no dice.
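
    Two Comware checks narrow this down: confirm the static route is actually installed, then ping out once sourced from each interface address. If a ping sourced from 50.203.110.237 succeeds while one sourced from 50.202.219.20 fails, the router itself is fine and the missing piece is a return route for 50.202.219.0/27 on the upstream gateway (50.203.110.233) - which would also explain why only directly-connected traffic works, with no "permission" involved:

        display ip routing-table
        ping -a 50.203.110.237 8.8.8.8
        ping -a 50.202.219.20 8.8.8.8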


  • How can I force a merge of all WAL files in pg_xlog back into my base "data" directory?

    - by Zac B
    Question: Is there a way to tell Postgres (9.2) to "merge all WAL files in pg_xlog back into the non-WAL data files, and then delete all WAL files successfully merged?" I would like to be able to "force" this operation; i.e. checkpoint_segments or archiving settings should be ignored. The filesystem WAL buffer (pg_xlog) directory should be emptied, or nearly emptied. It's fine if some or all of the space consumed by the pg_xlog directory is then consumed by the data directory; our DBA has asked for WAL database backups without any backlogged WALs, but space consumption is not a concern. Having near-zero WAL activity during this operation is a fine constraint. I can ensure that the database server is either shut down or not connectible (zero user-generated transaction load) during this process. Essentially, I'd like Postgres to ignore archiving/checkpoint retention policies temporarily, and flush all WAL activity to the core database files, leaving pg_xlog in the same state as if the database were recently created--with very few WAL files. What I've Tried: I know that the pg_basebackup utility performs something like this (it generates an almost-all-WALs-merged copy of a Postgres instance's data directory), but we aren't ready to use it on all our systems yet, as we are still testing replication settings; I'm hoping for a more short-term solution. I've tried issuing CHECKPOINT commands, but they just recycle one WAL file and replace it with another (that is, if they do anything at all; if I issue them during database idle time, they do nothing). pg_switch_xlog() similarly just forces a switch to the next log segment; it doesn't flush all queued/buffered segments. I've also played with the pg_resetxlog utility. That utility sort of does what I want, but all of its usage docs seem to indicate that it destroys (rather than flushing out of the transaction log and into the main data files) some or all of the WAL data. Is that impression accurate? If not, can I use pg_resetxlog during a zero-WAL-activity period to force a flush of all queued WAL data to non-WAL data? If the answer to that is negative, how can I achieve this goal? Thanks!
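
    For reference, a minimal sequence that flushes pending WAL into the data files on 9.2 without touching pg_resetxlog might look like this (a clean shutdown ends with a shutdown checkpoint, after which old segments are recycled subject to checkpoint_segments and wal_keep_segments):

        psql -c "CHECKPOINT;"
        psql -c "SELECT pg_switch_xlog();"
        pg_ctl -D /path/to/data stop -m fast

    On the pg_resetxlog question: the impression from its docs is accurate - it discards WAL rather than replaying it, so it is a last-resort repair tool rather than a flush, and running it while any un-checkpointed activity exists risks corruption.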


  • Crashes and freezes after fixing "BOOTMGR is missing" error

    - by Greg-J
    I came back from a 3-day weekend to a computer that was off. I leave my PC on 24/7, so this was odd. I turned it on, to get the dreaded "BOOTMGR is missing" screen. After two attempts at Windows Recovery, it booted into Windows fine. After an hour or so, Chrome froze and my start bar disappeared. Ctrl+Alt+Del brought up an error box telling me that Ctrl+Alt+Del failed to work properly. Clicking on any open application triggered an error (I can't recall it now, but it essentially said that the application couldn't be found running, or something along those lines). I restarted, and again the same thing happened after a while of use. I turned it on, installed the 47 or so updates waiting for me, and restarted it. After a while of use (under an hour), it just froze completely.

    My suspects: the SSDs, the RAM, or the PSU. My system specs below:

    - (RAID-0) 2 x Crucial M4 CT128M4SSD2 2.5" 128GB SATA III MLC Internal Solid State Drive (SSD)
    - CORSAIR Vengeance 16GB (4 x 4GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Desktop Memory Model CML16GX3M4A1600C9
    - CORSAIR HX Series HX750 750W ATX12V 2.3 / EPS12V 2.91 SLI Ready CrossFire Ready 80 PLUS GOLD Certified Modular Active PFC Power Supply
    - 1 x ASUS Maximus IV Gene-Z/GEN3 LGA 1155 Intel Z68 HDMI SATA 6Gb/s USB 3.0 Micro ATX Intel Motherboard
    - 1 x Hitachi GST Deskstar 7K1000.C 0F10383 1TB 7200 RPM SATA 3.0Gb/s 3.5" Internal Hard Drive, bare drive
    - 1 x Intel Core i7-2600K Sandy Bridge 3.4GHz (3.8GHz Turbo Boost) LGA 1155 95W Quad-Core Desktop Processor, Intel HD Graphics
    - 1 x SAPPHIRE 21197-00-40G Radeon HD 7970 3GB 384-bit GDDR5 PCI Express 3.0 x16 HDCP Ready CrossFireX Support Video Card
    - 1 x Noctua NH-D14 120mm & 140mm SSO CPU Cooler

    This is all crammed into a pretty small case (NZXT Vulcan) and has been running perfectly problem-free since January. The only thing out of the ordinary is that a fan in the case is now making noise, whereas the case was previously completely silent. I have no reason to believe this is anything more than correlation, but it felt worth mentioning.

    I suspect the SSDs simply because of the BOOTMGR error, but I'm not sure how to test that theory. My suspicion of the RAM comes simply from experience with frozen machines; I haven't had time to memtest it, but I will. The PSU being the culprit is something I've picked up by reading similar threads on various forums, and it seems plausible, but I am unsure how to test it. ANY insight whatsoever would be greatly appreciated!
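
    One concrete thing worth ruling out on this exact hardware: the Crucial M4 had a well-known firmware defect (fixed in the 0309 firmware) that made the drive drop out roughly once an hour after about 5,184 power-on hours - a figure a 24/7 machine built in January could plausibly have reached. The firmware version and power-on hours are readable from a Linux live USB, for example:

        smartctl -i /dev/sda | grep -i firmware
        smartctl -A /dev/sda | grep -i power_on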


  • Disable Acer eRecovery system

    - by Joel Coehoorn
    The meat of this question is that I'm looking for a way to either require a password before using a recovery partition, or to "break" the recovery partition (specifically, Acer eRecovery) in a way that I can later "unbreak" only by booting normally into Windows first.

    Here are the full details: I have a set of new Acer Veriton n260g machines in a computer lab. A lot of work went into setting up this lab to work well - for example, Office 2007 and other programs needed by the students were installed, all Windows updates are applied, and a default desktop is set up. All in all it's several hours' work to fully set up one machine. Unfortunately, I don't currently have the ability to easily image these machines, and even if I did, I would want to avoid the downtime while an image is restored. Therefore, I've taken steps to lock them down - namely DeepFreeze and a BIOS password to prevent booting from anywhere but the frozen hard drive.

    DeepFreeze is an amazing product - as long as you boot from the frozen hard drive, there is no way to make permanent changes to that drive; anything you do is wiped when the machine restarts. It lets me give students the leeway to do what they want on lab computers without worrying about them breaking something. The problem is that even with the BIOS locked and set to only boot from the hard drive, these Acers still have a simple way to choose a different boot source: shut them down and hold a paper clip in a little hole at the top while you turn them on again. This puts them into "Acer eRecovery" mode. This by itself is no big deal - you can still power-cycle with no impact. But if you then click through the menu to reset the machine (we're now past the point of curiosity and on to intent), it will wipe the hard drive and restore it to the original state. Of course, a few students have already figured this out and reset a couple of machines. That's unfortunate, but inevitable.

    I don't want to destroy the ability to do this entirely (which I could, by repartitioning the drives to remove the recovery partition), but I would like a way to require a password first, or to "break" the recovery system in a way that I can "unbreak" only if I first un-freeze the hard drive in DeepFreeze. Any ideas?
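
    One reversible "break" that might fit: from a live Linux USB (booted once, with the BIOS password in hand), change the hidden recovery partition's MBR type ID so the eRecovery loader can no longer find it, recording the original ID so it can be restored later. A sketch, with the partition number purely illustrative - check fdisk -l first:

        sfdisk --print-id /dev/sda 1        # record the current type ID
        sfdisk --change-id /dev/sda 1 83    # hide it; restore later with the recorded ID

    Whether eRecovery locates its partition strictly by type ID is an assumption worth verifying on one sacrificial machine before rolling this out.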


  • Debugging IO limitation

    - by Martin F
    I have a Fedora box with some severe IO limitations which I have no idea how to debug. The server has an Areca Technology Corp. ARC-1130 12-Port PCI-X to SATA RAID Controller with 12 7200 RPM 1.5 TB disks and a Marvell Technology Group Ltd. 88E8050 PCI-E ASF Gigabit Ethernet Controller. uname -a output: 2.6.32.11-99.fc12.x86_64 #1 SMP Mon Apr 5 19:59:38 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux

    The server is a file server running Nginx with the stub status module enabled, so I can see the current number of connections. The problem presents itself when I have a high number of simultaneous connections in a writing state - usually around 350; at this very moment it's at 590 and the server is almost unusable, stuck at 230 Mbit/s. If I run top and hit 1 to see per-core CPU usage, all 4 cores sit at around 99% iowait; if I run iotop, the nginx workers are the only processes producing any read load, currently at around 25 MB/s. I have each of the workers bound to its own core.

    Initially I figured it was just the disks being bugged, but I've run fsck and smartmontools checks and found no errors. I also ran an iozone test, which you can see the result of here: http://www.pastie.org/951667.txt?key=fimcvljulnuqy2dcdxa

    Additionally, when the number of connections is low I have no problem getting good speed: if I wget over the local network, it easily hits 60 MB/s. Right now I tried putting a file in /dev/shm, symlinked a file from the public dir to it, and used wget over the local network - and got only 50 KB/s. Also, if I try to cp /dev/shm/test /root/test, it quickly copies around 740 MB and then slows down HEAVILY, again with iotop reporting 99% iowait.

    I'm not really sure how to go about figuring out what the problem is. It could be a natural disk limitation, but then the file served from /dev/shm ought to transfer fast, which suggests a network limit - yet that's fine when there aren't many connections. Perhaps it's a TCP stack problem, but I really have no idea how to check that. Any suggestions on how to proceed with debugging would be very welcome. If additional information is required, let me know and I'll try to get it. Thanks.
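
    To separate "the array is saturated" from "something is pathological", per-device numbers beat the aggregate iowait; sysstat's iostat shows whether the Areca volume is pegged (high %util and await) while the nginx workers read:

        iostat -x 1 10

    If %util sits near 100% at only ~25 MB/s of reads, the workload is seek-bound (hundreds of concurrent readers on spinning disks), which would also explain why a single local wget is fast.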


  • Vagrant - Failed login, ssh not set up

    - by motleydev
    This question is twofold, because somewhere in my attempts to solve the problem I created a new one.

    First: I was trying to vagrant up using a Vagrantfile based on the standard hashicorp/precise32 box. Everything worked up until "default: SSH auth method: private key", where it would eventually time out. Enabling the GUI in the Vagrantfile showed that the machine never actually logs in. I can use the standard user/pass and log in from that point, but the vagrant up process still remains at that prior status.

    Here's where my understanding might be a little dim. I've tried setting the auth method from the insecure private key to my own ~/.ssh/id_rsa, or whichever one I wanted to use. I'm not entirely sure where to put a copy of the public key or my authorized_keys file. I've got a .vagrant.d folder in my user dir (I'm on OS X) which seems to contain the box images, and a .vagrant folder in the directory with the Vagrantfile which seems to contain the specific machine I am building off of. I've tried poring over the docs and forums, but I seem to be missing a key concept here.

    And now: after a host of tips/tricks such as rolling back my VirtualBox install and uninstalling/reinstalling Vagrant and VirtualBox several times, when I try to run vagrant ssh-config with a Vagrantfile based on the same hashicorp/precise32, it says that the box is not enabled for SSH. Specifically, this error:

        The provider for this Vagrant-managed machine is reporting that it is not yet ready for SSH. Depending on your provider this can carry different meanings. Make sure your machine is created and running and try again. Additionally, check the output of `vagrant status` to verify that the machine is in the state that you expect. If you continue to get this error message, please view the documentation for the provider you're using.

    So now I am slightly up a creek. Any help would be appreciated, if only to clarify a concept.

    Some pertinent info:
    - I'm on OS X Mavericks
    - Because I'm running a dual-HD system with system files on one HD and user files on another, my permissions are a little wonky, and VBoxManage will only let me run commands via sudo - not sure if that's pertinent, but maybe
    - I have no idea what I'm doing; that part is perhaps more important
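
    Two commands help untangle the key question: vagrant ssh-config prints exactly which IdentityFile Vagrant is offering, and a debug run shows each auth attempt live:

        vagrant ssh-config
        vagrant up --debug

    Depending on the Vagrant version, the key it offers is either the shared ~/.vagrant.d/insecure_private_key or a per-machine key under .vagrant/machines/default/virtualbox/; setting config.ssh.private_key_path in the Vagrantfile overrides both.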


  • Forwarding RDP via a Linux machine using iptables: Not working

    - by Nimmy Lebby
    I have a Linux machine and a Windows machine behind a router that implements NAT (the diagram might be overkill, but was fun to make). I am forwarding the RDP port (3389) on the router to the Linux machine, because I want to audit RDP connections. For the Linux machine to forward RDP traffic, I wrote these iptables rules:

        iptables -t nat -A PREROUTING -p tcp --dport 3389 -j DNAT --to-destination win-box
        iptables -A FORWARD -p tcp --dport 3389 -j ACCEPT

    The port is listening on the Windows machine:

        C:\Users\nimmy>netstat -a
        Active Connections
          Proto  Local Address    Foreign Address    State
          (..snip..)
          TCP    0.0.0.0:3389     WIN-BOX:0          LISTENING
          (..snip..)

    And the port is forwarding on the Linux machine:

        # tcpdump port 3389
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
        01:33:11.451663 IP shieldsup.grc.com.56387 > linux-box.myapt.lan.ms-wbt-server: Flags [S], seq 94663035, win 8192, options [mss 1460], length 0
        01:33:11.451846 IP shieldsup.grc.com.56387 > win-box.myapt.lan.ms-wbt-server: Flags [S], seq 94663035, win 8192, options [mss 1460], length 0

    However, I am not getting any successful RDP connections from the outside. The port is not even responding:

        C:\Users\outside-nimmy>telnet example.com 3389
        Connecting To example.com...Could not open connection to the host, on port 3389: Connect failed

    Any ideas?

    Update: Per @Zhiqiang Ma, I looked at the nf_conntrack proc file during a connection attempt and this is what I see (192.168.3.1 = linux-box, 192.168.3.5 = win-box):

        # cat /proc/net/nf_conntrack | grep 3389
        ipv4 2 tcp 6 118 SYN_SENT src=4.79.142.206 dst=192.168.3.1 sport=43142 dport=3389 packets=6 bytes=264 [UNREPLIED] src=192.168.3.5 dst=4.79.142.206 sport=3389 dport=43142 packets=0 bytes=0 mark=0 secmark=0 zone=0 use=2

    2nd update: Got tcpdump on the router, and it seems that win-box is sending an RST packet:

        21:20:24.767792 IP shieldsup.grc.com.45349 > linux-box.myapt.lan.3389: S 19088743:19088743(0) win 8192 <mss 1460>
        21:20:24.768038 IP shieldsup.grc.com.45349 > win-box.myapt.lan.3389: S 19088743:19088743(0) win 8192 <mss 1460>
        21:20:24.770674 IP win-box.myapt.lan.3389 > shieldsup.grc.com.45349: R 721745706:721745706(0) ack 755785049 win 0

    Why would Windows be doing this?

  • Can't set up printing from Mac OS X (10.5.7) to an HP PSC 2410 shared from PC running Ubuntu 9.10

    - by Weston C
    I've got an HP PSC 2410 printer shared from a fresh Ubuntu 9.10 installation. I'm able to send documents to this printer over the network from another Ubuntu machine. But so far, I haven't been able to find a setup where I can send documents to that printer from a MacBook running 10.5.7. On the Mac side, when setting things up, I go into System Prefs Print & Fax, click on the "+" mark, select "IP", pick "IPP", enter the IP address of the Ubuntu box, leave the queue blank, enter the Name and location, and I think it's when I get to the "Print Using" (driver selection) part that I'm running into issues. If I use "Auto Select", it defaults to "Generic PostScript Printer", which I doubt the PSC 2410 is (and sure enough, if I print, the jobs don't go through). If I try "Select a driver to use...", there's not an option for an HP PSC 2400. This seems a little odd: I can plug the printer directly into one of our Macs and it immediately figures out the driver and I can print no problem, but that's apparently the way things work. So, that leaves one option: "Other", which, when selected, brings up a dialog apparently for the purpose of manually locating a driver. I've tried visiting HP's web site. They have drivers for earlier versions of Mac OS X, but state that after 10.4, Mac OS X should just come with the relevant drivers. I've also tried setting things up by interacting with the CUPS server on the Mac through a browser: I go to http://localhost:631/, select "Add New Printer", pick "Internet Printing Protocol (http)" for the Device selection, enter "http://ubuntu.machine.ip.address:631/printers/hp-psc-2400-series" for the Device URI, select "HP" for Make, and then on the next screen, we're back to the problem where the PSC 2400 just doesn't show up. There's an option to "provide a PPD file", which I assume would be the printer driver I can't find. A Google search for "HP PSC 2410 ppd Leopard" doesn't seem to yield much other than a reminder that the printer is supposed to just work out of the box on Leopard. A local search for ".ppd" or "2410" on either Mac also doesn't yield anything that looks like a relevant print driver. I'm totally stuck at this point. Any advice?
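
    The missing PPD may already exist on the Ubuntu box: CUPS keeps the active driver file for every queue under /etc/cups/ppd/, named after the queue (the device URI above suggests hp-psc-2400-series). Copying it to the Mac and handing it to the "provide a PPD file" dialog is worth a try (user@ubuntu-box is a placeholder for the actual login and hostname):

        scp user@ubuntu-box:/etc/cups/ppd/hp-psc-2400-series.ppd ~/Desktop/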

    Read the article

  • APC File Cache not working but user cache is fine

    - by danishgoel
    I have just got a VPS (with cPanel/WHM) to test what gains I could get in my application by using the APC file cache AND user cache. Firstly I got PHP 5.3 compiled in as a DSO (Apache module), then installed APC via PECL through SSH. (First I tried the WHM module installer; it had the same problem, so I tried it via SSH.) All seemed fine: phpinfo showed APC loaded and enabled, and apc.php also looked OK. But as I started testing my PHP application, the stats in apc.php for File Cache Information state:

        Cached Files                  0 ( 0.0 Bytes)
        Hits                          1
        Misses                        0
        Request Rate (hits, misses)   0.00 cache requests/second
        Hit Rate                      0.00 cache requests/second
        Miss Rate                     0.00 cache requests/second
        Insert Rate                   0.00 cache requests/second
        Cache full count              0

    This means no PHP files were being cached, even though I had browsed through over 10 PHP files with multiple includes, so there should have been at least some cached files. The user cache, however, is functioning fine:

        User Cache Information
        Cached Variables              0 ( 0.0 Bytes)
        Hits                          1000
        Misses                        1000
        Request Rate (hits, misses)   0.84 cache requests/second
        Hit Rate                      0.42 cache requests/second
        Miss Rate                     0.42 cache requests/second
        Insert Rate                   0.84 cache requests/second
        Cache full count              0

    Those numbers come from an APC caching test script which tries to store and retrieve 1000 entries and gives me the times; a sort of simple benchmark. Can anyone help me here? Even though apc.cache_by_default = 1, no PHP files are being cached. This is my APC config:

        Runtime Settings
        apc.cache_by_default        1
        apc.canonicalize            1
        apc.coredump_unmap          0
        apc.enable_cli              0
        apc.enabled                 1
        apc.file_md5                0
        apc.file_update_protection  2
        apc.filters
        apc.gc_ttl                  3600
        apc.include_once_override   0
        apc.lazy_classes            0
        apc.lazy_functions          0
        apc.max_file_size           1M
        apc.mmap_file_mask
        apc.num_files_hint          1000
        apc.preload_path
        apc.report_autofilter       0
        apc.rfc1867                 0
        apc.rfc1867_freq            0
        apc.rfc1867_name            APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix          upload_
        apc.rfc1867_ttl             3600
        apc.serializer              default
        apc.shm_segments            1
        apc.shm_size                32M
        apc.slam_defense            1
        apc.stat                    1
        apc.stat_ctime              0
        apc.ttl                     0
        apc.use_request_time        1
        apc.user_entries_hint       4096
        apc.user_ttl                0
        apc.write_lock              1

    Also, most PHP files are under 20KB, so apc.max_file_size = 1M is not the cause. I have tried using apc_compile_file to force some files into the opcode cache, with no luck. I have re-installed APC with debugging enabled, but nothing shows up in the error_log. I have also tried setting mmap_file_mask to /dev/zero and to /tmp/apc.xxxxxx, and set /tmp permissions to 777, to no avail. Any clue, anyone?

    Update: I have tried the following things and none of them cause the APC file cache to populate:
    1. Set apc.enable_cli = 1 AND run a script from the CLI
    2. Set apc.max_file_size = 5M (just in case)
    3. Switched the PHP handler from DSO to FastCGI in WHM (then switched it back to DSO as it did not solve the problem)
    4. Even tried restarting the container
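
    For anyone reproducing this, apc_cache_info() distinguishes the two caches, so a short script can confirm from inside PHP whether the opcode side populates at all. A minimal sketch (serve it through Apache rather than the CLI, since apc.enable_cli is 0 above):

        <?php
        // dump entry counts for both APC caches
        $opcode = apc_cache_info();        // file/opcode cache (no argument)
        $user   = apc_cache_info('user');  // user variable cache

        printf("opcode cache entries: %d\n", count($opcode['cache_list']));
        printf("user cache entries:   %d\n", count($user['cache_list']));

        // try to force this very file into the opcode cache, then re-check
        var_dump(apc_compile_file(__FILE__));
        $opcode = apc_cache_info();
        printf("opcode entries after apc_compile_file: %d\n",
               count($opcode['cache_list']));

    If apc_compile_file() returns true but the entry count stays at zero, the opcode cache is being flushed or bypassed somewhere after insertion rather than never written to.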

    Read the article

  • OpenLDAP with StartTLS broken on Debian Lenny

    - by mr.zog
    I'm trying to get OpenLDAP on Lenny to work with StartTLS. I have a Fedora 13 machine which I'm using as a client for testing. So far the Fedora client is ignoring the 'host' directive in /etc/ldap.conf when I try to connect using ldapsearch: the client wants to connect to 127.0.0.1:389 even if I specify -H ldaps://server.name when using ldapsearch. /etc/ldap.conf on the client machine is in mode 444. But even when I try connecting locally from an SSH session, I see errors like this:

        ldap_sasl_interactive_bind_s: Can't contact LDAP server (-1)

    Someone hit me with a cluebat, please.

    Update: you must use ~/.ldaprc for settings such as 'host'. Also, I just used nmap against the LDAP server and it showed 636 and 389 in an open state. Here's what prints to the screen when I try to connect with ldapsearch -ZZ -x '(objectclass=*)' + -d -1:

        ldap_create
        ldap_extended_operation_s
        ldap_extended_operation
        ldap_send_initial_request
        ldap_new_connection 1 1 0
        ldap_int_open_connection
        ldap_connect_to_host: TCP 192.168.10.41:636
        ldap_new_socket: 3
        ldap_prepare_socket: 3
        ldap_connect_to_host: Trying 192.168.10.41:636
        ldap_pvt_connect: fd: 3 tm: -1 async: 0
        ldap_open_defconn: successful
        ldap_send_server_request
        ber_scanf fmt ({it) ber:
        ber_dump: buf=0x9bdbdb8 ptr=0x9bdbdb8 end=0x9bdbdd7 len=31
        0000: 30 1d 02 01 01 77 18 80 16 31 2e 33 2e 36 2e 31  0....w...1.3.6.1
        0010: 2e 34 2e 31 2e 31 34 36 36 2e 32 30 30 33 37     .4.1.1466.20037
        ber_scanf fmt ({) ber:
        ber_dump: buf=0x9bdbdb8 ptr=0x9bdbdbd end=0x9bdbdd7 len=26
        0000: 77 18 80 16 31 2e 33 2e 36 2e 31 2e 34 2e 31 2e  w...1.3.6.1.4.1.
        0010: 31 34 36 36 2e 32 30 30 33 37                    1466.20037
        ber_flush2: 31 bytes to sd 3
        0000: 30 1d 02 01 01 77 18 80 16 31 2e 33 2e 36 2e 31  0....w...1.3.6.1
        0010: 2e 34 2e 31 2e 31 34 36 36 2e 32 30 30 33 37     .4.1.1466.20037
        ldap_write: want=31, written=31
        0000: 30 1d 02 01 01 77 18 80 16 31 2e 33 2e 36 2e 31  0....w...1.3.6.1
        0010: 2e 34 2e 31 2e 31 34 36 36 2e 32 30 30 33 37     .4.1.1466.20037
        ldap_result ld 0x9bd3050 msgid 1
        wait4msg ld 0x9bd3050 msgid 1 (infinite timeout)
        wait4msg continue ld 0x9bd3050 msgid 1 all 1
        ** ld 0x9bd3050 Connections:
        * host: 192.168.10.41  port: 636  (default)
          refcnt: 2  status: Connected
          last used: Sun Jun 6 12:54:05 2010
        ** ld 0x9bd3050 Outstanding Requests:
        * msgid 1, origid 1, status InProgress
          outstanding referrals 0, parent count 0
        ld 0x9bd3050 request count 1 (abandoned 0)
        ** ld 0x9bd3050 Response Queue:
          Empty
        ld 0x9bd3050 response count 0
        ldap_chkResponseList ld 0x9bd3050 msgid 1 all 1
        ldap_chkResponseList returns ld 0x9bd3050 NULL
        ldap_int_select
        read1msg: ld 0x9bd3050 msgid 1 all 1
        ber_get_next
        ldap_read: want=8, got=0
        ber_get_next failed.
        ldap_err2string
        ldap_start_tls: Can't contact LDAP server (-1)
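
    Worth noting from the trace: -ZZ requests StartTLS, which runs over the plain LDAP port (389), yet the client is connecting to 192.168.10.41:636, the ldaps port that expects TLS from the first byte; that mismatch fails in exactly this way. A sketch of a client configuration to test with; server.name comes from the question, while the base DN and CA path below are placeholders:

        # ~/.ldaprc on the client ('host'/'URI' belong here per the update above)
        URI        ldap://server.name
        BASE       dc=example,dc=com
        TLS_CACERT /etc/openldap/cacerts/ca.crt

        # StartTLS over the plain port, so -H uses ldap://, not ldaps://
        ldapsearch -ZZ -x -H ldap://server.name -b "dc=example,dc=com" '(objectclass=*)'

    The TLS_CACERT line matters because OpenLDAP clients refuse StartTLS if they cannot verify the server certificate against a trusted CA.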

    Read the article

  • Windows Setup could not configure Windows to run on this computer's hardware

    - by Hello71
    The whole installation goes smoothly up to the point of "Completing installation ...". The monitor changes resolution, after which a standard dialog box pops up saying:

        Windows Setup could not configure Windows to run on this computer's hardware

    Then, in a few seconds, the whole machine powers down. Trying to restart produces the message:

        STOP: c000021a {Fatal System Error} 0x00000000 (0xc0000001 0x00100448)

    OR it boots into Setup and comes up with the message "Windows Setup encountered an unexpected error..." (this is not the actual error, just paraphrasing). I tried using the OEM restore instead of a regular install, but it fails with the same error (even though it worked before...).

    General specs:

        HP Pavilion Elite e9262f
        Intel Core i5-750 Processor
        ATI Radeon HD 4650
        Hitachi HDT721010SLA360 ATA Device
        6GB DDR3 RAM
        SuperMulti DVD Burner with LightScribe
        Some built-in Wi-Fi module
        http://h10025.www1.hp.com/ewfrf/wc/document?docname=c01916917

    I've tried disconnecting the wireless card and disabling the built-in Ethernet and FireWire via the BIOS, and replacing the wireless keyboard and mouse with wired USB ones. Didn't work. I've also tried changing the SATA controller settings in the BIOS to RAID, AHCI, and IDE, reinstalling each time I changed. Still not working.

    I think the reason it shows the Fatal System Error is that Setup didn't finish installing before it errored out and shut down, so the system is left in an inconsistent state. I've tried 3 different copies (including the OEM restore) of Windows 7 now, and they're all failing at the same point, with the same error message. I've tried to install Windows 7 maybe 10 times already, with the exact same error message at the exact same location.

    Interestingly, the 32-bit version of Windows 7 works, but the 64-bit version doesn't. Perhaps it was a badly burned disk? Reburning the 64-bit version still comes up with the same error. Here's a picture of the side of the case that clearly says it came with Windows 7 64-bit, along with the model number and CPU.

    sudo fdisk -l:

        Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0009896f

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1          13      104391    7  HPFS/NTFS
        /dev/sda2              14       94119   755906445    7  HPFS/NTFS
        /dev/sda3          119922      121602    13492224    7  HPFS/NTFS
        /dev/sda4           94120      119922   207257740+   5  Extended
        /dev/sda5          119527      119922     3170769   82  Linux swap / Solaris
        /dev/sda6          107174      119526    99225441   83  Linux
        /dev/sda7           94120      107173   104856192    7  HPFS/NTFS

        Partition table entries are not in disk order
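
    Not a confirmed fix, but since the disk carries a mixed Windows/Linux layout with partitions listed out of order, one experiment worth trying before the next attempt is clearing the partition table from within Setup itself (Shift+F10 opens a command prompt there). A sketch, assuming the target is disk 0; this destroys everything on that disk:

        rem run from the Setup command prompt (Shift+F10)
        diskpart
        rem then, inside diskpart:
        list disk
        select disk 0
        clean
        exit

    After the clean, let 64-bit Setup create its own partitions on the empty disk, which rules the pre-existing layout in or out as the cause.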

    Read the article
