Search Results

Search found 23079 results on 924 pages for 'local variables'.


  • A faulty Caviar Blue hard drive?

    - by Glister
    We have a small "homemade" server running fully updated Debian Wheezy (amd64). One hard drive installed: WDC WD6400AAKS. The motherboard is ASUS M4N68T V2. The usual load: CPU: an average of 20% Each week about 50GB of additional space is occupied. About 47GB of uploaded files and 3GB of MySQL data. I'm afraid that the hard drive may be about to fail. I saw Pre-fail on few places when I ran: root@SERVER:/tmp# smartctl -a /dev/sda smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-4-amd64] (local build) Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Model Family: Western Digital Caviar Blue Serial ATA Device Model: WDC WD6400AAKS-XXXXXXX Serial Number: WD-XXXXXXXXXXXXXXXXXXX LU WWN Device Id: 5 0014ee XXXXXXXXXXXXX Firmware Version: 01.03B01 User Capacity: 640,135,028,736 bytes [640 GB] Sector Size: 512 bytes logical/physical Device is: In smartctl database [for details use: -P show] ATA Version is: 8 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Mon Oct 28 18:55:27 2013 UTC SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x85) Offline data collection activity was aborted by an interrupting command from host. Auto Offline Data Collection: Enabled. Self-test execution status: ( 247) Self-test routine in progress... 70% of test remaining. Total time to complete Offline data collection: (11580) seconds. Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 2) minutes. Extended self-test routine recommended polling time: ( 136) minutes. Conveyance self-test routine recommended polling time: ( 5) minutes. SCT capabilities: (0x303f) SCT Status supported. SCT Error Recovery Control supported. SCT Feature Control supported. SCT Data Table supported. 
SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0 3 Spin_Up_Time 0x0027 157 146 021 Pre-fail Always - 5108 4 Start_Stop_Count 0x0032 098 098 000 Old_age Always - 2968 5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0 7 Seek_Error_Rate 0x002e 200 200 051 Old_age Always - 0 9 Power_On_Hours 0x0032 079 079 000 Old_age Always - 15445 10 Spin_Retry_Count 0x0032 100 100 051 Old_age Always - 0 11 Calibration_Retry_Count 0x0032 100 100 051 Old_age Always - 0 12 Power_Cycle_Count 0x0032 098 098 000 Old_age Always - 2950 192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 426 193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 2968 194 Temperature_Celsius 0x0022 111 095 000 Old_age Always - 36 196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0 197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x0032 200 160 000 Old_age Always - 21716 200 Multi_Zone_Error_Rate 0x0008 200 200 051 Old_age Offline - 0 SMART Error Log Version: 1 No Errors Logged SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Short offline Completed without error 00% 15444 - Error SMART Read Selective Self-Test Log failed: scsi error aborted command Smartctl: SMART Selective Self Test Log Read Failed root@SERVER:/tmp# In one tutorial I read that the pre-fail is a an indication of coming failure, in another tutorial I read that it is not true. Can you guys help me decode the output of smartctl? It would be also nice to share suggestions what should I do if I want to ensure data integrity (about 50GB of new data each week, up to 2TB for the whole period I'm interested in). Maybe I will go with 2x2TB Caviar Black in RAID4?
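
    For anyone decoding a similar report: "Pre-fail" in the TYPE column only classifies the attribute; an attribute has actually tripped only when its normalized VALUE drops to or below THRESH, which smartctl flags in the WHEN_FAILED column (blank here for every attribute). The one number that stands out, UDMA_CRC_Error_Count, normally points at the SATA cable or controller rather than the disk surface. A minimal sketch of the usual checks is below; the device name is an assumption, adjust to your system.

        DEV=/dev/sda    # the drive from the question; adjust as needed

        # Overall verdict plus the attributes most often tied to failing media.
        smartctl -H "$DEV"
        smartctl -A "$DEV" | egrep 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable|UDMA_CRC_Error_Count'

        # Run a short self-test and read the result a few minutes later.
        smartctl -t short "$DEV"
        sleep 180
        smartctl -l selftest "$DEV"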

    Read the article

  • ssh keys rejected each day

    - by EddyR
    I've had OpenSSH server running on my debian server for a couple weeks and all of a sudden now when I go to login the next day it rejects my ssh key and I have to manually add a new one each time. Not only that but I have the "tunneling with clear-text passwords" option enabled and the non-root (login with root is disabled) account for that is rejected too. I'm at a loss why this is happening and I can't find any ssh options that would explain it. --update-- I just changed debug level to DEBUG. But before that I'm seeing a lot of the following in auth.log Feb 1 04:23:01 greenpages CRON[7213]: pam_unix(cron:session): session opened for user root by (uid=0) Feb 1 04:23:01 greenpages CRON[7213]: pam_unix(cron:session): session closed for user root ... Feb 1 04:36:26 greenpages sshd[7217]: reverse mapping checking getaddrinfo for nat-pool-xx-xx-xx-xx.myinternet.net [xx.xx.xx.xx] failed - POSSIBLE BREAK-IN ATTEMPT! ... Feb 1 04:37:31 greenpages sshd[7223]: Did not receive identification string from xx.xx.xx.xx ... My sshd_conf file settings are: # Package generated configuration file # See the sshd(8) manpage for details # What ports, IPs and protocols we listen for Port xxx # Use these options to restrict which interfaces/protocols sshd will bind to #ListenAddress :: #ListenAddress 0.0.0.0 Protocol 2 # HostKeys for protocol version 2 HostKey /etc/ssh/ssh_host_rsa_key HostKey /etc/ssh/ssh_host_dsa_key #Privilege Separation is turned on for security UsePrivilegeSeparation yes # Lifetime and size of ephemeral version 1 server key KeyRegenerationInterval 3600 ServerKeyBits 768 # Logging SyslogFacility AUTH LogLevel DEBUG # Authentication: LoginGraceTime 120 PermitRootLogin no StrictModes yes RSAAuthentication yes PubkeyAuthentication yes #AuthorizedKeysFile %h/.ssh/authorized_keys # Don't read the user's ~/.rhosts and ~/.shosts files IgnoreRhosts yes # For this to work you will also need host keys in /etc/ssh_known_hosts RhostsRSAAuthentication no # similar for protocol version 2 HostbasedAuthentication no # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication #IgnoreUserKnownHosts yes # To enable empty passwords, change to yes (NOT RECOMMENDED) PermitEmptyPasswords no # Change to yes to enable challenge-response passwords (beware issues with # some PAM modules and threads) ChallengeResponseAuthentication no # Change to no to disable tunnelled clear text passwords PasswordAuthentication yes # Kerberos options #KerberosAuthentication no #KerberosGetAFSToken no #KerberosOrLocalPasswd yes #KerberosTicketCleanup yes # GSSAPI options #GSSAPIAuthentication no #GSSAPICleanupCredentials yes X11Forwarding no X11DisplayOffset 10 PrintMotd no PrintLastLog yes TCPKeepAlive yes #UseLogin no #MaxStartups 10:30:60 #Banner /etc/issue.net # Allow client to pass locale environment variables AcceptEnv LANG LC_* Subsystem sftp /usr/lib/openssh/sftp-server UsePAM no ClientAliveInterval 60 AllowUsers myuser
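
    In case it helps other readers with the same symptom, a minimal troubleshooting sketch is below (the paths and username are assumptions). The most common causes of keys that suddenly stop being accepted are permissions that StrictModes refuses, or something rewriting the authorized_keys file overnight.

        USER_HOME=/home/myuser    # hypothetical; use the account being rejected

        # With StrictModes enabled (as in the config above), sshd silently ignores
        # authorized_keys if the home dir, ~/.ssh or the file itself is group/world writable.
        ls -ld "$USER_HOME" "$USER_HOME/.ssh" "$USER_HOME/.ssh/authorized_keys"
        chmod go-w "$USER_HOME"
        chmod 700 "$USER_HOME/.ssh"
        chmod 600 "$USER_HOME/.ssh/authorized_keys"

        # Check whether anything touches the file overnight (cron jobs, backup restores),
        # and watch the DEBUG output while reproducing the failure.
        grep -r authorized_keys /etc/cron* 2>/dev/null
        tail -f /var/log/auth.log
        # From a client: ssh -vvv myuser@server   (shows which key is offered and why it fails)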

    Read the article

  • Acronis Disk Director AFTER Clone Disk error: PXE-E61: Media test failure, check cable

    - by Kairan
    Used Acronis Disk Director on my desktop, plugged in the laptop drive, a 240GB SSD (USB), and the new hard drive, a 500GB SSD (USB), and the copy seemed to be fine. I didn't see any error messages, but I didn't stare at it for 3 hours either. The disk clone included the Toshiba hidden restore partition, the primary C partition and the active (boot?) partition, and yes, I did check the box to copy the NT signature. The computer boots up fine most of the time, but it seems that when the computer goes to sleep (I believe it's sleep; it's hard to do much testing during school) or hibernates or reboots it will sometimes display this message: Intel(R) Boot Agent GE v1.3.52 Copyright (C) 1997-2010, Intel Corporation PXE-E61: Media test failure, check cable PXE-M0F: Exiting Intel Boot Agent Insert system disk in drive. Press any key when ready... Of course any key does nothing but repeat a similar message. However, if I press the power button on the laptop (Toshiba Portege R705, Win 7 Pro 64-bit) it puts the computer into hibernation. After hibernating I press the power button again and it comes out of hibernation without any of the odd messages or problems described above... so apparently that is my TEMP fix. Another recent issue I noticed is that on occasion, when creating a new folder or modifying something in the system variables or other random areas, I will get the message "The Stub received bad data"; I simply retry the task and it works. Perhaps these two issues are linked.

    Read the article

  • Alternate way to connect a VPN through a MiFi

    - by questor
    This has gotten to be a major problem at our company and, depending on who I ask, the problem either does not really exist (according to the manufacturer and vendor) or is insoluble (according to most users, including techs who know how to prove their point). The problem involves getting a normal Windows 7 system to connect to a normal Server 2008 R2 server over a cellular router (usually called a MiFi). A very few brands/models appear to work, but the majority cannot make the connection. Since it is a cellular device, there are many variables that come into play, and I wondered if anyone had ever found a consistent way to either make one work or else prove to the providers that their equipment was at fault. They all specifically state “VPN use” on the sales brochures. But few if any work, and those that do are not reliable. From a standpoint of pure knowledge, I just wondered if anyone knew the real reason why they fail? PPTP, L2TP, IPsec: it doesn't matter. I have not tried Shrew or OpenVPN and am using strictly MS Windows protocols. Plenty of Google searches back up my complaints, but none seem to be any closer to knowing "why" they fail, just that they do. This is a "quest for knowledge" question. I don't expect a solution, just a reason for the problem, if anyone has any ideas.

    Read the article

  • Encoding over SSH Issues

    - by user1104160
    I have a Linux machine and a Windows machine, both using Vim with the Powerline plugin. They both work great with patched fonts. Next, I want to SSH onto an OSX 10.6 machine and also use Powerline in the terminal with Vim. However, I get weird symbols in normal mode ("^^B" in one area) and in fancy mode ("~@" and "~B" spread throughout the bar). I thought this mixup was an encoding issue, but when I look at PuTTY's encoding it is using UTF-8, and the same goes for the Ubuntu terminal. Additionally, on the OSX machine, "locale" returns "en_US.UTF-8" for all variables (I set it to do that in order to troubleshoot). However, the symbols are still showing. I am using a patched font (Inconsolata, the same one as the Ubuntu terminal) for the OSX terminal, so I am stumped. Is there a missing component to this equation? Are there additional problems that can arise from SSH encoding? On the OSX end, additionally, these same symbols appear, so it may not even be related to SSH, and therefore I'm totally lost.
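
    A small sketch of the usual checks, in case it helps (the values are assumptions): the locale that matters is the one the remote shell sees inside the SSH session, which is not necessarily the one a local login on the OSX box reports.

        # Inside the SSH session on the OSX machine: what does the shell actually see?
        locale
        echo "$TERM"

        # If LANG/LC_* come back empty or "C", set them for the session (or in the
        # remote ~/.profile); the exact value should match the client terminals.
        export LANG=en_US.UTF-8
        export LC_ALL=en_US.UTF-8

        # Vim can also be told explicitly how to talk to the terminal; in the remote ~/.vimrc:
        #   set encoding=utf-8
        #   set termencoding=utf-8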

    Read the article

  • How can I tell which page is creating a high-CPU-load httpd process?

    - by Greg
    I have a LAMP server (CentOS-based MediaTemple (DV) Extreme with 2GB RAM) running a customized Wordpress+bbPress combination . At about 30k pageviews per day the server is starting to groan. It stumbled earlier today for about 5 minutes when there was an influx of traffic. Even under normal conditions I can see that the virtual server is sometimes at 90%+ CPU load. Using Top I can often see 5-7 httpd processes that are each using 15-30% (and sometimes even 50%) CPU. Before we do a big optimization pass (our use of MySQL is probably the culprit) I would love to find the pages that are the main offenders and deal with them first. Is there a way that I can find out which specific requests were responsible for the most CPU-hungry httpd processes? I have found a lot of info on optimization in general, but nothing on this specific question. Secondly, I know there are a million variables, but if you have any insight on whether we should be at the boundaries of performance with a single dedicated virtual server with a site of this size, then I would love to hear your opinion. Should we be thinking about moving to a more powerful server, or should we be focused on optimization on the current server?
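
    One way to attribute CPU to specific requests, sketched below on the assumption that Apache's mod_status is available: turn on the extended scoreboard so each busy httpd worker can be matched to the URL it is currently serving, then compare that list against the hungriest PIDs from top.

        # In httpd.conf (assumes mod_status is loaded); restrict access appropriately:
        #   ExtendedStatus On
        #   <Location /server-status>
        #       SetHandler server-status
        #       Order deny,allow
        #       Deny from all
        #       Allow from 127.0.0.1
        #   </Location>

        # When load spikes, list the busiest Apache processes...
        top -b -n 1 | grep httpd | sort -k 9 -nr | head -5

        # ...and see which request each worker is handling right now.
        curl -s "http://localhost/server-status" | grep -E "GET|POST"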

    Read the article

  • Prolific USB-to-Serial Comm Port significantly slower under Windows 7 compared to Windows XP

    - by Dmitry S
    Not sure if this question should be asked here or on Super User, but if we get an answer here it may be useful for others. I am using a Prolific USB-to-Serial adapter based on the Prolific chip to talk to a device on a serial port. I have the latest version of the driver installed: 1.3.0 (2010-7-15). When I use my device with this adapter on my main Windows 7 (32-bit) system it takes 8-9 seconds to send a command through to the device. However, when I do the same thing on a different Windows XP system (an old laptop I borrowed for testing) it only takes 2-3 seconds. I have made sure that the port settings and other variables are the same between systems. I also tested on a third laptop (also running Windows 7) and again got a significant delay. So the question is whether anyone else has experienced the same problem and found a solution. I would like to avoid moving to an XP system for what I need to achieve, so that's my last option. Thanks in advance.

    Read the article

  • Is it possible to get CCM Updates Schedule using PowerShell or VBScript?

    - by frogman
    I want to be able to check the CCM Updates Schedule as seen in Configuration Manager Updates tab. I've been looking around on google and I've not been able to find a consistent answer to this. I tried to create a COM object using UDA.CCMUpdatesDeployment. This allows me to successfully set the recurring schedule with SetUserDefinedSchedule method. If I try to use GetUserDefinedSchedule I only get the original values of the variables. PS> $UD = New-Object -com "UDA.CCMUpdatesDeployment" PS> $A= 101 PS> $B= 102 PS> $UD.GetUserDefinedSchedule([ref]$A, [ref]$B) PS> $A 101 PS> $B 102 PS> $UD.GetUserDefinedSchedule MemberType : Method OverloadDefinitions : {void GetUserDefinedSchedule (Variant, Variant)} TypeNameOfValue : System.Management.Automation.PSMethod Value : void GetUserDefinedSchedule (Variant, Variant) Name : GetUserDefinedSchedule IsInstance : True I actually want to be able to do this remotely for a list of servers in a text file but right now any way would do.

    Read the article

  • Samba4 [homes] share

    - by SambaDrivesMeCrazy
    I am having issues with the [homes] share. OS is Ubuntu 12.04. I've installed samba 4.0.3, bind9 dlz, ntp, winbind, everything but pam modules, and did all the tests from https://wiki.samba.org/index.php/Samba_AD_DC_HOWTO. Running getent passwd and getent user work just fine. Creating a simple share works just fine too. I can manage the users, GPOs, and DNS from the windows mmc snap-ins. I can join winxp,7,8 to the domain and log on perfectly. I can change my passwords from windows, etc..etc.. I could say that everything is fine and be happy :) buuuut, no, home directories do not work. Searching in here, and on our good friend google I gathered that a simple [homes] read only = no path = /storage-server/users/ and mapping the user's home folder in dsa.msc to \\server-001\username or \\server-001\homes should get me a home share I could map for my user homedir. But the snap-in give me an error saying that it cannot create the home folder because the network name has not been found (rough translation from portuguese). also, running root@server-001:/storage-server/users# smbclient //server-001/test -Utest%'12345678' -c 'ls' Domain=[MYDOMAIN] OS=[Unix] Server=[Samba 4.0.3] tree connect failed: NT_STATUS_BAD_NETWORK_NAME Server name is alright, if I go for a simple share on the same server it opens just fine. If I map the user homedir to this simple share it works. What I want is that I dont have to go and manually make a new folder on linux everytime I create a new user on windows. It looks like permissions but I cant find any documentation on this (yes I've tried the manpages, but its hard to tell with so many options on man smb.conf alone). My smb.conf right now looks like this (pretty simple I know) # Global parameters [global] workgroup = MYDOMAIN realm = MYDOMAIN.LAN netbios name = SERVER-001 server role = active directory domain controller server services = s3fs, rpc, nbt, wrepl, ldap, cldap, kdc, drepl, winbind, ntp_signd, kcc, dnsupdate [netlogon] path = /usr/local/samba/var/locks/sysvol/mydomain.lan/scripts read only = No [sysvol] path = /usr/local/samba/var/locks/sysvol read only = No [homes] read only = no path = /storage-server/users Folder permissions /storage-server drwxr-xr-x 6 root root 4096 Fev 15 15:17 storage-server /storage-server/users drwxrwxrwx 6 root root 4096 Fev 18 17:05 users/ Yes, I was desperate enough to set 777 on the users folder... not proud of it. Any pointers in the right direction would be very welcome. Edited to include: root@server-001:/# wbinfo --user-info=test MYDOMAIN\test:*:3000045:100:test:/home/MYDOMAIN/test:/bin/false root@server-001:/# wbinfo -n test S-1-5-21-1957592451-3401938807-633234758-1128 SID_USER (1) root@server-001:/# id test uid=3000045(MYDOMAIN\test) gid=100(users) grupos=100(users) root@server-001:/# wbinfo -U 3000045 S-1-5-21-1957592451-3401938807-633234758-1128 root@server-001:/# Edit 2: getent passwd | grep test MYDOMAIN\test:*:3000045:100:test:/home/MYDOMAIN/test:/bin/false I have no idea how to change that home folder to /storage-server/users/test so I just went and ln -s /storage-server/users /home/MYDOMAIN just in case. still, no changes, same errors. Edit 3 On log.smbd I get the following error when trying to set the test user home folder to \server-001\test [2013/02/20 14:22:08.446658, 2] ../source3/smbd/service.c:418(create_connection_session_info) user 'MYDOMAIN\Administrator' (from session setup) not permitted to access this share (test)

    Read the article

  • Secure method of changing a user's password via Python script/non-interactively

    - by Matthew Rankin
    I've created a Python script using Fabric to configure a freshly built Slicehost Ubuntu slice. In case you're not familiar with Fabric, it uses Paramiko, a Python SSH2 client, to provide remote access "for application deployment or systems administration tasks." One of the first things I have the Fabric script do is to create a new admin user and set their password. Unlike Pexpect, Fabric cannot handle interactive commands on the remote system, so I need to set the user's password non-interactively. At present, I'm using the chpasswd command to change the password. This transmits the password as clear text over SSH to the remote system. Questions Is my current method of setting the password a security concern? Currently, the drawback I see is that Fabric shows the password as clear text on my local system as follows: [xxx.xx.xx.xxx] run: echo "johnsmith:supersecretpassw0rd" | chpasswd. Since I only run the Fabric script from my laptop, I don't think this is a security issue, but I'm interested in others' input. Is there a better method for setting the user's password non-interactively? Another option, would be to use Pexpect from within the Fabric script to set the password. Current Code # Fabric imports and host configuration excluded for brevity root_password = getpass.getpass("Root's password given by SliceManager: ") admin_username = prompt("Enter a username for the admin user to create: ") admin_password = getpass.getpass("Enter a password for the admin user: ") env.user = 'root' env.password = root_password # Create the admin group and add it to the sudoers file admin_group = 'admin' run('addgroup {group}'.format(group=admin_group)) run('echo "%{group} ALL=(ALL) ALL" >> /etc/sudoers'.format( group=admin_group) ) # Create the new admin user (default group=username); add to admin group run('adduser {username} --disabled-password --gecos ""'.format( username=admin_username) ) run('adduser {username} {group}'.format( username=admin_username, group=admin_group) ) # Set the password for the new admin user run('echo "{username}:{password}" | chpasswd'.format( username=admin_username, password=admin_password) ) Local System Terminal I/O $ fab config_rebuilt_slice Root's password given by SliceManager: Enter a username for the admin user to create: johnsmith Enter a password for the admin user: [xxx.xx.xx.xxx] run: addgroup admin [xxx.xx.xx.xxx] out: Adding group `admin' (GID 1000) ... [xxx.xx.xx.xxx] out: Done. [xxx.xx.xx.xxx] run: echo "%admin ALL=(ALL) ALL" >> /etc/sudoers [xxx.xx.xx.xxx] run: adduser johnsmith --disabled-password --gecos "" [xxx.xx.xx.xxx] out: Adding user `johnsmith' ... [xxx.xx.xx.xxx] out: Adding new group `johnsmith' (1001) ... [xxx.xx.xx.xxx] out: Adding new user `johnsmith' (1000) with group `johnsmith' ... [xxx.xx.xx.xxx] out: Creating home directory `/home/johnsmith' ... [xxx.xx.xx.xxx] out: Copying files from `/etc/skel' ... [xxx.xx.xx.xxx] run: adduser johnsmith admin [xxx.xx.xx.xxx] out: Adding user `johnsmith' to group `admin' ... [xxx.xx.xx.xxx] out: Adding user johnsmith to group admin [xxx.xx.xx.xxx] out: Done. [xxx.xx.xx.xxx] run: echo "johnsmith:supersecretpassw0rd" | chpasswd [xxx.xx.xx.xxx] run: passwd --lock root [xxx.xx.xx.xxx] out: passwd: password expiry information changed. Done. Disconnecting from [email protected]... done.
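
    One common way to avoid sending the clear-text password at all, sketched below, is to hash it locally and hand the remote side only the hash; `chpasswd -e` (or `usermod -p`) accepts a pre-encrypted value, so nothing sensitive appears in the remote command line or in Fabric's echo of it. The username and password below are the ones from the example above.

        # Hash locally (MD5-crypt shown for older targets; "openssl passwd -6" gives
        # SHA-512 where supported).
        HASH=$(openssl passwd -1 "supersecretpassw0rd")

        # Remotely: -e tells chpasswd the value is already encrypted.
        echo "johnsmith:$HASH" | chpasswd -e

        # Equivalent single-command form:
        # usermod -p "$HASH" johnsmith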

    Read the article

  • CentOS 6.2 Bridge Setup for KVM

    - by Gaia
    I'm trying to set up bridged networking with KVM on CentOS 6.2 to no avail. There are plenty of docs and tutorials about it, but they all seem to conflict or don't provide info specific enough to my situation. I just don't get it. I access the host via public IP "xxx.xxx.128.58". All other available IPs (/29) should be bridged and made available to the only KVM guest (running a public facing LAMP stack) that will be setup on this machine. The amazingly unhelpful NOC people assigned the extra IPs to eth1. Is this correct? Should br0 bridge to eth0 or eth1? How do I set this up? Here is the relevant info: eth0 Link encap:Ethernet HWaddr 00:25:90:68:FE:BC inet6 addr: fe80::225:90ff:fe68:febc/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:763 errors:0 dropped:0 overruns:0 frame:0 TX packets:8 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:550811 (537.9 KiB) TX bytes:648 (648.0 b) Memory:fb980000-fba00000 eth1 Link encap:Ethernet HWaddr 00:25:90:68:FE:BD inet addr:xxx.xxx.128.58 Bcast:xxx.xxx.128.63 Mask:255.255.255.248 inet6 addr: fe80::225:90ff:fe68:febd/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1806 errors:0 dropped:0 overruns:0 frame:0 TX packets:1505 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:133166 (130.0 KiB) TX bytes:106070 (103.5 KiB) Memory:fb900000-fb980000 eth1:0 Link encap:Ethernet HWaddr 00:25:90:68:FE:BD inet addr:xxx.xxx.128.59 Bcast:xxx.xxx.128.63 Mask:255.255.255.248 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Memory:fb900000-fb980000 eth1:1 Link encap:Ethernet HWaddr 00:25:90:68:FE:BD inet addr:xxx.xxx.128.60 Bcast:xxx.xxx.128.63 Mask:255.255.255.248 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Memory:fb900000-fb980000 eth1:2 Link encap:Ethernet HWaddr 00:25:90:68:FE:BD inet addr:xxx.xxx.128.61 Bcast:xxx.xxx.128.63 Mask:255.255.255.248 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Memory:fb900000-fb980000 eth1:3 Link encap:Ethernet HWaddr 00:25:90:68:FE:BD inet addr:xxx.xxx.128.62 Bcast:xxx.xxx.128.63 Mask:255.255.255.248 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Memory:fb900000-fb980000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) virbr0 Link encap:Ethernet HWaddr 52:54:00:62:55:68 inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) > cat /etc/sysconfig/network NETWORKING=yes HOSTNAME=XXXX.domain.com > brctl show bridge name bridge id STP enabled interfaces br0 8000.00259068febc no eth0 virbr0 8000.525400625568 yes virbr0-nic > ls -fl | grep ifcfg -rw-r--r-- 1 root root 198 Jun 7 10:58 ifcfg-eth0 -rw-r--r--. 
1 root root 254 Oct 7 2011 ifcfg-lo -rw-r--r-- 1 root root 77 Jun 6 18:51 ifcfg-eth1-range0 -rw-r--r-- 1 root root 168 Jun 6 18:50 ifcfg-eth1 > cat ifcfg-eth0 DEVICE="eth0" BOOTPROTO="static" BRIDGE="br0" HWADDR="00:25:90:68:FE:BC" IPV6INIT="yes" MTU="1500" NM_CONTROLLED="yes" ONBOOT="yes" TYPE="Ethernet" IPADDR="yyy.yyy.216.131" NETMASK="255.255.255.128" > cat ifcfg-eth1 DEVICE="eth1" HWADDR="00:25:90:68:FE:BD" NM_CONTROLLED="yes" ONBOOT="yes" BOOTPROTO="static" IPADDR="xxx.xxx.128.58" NETMASK="255.255.255.248" GATEWAY="xxx.xxx.128.57" > cat ifcfg-eth1-range0 IPADDR_START="xxx.xxx.128.59" IPADDR_END="xxx.xxx.128.62" CLONENUM_START="0" Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface xxx.xxx.128.56 * 255.255.255.248 U 0 0 0 eth1 192.168.122.0 * 255.255.255.0 U 0 0 0 virbr0 link-local * 255.255.0.0 U 1003 0 0 eth1 default xxx.xxx.128.57 0.0.0.0 UG 0 0 0 eth1
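
    For reference, a typical CentOS 6 bridge moves the public addressing onto the bridge and enslaves the NIC to it; since the /29 lives on eth1 here, that is the interface br0 should contain. The sketch below only reuses the values from the question and is illustrative; the extra addresses would then normally be configured inside the guest rather than as eth1:N aliases on the host.

        > cat /etc/sysconfig/network-scripts/ifcfg-br0
        DEVICE=br0
        TYPE=Bridge
        ONBOOT=yes
        BOOTPROTO=static
        IPADDR=xxx.xxx.128.58
        NETMASK=255.255.255.248
        GATEWAY=xxx.xxx.128.57

        > cat /etc/sysconfig/network-scripts/ifcfg-eth1
        DEVICE=eth1
        HWADDR=00:25:90:68:FE:BD
        ONBOOT=yes
        BRIDGE=br0

        > service network restart
        > brctl show     # eth1 should now be listed under br0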

    Read the article

  • Problems with image/file upload in MediaWiki on Windows 2008 Server R2, using wrong temp directory

    - by Lasse V. Karlsen
    I have installed MediaWiki 1.15.2 under IIS as per the MediaWiki installation instructions for Windows 2008 Server. I have configured PHP to use a specific temp directory: upload_tmp_dir="C:\php\uploadtemp" I have specified that MediaWiki is allowed to upload: $wgEnableUploads = true; But when I try to upload an image, I get this error message in my browser: Internal error Could not find file "C:\Windows\Temp\php1AEA.tmp". Retrying will simply give me a new filename, but in the same location. The directory does not have any php* files in it, but since they're "temporary", they might be gone in a flash before Windows Explorer is able to show them so that might be a red herring. I've googled for this, and the most promising lead I found was on this page: Image upload problem - Is this bug fixed?, but since the text says "a bugfix was posted on the bug-report page", but provides no link to which bug page this relates to (php or mediawiki) nor the actual bug report, I've not found conclusively the bug report in question so that didn't help me much. Lots of pages indicates that this is a permission issue, so I tried setting permissions on c:\windows\temp as Modify by Everyone, still no dice. I tried changing the two system environment variables TEMP and TMP to point to C:\Temp instead, but MediaWiki still complains about not finding the file in C:\Windows\Temp. Note that I don't care a lot about where the files will actually be stored temporarily, so c:\windows\temp is fine by me. I do, however, care about them actually being uploaded correctly. Does anyone know of a fix, have any leads I can follow, or whatnot? The server is running Windows 2008 Server R2, all patches installed, and the PHP installed is 5.3.2, using IIS FastCGI.

    Read the article

  • How to access remote LAN machines through an IPsec / xl2tpd VPN (maybe iptables related)

    - by Simon
    I’m trying to do the setup of a IPSEC / XL2TPD VPN for our office, and I’m having some problems accessing the remote local machines after connecting to the VPN. I can connect, and I can browse Internet sites trough the VPN, but as said, I’m unable to connect or even ping the local ones. My Network setup is something like this: INTERNET eth0 ROUTER / VPN eth2 LAN These are some traceroutes behind the VPN: traceroute to google.com (173.194.78.94), 64 hops max, 52 byte packets 1 192.168.1.80 (192.168.1.80) 74.738 ms 71.476 ms 70.123 ms 2 10.35.192.1 (10.35.192.1) 77.832 ms 77.578 ms 77.865 ms 3 10.47.243.137 (10.47.243.137) 78.837 ms 85.409 ms 76.032 ms 4 10.47.242.129 (10.47.242.129) 78.069 ms 80.054 ms 77.778 ms 5 10.254.4.2 (10.254.4.2) 86.174 ms 10.254.4.6 (10.254.4.6) 85.687 ms 10.254.4.2 (10.254.4.2) 85.664 ms traceroute to 192.168.1.3 (192.168.1.3), 64 hops max, 52 byte packets 1 * * * 2 *traceroute: sendto: No route to host traceroute: wrote 192.168.1.3 52 chars, ret=-1 *traceroute: sendto: Host is down traceroute: wrote 192.168.1.3 52 chars, ret=-1 * traceroute: sendto: Host is down 3 traceroute: wrote 192.168.1.3 52 chars, ret=-1 *traceroute: sendto: Host is down traceroute: wrote 192.168.1.3 52 chars, ret=-1 These are my iptables rules: iptables -A INPUT -i lo -j ACCEPT iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT # allow lan to router traffic iptables -A INPUT -s 192.168.1.0/24 -i eth2 -j ACCEPT # ssh iptables -A INPUT -p tcp --dport ssh -j ACCEPT # vpn iptables -A INPUT -p 50 -j ACCEPT iptables -A INPUT -p ah -j ACCEPT iptables -A INPUT -p udp --dport 500 -j ACCEPT iptables -A INPUT -p udp --dport 4500 -j ACCEPT iptables -A INPUT -p udp --dport 1701 -j ACCEPT # dns iptables -A INPUT -s 192.168.1.0/24 -p tcp --dport 53 -j ACCEPT iptables -A INPUT -s 192.168.1.0/24 -p udp --dport 53 -j ACCEPT iptables -t nat -A POSTROUTING -j MASQUERADE # logging iptables -I INPUT 5 -m limit --limit 1/min -j LOG --log-prefix "iptables denied: " --log-level 7 # block all other traffic iptables -A INPUT -j DROP And here are some firewall log lines: Dec 6 11:11:57 router kernel: [8725820.003323] iptables denied: IN=ppp0 OUT= MAC= SRC=192.168.1.81 DST=192.168.1.3 LEN=60 TOS=0x00 PREC=0x00 TTL=255 ID=62174 PROTO=UDP SPT=61910 DPT=53 LEN=40 Dec 6 11:12:29 router kernel: [8725852.035826] iptables denied: IN=ppp0 OUT= MAC= SRC=192.168.1.81 DST=224.0.0.1 LEN=44 TOS=0x00 PREC=0x00 TTL=1 ID=15344 PROTO=UDP SPT=56329 DPT=8612 LEN=24 Dec 6 11:12:36 router kernel: [8725859.121606] iptables denied: IN=ppp0 OUT= MAC= SRC=192.168.1.81 DST=224.0.0.1 LEN=44 TOS=0x00 PREC=0x00 TTL=1 ID=11767 PROTO=UDP SPT=63962 DPT=8612 LEN=24 Dec 6 11:12:44 router kernel: [8725866.203656] iptables denied: IN=ppp0 OUT= MAC= SRC=192.168.1.81 DST=224.0.0.1 LEN=44 TOS=0x00 PREC=0x00 TTL=1 ID=11679 PROTO=UDP SPT=57101 DPT=8612 LEN=24 Dec 6 11:12:51 router kernel: [8725873.285979] iptables denied: IN=ppp0 OUT= MAC= SRC=192.168.1.81 DST=224.0.0.1 LEN=44 TOS=0x00 PREC=0x00 TTL=1 ID=39165 PROTO=UDP SPT=62625 DPT=8612 LEN=24 I’m pretty sure that the problem should be related with iptables, but after trying a lot of different confs, I was unable to find the right one. Any help will be greetly appreciated ;). Kind regards, Simon. EDIT: This is my route table: default 62.43.193.33.st 0.0.0.0 UG 100 0 0 eth0 62.43.193.32 * 255.255.255.224 U 0 0 0 eth0 192.168.1.0 * 255.255.255.0 U 0 0 0 eth2 192.168.1.81 * 255.255.255.255 UH 0 0 0 ppp0

    Read the article

  • MySQL cannot resolve hostnames when checking privileges

    - by Fabio
    I'm going crazy to solve this. I have a mysql installation (on machine db.example.org) which doesn't resolve a given hostname. I gave privileges using hostnames i.e. GRANT USAGE ON *.* TO 'user'@'host1.example.org' IDENTIFIED BY PASSWORD 'secret' GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, INDEX ON `my_database`.* TO 'user'@'host1.example.org' However when I try to connect using mysql -u user -p -h db.example.org I obtain ERROR 1045 (28000): Access denied for user 'user'@'192.168.11.244' (using password: YES) I already checked for correct name resolution in the dns system: $ dig -x 192.168.11.244 ;; ANSWER SECTION: 244.11.168.192.in-addr.arpa. 68900 IN PTR host1.example.org. I've also checked for skip-name-resolve option in mysql variables in fact if I can access from another machine on the same subnet using hostname privileges. The only difference is that host1.example.org and db.example.org point the same ip on the same machine i.e. both db.example.org and host1.example.org have ip 192.168.11.244. In this way all the applications using that database can use the name db.example.org and we can move the data on other hosts (if needed) just by changing the dns record, leaving the application code unchanged. What should I do to solve this or at least to understand what's happening?
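
    A couple of checks that usually narrow this down, sketched below: MySQL only matches a hostname grant when the reverse lookup succeeds from the database server itself and the resulting name resolves back to the same address, and a failed lookup stays in the host cache until it is flushed. The commands are illustrative and run on db.example.org.

        # Does the server resolve the client both ways?
        host 192.168.11.244
        host host1.example.org

        # A previously failed lookup is cached per client IP; clear it after fixing DNS.
        mysql -u root -p -e "FLUSH HOSTS;"

        # Which account entries could the connection match?
        mysql -u root -p -e "SELECT user, host FROM mysql.user WHERE user = 'user';"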

    Read the article

  • Body of email breaks distribution list in Exchange?

    - by widgisoft
    Hi, I have a very odd problem that I'm not sure is a programming issue or a server issue :-p. Basically I'm sending an email to an exchange distribution list that includes a PHP stack trace; during certain faults the trace includes really high level information such as the machine's environment variables (during file reads, etc.). I went through a copy of the email line by line until the email sent and it appears the line: [SUDO_COMMAND] => /etc/init.d/httpd restart is the culprit. Adding a string replacement in before the email is sent allows a successful send. What I don't understand is WHY these stream of characters are causing the issue ONLY on the distribution email. If I send the email to myself as well, i.e. "[email protected]; [email protected]", then I get the email fine. Re-ordering the list doesn't make a difference the group never gets the email. Because the individual gets the email and not the group I'm assuming the fault is with exchange and some rogue filtering - I've gone through it with the sysadmins and there's no filtering of any sort on that group... so maybe it's a bug? I can't find anyone else having recorded this specific fault so I figured I'd open it here. For now I'm just not using the distribution list but it'd be nice to eventually find the solution. Many thanks, Chris

    Read the article

  • Why is IIS Anonymous authentication being used with administrative UNC drive access?

    - by Mark Lindell
    My account is local administrator on my machine. If I try to browse to a non-existent drive letter on my own box using a UNC path name: \mymachine\x$ my account would get locked out. I would also get the following warning (Event ID 100, Type “Warning”) 5 times under the “System” group in Event Viewer on my box: The server was unable to logon the Windows NT account 'ourdomain\myaccount' due to the following error: Logon failure: unknown user name or bad password. I would also get the following warning 3 times: The server was unable to logon the Windows NT account 'ourdomain\myaccount' due to the following error: The referenced account is currently locked out and may not be logged on to. On the domain controller, Event ID 680 of type “Failure Audit” would appear 4 times under the “Security” group in Event Viewer: Logon attempt by: MICROSOFT_AUTHENTICATION_PACKAGE_V1_0 Logon account: myaccount Followed by Event ID 644: User Account Locked Out: Target Account Name: myaccount Target Account ID: OURDOMAIN\myaccount Caller Machine Name: MYMACHINE Caller User Name: STAN$ Caller Domain: OURDOMAIN Caller Logon ID: (0x0,0x3E7) Followed by another 4 errors having Event ID 680. Strangely, every time I tried to browse to the UNC path I would be prompted for a user name and password, the above errors would be written to the log, and my account would be locked out. When I hit “Cancel” in response to the user name/password prompt, the following message box would display: Windows cannot find \mymachine\x$. Check the spelling and try again, or try searching for the item by clicking the Start button and then clicking Search. I checked with others in the group using XP and they only got the above message box when browsing to a “bad” drive letter on their box. No one else was prompted for a user name/password and then locked out. So, every time I tried to browse to the “bad” drive letter, behind the scenes XP was trying to login 8 times using bad credentials (or, at least a bad password as the login was correct), causing my account to get locked out on the 4th try. Interestingly, If I tried browsing to a “good” drive such as “c$” it would work fine. As a test, I tried logging on to my box as a different login and browsing the “bad” UNC path. Strangely, my “ourdomain\myaccount” account was getting locked out – not the one I was logged in as! I was totally confused as to why the credentials for the other login were being passed. After much Googling, I found a link referring to some IIS settings I was vaguely familiar with from the past but could not see how they would affect this issue. It was related to the IIS directory security setting “Anonymous access and authentication control” located under: Control Panel/Administrative Tools/Computer Management/Services and Applications/Internet Information Services/Web Sites/Default Web Site/Properties/Directory Security/Anonymous access and authentication control/Edit/Password I found no indication while scouring the Internet that this property was related to my UNC problem. But, I did notice that this property was set to my domain user name and password. And, my password did age recently but I had not reset the password accordingly for this property. Sure enough, keying in the new password corrected the problem. I was no longer prompted for a user name/password when browsing the UNC path and the account lock-outs ceased. Now, a couple of questions: Why would an IIS setting affect the browsing of a UNC path on a local box? 
Why had I not encountered this problem before? My password has aged several times and I’ve never encountered this problem. And, I can’t remember the last time I updated the “Anonymous access” IIS password it’s been so long. I’ve run the script after a password reset before and never had my account locked-out due to the UNC problem (the script accesses UNC paths as a normal part of its processing). Windows Update did install “Cumulative Security Update for Internet Explorer 7 for Windows XP (KB972260)” on my box on 7/29/2009. I wonder if this is responsible.

    Read the article

  • smartctl -t long isn't finishing

    - by xenoterracide
    I been running smartctl -t long on a drive for about 2 days now and it seems to be stalled at 10%. short and conveyance both passed. I have to send 1 of 2 drives purchased back I found badblocks with badblocks (none on this drive and I'ts made over a pass already). I'm just wondering if I should be concerned about this. smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Device Model: WDC WD10EARS-00Y5B1 Serial Number: WD-WMAV51582123 Firmware Version: 80.00A80 User Capacity: 1,000,204,886,016 bytes Device is: Not in smartctl database [for details use: -P showall] ATA Version is: 8 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Mon May 10 22:19:52 2010 EDT SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled. Self-test execution status: ( 241) Self-test routine in progress... 10% of test remaining. Total time to complete Offline data collection: (20100) seconds. Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 2) minutes. Extended self-test routine recommended polling time: ( 231) minutes. Conveyance self-test routine recommended polling time: ( 5) minutes. SCT capabilities: (0x3031) SCT Status supported. SCT Feature Control supported. SCT Data Table supported. 
SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 2 3 Spin_Up_Time 0x0027 131 131 021 Pre-fail Always - 6408 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 12 5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0 7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0 9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 148 10 Spin_Retry_Count 0x0032 100 253 000 Old_age Always - 0 11 Calibration_Retry_Count 0x0032 100 253 000 Old_age Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 10 192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 7 193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 174 194 Temperature_Celsius 0x0022 106 102 000 Old_age Always - 41 196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0 197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0 200 Multi_Zone_Error_Rate 0x0008 200 200 000 Old_age Offline - 0 SMART Error Log Version: 1 No Errors Logged SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Conveyance offline Completed without error 00% 99 - # 2 Extended offline Interrupted (host reset) 10% 30 - # 3 Short offline Completed without error 00% 0 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay.
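
    A long test on a drive this size should finish in a few hours (the drive itself estimates 231 minutes above), and the earlier "Interrupted (host reset)" entry in the self-test log shows the kind of event that stalls or restarts it. One option, sketched below with an assumed device name, is to poll the progress and, if it never moves, abort and rerun the test while the disk is otherwise idle.

        DEV=/dev/sdb    # whichever drive is under test

        # Poll the progress field instead of guessing.
        watch -n 600 "smartctl -a $DEV | grep -A1 'Self-test execution status'"

        # If it stays at the same percentage for hours, abort and start over.
        smartctl -X "$DEV"
        smartctl -t long "$DEV"
        smartctl -l selftest "$DEV"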

    Read the article

  • htaccess rewrite rules in Nginx: setting the rewrite path

    - by ct2k7
    I have a htaccess file I'm trying to convert into an nignx config file. Here's my htaccess file. RewriteEngine on RewriteCond %{SCRIPT_FILENAME} !-f RewriteCond %{SCRIPT_FILENAME} !-d RewriteRule !\.(jpg|css|js|gif|png)$ public/ [L] RewriteRule !\.(jpg|css|js|gif|png)$ public/index.php?url=$1 And the rules I have in my nginx config file: location / { if ($request_uri !~ "-f"){ rewrite !\.(jpg|css|js|gif|png)$ public/ break; } rewrite !\.(jpg|css|js|gif|png)$ public/index.php?url=$1; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 location ~ \.php$ { # Move to the @missing part when the file doesn't exist try_files $uri @missing; # Fix for server variables that behave differently under nginx/$ fastcgi_split_path_info ^(.+\.php)(/.+)$; # Include the standard fastcgi_params file included with ngingx include fastcgi_params; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_index index.php; # Pass to upstream PHP-FPM; This must match whater you name you$ #fastcgi_pass phpfpm; fastcgi_pass 127.0.0.1:9000; } location @missing { rewrite ^(.*)$ public/index.php?url=$1 break; } However, when I hit /, I get a 403 Forbidden, but I can get to /public/index.php, thus the rewrite isn't working. Any ideas on what I'm doing wrong?

    Read the article

  • Output Caching with IIS7 - How To for a dynamic aspx page?

    - by Lieven Cardoen
    I have a RetrieveBlob.aspx that gets some query string variables and returns an asset. Eeach url corresponds to a unique asset. In the RetrieveBlob.aspx a Cache Profile is set. In Web.Config the profile looks like (under system.web tag: <caching> <outputCache enableOutputCache="true" /> <outputCacheSettings> <outputCacheProfiles> <add duration="14800" enabled="true" varyByParam="*" name="AssetCacheProfile" /> </outputCacheProfiles> </outputCacheSettings> </caching> Ok, this works fine. When I put a breakpoint in the code behind of RetrieveBlob.aspx, it gets triggered the first time, and all the other times not. Now, I throw away the Cache Profile and instead I'm having this in my Web.Config under System.WebServer: <caching> <profiles> <add extension=".swf" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" /> <add extension=".flv" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" /> <add extension=".gif" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" /> <add extension=".png" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" /> <add extension=".mp3" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" /> <add extension=".jpeg" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" /> <add extension=".jpg" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" /> </profiles> </caching> Now the caching doesn't work anymore. What am I doing wrong? Is it possible to configure under Caching tag of System.WebServer a Caching Profile for a Dynamic aspx page?

    Read the article

  • adding trac to apache2 configuration file

    - by Michael
    I currently have apache2 running from a mythtv/mythweb install. This made two config files available in sites-enabled. One of them ("default-mythbuntu") has the VirtualHost directive and seems like a normal file (except a change to the directory index). There is also a mythweb.conf file that only has directives and sets various variables for mythweb. I want to host a trac site as well. According to this site: http://trac.edgewall.org/wiki/TracOnUbuntu there are some settings I need to set for the Trac site. They give me directions for making a VirtualHost, but I think I should use the current VirtualHost and just add the directives (I'll need to change the default location they point to from the site above to just point to the trac location). Where should I put these directives? Can I make a Trac.conf with just the settings for Trac and enable it, or do I need to put them in the default-mythbuntu file? I don't like the latter because it doesn't separate out the Trac configs. How does Apache know that mythweb.conf (and the trac.conf I want to make) belong to the VirtualHost defined in default-mythbuntu? It is the only VirtualHost being defined on my system, if that matters.
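
    To the last two questions: directives in any enabled conf file that are not wrapped in a <VirtualHost> block end up in the main server configuration, which the single defined virtual host inherits; that is why mythweb.conf works without declaring one, and a separate Trac file can work the same way. A sketch is below, assuming the mod_python deployment described on the TracOnUbuntu page and an illustrative environment path.

        > cat /etc/apache2/sites-available/trac
        <Location /trac>
            SetHandler mod_python
            PythonInterpreter main_interpreter
            PythonHandler trac.web.modpython_frontend
            PythonOption TracEnv /var/trac/myproject
            PythonOption TracUriRoot /trac
        </Location>

        > a2ensite trac
        > apache2ctl configtest
        > /etc/init.d/apache2 reload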

    Read the article

  • Prevent Roaming profiles from syncing certain elements

    - by user29919
    Hello everyone, I'm somewhat new to the Server 2008 front, and I'm afraid I've hit my first snag: I've set up roaming profiles, and they appear to be working too well. Is there a way to limit, ideally on a folder/object basis, what gets synced with a roaming profile? What I'm trying to do is: 1) stop my roaming profile from syncing desktop layout - I run a dual-screen desktop and a laptop, and it's really annoying to have to reposition everything after logging onto the laptop, because it forces everything onto one screen. 2) stop it from syncing registry variables - specifically, I want Visual Studio to load different setting files on each computer. Currently, the variable that contains that path is getting synced whenever I log in, so I get the settings from whatever box I last logged out from. 3) stop syncing the start menu - this one's not as big, but I'm noticing 'program not found' icons even for programs that are installed. they work when I click them - they just look ugly. I'm running Windows SBS 2008 x64 with two Win7 clients (x86 Pro, and X64 Ultimate). Is there a simple way to do that? Or am I trying to work too much against what roaming profiles are designed for? I could, of course, set up different profiles for the desktop and laptop, but that seems to defeat the point of roaming profiles entirely... Thanks in advance! Any help will be much appreciated =)

    Read the article

  • Cisco VPNClient from Mac won't connect using iPhone Tethering

    - by Dan Short
    I just set up iPhone tethering from my Snow Leopard Macbook Pro to my iPhone 3GS with the Datapro 4GB plan from AT&T. When attempting to connect to my corporate VPN from the MacBook Pro with Cisco VPNClient 4.9.01 (0100) I get the following log information: Cisco Systems VPN Client Version 4.9.01 (0100) Copyright (C) 1998-2006 Cisco Systems, Inc. All Rights Reserved. Client Type(s): Mac OS X Running on: Darwin 10.6.0 Darwin Kernel Version 10.6.0: Wed Nov 10 18:13:17 PST 2010; root:xnu-1504.9.26~3/RELEASE_I386 i386 Config file directory: /etc/opt/cisco-vpnclient 1 13:02:50.791 02/22/2011 Sev=Info/4 CM/0x43100002 Begin connection process 2 13:02:50.791 02/22/2011 Sev=Warning/2 CVPND/0x83400011 Error -28 sending packet. Dst Addr: 0x0AD337FF, Src Addr: 0x0AD33702 (DRVIFACE:1158). 3 13:02:50.791 02/22/2011 Sev=Warning/2 CVPND/0x83400011 Error -28 sending packet. Dst Addr: 0x0A2581FF, Src Addr: 0x0A258102 (DRVIFACE:1158). 4 13:02:50.792 02/22/2011 Sev=Info/4 CM/0x43100004 Establish secure connection using Ethernet 5 13:02:50.792 02/22/2011 Sev=Info/4 CM/0x43100024 Attempt connection with server "209.235.253.115" 6 13:02:50.792 02/22/2011 Sev=Info/4 CVPND/0x43400019 Privilege Separation: binding to port: (500). 7 13:02:50.793 02/22/2011 Sev=Info/4 CVPND/0x43400019 Privilege Separation: binding to port: (4500). 8 13:02:50.793 02/22/2011 Sev=Info/6 IKE/0x4300003B Attempting to establish a connection with 209.235.253.115. 9 13:02:51.293 02/22/2011 Sev=Warning/2 CVPND/0x83400018 Output size mismatch. Actual: 0, Expected: 237. (DRVIFACE:1319) 10 13:02:51.894 02/22/2011 Sev=Warning/2 CVPND/0x83400018 Output size mismatch. Actual: 0, Expected: 237. (DRVIFACE:1319) 11 13:02:52.495 02/22/2011 Sev=Warning/2 CVPND/0x83400018 Output size mismatch. Actual: 0, Expected: 237. (DRVIFACE:1319) 12 13:02:53.096 02/22/2011 Sev=Warning/2 CVPND/0x83400018 Output size mismatch. Actual: 0, Expected: 237. (DRVIFACE:1319) 13 13:02:53.698 02/22/2011 Sev=Warning/2 CVPND/0x83400018 Output size mismatch. Actual: 0, Expected: 237. (DRVIFACE:1319) 14 13:02:54.299 02/22/2011 Sev=Warning/2 CVPND/0x83400018 Output size mismatch. Actual: 0, Expected: 237. (DRVIFACE:1319) 15 13:02:54.299 02/22/2011 Sev=Info/4 IKE/0x43000075 Unable to acquire local IP address after 5 attempts (over 5 seconds), probably due to network socket failure. 16 13:02:54.299 02/22/2011 Sev=Warning/2 IKE/0xC300009A Failed to set up connection data 17 13:02:54.299 02/22/2011 Sev=Info/4 CM/0x4310001C Unable to contact server "209.235.253.115" 18 13:02:54.299 02/22/2011 Sev=Info/5 CM/0x43100025 Initializing CVPNDrv 19 13:02:54.300 02/22/2011 Sev=Info/4 CVPND/0x4340001F Privilege Separation: restoring MTU on primary interface. 20 13:02:54.300 02/22/2011 Sev=Info/4 IKE/0x43000001 IKE received signal to terminate VPN connection 21 13:02:54.300 02/22/2011 Sev=Info/4 IPSEC/0x43700008 IPSec driver successfully started 22 13:02:54.300 02/22/2011 Sev=Info/4 IPSEC/0x43700014 Deleted all keys 23 13:02:54.300 02/22/2011 Sev=Info/4 IPSEC/0x4370000D Key(s) deleted by Interface (192.168.0.171) 24 13:02:54.300 02/22/2011 Sev=Info/4 IPSEC/0x43700014 Deleted all keys 25 13:02:54.300 02/22/2011 Sev=Info/4 IPSEC/0x43700014 Deleted all keys 26 13:02:54.300 02/22/2011 Sev=Info/4 IPSEC/0x43700014 Deleted all keys 27 13:02:54.300 02/22/2011 Sev=Info/4 IPSEC/0x4370000A IPSec driver successfully stopped The key line is 15: 15 13:02:54.299 02/22/2011 Sev=Info/4 IKE/0x43000075 Unable to acquire local IP address after 5 attempts (over 5 seconds), probably due to network socket failure. 
I can't find anything online about this. I found a single entry for the error message in Google, and it was a swedish (or some other nordic language site) that didn't have an answer to the question. I've tried connecting through both USB and Bluetooth tethering to the iPhone, and they both return the exact same results. I don't have direct control over the firewall, but if changes are necessary to make it work, I may be able to get the powers-that-be to make adjustments. A solution that doesn't require reconfiguring the firewall would be far better of course... Does anyone know what I can do to make this behave? Thanks, Dan

    Read the article

  • APC not caching many files

    - by tetranz
    Hello I have a Drupal site running on a VPS at Linode with PHP 5.2.10 and APC 3.1.6. It never caches more than about 25 files and barely uses any of its available memory. Drupal has hundreds of php files. I have another server where APC seems to work well and does indeed cache hundreds of files. The only difference with that site is that it runs Ubuntu 10.04 and php 5.3.2. The config settings are the same. What could be wrong? I'll paste the config from apc.php below. This is after hitting multiple parts of Drupal. Thanks APC Version 3.1.6 PHP Version 5.2.10-2ubuntu6.5 APC Host xxx.example.com Server Software Apache/2.2.12 (Ubuntu) Shared Memory 1 Segment(s) with 32.0 MBytes (mmap memory, pthread mutex locking) Start Time 2010/12/02 11:32:17 Uptime 3 minutes File Upload Support 1 File Cache Information Cached Files 21 ( 1.4 MBytes) Hits 169 Misses 21 Request Rate (hits, misses) 1.00 cache requests/second Hit Rate 0.89 cache requests/second Miss Rate 0.11 cache requests/second Insert Rate 0.17 cache requests/second Cache full count 0 User Cache Information Cached Variables 0 ( 0.0 Bytes) Hits 0 Misses 0 Request Rate (hits, misses) 0.00 cache requests/second Hit Rate 0.00 cache requests/second Miss Rate 0.00 cache requests/second Insert Rate 0.00 cache requests/second Cache full count 0 Runtime Settings apc.cache_by_default 1 apc.canonicalize 1 apc.coredump_unmap 0 apc.enable_cli 0 apc.enabled 1 apc.file_md5 0 apc.file_update_protection 2 apc.filters apc.gc_ttl 3600 apc.include_once_override 0 apc.lazy_classes 0 apc.lazy_functions 0 apc.max_file_size 1M apc.mmap_file_mask apc.num_files_hint 1000 apc.preload_path apc.report_autofilter 0 apc.rfc1867 0 apc.rfc1867_freq 0 apc.rfc1867_name APC_UPLOAD_PROGRESS apc.rfc1867_prefix upload_ apc.rfc1867_ttl 3600 apc.shm_segments 1 apc.shm_size 32M apc.slam_defense 1 apc.stat 1 apc.stat_ctime 0 apc.ttl 0 apc.use_request_time 1 apc.user_entries_hint 4096 apc.user_ttl 0 apc.write_lock 1
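
    A couple of things worth ruling out, sketched below with illustrative URLs: APC's cache is only shared when PHP runs as mod_php or FPM; if Drupal is being served through plain CGI or suPHP, each request starts with an empty cache, and apc.php may also be reporting on a different PHP instance than the one Drupal actually uses.

        # How is PHP being run? Look for mod_php in Apache's module list.
        apache2ctl -M 2>/dev/null | grep -i php

        # Generate some traffic, then see whether the cached-file count grows.
        for i in 1 2 3 4 5; do curl -s -o /dev/null "http://localhost/"; done
        curl -s "http://localhost/apc.php" | grep -o "Cached Files[^<]*"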

    Read the article

  • Installing vim7.2 on Solaris Sparc 2.6 as non-root

    - by Tobbe
    I'm trying to install vim to $HOME/bin by compiling the sources. ./configure --prefix=$home/bin seems to work, but when running make I get: > make Starting make in the src directory. If there are problems, cd to the src directory and run make there cd src && make first gcc -c -I. -Iproto -DHAVE_CONFIG_H -DFEAT_GUI_GTK -I/usr/include/gtk-2.0 -I/usr/lib/gtk-2.0/include -I/usr/include/atk-1.0 -I/usr/include/pango-1.0 -I/usr/openwin/include -I/usr/sfw/include -I/usr/sfw/include/freetype2 -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include -g -O2 -I/usr/openwin/include -o objects/buffer.o buffer.c In file included from buffer.c:28: vim.h:41: error: syntax error before ':' token In file included from os_unix.h:29, from vim.h:245, from buffer.c:28: /usr/include/sys/stat.h:251: error: syntax error before "blksize_t" /usr/include/sys/stat.h:255: error: syntax error before '}' token /usr/include/sys/stat.h:309: error: syntax error before "blksize_t" /usr/include/sys/stat.h:310: error: conflicting types for 'st_blocks' /usr/include/sys/stat.h:252: error: previous declaration of 'st_blocks' was here /usr/include/sys/stat.h:313: error: syntax error before '}' token In file included from /opt/local/bin/../lib/gcc/sparc-sun-solaris2.6/3.4.6/include/sys/signal.h:132, from /usr/include/signal.h:26, from os_unix.h:163, from vim.h:245, from buffer.c:28: /usr/include/sys/siginfo.h:259: error: syntax error before "ctid_t" /usr/include/sys/siginfo.h:292: error: syntax error before '}' token /usr/include/sys/siginfo.h:294: error: syntax error before '}' token /usr/include/sys/siginfo.h:390: error: syntax error before "ctid_t" /usr/include/sys/siginfo.h:398: error: conflicting types for '__fault' /usr/include/sys/siginfo.h:267: error: previous declaration of '__fault' was here /usr/include/sys/siginfo.h:404: error: conflicting types for '__file' /usr/include/sys/siginfo.h:273: error: previous declaration of '__file' was here /usr/include/sys/siginfo.h:420: error: conflicting types for '__prof' /usr/include/sys/siginfo.h:287: error: previous declaration of '__prof' was here /usr/include/sys/siginfo.h:424: error: conflicting types for '__rctl' /usr/include/sys/siginfo.h:291: error: previous declaration of '__rctl' was here /usr/include/sys/siginfo.h:426: error: syntax error before '}' token /usr/include/sys/siginfo.h:428: error: syntax error before '}' token /usr/include/sys/siginfo.h:432: error: syntax error before "k_siginfo_t" /usr/include/sys/siginfo.h:437: error: syntax error before '}' token In file included from /usr/include/signal.h:26, from os_unix.h:163, from vim.h:245, from buffer.c:28: /opt/local/bin/../lib/gcc/sparc-sun-solaris2.6/3.4.6/include/sys/signal.h:173: error: syntax error before "siginfo_t" In file included from os_unix.h:163, from vim.h:245, from buffer.c:28: /usr/include/signal.h:111: error: syntax error before "siginfo_t" /usr/include/signal.h:113: error: syntax error before "siginfo_t" buffer.c: In function `buflist_new': buffer.c:1502: error: storage size of 'st' isn't known buffer.c: In function `buflist_findname': buffer.c:1989: error: storage size of 'st' isn't known buffer.c: In function `setfname': buffer.c:2578: error: storage size of 'st' isn't known buffer.c: In function `otherfile_buf': buffer.c:2836: error: storage size of 'st' isn't known buffer.c: In function `buf_setino': buffer.c:2874: error: storage size of 'st' isn't known buffer.c: In function `buf_same_ino': buffer.c:2894: error: dereferencing pointer to incomplete type buffer.c:2895: error: dereferencing pointer to 
incomplete type *** Error code 1 make: Fatal error: Command failed for target `objects/buffer.o' Current working directory /home/xluntor/vim72/src *** Error code 1 make: Fatal error: Command failed for target `first' How do I fix the make errors? Or is there another way to install vim as non-root? Thanks in advance

    Read the article

  • MYSQL - Multiple set values in one update statement [migrated]

    - by Maurzank
    MYSQL - MULTIPLE SET VALUES IN ONE UPDATE STATEMENT USING 2 TABLES AS REFERENCE AND STORING VALUES IN ONE OF THOSE TABLES WITH A SPECIFIC LOGIC. Hello people, A problem came up by making an UPDATE. The example issue is as follows: CURRENUSRTABLE +------------+-------+ | ID | STATE | +------------+-------+ | 123 | 3 | | 456 | 3 | | 789 | 3 | +------------+-------+ HISTORYTABLE +------------+------------+-----+ | ID | TRDATE | ACT | +------------+------------+-----+ | 123 | 2013-11-01 | 5 | | 456 | 2013-11-01 | 5 | | 789 | 2013-11-01 | 5 | | 123 | 2013-11-02 | 4 | | 456 | 2013-11-02 | 4 | | 789 | 2013-11-02 | 4 | | 123 | 2013-11-03 | 3 | | 456 | 2013-11-03 | 3 | | 789 | 2013-11-03 | 3 | +------------+------------+-----+ I'm using these variables: @BA=3, @DE=5, @BL=4, What I'm trying to do is an update on CURRENUSRTABLE.STATE using HISTORYTABLE.ACT with the following logic: STATE value will be updated as ACT value, except when STATE value is 4 and ACT is 3, then STATE will be 5 I made this statement: UPDATE CURRENUSRTABLE RIGHT OUTER JOIN HISTORYTABLE ON HISTORYTABLE.ID=CURRENUSRTABLE.ID SET CURRENUSRTABLE.STATE= ( SELECT CASE HISTORYTABLE.ACT WHEN @DE THEN @DE WHEN @BL THEN @BL WHEN @BA THEN CASE CURRENUSRTABLE.STATE WHEN @BL THEN @DE ELSE @BA END END ORDER BY HISTORYTABLE.TRDATE,FIELD(HISTORYTABLE.ACT,@DE,@BL,@BA) ) WHERE HISTORYTABLE.TRDATE BETWEEN '2013-11-01' AND '2013-11-01' I'm intentionally using "RIGHT OUTER JOIN" and "HISTORYTABLE.TRDATE BETWEEN" because I'd like to change the values in CURRENUSRTABLE using a timeframe of more than one day. If I execute this statement many times using only one day (i.e. "BETWEEN '2013-11-01' AND '2013-11-01'" and then "BETWEEN '2013-11-02' AND '2013-11-02'"... etc ) it works perfectly, but if it is executed using the dates "BETWEEN '2013-11-01' AND '2013-11-03'" the results on CURRENUSRTABLE.STATE are 3, which is wrong, it should be 5. I think the problem relies on "CASE CURRENUSRTABLE.STATE" when uses "HISTORYTABLE.TRDATE BETWEEN '2013-11-01' AND '2013-11-03'", because it reads the STATE 9 times which has not been commited yet until the statement ends. Query OK, 9 rows affected (0.00 sec) Rows matched: 9 Changed: 9 Warnings: 0 Maybe the solution is very simple, but unfortunately I've not much practice on MySQL since I've worked with it less than 2 months :) Is there any suggestions to solve this issue? PD: MySQL version is 4.1.22, I know is very old an EOL, unfortunately I have to make these statements on this version. Thanks!

    Read the article
