Search Results

Search found 23127 results on 926 pages for 'based'.


  • Boost MultiIndex - objects or pointers (and how to use them?)?

    - by Sarah
    I'm programming an agent-based simulation and have decided that Boost's MultiIndex is probably the most efficient container for my agents. I'm not a professional programmer, and my background is very spotty. I've two questions:

    1. Is it better to have the container contain the agents (of class Host) themselves, or is it more efficient for the container to hold Host *? Hosts will sometimes be deleted from memory (that's my plan, anyway... need to read up on new and delete). Hosts' private variables will get updated occasionally, which I hope to do through the modify function in MultiIndex. There will be no other copies of Hosts in the simulation, i.e., they will not be used in any other containers.

    2. If I use pointers to Hosts, how do I set up the key extraction properly? My code below doesn't compile.

        // main.cpp - ATTEMPTED POINTER VERSION
        ...
        #include <boost/multi_index_container.hpp>
        #include <boost/multi_index/hashed_index.hpp>
        #include <boost/multi_index/member.hpp>
        #include <boost/multi_index/ordered_index.hpp>
        #include <boost/multi_index/mem_fun.hpp>
        #include <boost/tokenizer.hpp>

        typedef multi_index_container<
            Host *,
            indexed_by<
                // hash by Host::id
                hashed_unique< BOOST_MULTI_INDEX_MEM_FUN(Host,int,Host::getID) >  // arg errors here
            > // end indexed_by
        > HostContainer;

        ...
        int main() {
            ...
            HostContainer testHosts;
            Host * newHostPtr;
            newHostPtr = new Host( t, DOB, idCtr, 0, currentEvents );
            testHosts.insert( newHostPtr );
            ...
        }

    I can't find a precisely analogous example in the Boost documentation, and my knowledge of C++ syntax is still very weak. The code does appear to work when I replace all the pointer references with the class objects themselves. As best I can read it, the Boost documentation (see summary table at bottom) implies I should be able to use member functions with pointer elements.
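
    A minimal sketch of the pointer variant, assuming Host exposes an int getID() const accessor (the signature is an assumption). Boost.MultiIndex key extractors such as const_mem_fun are documented to work with pointer elements by dereferencing them; the usual source of the "arg errors" is that the macro form expects the bare member-function name (it prepends &Host:: itself) and a const member function:

        // sketch.cpp - not the original code; a minimal compilable example
        #include <boost/multi_index_container.hpp>
        #include <boost/multi_index/hashed_index.hpp>
        #include <boost/multi_index/mem_fun.hpp>

        class Host {
        public:
            explicit Host(int id) : id_(id) {}
            int getID() const { return id_; }   // must be const for const_mem_fun
        private:
            int id_;
        };

        namespace bmi = boost::multi_index;

        // Elements are Host*; const_mem_fun dereferences the pointer before calling getID().
        typedef bmi::multi_index_container<
            Host*,
            bmi::indexed_by<
                bmi::hashed_unique<
                    bmi::const_mem_fun<Host, int, &Host::getID>
                >
            >
        > HostContainer;

        int main() {
            HostContainer hosts;
            Host* h = new Host(42);
            hosts.insert(h);
            // the container stores only the pointer, so the object must be
            // erased and deleted explicitly when the agent dies
            hosts.erase(h->getID());
            delete h;
        }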

    Read the article

  • OpenVPN: Connection established but can’t connect to server

    - by Maik
    I am trying to set up OpenVPN to allow me to connect a number of laptops to my network in a way that allows the laptops to connect to specific computers via HTTP (to e.g. a server management page) and windows shares (to access files) In the test environment my laptops live in a network with a 192.168.1.X address range. The host-network has a 10.66.77.X address range The server hosting the OpenVPN server has address 10.77.10.20. I need to access some application server web pages on this machine, accessible on various ports The server with the windows shares as well as some other web based pages I need to access is on address 10.66.77.20 The config files for server and laptop are attached below. The laptop establishes the VPN connection without problems, but I cannot access any of the machines, even a simple ping fails. Maybe a routing problem? The routing table for the laptop is shown below as well - every idea is appreciated! Thanks! Maik Server config file port 1194 dev tun tls-server ca /etc/openvpn/keys/ca.crt cert /etc/openvpn/keys/projects.crt key /etc/openvpn/keys/projects.key dh /etc/openvpn/keys/dh1024.pem server 10.8.0.0 255.255.255.0 ifconfig-pool-persist ipp.txt push "route 10.66.77.0 255.255.255.0" keepalive 10 60 inactive 600 route 10.8.0.1 255.255.255.0 user openvpn group openvpn persist-tun persist-key verb 4 client config file dev tun proto udp remote SERVERADDR 1194 resolv-retry infinite nobind persist-key persist-tun ca ca.crt cert accountingLaptop.crt key accountingLaptop.key ns-cert-type server comp-lzo verb 3 Resulting routing table on client laptop C:\Documents and Settings\User>route print =========================================================================== Interface List 0x1 ........................... MS TCP Loopback interface 0x2 ...00 23 5a 9b 64 9b ...... Atheros AR8132 PCI-E Fast Ethernet Controller - Packet Scheduler Miniport 0x3 ...00 24 2c 35 c9 6b ...... Dell Wireless 1395 WLAN Mini-Card - Packet Sched uler Miniport 0x4 ...00 ff 5e 03 43 9b ...... TAP-Win32 Adapter V9 - Packet Scheduler Miniport =========================================================================== =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 192.168.1.1 192.168.1.129 25 10.8.0.1 255.255.255.255 10.8.0.5 10.8.0.6 1 10.8.0.4 255.255.255.252 10.8.0.6 10.8.0.6 30 10.8.0.6 255.255.255.255 127.0.0.1 127.0.0.1 30 10.66.77.0 255.255.255.0 10.8.0.5 10.8.0.6 1 10.255.255.255 255.255.255.255 10.8.0.6 10.8.0.6 30 127.0.0.0 255.0.0.0 127.0.0.1 127.0.0.1 1 192.168.1.0 255.255.255.0 192.168.1.129 192.168.1.129 25 192.168.1.129 255.255.255.255 127.0.0.1 127.0.0.1 25 192.168.1.255 255.255.255.255 192.168.1.129 192.168.1.129 25 224.0.0.0 240.0.0.0 10.8.0.6 10.8.0.6 30 224.0.0.0 240.0.0.0 192.168.1.129 192.168.1.129 25 255.255.255.255 255.255.255.255 10.8.0.6 2 1 255.255.255.255 255.255.255.255 10.8.0.6 10.8.0.6 1 255.255.255.255 255.255.255.255 192.168.1.129 192.168.1.129 1 Default Gateway: 192.168.1.1 =========================================================================== Persistent Routes: None
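
    In setups like this the tunnel usually comes up fine and the failure is on the return path: the 10.66.77.x machines have no route back to the 10.8.0.x client network. A hedged sketch of the two common fixes, assuming the OpenVPN server is a Linux box whose LAN interface is eth0 (the interface name is an assumption):

        # on the OpenVPN server: let it forward packets between tun0 and the LAN
        echo 1 > /proc/sys/net/ipv4/ip_forward

        # option 1: add a route for 10.8.0.0/24 via 10.77.10.20 on the LAN's default gateway
        # option 2: NAT the VPN clients so LAN hosts only ever see the server's own address
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE

    Note also that the server config's "route 10.8.0.1 255.255.255.0" line mixes a host address with a /24 mask, which looks odd and is worth double-checking.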

    Read the article

  • Long pause when accessing DFS namespace

    - by Matt
    We've recently migrated our Windows network to use DFS for shared files. DFS is working well, except for one annoying problem: users experience a significant delay when they try to access a DFS namespace that they have not accessed for some time. I have tried to troubleshoot the issue but have not had any success so far, and I was hoping someone here may have some pointers to help resolve the problem. Firstly, some background on our network: The network uses a Windows 2008 functional level Active Directory domain with two Windows 2008 DCs and two DNS servers (one on each of the DCs). The network is DNS only - no WINS. All computers are located at the same site and connected by Gigabit Ethernet. We have approximately 20 Domain-based DFS namespaces in Windows 2008 mode, and each DFS namespace has two Windows 2008 DFS namespace servers (the same two servers for all namespaces). All namespace servers are in FQDN mode and all folder targets are specified using their FQDN. All computers are up-to-date with Service Packs and patches. The actual folder targets (i.e. the SMB shares our DFS folders point to) are scattered across several file and application servers, all running Windows 2008 bar two application servers which run Windows 2003 R2, with no replication setup at all (e.g. all DFS folders currently only have one folder target). Some more detail on the problem: The namespace access delay is generally 1 - 10 seconds long and seems to occur when a particular computer has not accessed the requested namespace for approximately five minutes or more. For example, if the user has not accessed \\domain.name\namespace1\ for more than five minutes and attempts to access \\domain.name\namespace1\ via Windows Explorer, the Explorer window will freeze for 1 - 10 seconds before finally resuming and displaying the folders that exist in \\domain.name\namespace1. If they then close the Explorer window and attempt to access \\domain.name\namespace1\ again within five minutes the contents will be displayed almost instantly - if they wait longer than five minutes it will go through the 1 - 10 second pause again. Once "inside" the namespace everything is nice and snappy, it's just the initial connection to the namespace that is slow. The browsing delays seem to affect all variants of Windows that we use (Windows 2008 x64 SP2, Windows 2003 R2 x86 SP2, Windows XP Pro x86 SP3) - it is possibly a bit worse in Windows XP / 2003 than in Windows 2008, but I'm not sure if the difference isn't just psychological. Accessing the underlying folder targets directly exhibits no delay at all - i.e. if the SMB shares pointed to by DFS are accessed directly (bypassing DFS) then there is no pause. During trouble-shooting I noticed that the "Cache duration" for all of our DFS roots is set to 300 seconds - 5 minutes. Given that this is the same amount of time required to trigger the pause I assume that this caching is somehow related, although I am unsure exactly what is cached on the client and hence what needs to be looked up again after 5 minutes have elapsed. 
    In trying to resolve the problem I have already tried / checked the following (without success):
    - Run dcdiag on both Domain Controllers - no problems found
    - Done some basic DNS server checks without finding any problems - I don't know how to check the DNS servers in detail, but I would add that the network is not exhibiting any other strange behavior that may point to a DNS problem
    - Disabled Anti-virus on clients and servers
    - Removing one of the namespace servers from a couple of namespaces - no difference
    So that's where I'm up to - and I'm out of ideas. Can anyone suggest what may be causing the delays and/or what I should be trying next?
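
    The five-minute figure matches the referral cache TTL, so one way to narrow this down is to inspect and flush the client-side referral cache on demand rather than waiting for it to expire. A hedged sketch using dfsutil from an affected client (on XP the tool comes from the 2003 Support Tools; switch names may vary slightly between versions):

        rem dump cached referrals and their remaining TTLs
        dfsutil /pktinfo
        rem dump the cached domain/DC ("special") referrals
        dfsutil /spcinfo
        rem flush the cache so the next access repeats the slow lookup
        dfsutil /pktflush

    Capturing a network trace during the induced pause should then show whether the time is spent on DC/site discovery, on the namespace servers, or on name resolution.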

    Read the article

  • How can I automatically update Flash Player whenever a new version is released? [closed]

    - by user219950
    Summary: Flash Player Update Service doesn't run on a reliable schedule, and doesn't automatically download and apply updates when it does run. Given the importance of having an up-to-date version of Flash Player installed (for those of us who don't use Chrome with its built-in player), I would like to find a way to ensure that new updates are promptly detected and installed. What follows are the details of my efforts to solve this problem on my own... Appendix A: Flash Player Update Service OK, way back in Flash Player 11.2 (or so?) Adobe added the Flash Player Update Service (FlashPlayerUpdateService.exe), it was supposed to keep the Flash Player updated... Upon installation, FPUS is configured to run as a Windows Service, with Start Type set to Manual. A Scheduled Task (Adobe Flash Player Updater.job) is added to start this service every hour. So far, so good - this set-up avoids having a constantly-running service, but makes sure that the checks are run often enough to catch any updates quickly. Google's software updater is configured in a similar fashion, and that works just fine... ...And yet, when I checked the version of my installed Flash Player, I found it was 11.6.602.180, which, based on looking at the timestamps of the files in C:\Windows\System32\Macromed\Flash was last updated (or installed) on Tue, Mar 12, 2013 --- 3/12/13, 5:00:08pm. I made this observation on Thu, Apr 25, 2013 --- 4/25/13, 7:00:00pm, and upon checking Adobe's website found that the current version of Flash Player was 11.7.700.169. That's over a month since the last update, with a new one clearly available on the website but with no indication that the hourly check running on my machine has noticed it or has any intention of downloading it. Appendix B: running the Flash Player updater manually Once upon a time, running FlashUtil32_<version_Plugin.exe -update plugin would give you a window with an Install button; pressing it would download the installer for the current version (automatically, without opening a browser) and run it, then you'd click thru that installer & be done. It was manual, but it worked! Finding my current installation out of date (see Appendix A), I first tried this manual update process. However... Running FlashUtil32_<version_ActiveX.exe -update activex (in my case, that's FlashUtil32_11_6_602_180_ActiveX.exe -update activex) ...only presents a window with a Download button, clicking that Download button opens my browser to the URL https://get3.adobe.com/flashplayer/update/activex. Running FlashUtil32_<version_Plugin.exe -update plugin (in my case, that's FlashUtil32_11_6_602_180_Plugin.exe -update plugin) ...only presents a window with a Download button, clicking that Download button opens my browser to the URL https://get3.adobe.com/flashplayer/update/plugin. I could continue with the Download page it sent me to, uncheck the foistware box ("Free! McAfee Security Scan Plus"), download that installer (ActiveX, no foistware: install_flashplayer11x32axau_mssd_aih.exe, Plugin, no foistware: install_flashplayer11x32au_mssd_aih.exe) & probably have an updated Flash...but then, what is the point of the Flash Player Update Service if I have to manually download & run another exe? Epilogue I've since come to suspect that the update service is intentionally hobbled to drive early adopters to the manual download page. If this is true, there's probably no solution to this short of writing my own updater; hopefully I am wrong.
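
    For what it's worth, the background updater reads its policy from mms.cfg, and fully silent updating is only performed when it is explicitly enabled there; otherwise the hourly task does little more than a notification check. A hedged sketch of the relevant settings (the file location differs on 64-bit Windows, and the option names should be double-checked against Adobe's administration guide):

        # C:\Windows\System32\Macromed\Flash\mms.cfg   (SysWOW64\Macromed\Flash on 64-bit Windows)
        AutoUpdateDisable=0
        SilentAutoUpdateEnable=1

    With those set, running the "Adobe Flash Player Updater" scheduled task manually (or waiting for its next run) should let FlashPlayerUpdateService download and apply new releases without the browser round-trip.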

    Read the article

  • Magento installation problem on Nginx in Windows

    - by Nithin
    I am trying to install magento locally using nginx as the web server instead of Apache. I copied the magento folder to the html directory. When i try to call the magento folder, I get the 404 not found error. I am able to access other php files setup in the html folder and have PHP installed. Here is my config file: #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 8080; server_name localhost; #charset koi8-r; #access_log logs/host.access.log main; location / { root html; index index.html index.htm index.php; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; allow all; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { root html; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME c:/nginx/html/$fastcgi_script_name; include fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # location / { # root html; # index index.html index.htm; # } #} # HTTPS server # #server { # listen 443; # server_name localhost; # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # ssl_session_timeout 5m; # ssl_protocols SSLv2 SSLv3 TLSv1; # ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP; # ssl_prefer_server_ciphers on; # location / { # root html; # index index.html index.htm; # } #} } How do I fix this? This is what I found in the error.log file : 2011/09/06 12:22:35 [error] 5632#0: *1 "/cygdrive/c/nginx/html/magento/index.php/install/index.html" is not found (20: Not a directory), client: 127.0.0.1, server: localhost, request: "GET /magento/index.php/install/ HTTP/1.1", host: "localhost:8080"
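
    The error log is the clue: the request /magento/index.php/install/ carries PATH_INFO after index.php, the "location ~ \.php$" regex does not match such URIs, and nginx falls through to the static handler, hence the 404. A hedged sketch of a PHP location that also accepts trailing path info, assuming PHP keeps listening on 127.0.0.1:9000 as in the posted config:

        location ~ ^(.+\.php)(/.*)?$ {
            root            html;
            fastcgi_split_path_info ^(.+\.php)(/.*)$;
            fastcgi_pass    127.0.0.1:9000;
            fastcgi_index   index.php;
            fastcgi_param   SCRIPT_FILENAME  c:/nginx/html$fastcgi_script_name;
            fastcgi_param   PATH_INFO        $fastcgi_path_info;
            include         fastcgi_params;
        }

    Magento also expects requests for non-existent paths to be rewritten to its front controller (try_files $uri $uri/ /magento/index.php?$args or similar) once the installer is past this stage.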

    Read the article

  • DCOM Authentication Fails to use Kerberos, Falls back to NTLM

    - by Asa Yeamans
    I have a webservice that is written in Classic ASP. In this web service it attempts to create a VirtualServer.Application object on another server via DCOM. This fails with Permission Denied. However I have another component instantiated in this same webservice on the same remote server, that is created without problems. This component is a custom-in house component. The webservice is called from a standalone EXE program that calls it via WinHTTP. It has been verified that WinHTTP is authenticating with Kerberos to the webservice successfully. The user authenticated to the webservice is the Administrator user. The EXE to webservice authentication step is successful and with kerberos. I have verified the DCOM permissions on the remote computer with DCOMCNFG. The default limits allow administrators both local and remote activation, both local and remote access, and both local and remote launch. The default component permissions allow the same. This has been verified. The individual component permissions for the working component are set to defaults. The individual component permissions for the VirtualServer.Application component are also set to defaults. Based upon these settings, the webservice should be able to instantiate and access the components on the remote computer. Setting up a Wireshark trace while running both tests, one with the working component and one with the VirtualServer.Application component reveals an intresting behavior. When the webservice is instantiating the working, custom, component, I can see the request on the wire to the RPCSS endpoint mapper first perform the TCP connect sequence. Then I see it perform the bind request with the appropriate security package, in this case kerberos. After it obtains the endpoint for the working DCOM component, it connects to the DCOM endpoint authenticating again via Kerberos, and it successfully is able to instantiate and communicate. On the failing VirtualServer.Application component, I again see the bind request with kerberos go to the RPCC endpoing mapper successfully. However, when it then attempts to connect to the endpoint in the Virtual Server process, it fails to connect because it only attempts to authenticate with NTLM, which ultimately fails, because the webservice does not have access to the credentials to perform the NTLM hash. Why is it attempting to authenticate via NTLM? Additional Information: Both components run on the same server via DCOM Both components run as Local System on the server Both components are Win32 Service components Both components have the exact same launch/access/activation DCOM permissions Both Win32 Services are set to run as Local System The permission denied is not a permissions issue as far as I can tell, it is an authentication issue. Permission is denied because NTLM authentication is used with a NULL username instead of Kerberos Delegation Constrained delegation is setup on the server hosting the webservice. 
    - The server hosting the webservice is allowed to delegate to rpcss/dcom-server-name
    - The server hosting the webservice is allowed to delegate to vssvc/dcom-server-name
    - The dcom server is allowed to delegate to rpcss/webservice-server
    - The SPNs registered on the dcom server include rpcss/dcom-server-name and vssvc/dcom-server-name as well as the HOST/dcom-server-name related SPNs
    - The SPNs registered on the webservice-server include rpcss/webservice-server and the HOST/webservice-server related SPNs

    Does anybody have any ideas why the attempt to create a VirtualServer.Application object on a remote server falls back to NTLM authentication, causing it to fail and get permission denied?

    Additional information: when the following code is run in the context of the webservice, directly via a testing-only, just-developed COM component, it fails on the indicated line with Access Denied.

        COSERVERINFO csi;
        csi.dwReserved1=0;
        csi.pwszName=L"terahnee.rivin.net";
        csi.pAuthInfo=NULL;
        csi.dwReserved2=NULL;

        hr=CoGetClassObject(CLSID_VirtualServer, CLSCTX_ALL, &csi, IID_IClassFactory, (void **) &pClsFact);
        if(FAILED( hr )) goto error1;  // Fails here with HRESULT_FROM_WIN32(ERROR_ACCESS_DENIED)

        hr=pClsFact->CreateInstance(NULL, IID_IUnknown, (void **) &pUnk);
        if(FAILED( hr )) goto error2;

    I've also noticed that in the Wireshark traces, the attempt to connect to the service process component only requests NTLMSSP authentication; it doesn't even attempt to use Kerberos. This suggests that for some reason the webservice thinks it can't use Kerberos...
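
    Since csi.pAuthInfo is NULL, COM is left to negotiate the authentication service for the activation, and that negotiation is where NTLM is being chosen. A hedged experiment is to pin the activation to Kerberos by naming the server principal explicitly (the SPN below is an assumption; it must actually be registered on the target machine account):

        WCHAR serverName[] = L"terahnee.rivin.net";
        WCHAR spn[]        = L"RPCSS/terahnee.rivin.net";    // assumed SPN of the DCOM server

        COAUTHINFO cai = {};
        cai.dwAuthnSvc           = RPC_C_AUTHN_GSS_KERBEROS;    // instead of falling back to NTLM
        cai.dwAuthzSvc           = RPC_C_AUTHZ_NONE;
        cai.pwszServerPrincName  = spn;
        cai.dwAuthnLevel         = RPC_C_AUTHN_LEVEL_PKT_PRIVACY;
        cai.dwImpersonationLevel = RPC_C_IMP_LEVEL_IMPERSONATE; // DELEGATE if the hop must continue
        cai.dwCapabilities       = EOAC_NONE;

        COSERVERINFO csi = {};
        csi.pwszName  = serverName;
        csi.pAuthInfo = &cai;    // was NULL in the failing test

    If CoGetClassObject then succeeds, the problem is on the negotiation/SPN side rather than in the DCOM permissions; if it fails with a Kerberos-specific error, that error is usually more informative than the generic access denied.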

    Read the article

  • Increase samba space on open suse 12.1

    - by Kapil Sharma
    I know linux basics but not an expert. IT guy left the job here and there is some time before new hire. So sorry if question is very basic. We have local testing server based on Open SUSE 12.1, which also act as shared drive between dev/mgmt team here and using Samba for that. Now we are running out of space on samba, even though server's 2*1TB harddisk is nearly 90% free. My question is, what is limiting Samba and how can I increase its limit? We need around at least 500 GB as shared drive but currently its just 25 GB. I don't need step by step answer, just a link to any helpful article would be sufficient. Probably I'm putting wrong keywords in google so not getting any helpful link. EDIT: Output of commands in the first comment. All commands were run as root user df -h (getting error with df -ht) Filesystem Size Used Avail Use% Mounted on rootfs 30G 5.1G 23G 19% / devtmpfs 2.0G 36K 2.0G 1% /dev tmpfs 2.0G 1.1M 2.0G 1% /dev/shm tmpfs 2.0G 676K 2.0G 1% /run /dev/sda2 30G 5.1G 23G 19% / tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup tmpfs 2.0G 676K 2.0G 1% /var/run tmpfs 2.0G 0 2.0G 0% /media tmpfs 2.0G 676K 2.0G 1% /var/lock /dev/sda3 36G 31G 3.3G 91% /home fdisk -l /dev/[hmsv]d* Disk /dev/sda: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders, total 156301488 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x2d4a2d49 Device Boot Start End Blocks Id System /dev/sda1 2048 16771071 8384512 82 Linux swap / Solaris /dev/sda2 * 16771072 79681535 31455232 83 Linux /dev/sda3 79681536 156301311 38309888 83 Linux Disk /dev/sda1: 8585 MB, 8585740288 bytes 255 heads, 63 sectors/track, 1043 cylinders, total 16769024 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/sda1 doesn't contain a valid partition table Disk /dev/sda2: 32.2 GB, 32210157568 bytes 255 heads, 63 sectors/track, 3915 cylinders, total 62910464 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System Disk /dev/sda3: 39.2 GB, 39229325312 bytes 255 heads, 63 sectors/track, 4769 cylinders, total 76619776 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/sda3 doesn't contain a valid partition table vgs No volume groups found lvs No volume groups found output of vi /etc/samba/smb.conf # smb.conf is the main Samba configuration file. You find a full commented # version at /usr/share/doc/packages/samba/examples/smb.conf.SUSE if the # samba-doc package is installed. 
# Date: 2011-11-02 [global] workgroup = WORKGROUP passdb backend = tdbsam printing = cups printcap name = cups printcap cache time = 750 cups options = raw map to guest = Bad User include = /etc/samba/dhcp.conf logon path = \\%L\profiles\.msprofile logon home = \\%L\%U\.9xprofile logon drive = P: usershare allow guests = Yes [homes] comment = Home Directories valid users = %S, %D%w%S browseable = No read only = No inherit acls = Yes [profiles] comment = Network Profiles Service path = %H read only = No store dos attributes = Yes create mask = 0600 directory mask = 0700 [users] comment = All users path = /home read only = No inherit acls = Yes veto files = /aquota.user/groups/shares/ [groups] comment = All groups path = /home/groups read only = No inherit acls = Yes [printers] comment = All Printers path = /var/tmp printable = Yes create mask = 0600 browseable = No [print$] comment = Printer Drivers path = /var/lib/samba/drivers write list = @ntadmin root force group = ntadmin create mask = 0664 directory mask = 0775 [allusers] comment = All Users path = /home/shares/allusers valid users = @users force group = users create mask = 0660 directory mask = 0771 writable = yes
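
    Samba itself is not imposing the 25 GB: the shares defined in this smb.conf live under /home, and df above shows /home (/dev/sda3) is a 36 GB partition that is already 91% full, while fdisk only sees a single 80 GB disk - so the "2 x 1 TB, 90% free" drives are apparently not partitioned or mounted at all. A hedged sketch of one way forward, assuming the big disk ends up mounted at /srv/shares (device and path are assumptions):

        # once the 1 TB disk is partitioned, formatted and mounted at /srv/shares:
        mkdir -p /srv/shares/allusers
        rsync -a /home/shares/allusers/ /srv/shares/allusers/

        # then point the share at the new location in /etc/samba/smb.conf:
        #   [allusers]
        #   path = /srv/shares/allusers
        rcsmb restart    # or: systemctl restart smb.service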

    Read the article

  • Managing access to multiple linux system

    - by Swartz
    A searched for answers but have found nothing on here... Long story short: a non-profit organization is in dire need of modernizing its infrastructure. First thing is to find an alternatives to managing user accounts on a number of Linux hosts. We have 12 servers (both physical and virtual) and about 50 workstations. We have 500 potential users for these systems. The individual who built and maintained the systems over the years has retired. He wrote his own scripts to manage it all. It still works. No complaints there. However, a lot of the stuff is very manual and error-prone. Code is messy and after updates often needs to be tweaked. Worst part is there is little to no docs written. There are just a few ReadMe's and random notes which may or may not be relevant anymore. So maintenance has become a difficult task. Currently accounts are managed via /etc/passwd on each system. Updates are distributed via cron scripts to correct systems as accounts are added on the "main" server. Some users have to have access to all systems (like a sysadmin account), others need access to shared servers, while others may need access to workstations or only a subset of those. Is there a tool that can help us manage accounts that meets the following requirements? Preferably open source (i.e. free as budget is VERY limited) mainstream (i.e. maintained) preferably has LDAP integration or could be made to interface with LDAP or AD service for user authentication (will be needed in the near future to integrate accounts with other offices) user management (adding, expiring, removing, lockout, etc) allows to manage what systems (or group of systems) each user has access to - not all users are allowed on all systems support for user accounts that could have different homedirs and mounts available depending on what system they are logged into. For example sysadmin logged into "main" server has main://home/sysadmin/ as homedir and has all shared mounts sysadmin logged into staff workstations would have nas://user/s/sysadmin as homedir(different from above) and potentially limited set of mounts, a logged in client would have his/her homedir at different location and no shared mounts. If there is an easy management interface that would be awesome. And if this tool is cross-platform (Linux / MacOS / *nix), that will be a miracle! I have searched the web and so have found nothing suitable. We are open to any suggestions. Thank you. EDIT: This question has been incorrectly marked as a duplicate. The linked to answer only talks about having same homedirs on all systems, whereas we need to have different homedirs based on what system user is currently logged into(MULTIPLE homedirs). Also access needs to be granted only to some machinees not the whole lot. Mods, please understand the full extent of the problem instead of merely marking it as duplicate for points...
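
    One stack that covers most of these points with distribution packages is OpenLDAP for the directory plus sssd on each host, using an LDAP access filter to decide who may log in where and a per-machine homedir override. A hedged sketch of the client side (domain name, attribute and paths are assumptions, and the schema used for the host attribute has to exist in the directory):

        # /etc/sssd/sssd.conf on a workstation
        [sssd]
        services = nss, pam
        domains = EXAMPLE

        [domain/EXAMPLE]
        id_provider = ldap
        auth_provider = ldap
        ldap_uri = ldap://ldap.example.org
        ldap_search_base = dc=example,dc=org

        # only admit users whose LDAP "host" attribute names this machine
        access_provider = ldap
        ldap_access_order = filter
        ldap_access_filter = (host=workstation01.example.org)

        # different homedir on workstations than on the "main" server
        override_homedir = /nas/user/%u

    Mounts that differ per system are usually handled separately with automount maps (which can also live in LDAP), and tools such as FreeIPA bundle the directory, host-based access control and automount maps behind one management interface if a ready-made package is preferred.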

    Read the article

  • httpd.conf configuration - for internal/external access

    - by tom smith
    hey. after a lot of trail/error/research, i've decided to post here in the hopes that i can get clarification on what i've screwed up... i've got a situation where i have multiple servers behind a router/firewall. i want to be able to access the sites i have from an internal and external url/address, and get the same site. i have to use portforwarding on the router, so i need to be able to use proxyreverse to redirect the user to the approriate server, running the apache/web app... my setup the external urls joomla.gotdns.com forge.gotdns.com both of these point to my router's external ip address (67.168.2.2) (not really) the router forwards port 80 to my server lserver6 192.168.1.56 lserver6 - 192.168.1.56 lserver9 - 192.168.1.59 lserver6 - joomla app lserver9 - forge app i want to be able to have the httpd process (httpd.conf) configured on lserver6 to be able to allow external users accessing the system (foo.gotdns.com) be able to access the joomla app on lserver6 and the same for the forge app running on lserver9 at the same time, i would also like to be able to access the apps from the internal servers, so i'd need to be able to somehow configure the vhost setup/proxyreverse setup to handle the internal access... i've tried setting up multiple vhosts with no luck.. i've looked at the different examples online.. so there must be something subtle that i'm missing... the section of my httpd.conf file that deals with the vhost is below... if there's something else that's needed, let me know and i can post it as well.. thanks -tom ##joomla - file /etc/httpd/conf.d/joomla.conf Alias /joomla /var/www/html/joomla <Directory /var/www/html/joomla> </Directory> # Use name-based virtual hosting. #NameVirtualHost *:80 # NOTE: NameVirtualHost cannot be used without a port specifier # (e.g. :80) if mod_ssl is being used, due to the nature of the # SSL protocol. # VirtualHost example: # Almost any Apache directive may go into a VirtualHost container. # The first VirtualHost section is used for requests without a known # server name. #<VirtualHost *:80> # ServerAdmin [email protected] # DocumentRoot /www/docs/dummy-host.example.com # ServerName dummy-host.example.com # ErrorLog logs/dummy-host.example.com-error_log # CustomLog logs/dummy-host.example.com-access_log common #</VirtualHost> NameVirtualHost 192.168.1.56:80 <VirtualHost 192.168.1.56:80> #ServerAdmin [email protected] #DocumentRoot /var/www/html #ServerName lserver6.tmesa.com #ServerName fforge.tmesa.com ServerName fforge.gotdns.com:80 #ErrorLog logs/dummy-host.example.com-error_log #CustomLog logs/dummy-host.example.com-access_log common #ProxyRequests Off ProxyPass / http://192.168.1.81:80/ ProxyPassReverse / http://192.168.1.81:80/ </VirtualHost> <VirtualHost 192.168.1.56:80> #ServerAdmin [email protected] DocumentRoot /var/www/html/joomla #ServerName lserver6.tmesa.com #ServerName fforge.tmesa.com ServerName 192.168.1.56:80 #ErrorLog logs/dummy-host.example.com-error_log #CustomLog logs/dummy-host.example.com-access_log common #ProxyRequests Off </VirtualHost>
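
    Name-based matching only helps if each <VirtualHost> lists every hostname it should answer to; as posted, the joomla block's ServerName is "192.168.1.56:80", so a request for joomla.gotdns.com matches nothing and lands in the first vhost, which proxies everything away. A hedged sketch of the usual shape (hostnames and the forge box's address are taken from the question text - the posted config proxies to 192.168.1.81 while the prose says lserver9 is 192.168.1.59 - so adjust as needed):

        NameVirtualHost 192.168.1.56:80

        <VirtualHost 192.168.1.56:80>
            ServerName  joomla.gotdns.com
            ServerAlias lserver6 lserver6.tmesa.com
            DocumentRoot /var/www/html/joomla
        </VirtualHost>

        <VirtualHost 192.168.1.56:80>
            ServerName  forge.gotdns.com
            ServerAlias lserver9 lserver9.tmesa.com
            ProxyRequests Off
            ProxyPreserveHost On
            ProxyPass        / http://192.168.1.59:80/
            ProxyPassReverse / http://192.168.1.59:80/
        </VirtualHost>

    Internal clients then reach each app through the same names as external ones (via split DNS or hosts entries pointing those names at 192.168.1.56), which keeps the proxy rules identical for both paths.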

    Read the article

  • Can't launch glassfish on ec2 - can't open port

    - by orange80
    I'm trying to start glassfish on an EBS-based AMI of Ubuntu 10.04 64-bit. I have used glassfish on non-ec2 servers with no problems, but on ec2 I get this message: $ sudo -u glassfish bin/asadmin start-domain domain1 There is a process already using the admin port 4848 -- it probably is another instance of a GlassFish server. Command start-domain failed. I know that ec2 has requires that firewall rules be modified using ec2-authorize to let outside traffic thru the firewall, as I had to do to make ssh work. This still doesn't explain the port error when all I'm trying to do is start glassfish so I can try $ wget localhost:8080and make sure it's working. This is very frustrating and I'd really appreciate any help. Thanks. FINAL UPDATE: Sorry if you came here looking for answers. I never figured out what was causing the problem. I created another fresh instance, installed the same stuff, and Glassfish worked perfectly. Something obviously got boned during installation, but I have no idea what. I guess it will remain a mystery. UPDATE: Here's what I get from netstat: # netstat -nuptl Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 462/sshd tcp6 0 0 :::22 :::* LISTEN 462/sshd udp 0 0 0.0.0.0:5353 0.0.0.0:* 483/avahi-daemon: r udp 0 0 0.0.0.0:1194 0.0.0.0:* 589/openvpn udp 0 0 0.0.0.0:37940 0.0.0.0:* 483/avahi-daemon: r udp 0 0 0.0.0.0:68 0.0.0.0:* 377/dhclient3 UPDATE: One more thing... I know that the "net.ipv6.bindv6only" kernel option can cause problems with java networking, so I did set this: # sysctl -w net.ipv6.bindv6only=0 UPDATE: I also verified that it has nothing at all to do with the port number (4848). As you can see here, when I changed the admin-listener port in domain.xml to 4949, I get a similar message: # sudo -u glassfish bin/asadmin start-domain domain1 There is a process already using the admin port 4949 -- it probably is another instance of a GlassFish server. Command start-domain failed. UPDATE: Here are the contents of /etc/hosts: 127.0.0.1 localhost # The following lines are desirable for IPv6 capable hosts ::1 ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters ff02::3 ip6-allhosts I should mention that I have another Ubuntu Lucid 10.04 64-bit slice that is NOT hosted on ec2, and set it up the exact same way with no problems whatsoever. 
Also server.log doesn't offer much insight either: # cat ./server.log Nov 20, 2010 8:46:49 AM com.sun.enterprise.admin.launcher.GFLauncherLogger info INFO: JVM invocation command line: /usr/lib/jvm/java-6-sun-1.6.0.22/bin/java -cp /opt/glassfishv3/glassfish/modules/glassfish.jar -XX:+UnlockDiagnosticVMOptions -XX:MaxPermSize=192m -XX:NewRatio=2 -XX:+LogVMOutput -XX:LogFile=/opt/glassfishv3/glassfish/domains/domain1/logs/jvm.log -Xmx512m -client -javaagent:/opt/glassfishv3/glassfish/lib/monitor/btrace-agent.jar=unsafe=true,noServer=true -Dosgi.shell.telnet.maxconn=1 -Djdbc.drivers=org.apache.derby.jdbc.ClientDriver -Dfelix.fileinstall.dir=/opt/glassfishv3/glassfish/modules/autostart/ -Djavax.net.ssl.keyStore=/opt/glassfishv3/glassfish/domains/domain1/config/keystore.jks -Dosgi.shell.telnet.port=6666 -Djava.security.policy=/opt/glassfishv3/glassfish/domains/domain1/config/server.policy -Dfelix.fileinstall.poll=5000 -Dcom.sun.aas.instanceRoot=/opt/glassfishv3/glassfish/domains/domain1 -Dcom.sun.enterprise.config.config_environment_factory_class=com.sun.enterprise.config.serverbeans.AppserverConfigEnvironmentFactory -Dosgi.shell.telnet.ip=127.0.0.1 -Djava.endorsed.dirs=/opt/glassfishv3/glassfish/modules/endorsed:/opt/glassfishv3/glassfish/lib/endorsed -Dcom.sun.aas.installRoot=/opt/glassfishv3/glassfish -Djava.ext.dirs=/usr/lib/jvm/java-6-sun-1.6.0.22/lib/ext:/usr/lib/jvm/java-6-sun-1.6.0.22/jre/lib/ext:/opt/glassfishv3/glassfish/domains/domain1/lib/ext -Dfelix.fileinstall.bundles.new.start=true -Djavax.net.ssl.trustStore=/opt/glassfishv3/glassfish/domains/domain1/config/cacerts.jks -Dcom.sun.enterprise.security.httpsOutboundKeyAlias=s1as -Djava.security.auth.login.config=/opt/glassfishv3/glassfish/domains/domain1/config/login.conf -DANTLR_USE_DIRECT_CLASS_LOADING=true -Dfelix.fileinstall.debug=1 -Dorg.glassfish.web.rfc2109_cookie_names_enforced=false -Djava.library.path=/opt/glassfishv3/glassfish/lib:/usr/lib/jvm/java-6-sun-1.6.0.22/jre/lib/amd64/server:/usr/lib/jvm/java-6-sun-1.6.0.22/jre/lib/amd64:/usr/lib/jvm/java-6-sun-1.6.0.22/lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib com.sun.enterprise.glassfish.bootstrap.ASMain -domainname domain1 -asadmin-args start-domain,,,domain1 -instancename server -verbose false -debug false -asadmin-classpath /opt/glassfishv3/glassfish/modules/admin-cli.jar -asadmin-classname com.sun.enterprise.admin.cli.AsadminMain -upgrade false -domaindir /opt/glassfishv3/glassfish/domains/domain1 -read-stdin true
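
    Nothing in that netstat output is actually listening on 4848, and the same error follows the port number when it is changed, which points away from a genuine port clash. One frequent cause on EC2 is that asadmin cannot resolve the instance's own hostname and the failed probe is reported as "port already in use". A hedged check (the hostname shown is an example):

        # is anything really on the admin port?
        sudo netstat -ntlp | grep 4848

        # does the instance hostname resolve?  EC2 private hostnames often don't, out of the box
        hostname           # e.g. ip-10-112-33-44
        ping -c1 "$(hostname)" || echo "127.0.0.1 $(hostname)" | sudo tee -a /etc/hosts

        sudo -u glassfish bin/asadmin start-domain domain1

    Given that a rebuilt instance worked, this remains a guess at what was broken on the original one, but it is a cheap thing to rule out first.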

    Read the article

  • Setting up RADIUS + LDAP for WPA2 on Ubuntu

    - by Morten Siebuhr
    I'm setting up a wireless network for ~150 users. In short, I'm looking for a guide to set RADIUS server to authenticate WPA2 against a LDAP. On Ubuntu. I got a working LDAP, but as it is not in production use, it can very easily be adapted to whatever changes this project may require. I've been looking at FreeRADIUS, but any RADIUS server will do. We got a separate physical network just for WiFi, so not too many worries about security on that front. Our AP's are HP's low end enterprise stuff - they seem to support whatever you can think of. All Ubuntu Server, baby! And the bad news: I now somebody less knowledgeable than me will eventually take over administration, so the setup has to be as "trivial" as possible. So far, our setup is based only on software from the Ubuntu repositories, with exception of our LDAP administration web application and a few small special scripts. So no "fetch package X, untar, ./configure"-things if avoidable. UPDATE 2009-08-18: While I found several useful resources, there is one serious obstacle: Ignoring EAP-Type/tls because we do not have OpenSSL support. Ignoring EAP-Type/ttls because we do not have OpenSSL support. Ignoring EAP-Type/peap because we do not have OpenSSL support. Basically the Ubuntu version of FreeRADIUS does not support SSL (bug 183840), which makes all the secure EAP-types useless. Bummer. But some useful documentation for anybody interested: http://vuksan.com/linux/dot1x/802-1x-LDAP.html http://tldp.org/HOWTO/html_single/8021X-HOWTO/#confradius UPDATE 2009-08-19: I ended up compiling my own FreeRADIUS package yesterday evening - there's a really good recipe at http://www.linuxinsight.com/building-debian-freeradius-package-with-eap-tls-ttls-peap-support.html (See the comments to the post for updated instructions). I got a certificate from http://CACert.org (you should probably get a "real" cert if possible) Then I followed the instructions at http://vuksan.com/linux/dot1x/802-1x-LDAP.html. This links to http://tldp.org/HOWTO/html_single/8021X-HOWTO/, which is a very worthwhile read if you want to know how WiFi security works. UPDATE 2009-08-27: After following the above guide, I've managed to get FreeRADIUS to talk to LDAP: I've created a test user in LDAP, with the password mr2Yx36M - this gives an LDAP entry roughly of: uid: testuser sambaLMPassword: CF3D6F8A92967E0FE72C57EF50F76A05 sambaNTPassword: DA44187ECA97B7C14A22F29F52BEBD90 userPassword: {SSHA}Z0SwaKO5tuGxgxtceRDjiDGFy6bRL6ja When using radtest, I can connect fine: > radtest testuser "mr2Yx36N" sbhr.dk 0 radius-private-password Sending Access-Request of id 215 to 130.225.235.6 port 1812 User-Name = "msiebuhr" User-Password = "mr2Yx36N" NAS-IP-Address = 127.0.1.1 NAS-Port = 0 rad_recv: Access-Accept packet from host 130.225.235.6 port 1812, id=215, length=20 > But when I try through the AP, it doesn't fly - while it does confirm that it figures out the NT and LM passwords: ... rlm_ldap: sambaNTPassword -> NT-Password == 0x4441343431383745434139374237433134413232463239463532424542443930 rlm_ldap: sambaLMPassword -> LM-Password == 0x4346334436463841393239363745304645373243353745463530463736413035 [ldap] looking for reply items in directory... WARNING: No "known good" password was found in LDAP. Are you sure that the user is configured correctly? 
[ldap] user testuser authorized to use remote access rlm_ldap: ldap_release_conn: Release Id: 0 ++[ldap] returns ok ++[expiration] returns noop ++[logintime] returns noop [pap] Normalizing NT-Password from hex encoding [pap] Normalizing LM-Password from hex encoding ... It is clear that the NT and LM passwords differ from the above, yet the message [ldap] user testuser authorized to use remote access - and the user is later rejected...
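
    One detail worth chasing before anything else: the hashes that rlm_ldap pulls from the directory have to correspond to the password the supplicant actually types, and the NT hash is just MD4 over the UTF-16LE password, so it can be checked offline. A hedged sketch (note the question itself shows mr2Yx36M in LDAP but mr2Yx36N in the radtest command, so a simple mismatch is plausible):

        # recompute the NT hash for the password being typed and compare it with
        # the sambaNTPassword value stored in LDAP
        printf '%s' 'mr2Yx36M' | iconv -t UTF-16LE | openssl dgst -md4

    Beyond that, WPA2-Enterprise with PEAP/MSCHAPv2 authenticates the inner tunnel against the NT hash via the mschap module, so the "No known good password" warning can be a red herring as long as mschap finds the NT-Password that the attrmap produced.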

    Read the article

  • Deploying play! 2.0 application on an apache server with a reverse proxy

    - by locrizak
    I'm trying to deploy my play! 2.0 application on an Ubuntu 11.10 server and I have been running into error after error and hope someone can help me here. I am try to deploy my Play! application using a reverse proxy on Apache 2. I have enabled the apache proxy modules and configured the proxy.conf file in mods_enabled. The vhost for my domain looks like this: <Directory /var/www/stage.domain.com AllowOverride None Order Deny,Allow Deny from all </Directory <VirtualHost *:80 DocumentRoot /var/www/stage.domain.com/web ServerName stage.domain.com ServerAdmin [email protected] # ProxyRequests Off # ProxyPreserveHost On <Proxy * Order allow,deny Allow from all </Proxy # ProxyVia On # ProxyPass /play/ http://localhost:9000/ # ProxyPassReverse /play/ http://localhost:9000/ ErrorLog /var/log/ispconfig/httpd/stage.domain.com/error.log ErrorDocument 400 /error/400.html ErrorDocument 401 /error/401.html ErrorDocument 403 /error/403.html ErrorDocument 404 /error/404.html ErrorDocument 405 /error/405.html ErrorDocument 500 /error/500.html ErrorDocument 502 /error/502.html ErrorDocument 503 /error/503.html <IfModule mod_ssl.c </IfModule <Directory /var/www/stage.domain.com/web Options FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory <Directory /var/www/clients/client2/web7/web Options FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory # Clear PHP settings of this website <FilesMatch "\.ph(p3?|tml)$" SetHandler None </FilesMatch # mod_php enabled AddType application/x-httpd-php .php .php3 .php4 .php5 php_admin_value sendmail_path "/usr/sbin/sendmail -t -i [email protected]" php_admin_value upload_tmp_dir /var/www/clients/client2/web7/tmp php_admin_value session.save_path /var/www/clients/client2/web7/tmp # PHPIniDir /var/www/conf/web7 php_admin_value open_basedir /var/www/clients/client2/web7/:/var/www/clients/client2/web7/web:/va$ # add support for apache mpm_itk <IfModule mpm_itk_module AssignUserId web7 client2 </IfModule <IfModule mod_dav_fs.c # Do not execute PHP files in webdav directory <Directory /var/www/clients/client2/web7/webdav <FilesMatch "\.ph(p3?|tml)$" SetHandler None </FilesMatch </Directory # DO NOT REMOVE THE COMMENTS! # IF YOU REMOVE THEM, WEBDAV WILL NOT WORK ANYMORE! # WEBDAV BEGIN # WEBDAV END </IfModule # <Location /play/ # ProxyPass http://localhost:9000/ # SetEnv force-proxy-request-1.0 1 # SetEnv proxy-nokeepalive 1 # </Location ProxyRequests Off ProxyPass /play/ http://localhost:9000/ ProxyPassReverse /play/ localhost:9000/ ProxyPass /play http://localhost:9000/ ProxyPassReverse /play http://localhost:9000/ # SetEnv force-proxy-request-1.0 1 # SetEnv proxy-nokeepalive 1 </VirtualHost This vhost file was generated by ispconfig and I have not touched anything that was there before just added onto. As you can see by the commented out parts I have tried a lot of different things based on random tutorials I have found but all of them have ended up in Internal Server Error, 503 and most often a '502 Bad Gateway`. I can start play and it does connect successfully to my database. I can get a page to show up when there is an error and the play! stack trace error pages comes up but where everything is fine I get one of the errors above. My application.conf file looks like this: db info ....... 
application.mode=PROD logger.root=ERROR # Logger used by the framework: logger.play=INFO # Logger provided to your application: logger.application=DEBUG http.path="/play/" XForwardedSupport="127.0.0.1" And my hosts file looks like this (I have never changed or added anything to the host file): 127.0.0.1 localhost 127.0.1.1 matrix # The following lines are desirable for IPv6 capable hosts ::1 ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters Any insights onto what I might be doing wrong or if theres anything I can try please let me know! Thanks!! Edit Again the reverse proxy will work (I checked with sending to to google.com). Its when there is a successful connection to Netty. It's like Netty refuses the connection to the page. Edit 2 output from apachectl -S _default_:8081 127.0.0.1 (/etc/apache2/sites-enabled/000-apps.vhost:10) *:8090 is a NameVirtualHost default server 127.0.0.1 (/etc/apache2/sites-enabled/000-ispconfig.vhost:10) port 8090 namevhost 127.0.0.1 (/etc/apache2/sites-enabled/000-ispconfig.vhost:10) *:80 is a NameVirtualHost default server 127.0.0.1 (/etc/apache2/sites-enabled/000-default:1) port 80 namevhost 127.0.0.1 (/etc/apache2/sites-enabled/000-default:1) port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7) port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7) port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7) port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7) port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7) port 80 namevhost stage.domain.com (/etc/apache2/sites-enabled/100-stage.domain.com.vhost:7) port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7)
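
    Before blaming Netty, it is worth separating the two hops. On the server itself:

        curl -v http://127.0.0.1:9000/

    If that answers while the proxied request still returns 502/503, the problem is on the Apache side. Note also that the pasted vhost has lost its closing ">" characters (presumably mangled when posting), so the directives may not even be parsed as intended. A hedged, minimal proxy shape - the hostname is a placeholder, and Play 2.0 is generally simplest to proxy at the root rather than under a /play/ sub-path:

        <VirtualHost *:80>
            ServerName play.stage.domain.com
            ProxyPreserveHost On
            ProxyPass        / http://127.0.0.1:9000/
            ProxyPassReverse / http://127.0.0.1:9000/
        </VirtualHost>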

    Read the article

  • Cannot ping host stale ARP cache?

    - by gkchicago
    I am having a strange issue with a Debian (Lenny/Linux 2.6.26-2-amd64) that has been driving me nuts. On some machines within my network I can ping the host in question just fine, other times I have to manually hard-code the ARP ethernet address for the IP in order to establish connectivity. I've finally worked it down to somehow involving ARP. I just found how to fix it in a way that made it work but I'm looking for help explaining this issue and also I don't trust my fix to be permanent.. My thought process has been the following but I just can't make any sense out of it: Could it be the card? (Intel 82555 rev 4) Could it be because there are two network cards? (Default route is eth0) Could it be because of the network aliases? Lenny? AMD x86_64? Argh.. Thank you for any insight you might have // Ping doesn't go thru [gordon@ubuntu ~]$ ping 192.168.135.101 PING 192.168.135.101 (192.168.135.101) 56(84) bytes of data. --- 192.168.135.101 ping statistics --- 4 packets transmitted, 0 received, 100% packet loss, time 3014ms // Here's the ARP Table, sometimes the .151 address is good, sometimes it // also matches the Gateways MAC like .101 is doing right here. [gordon@ubuntu ~]$ cat /proc/net/arp IP address HW type Flags HW address Mask Device 192.168.135.15 0x1 0x2 00:0B:DB:2B:24:89 * eth0 192.168.135.151 0x1 0x2 00:0B:6A:3A:30:A6 * eth0 192.168.135.1 0x1 0x2 00:1A:A2:2D:2A:04 * eth0 192.168.135.101 0x1 0x2 00:1A:A2:2D:2A:04 * eth0 // Drop the bad arp table listing and set it manually based on /sbin/ifconfig [gordon@ubuntu ~]$ sudo arp -d 192.168.135.101 [gordon@ubuntu ~]$ sudo arp -s 192.168.135.101 00:0B:6A:3A:30:A6 // Ping starts going thru..?!? [gordon@ubuntu ~]$ ping 192.168.135.101 PING 192.168.135.101 (192.168.135.101) 56(84) bytes of data. 64 bytes from 192.168.135.101: icmp_seq=1 ttl=64 time=15.8 ms 64 bytes from 192.168.135.101: icmp_seq=2 ttl=64 time=15.9 ms 64 bytes from 192.168.135.101: icmp_seq=3 ttl=64 time=16.0 ms 64 bytes from 192.168.135.101: icmp_seq=4 ttl=64 time=15.9 ms --- 192.168.135.101 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3012ms rtt min/avg/max/mdev = 15.836/15.943/16.064/0.121 ms The following is my network config on this. 
gordon@db01:~$ /sbin/ifconfig eth0 Link encap:Ethernet HWaddr 00:0b:6a:3a:30:a6 inet addr:192.168.135.151 Bcast:192.168.135.255 Mask:255.255.255.0 inet6 addr: fe80::20b:6aff:fe3a:30a6/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:15476725 errors:0 dropped:0 overruns:0 frame:0 TX packets:10030036 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:18565307359 (17.2 GiB) TX bytes:3412098075 (3.1 GiB) eth0:0 Link encap:Ethernet HWaddr 00:0b:6a:3a:30:a6 inet addr:192.168.135.150 Bcast:192.168.135.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 eth0:1 Link encap:Ethernet HWaddr 00:0b:6a:3a:30:a6 inet addr:192.168.135.101 Bcast:192.168.135.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 eth1 Link encap:Ethernet HWaddr 00:e0:81:2a:6e:d0 inet addr:10.10.62.1 Bcast:10.10.62.255 Mask:255.255.255.0 inet6 addr: fe80::2e0:81ff:fe2a:6ed0/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:10233315 errors:0 dropped:0 overruns:0 frame:0 TX packets:19400286 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1112500658 (1.0 GiB) TX bytes:27952809020 (26.0 GiB) Interrupt:24 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:387 errors:0 dropped:0 overruns:0 frame:0 TX packets:387 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:41314 (40.3 KiB) TX bytes:41314 (40.3 KiB) gordon@db01:~$ sudo mii-tool -v eth0 eth0: negotiated 100baseTx-FD, link ok product info: Intel 82555 rev 4 basic mode: autonegotiation enabled basic status: autonegotiation complete, link ok capabilities: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD advertising: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD flow-control link partner: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD gordon@db01:~$ sudo route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface localnet * 255.255.255.0 U 0 0 0 eth0 10.10.62.0 * 255.255.255.0 U 0 0 0 eth1 default 192.168.135.1 0.0.0.0 UG 0 0 0 eth0
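
    The ARP table itself says what is wrong: the entry for 192.168.135.101 sometimes carries the gateway's MAC (00:1A:A2:2D:2A:04) instead of this host's 00:0B:6A:3A:30:A6, i.e. some other device is answering ARP for that alias. A hedged way to confirm from another machine on the LAN (arping comes from the iputils package; the interface name is an example):

        # every station that answers for the IP sends a reply; two different MACs
        # in the output means two devices are claiming 192.168.135.101
        sudo arping -I eth0 -c 4 192.168.135.101

    If the duplicate answers come from the router, look there for proxy ARP or a stale static ARP/NAT mapping for that address; if they come from another host, that host has the IP configured too. The fix then belongs on the offending device rather than in this server's aliases.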

    Read the article

  • SSH-forwarded X11 display from Linux to Mac lost after some time

    - by mklein9
    I have a new and vexing problem with ssh forwarding my X11 connection when logging in from a Mac (10.7.2) to Linux (Ubuntu 8.04). I have no trouble using ssh -X to log in to the remote machine and starting an X11-based application from that shell. What has recently started happening is that additional invocations of X11 applications from that same shell, after a while (on the order of hours), are unable to start because the forwarded display is being blocked (I presume). When attempting to start xterm, for example, I get the usual message about a bad DISPLAY setting, such as: xterm Xt error: Can't open display: localhost:10.0 But the X11 application I started right when I logged in is still running along just fine, using that exact same display (localhost:10.0), just that it was started earlier. I turned on verbose logging in sshd_config and I see this in the /var/log/auth.log file in response to the failed xterm startup attempt: sshd[22104]: channel 8: open failed: administratively prohibited: open failed If I ssh -X to the server again, starting a new shell and getting assigned a new display (localhost:11.0), the same process repeats: the X11 applications started early on run just fine for as long as I keep them open (days), but after a few hours I cannot start any new ones from that shell. Particulars: OpenSSH sshd server running on Ubuntu 8.04, display forwarded to a Mac running Lion (10.7.2) with the default Apple X server. The systems are connected on an Ethernet LAN with a single switch between them. Neither machine is running a firewall. Until recently (a few days ago) this setup worked perfectly so I am mystified as to where to look next. I am by no means an X11 or SSH expert but have good UNIX/Linux experience. Nothing obvious has changed in either client or server configuration although I have tried changing a few options to try to debug this, like setting sshd_config's TCPKeepAlive to no, and setting "host +localhost" (you can tell I've been Googling). When logging in from a Linux 11.10 laptop to the same remote host over the same network and switch, this problem does not occur -- an xterm can be invoked successfully hours later from the same ssh login shell while the same experiment from the Mac fails (tested this morning to be sure), so it would appear to be a Mac-specific issue. With "LogLevel DEBUG3" set on the remote machine (sshd server), and no change made in the client connections by me, /var/log/auth.log shows one slight change in connection status reports overnight, which is the port number used by the one successful ssh session from the Linux machine (I think), connection #7 below: sshd[20173]: debug3: channel 7: status: The following connections are open:\r\n #0 server-session (t4 r0 i0/0 o0/0 fd 14/13 cfd -1)\r\n #3 X11 connection from 127.0.0.1 port 57564 (t4 r1 i0/0 o0/0 fd 16/16 cfd -1)\r\n #4 X11 connection from 127.0.0.1 port 57565 (t4 r2 i0/0 o0/0 fd 17/17 cfd -1)\r\n #5 X11 connection from 127.0.0.1 port 57566 (t4 r3 i0/0 o0/0 fd 18/18 cfd -1)\r\n #6 X11 connection from 127.0.0.1 port 57567 (t4 r4 i0/0 o0/0 fd 19/19 cfd -1)\r\n #7 X11 connection from 127.0.0.1 port 59007 In this report, everything is the same between status reports except the port number used by connection #7 which I believe is the Linux client -- the only one still maintaining a display connection. It continues to increment over time, judging by a sequence of these reports overnight. Thanks for any help, -Mike
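
    The pattern - X11 clients started early keep working for days while new ones are refused with "administratively prohibited" after some hours - matches OpenSSH's untrusted X11 forwarding, whose forwarded authorization expires after a timeout, after which new channel opens are rejected. Debian/Ubuntu ship ForwardX11Trusted yes by default, which would explain why the Linux client never hits it, while Apple's client uses untrusted forwarding for ssh -X. A hedged sketch for the Mac side (the host alias is a placeholder; option availability depends on the OpenSSH version):

        # ~/.ssh/config on the Mac
        Host linuxbox
            HostName linuxbox.example.org
            ForwardX11 yes
            ForwardX11Trusted yes     # equivalent to ssh -Y; what Debian-based clients do by default
            # ForwardX11Timeout 596h  # alternative: keep untrusted forwarding, raise the expiry

    Trying ssh -Y instead of ssh -X for one session is the quickest way to confirm or rule this out.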

    Read the article

  • Where's the Swap File/Partition?

    - by chrisbunney
    I'm investigating the virtual memory configuration of a Debian based Amazon EC2 instance, and as my background isn't in system admin, I'm slightly confused by what I'm seeing. We're using MongoDB, and the monitoring server we have indicates that the Mongo process is using about 20GB of swap space, however I can't figure out where this is located on the server. As far as I can tell from using the various suggested methods from Google, there is either a much smaller amount, or none at all. top indicates that there is 1.8GB of swap memory: top - 15:35:21 up 6 days, 3:23, 1 user, load average: 1.60, 1.43, 1.37 Tasks: 47 total, 2 running, 45 sleeping, 0 stopped, 0 zombie Cpu(s): 0.0%us, 1.3%sy, 0.0%ni, 14.7%id, 83.8%wa, 0.0%hi, 0.0%si, 0.1%st Mem: 3928924k total, 2855572k used, 1073352k free, 640564k buffers Swap: 0k total, 0k used, 0k free, 1887788k cached swapon -s doesn't seem to think there's any swap space: Filename Type Size Used Priority free -m doesn't think there's any swap either: total used free shared buffers cached Mem: 3836 3663 172 0 626 2701 -/+ buffers/cache: 336 3500 Swap: 0 0 0 And neither does vmstat: procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 3 0 66224 641372 2874744 0 0 21 5012 21 33 2 2 76 19 But cat /etc/fstab thinks there is a swap partition: /dev/xvda1 / ext3 defaults 1 1 /dev/xvda2 /mnt ext3 defaults 0 0 /dev/xvda3 swap swap defaults 0 0 none /proc proc defaults 0 0 none /sys sysfs defaults 0 0 However df -k gives no indication of the xvda3 partition: Filesystem 1K-blocks Used Available Use% Mounted on /dev/xvda1 16513960 15675324 0 100% / tmpfs 1964460 8 1964452 1% /lib/init/rw udev 1914148 28 1914120 1% /dev tmpfs 1964460 4 1964456 1% /dev/shm So I really don't know what to make of this, because I appear to have a process using about 10 times more virtual memory than what might be available, and I have no idea where this virtual memory is on the system. I'm probably misinterpreting the output of the tools, so I'd be grateful if someone would be able to set me straight: What have I got wrong, what's the right interpretation, and how do you reach that interpretation? EDIT0: We use 10gen's MMS for monitoring the database, the relevant section for memory from the last data point is: "mem": { "virtual": 20749, "bits": 64, "supported": true, "mappedWithJournal": 20376, "mapped": 10188, "resident": 1219 }, This JSON is specific to the database process (I believe) rather than the system as a whole. fdisk -l /dev/xvda outputs... nothing? 
I tried each of the 3 xvda entries in /etc/fstab as well: root@ip:~# fdisk -l /dev/xvda1 Disk /dev/xvda1: 34.4 GB, 34359738368 bytes 255 heads, 63 sectors/track, 4177 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/xvda1 doesn't contain a valid partition table root@ip:~# fdisk -l /dev/xvda2 root@ip:~# fdisk -l /dev/xvda3 root@ip:~# Edit1: Output of cat /proc/meminfo for the sake of completeness: MemTotal: 3928924 kB MemFree: 726600 kB Buffers: 648368 kB Cached: 2216556 kB SwapCached: 0 kB Active: 1945100 kB Inactive: 994016 kB Active(anon): 60476 kB Inactive(anon): 12952 kB Active(file): 1884624 kB Inactive(file): 981064 kB Unevictable: 0 kB Mlocked: 0 kB SwapTotal: 0 kB SwapFree: 0 kB Dirty: 387180 kB Writeback: 0 kB AnonPages: 73380 kB Mapped: 1188260 kB Shmem: 48 kB Slab: 149768 kB SReclaimable: 146076 kB SUnreclaim: 3692 kB KernelStack: 1104 kB PageTables: 16096 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 1964460 kB Committed_AS: 305572 kB VmallocTotal: 34359738367 kB VmallocUsed: 16760 kB VmallocChunk: 34359721448 kB HardwareCorrupted: 0 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 3932160 kB DirectMap2M: 0 kB
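
    Reading the numbers together: the fstab expects a swap partition on /dev/xvda3, but no such device exists on this instance (fdisk prints nothing for it), so swapon, free, vmstat and /proc/meminfo all correctly report zero swap. The 20 GB that MMS calls "virtual" is MongoDB's memory-mapped data files ("mappedWithJournal": 20376), not swapped-out memory, so nothing is actually missing. If swap is wanted anyway, a swap file avoids repartitioning (sizes and paths below are examples):

        sudo dd if=/dev/zero of=/var/swapfile bs=1M count=2048
        sudo chmod 600 /var/swapfile
        sudo mkswap /var/swapfile
        sudo swapon /var/swapfile
        # persist across reboots
        echo '/var/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab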

    Read the article

  • Exchange 2003-Exchange 2010 post migration GAL/OAB problem

    - by user68726
    I am very new to Exchange, so forgive my newbie-ness. I've exhausted Google trying to find a way to solve my problem, so I'm hoping some of you gurus can shed some light on my next steps. Please forgive my bungling through this. The problem: I cannot download/update the Global Address List (GAL) and Offline Address Book (OAB) on my Outlook 2010 clients. I get: Task 'emailaddress' reported error (0x8004010F) : 'The operation failed. An object cannot be found.' (Note I've replaced my actual email address with 'emailaddress'.) I'm using cached Exchange mode; if I turn it off, Outlook hangs completely from the moment I start it up. Background information: I migrated mailboxes, public store, etc. from a Small Business Server 2003 box with Exchange 2003 to a Server 2008 R2 box with Exchange 2010, based primarily on an Experts Exchange how-to article. The Exchange server is up and running as an internet-facing server with all of the roles necessary to send and receive mail, and in that capacity it is working fine. I "thought" I had successfully migrated everything from the SBS03 box, and due to huge amounts of errors in everything from AD to the Exchange install itself I removed the reference to the SBS03 server in adsiedit. I've still got access to the old SBS03 box, but as I said the number of errors in everything is preventing even the uninstall of Exchange (or the starting of the Exchange Information Store service), so I'm quite content to leave that box completely out of the picture while trying to solve my problem. After research I discovered this is most likely because I failed to run the “update-globaladdresslist” (or get / update) command from the Exchange shell before I removed the Exchange 2003 server from adsiedit (and the network). If I run the command now it gives me: WARNING: The recipient "domainname.com/Microsoft Exchange System Objects/Offline Address Book - first administrative group" is invalid and couldn't be updated. WARNING: The recipient "domainname.com/Microsoft Exchange System Objects/Schedule+ Free Busy Information – first administrative group" is invalid and couldn't be updated. WARNING: The recipient "domainname.com/Microsoft Exchange System Objects/ContainernameArchive" is invalid and couldn't be updated. WARNING: The recipient "domainname.com/Microsoft Exchange System Objects/ContainernameContacts" is invalid and couldn't be updated. (Note that I’ve replaced my domain with “domainname.com” and my organization name with “containername”) What I’ve tried: I don’t want to use the old OAB or GAL, I don’t care about either; our GAL and distribution lists needed to be organized anyway, so at this point I really just want to get rid of the old reference to the “first administrative group” and move on. I've tried to create a new GAL and tell Exchange 2010 to use that GAL instead of the old GAL, but I'm obviously missing some of the commands or something dumb I need to do to start over with a blank slate/GAL/OAB. I'm very tempted to completely delete the entire "first administrative group" tree from adsiedit and see if that gets rid of the ridiculous reference that no longer exists, but I don't want to break something else.
Commands run to try to create a new GAL and tell exch10 to use that GAL: New-globaladdresslist –name NAMEOFNEWGAL Set-globaladdresslist GUID –name NAMEOFNEWGAL This did nothing for me except now when I run get-globaladdresslist or with the | FL pipe I see two GALs listed, the “default global address list” and the “NAMEOFNEWGAL” that I created. After a little more research this morning it looks like you can't change/delete/remove the default address list, and the only way to do what I'm trying to do would be to maybe remove the default address list via adsiedit and recreate with a command something like new-GlobalAddressList -Name "Default Global Address List" -IncludedRecipients AllRecipients. This would be acceptable but I've searched and searched and can't find instructions or a breakdown of where exactly the default GAL lives in AD, and if I'd have to remove multiple child references/records. ** Of interest** I'm getting an event ID 9337 in my application log OALGen did not find any recipients in address list ‘\Global Address List. This offline address list will not be generated. -\NAMEOFMYOAB --------- on my Exchange 2010 box, which pretty much to me seems to confirm my suspicion that the empty GAL/OAB is what's causing the Outlook client 0x8004010F error. Help please!

    Read the article

  • after enabling mod ssl apache stops listening on port 80

    - by zensys
    I have an ubuntu 12.04 server with zend server CE installed. I now wanted to enable https but after the first steps according to the documentation, 'a2enmod ssl' and 'apache service restart', apache does not listen on 443 but neither on 80, according to netstat -tap | grep http(s)! This is what I see in my error log, but I can't make much of it: [Fri May 25 19:52:39 2012] [notice] caught SIGTERM, shutting down [Fri May 25 19:52:41 2012] [warn] Init: Session Cache is not configured [hint: SSLSessionCache] [Fri May 25 19:52:41 2012] [notice] ModSecurity for Apache/2.6.3 (http://www.modsecurity.org/) configured. [Fri May 25 19:52:41 2012] [notice] ModSecurity: APR compiled version="1.4.5"; loaded version="1.4.6" [Fri May 25 19:52:41 2012] [warn] ModSecurity: Loaded APR do not match with compiled! [Fri May 25 19:52:41 2012] [notice] ModSecurity: PCRE compiled version="8.12"; loaded version="8.12 2011-01-15" [Fri May 25 19:52:41 2012] [notice] ModSecurity: LUA compiled version="Lua 5.1" [Fri May 25 19:52:41 2012] [notice] ModSecurity: LIBXML compiled version="2.7.8" [Fri May 25 19:53:11 2012] [notice] ModSecurity for Apache/2.6.3 (http://www.modsecurity.org/) configured. [Fri May 25 19:53:11 2012] [notice] ModSecurity: APR compiled version="1.4.5"; loaded version="1.4.6" [Fri May 25 19:53:11 2012] [warn] ModSecurity: Loaded APR do not match with compiled! [Fri May 25 19:53:11 2012] [notice] ModSecurity: PCRE compiled version="8.12"; loaded version="8.12 2011-01-15" [Fri May 25 19:53:11 2012] [notice] ModSecurity: LUA compiled version="Lua 5.1" [Fri May 25 19:53:11 2012] [notice] ModSecurity: LIBXML compiled version="2.7.8" [Fri May 25 19:53:12 2012] [notice] Apache/2.2.22 (Ubuntu) PHP/5.3.8-ZS5.5.0 configured -- resuming normal operations and here is my httpd.conf: # Name based virtual hosting <virtualhost *:80> ServerName www-redirect KeepAlive Off RewriteEngine On RewriteCond %{HTTP_HOST} ^[^\./]+\.[^\./]+$ RewriteRule ^/(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L] </virtualhost> Alias /shared/js "/home/web/library/js" Alias /shared/image "/home/web/library/image" <IfModule mod_expires.c> <FilesMatch "\.(jpe?g|png|gif|js|css|doc|rtf|xls|pdf)$"> ExpiresActive On ExpiresDefault "access plus 1 week" </FilesMatch> </IfModule> ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel warn <Directory /> Options FollowSymLinks AllowOverride None Order allow,deny allow from all </Directory> <Location /> RewriteEngine On RewriteCond %{REQUEST_FILENAME} -s [OR] RewriteCond %{REQUEST_FILENAME} -l [OR] RewriteCond %{REQUEST_FILENAME} -d RewriteRule ^.*$ - [NC,L] RewriteRule ^.*$ /index.php [NC,L] </Location> netstat -tap gives: Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 *:mysql *:* LISTEN 765/mysqld tcp 0 0 *:pop3 *:* LISTEN 744/dovecot tcp 0 0 *:imap2 *:* LISTEN 744/dovecot tcp 0 0 *:http *:* LISTEN 19861/apache2 tcp 0 0 *:smtp *:* LISTEN 30365/master tcp 0 0 *:4444 *:* LISTEN 634/sshd tcp 0 0 *:kamanda *:* LISTEN 1167/lighttpd tcp 0 0 *:imaps *:* LISTEN 744/dovecot tcp 0 0 *:amandaidx *:* LISTEN 1167/lighttpd tcp 0 0 localhost.loc:amidxtape *:* LISTEN 19861/apache2 tcp 0 0 *:pop3s *:* LISTEN 744/dovecot tcp 0 384 mail.mysite.:4444 231.214.14.37.dyn:41909 ESTABLISHED 19039/sshd: web [pr tcp 0 0 localhost.localdo:mysql localhost.localdo:48252 ESTABLISHED 765/mysqld tcp 0 0 mail.mysite.:http 231.214.14.37.dyn:54686 TIME_WAIT - tcp 0 0 mail.mysite.:4444 231.214.14.37.dyn:42419 ESTABLISHED 19372/sshd: web [pr tcp 0 0 localhost.localdo:48252 localhost.localdo:mysql ESTABLISHED 
19884/auth tcp 0 0 mail.mysite.:http 231.214.14.37.dyn:54685 TIME_WAIT - tcp6 0 0 [::]:pop3 [::]:* LISTEN 744/dovecot tcp6 0 0 [::]:imap2 [::]:* LISTEN 744/dovecot tc6 0 0 [::]:smtp [::]:* LISTEN 30365/master tcp6 0 0 [::]:4444 [::]:* LISTEN 634/sshd tcp6 0 0 [::]:imaps [::]:* LISTEN 744/dovecot tcp6 0 0 [::]:pop3s [::]:* LISTEN 744/dovecot Does anyone know what I am doing wrong? Perhaps I need to take some additional steps to make Apache listen on 443, but I can't understand why it stops listening on 80 altogether.
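A few checks worth running before digging deeper (the paths below assume the stock Ubuntu/Debian Apache layout; Zend Server CE sometimes runs its own Apache build under /usr/local/zend, in which case adjust accordingly). A failed config test after a2enmod ssl aborts the whole restart, leaving nothing bound on either port, and 443 is only bound if a Listen 443 directive and an enabled SSL vhost both exist:
    apache2ctl configtest                                    # a syntax error here aborts startup entirely
    apache2ctl -S                                            # which vhosts/ports Apache thinks it serves
    grep -Rn '^[[:space:]]*Listen' /etc/apache2/ports.conf   # should contain Listen 80 and Listen 443
    ls /etc/apache2/sites-enabled/                           # is any SSL vhost (e.g. default-ssl) enabled?
    sudo netstat -tlnp | grep apache2                        # what is actually bound after a restart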

    Read the article

  • How to serve static files for multiple Django projects via nginx to same domain

    - by thanley
    I am trying to setup my nginx conf so that I can serve the relevant files for my multiple Django projects. Ultimately I want each app to be available at www.example.com/app1, www.example.com/app2 etc. They all serve static files from a 'static-files' directory located in their respective project root. The project structure: Home Ubuntu Web www.example.com ref logs app app1 app1 static bower_components templatetags app1_project templates static-files app2 app2 static templates templatetags app2_project static-files app3 tests templates static-files static app3_project app3 venv When I use the conf below, there are no problems for serving the static-files for the app that I designate in the /static/ location. I can also access the different apps found at their locations. However, I cannot figure out how to serve all of the static files for all the apps at the same time. I have looked into using the 'try_files' command for the static location, but cannot figure out how to see if it is working or not. Nginx Conf - Only serving static files for one app: server { listen 80; server_name example.com; server_name www.example.com; access_log /home/ubuntu/web/www.example.com/logs/access.log; error_log /home/ubuntu/web/www.example.com/logs/error.log; root /home/ubuntu/web/www.example.com/; location /static/ { alias /home/ubuntu/web/www.example.com/app/app1/static-files/; } location /media/ { alias /home/ubuntu/web/www.example.com/media/; } location /app1/ { include uwsgi_params; uwsgi_param SCRIPT_NAME /app1; uwsgi_modifier1 30; uwsgi_pass unix:///home/ubuntu/web/www.example.com/app1.sock; } location /app2/ { include uwsgi_params; uwsgi_param SCRIPT_NAME /app2; uwsgi_modifier1 30; uwsgi_pass unix:///home/ubuntu/web/www.example.com/app2.sock; } location /app3/ { include uwsgi_params; uwsgi_param SCRIPT_NAME /app3; uwsgi_modifier1 30; uwsgi_pass unix:///home/ubuntu/web/www.example.com/app3.sock; } # what to serve if upstream is not available or crashes error_page 400 /static/400.html; error_page 403 /static/403.html; error_page 404 /static/404.html; error_page 500 502 503 504 /static/500.html; # Compression gzip on; gzip_http_version 1.0; gzip_comp_level 5; gzip_proxied any; gzip_min_length 1100; gzip_buffers 16 8k; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; # Some version of IE 6 don't handle compression well on some mime-types, # so just disable for them gzip_disable "MSIE [1-6].(?!.*SV1)"; # Set a vary header so downstream proxies don't send cached gzipped # content to IE6 gzip_vary on; } Essentially I want to have something like (I know this won't work) location /static/ { alias /home/ubuntu/web/www.example.com/app/app1/static-files/; alias /home/ubuntu/web/www.example.com/app/app2/static-files/; alias /home/ubuntu/web/www.example.com/app/app3/static-files/; } or (where it can serve the static files based on the uri) location /static/ { try_files $uri $uri/ =404; } So basically, if I use try_files like above, is the problem in my project directory structure? Or am I totally off base on this and I need to put each app in a subdomain instead of going this route? Thanks for any suggestions TLDR: I want to go to: www.example.com/APP_NAME_HERE And have nginx serve the static location: /home/ubuntu/web/www.example.com/app/APP_NAME_HERE/static-files/;
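One possible approach (an assumption about the setup, not the only answer): keep a single /static/ alias, but point it at a directory of per-app symlinks so each project's static-files is reachable under its own prefix such as /static/app1/. The sketch below only sets up the filesystem side; the matching STATIC_URL change in each Django settings.py is assumed, not shown.
    # Hypothetical shared static root built from symlinks to each app's static-files.
    STATIC_ROOT=/home/ubuntu/web/www.example.com/static_root
    mkdir -p "$STATIC_ROOT"
    for app in app1 app2 app3; do
        ln -sfn "/home/ubuntu/web/www.example.com/app/$app/static-files" "$STATIC_ROOT/$app"
    done
    # nginx would then serve everything through one block:
    #   location /static/ { alias /home/ubuntu/web/www.example.com/static_root/; }
    # and each app would use STATIC_URL = '/static/app1/' (and so on) -- both of those
    # lines are assumptions about your configuration, not verified config.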

    Read the article

  • Have I pushed the limits of my current VPS or is there room for optimization?

    - by JRameau
    I am currently on a MediaTemple DV server (basic) with 512 MB dedicated RAM; this is a CentOS-based VPS with Plesk and Virtuozzo. My experience with it from day 1 has been bad and I could only soothe my server issues with several caching "Band-aids," but my sites are not as small as they were a year ago either, so the issues have worsened. I have 3 Drupal installs running on separate (Plesk) domains; 1 of those Drupal installs is a multisite that consists of 5-6 sites, and 2 of those sites are bringing in actual traffic. Those caching "Band-aids" I mentioned are APC, which seemed to help a lot initially, and Drupal's Boost, which is considered a poor man's Varnish; it makes all my pages static for anonymous users. Last 30-day combined estimate on Google Analytics: 90k visitors, 260k pageviews. Issue: a lot of downtime. I am continually checking if my sites are up, and lately I have been finding them down more than 3 times daily. Restarting Apache will bring them back up, for a while. I have Google-searched every error message and looked up ways to optimize my DV server, and I am beyond stumped as to what my next move should be. Is this server bad, have I hit an impossibly low restriction such as the 12 MB kernel memory barrier (kmemsize), is it on my end, do I need to optimize some more? I have provided as much information as I can below; any help or suggestions will be appreciated. Common error messages I see in the log: [error] (12)Cannot allocate memory: fork: Unable to fork new process [error] make_obcallback: could not import mod_python.apache.\n Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/mod_python/apache.py", line 21, in ? import traceback File "/usr/lib/python2.4/traceback.py", line 3, in ? import linecache ImportError: No module named linecache [error] python_handler: no interpreter callback found. [warn-phpd] mmap cache can't open /var/www/vhosts/***/httpdocs/*** - Too many open files in system (pid ***) [alert] Child 8125 returned a Fatal error... Apache is exiting!
[emerg] (43)Identifier removed: couldn't grab the accept mutex [emerg] (22)Invalid argument: couldn't release the accept mutex cat /proc/user_beancounters: Version: 2.5 uid resource held maxheld barrier limit failcnt 41548: kmemsize 4582652 5306699 12288832 13517715 21105036 lockedpages 0 0 600 600 0 privvmpages 38151 42676 229036 249036 0 shmpages 16274 16274 17237 17237 2 dummy 0 0 0 0 0 numproc 43 46 300 300 0 physpages 27260 29528 0 2147483647 0 vmguarpages 0 0 131072 2147483647 0 oomguarpages 27270 29538 131072 2147483647 0 numtcpsock 21 29 300 300 0 numflock 8 8 480 528 0 numpty 1 1 30 30 0 numsiginfo 0 1 1024 1024 0 tcpsndbuf 648440 675272 2867477 4096277 1711499 tcprcvbuf 301620 359716 2867477 4096277 0 othersockbuf 4472 4472 1433738 2662538 0 dgramrcvbuf 0 0 1433738 1433738 0 numothersock 12 12 300 300 0 dcachesize 0 0 2684271 2764800 0 numfile 3447 3496 6300 6300 3872 dummy 0 0 0 0 0 dummy 0 0 0 0 0 dummy 0 0 0 0 0 numiptent 14 14 200 200 0 TOP: (In January the load avg was really high 3-10, I was able to bring it down where it is currently is by giving APC more memory play around with) top - 16:46:07 up 2:13, 1 user, load average: 0.34, 0.20, 0.20 Tasks: 40 total, 2 running, 37 sleeping, 0 stopped, 1 zombie Cpu(s): 0.3% us, 0.1% sy, 0.0% ni, 99.7% id, 0.0% wa, 0.0% hi, 0.0% si Mem: 916144k total, 156668k used, 759476k free, 0k buffers Swap: 0k total, 0k used, 0k free, 0k cached MySQLTuner: (after optimizing every table and repairing any table with overage I got the fragmented count down to 86) [--] Data in MyISAM tables: 285M (Tables: 1105) [!!] Total fragmented tables: 86 [--] Up for: 2h 44m 38s (409K q [41.421 qps], 6K conn, TX: 1B, RX: 174M) [--] Reads / Writes: 79% / 21% [--] Total buffers: 58.0M global + 2.7M per thread (100 max threads) [!!] Query cache prunes per day: 675307 [!!] Temporary tables created on disk: 35% (7K on disk / 20K total)
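The failcnt column in /proc/user_beancounters is the quickest way to tell whether you are hitting Virtuozzo container limits rather than a pure tuning problem, and the non-zero kmemsize, shmpages, tcpsndbuf and numfile counters above suggest you are. A small sketch to watch it, plus a rough per-process Apache memory figure to sanity-check MaxClients against 512 MB (the httpd process name is the CentOS default and an assumption here):
    # Resources whose limits have actually been hit (failcnt > 0) inside the container.
    awk 'NR > 2 && $NF + 0 > 0 { print $(NF-5), "failcnt=" $NF }' /proc/user_beancounters
    # Average resident memory per Apache child, to estimate a safe MaxClients value.
    ps -o rss= -C httpd | awk '{ sum += $1; n++ } END { if (n) printf "%.1f MB avg over %d procs\n", sum/n/1024, n }'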

    Read the article

  • Blocking 'good' bots in nginx with multiple conditions for certain off-limits URL's where humans can go

    - by Glenn Plas
    After 2 days of searching/trying/failing I decided to post this here, I haven't found any example of someone doing the same nor what I tried seems to be working OK. I'm trying to send a 403 to bots not respecting the robots.txt file (even after downloading it several times). Specifically Googlebot. It will support the following robots.txt definition. User-agent: * Disallow: /*/*/page/ The intent is to allow Google to browse whatever they can find on the site but return a 403 for the following type of request. Googlebot seems to keep on nesting these links eternally adding paging block after block: my_domain.com:80 - 66.x.67.x - - [25/Apr/2012:11:13:54 +0200] "GET /2011/06/ page/3/?/page/2//page/3//page/2//page/3//page/2//page/2//page/4//page/4//pag e/1/&wpmp_switcher=desktop HTTP/1.1" 403 135 "-" "Mozilla/5.0 (compatible; G ooglebot/2.1; +http://www.google.com/bot.html)" It's a wordpress site btw. I don't want those pages to show up, even though after the robots.txt info got through, they stopped for a while only to begin crawling again later. It just never stops .... I do want real people to see this. As you can see, google get a 403 but when I try this myself in a browser I get a 404 back. I want browsers to pass. root@my_domain:# nginx -V nginx version: nginx/1.2.0 I tried different approaches, using a map and plain old nono if's and they both act the same: (under http section) map $http_user_agent $is_bot { default 0; ~crawl|Googlebot|Slurp|spider|bingbot|tracker|click|parser|spider 1; } (under the server section) location ~ /(\d+)/(\d+)/page/ { if ($is_bot) { return 403; # Please respect the robots.txt file ! } } I recently had to polish up my Apache skills for a client where I did about the same thing like this : # Block real Engines , not respecting robots.txt but allowing correct calls to pass # Google RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ Googlebot/2\.[01];\ \+http://www\.google\.com/bot\.html\)$ [NC,OR] # Bing RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ bingbot/2\.[01];\ \+http://www\.bing\.com/bingbot\.htm\)$ [NC,OR] # msnbot RewriteCond %{HTTP_USER_AGENT} ^msnbot-media/1\.[01]\ \(\+http://search\.msn\.com/msnbot\.htm\)$ [NC,OR] # Slurp RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ Yahoo!\ Slurp;\ http://help\.yahoo\.com/help/us/ysearch/slurp\)$ [NC] # block all page searches, the rest may pass RewriteCond %{REQUEST_URI} ^(/[0-9]{4}/[0-9]{2}/page/) [OR] # or with the wpmp_switcher=mobile parameter set RewriteCond %{QUERY_STRING} wpmp_switcher=mobile # ISSUE 403 / SERVE ERRORDOCUMENT RewriteRule .* - [F,L] # End if match This does a bit more than I asked nginx to do but it's about the same principle, I'm having a hard time figuring this out for nginx. So my question would be, why would nginx serve my browser a 404 ? Why isn't it passing, The regex isn't matching for my UA: "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.30 Safari/536.5" There are tons of example to block based on UA alone, and that's easy. It also looks like the matchin location is final, e.g. it's not 'falling' through for regular user, I'm pretty certain that this has some correlation with the 404 I get in the browser. As a cherry on top of things, I also want google to disregard the parameter wpmp_switcher=mobile , wpmp_switcher=desktop is fine but I just don't want the same content being crawled multiple times. 
Even though I ended up adding wpmp_switcher=mobile via the Google Webmaster Tools pages (which required me to sign up), that also only stopped things for a while, and today they are back spidering the mobile sections. So in short, I need to find a way for nginx to enforce the robots.txt definitions. Can someone shell out a few minutes of their lives and push me in the right direction, please? I really appreciate ANY response that makes me think harder ;-)
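For what it's worth, the 404 for browsers is most likely not about the user-agent check at all: once the regex location ~ /(\d+)/(\d+)/page/ matches, it handles the whole request, and since it contains no try_files/rewrite/fastcgi directives for the non-bot case, nginx simply looks for that literal path on disk and returns 404 instead of falling through to the generic WordPress front controller. Duplicating that fallback inside the regex location (or doing the bot check higher up) should let humans through. A quick way to verify the behaviour from the shell, with a placeholder URL:
    # Compare the status code returned to Googlebot's UA vs. a browser UA.
    URL='http://my_domain.com/2011/06/page/3/'   # placeholder -- substitute a real paged URL
    curl -s -o /dev/null -w 'bot UA:     %{http_code}\n' \
         -A 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)' "$URL"
    curl -s -o /dev/null -w 'browser UA: %{http_code}\n' \
         -A 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.30 Safari/536.5' "$URL"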

    Read the article

  • Multiple Homed Windows 2008 Server / Windows 7 Client

    - by Daniel Scott
    I have a small Windows 2008 network, with some Windows 7 clients. The clients are both laptops with docking stations and I would like them to communicate with the Windows 2008 server (for filesharing) through the wired network whilst they're docked. Internet connectivity for all machines (clients and server) is via a Wireless LAN, so the wireless adapter in the Windows 7 clients stays active while they're docked. When the laptops are un-docked, it would be nice to still be able to contact the windows 2008 server for print sharing (and slower file sharing) - hence the server also being on the wireless LAN. The windows 2008 server is running Active Directory, DHCP and DNS. It controls DHCP leases on the wired network and holds the DNS records for "myserver.mycompany.local", which is what the filesharing clients connect to. Ideally I'd like the DNS records to return the wired IP first so that this is the address that the laptops will attempt initially - but there doesn't seem to be a way to do that? At present the server's IP on the wireless LAN comes out of an nslookup above the wired Lan IP. The multi-homing works perfectly - but in the wrong order! Switch on the wireless lan and ping myserver and it goes to the wireless IP. Disable the wireless on the client and do the same ping again and after a couple of seconds it starts pinging the wired address. Does anyone have any suggestions on how to make this work in a predictable order? - or even if it can work. Alternative 1? If it can't work, then would this work: Remove the wireless adapter from the server, put a wireless router/bridge on the wired network (set up to route to/from the wireless LAN's subnet), then configure the clients with two routes to the (now) single IP of the server with metrics favouring direct communication over the wired LAN first? Alternative 2? Should I instead single-home the laptops so all of their connectivity is via the wired-LAN while they're docked? (and route via the windows 2008 server - or a dedicated wireless bridge/router)? My concern here is that I'd like undocking to be seamless - and if the clients are in the middle of downloading something from the internet I wouldn't want whatever they're doing interupted as they switch IP addresses onto the Wireless network. Perhaps this isn't the case and I'm concerned over nothing? Any thoughts? :) UPDATE I seem to have cracked it (at least DNS entries come out in the order I hope for - and pinging the server with various combinations of wired, wireless and both interfaces enabled uses the IP I want) ... I set the binding order of the NICs on the Server (which is acting as Domain Controller, DHCP and DNS server) so that the Wired NIC is before the Wireless adapter. (Start -- type "Network Interfaces" -- Select "View Network Connections" -- Press Alt to show classic dropdown menus -- Advanced -- Advanced Settings) Now, an nslookup (from the client) of the server's hostname returns the Wired IP first, followed by the Wireless IP. The wired IP now seems to be used whenever it's contactable. Incidentally, the metrics on the wired and wireless routes (on the client) also favour the wired LAN (based on Windows' automatically assigned metrics) - but this was always the case, even when I was having trouble getting the wired IP to be "favoured". I'm not entirely sure if this is coincidence - or if a DNS server running on Windows, handing back IP addresses for itself does actually take the binding order of it's own network interfaces into account? 
It would be interesting to hear from someone who can confirm or deny that (or confirm that the binding order on the server plays a role for some other reason?)

    Read the article

  • Vagrant (Virtualbox) host-only multiple node networking issue

    - by Lorin Hochstein
    I'm trying to use a multi-VM vagrant environment as a testbed for deploying OpenStack, and I've run into a networking problem with trying to communicate from one VM, to a VM-inside-of-a-VM. I have two Vagrant nodes, a cloud controller node and a compute node. I'm using host-only networking. My Vagrantfile looks like this: Vagrant::Config.run do |config| config.vm.box = "precise64" config.vm.define :controller do |controller_config| controller_config.vm.network :hostonly, "192.168.206.130" # eth1 controller_config.vm.network :hostonly, "192.168.100.130" # eth2 controller_config.vm.host_name = "controller" end config.vm.define :compute1 do |compute1_config| compute1_config.vm.network :hostonly, "192.168.206.131" # eth1 compute1_config.vm.network :hostonly, "192.168.100.131" # eth2 compute1_config.vm.host_name = "compute1" compute1_config.vm.customize ["modifyvm", :id, "--memory", 1024] end end When I try to start up a (QEMU-based) VM, it boots successfully on compute1, and its virtual nic (vnet0) is connected via a bridge, br100: root@compute1:~# brctl show 100 bridge name bridge id STP enabled interfaces br100 8000.08002798c6ef no eth2 vnet0 When the QEMU VM makes a request to the DHCP server (dnsmasq) running on controller, I can see the request reaches the controller because of the output on the syslog on the controller: Aug 6 02:34:56 precise64 dnsmasq-dhcp[12042]: DHCPDISCOVER(br100) fa:16:3e:07:98:11 Aug 6 02:34:56 precise64 dnsmasq-dhcp[12042]: DHCPOFFER(br100) 192.168.100.2 fa:16:3e:07:98:11 However, the DHCPOFFER never makes it back to the VM running on compute1. If I watch the requests using tcpdump on the vboxnet3 interface on my host machine that runs Vagrant (Mac OS X), I can see both the requests and the replies $ sudo tcpdump -i vboxnet3 -n port 67 or port 68 tcpdump: WARNING: vboxnet3: That device doesn't support promiscuous mode (BIOCPROMISC: Operation not supported on socket) tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on vboxnet3, link-type EN10MB (Ethernet), capture size 65535 bytes 22:51:20.694040 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280 22:51:20.694057 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280 22:51:20.696047 IP 192.168.100.1.67 > 192.168.100.2.68: BOOTP/DHCP, Reply, length 311 22:51:23.700845 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280 22:51:23.700876 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280 22:51:23.701591 IP 192.168.100.1.67 > 192.168.100.2.68: BOOTP/DHCP, Reply, length 311 22:51:26.705978 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280 22:51:26.705995 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280 22:51:26.706527 IP 192.168.100.1.67 > 192.168.100.2.68: BOOTP/DHCP, Reply, length 311 But, if I tcpdump on eth2 on compute, I only see the requests, not the replies: root@compute1:~# tcpdump -i eth2 -n port 67 or port 68 tcpdump: WARNING: eth2: no IPv4 address assigned tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth2, link-type EN10MB (Ethernet), capture size 65535 bytes 02:51:20.240672 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280 02:51:23.249758 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280 02:51:26.258281 IP 
0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280 At this point, I'm stuck. I'm not sure why the DHCP replies aren't making it to the compute node. Perhaps it has something to do with the configuration of the VirtualBox virtual switch/router? Note that eth2 interfaces on both nodes have been set to promiscuous mode.
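One educated guess worth testing, since it matches the symptom of replies being visible on vboxnet3 but never on eth2: VirtualBox host-only adapters normally drop frames addressed to MAC addresses they don't own, and the DHCPOFFER here is addressed to the nested QEMU guest's MAC (fa:16:3e:...), not to compute1's own adapter. Allowing promiscuous mode at the VirtualBox level (which is separate from setting promisc inside the guest with ifconfig) lets those frames through. This assumes eth2 is VirtualBox adapter 3, with adapter 1 being Vagrant's NAT interface:
    # On the host running VirtualBox/Vagrant; take the VM name/UUID from the list command.
    VBoxManage list runningvms
    VBoxManage controlvm <compute1-vm-uuid> nicpromisc3 allow-all    # applies to the running VM
    # or persistently, with the VM powered off:
    # VBoxManage modifyvm <compute1-vm-uuid> --nicpromisc3 allow-all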

    Read the article

  • Unable to connect to OpenVPN server

    - by Incognito
    I'm trying to get a working setup of OpenVPN on my VM and authenticate into it from a client. I'm not sure but it looks to me like it's socket related, as it's not set to LISTEN, and localhost seems wrong. I've never set up VPN before. # netstat -tulpn | grep vpn Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name udp 0 0 127.0.0.1:1194 0.0.0.0:* 24059/openvpn I don't think this is set up correctly. Here's some detail into what I've done. I have a VPS from MediaTemple: These are my interfaces before starting openvpn: lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:39482 errors:0 dropped:0 overruns:0 frame:0 TX packets:39482 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:3237452 (3.2 MB) TX bytes:3237452 (3.2 MB) venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:127.0.0.1 P-t-P:127.0.0.1 Bcast:0.0.0.0 Mask:255.255.255.255 UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1 RX packets:4885284 errors:0 dropped:0 overruns:0 frame:0 TX packets:4679884 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:835278537 (835.2 MB) TX bytes:1989289617 (1.9 GB) venet0:0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:205.[redacted] P-t-P:205.186.148.82 Bcast:0.0.0.0 Mask:255.255.255.255 UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1 I've followed this guide on setting up a basic server and getting a .p12 file, however, I was receiving an error that stated /dev/net/tun was missing, so I created it mkdir -p /dev/net mknod /dev/net/tun c 10 200 chmod 600 /dev/net/tun This resolved the error preventing the service from launching, however, I am unable to connect. On the server I've set up the myserver.conf file (as per the tutorial) to indicate local 127.0.0.1 (I've also attempted with the public IP address, perhaps I don't understand what they mean by local IP?). The server launches without error, this is what the log looks like when it starts: Sun Apr 1 17:21:27 2012 OpenVPN 2.1.3 x86_64-pc-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [MH] [PF_INET6] [eurephia] built on Mar 11 2011 Sun Apr 1 17:21:27 2012 IMPORTANT: OpenVPN's default port number is now 1194, based on an official port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as the default port. Sun Apr 1 17:21:27 2012 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts Sun Apr 1 17:21:27 2012 /usr/bin/openssl-vulnkey -q -b 1024 -m <modulus omitted> Sun Apr 1 17:21:27 2012 TUN/TAP device tun0 opened Sun Apr 1 17:21:27 2012 /sbin/ifconfig tun0 10.8.0.1 pointopoint 10.8.0.2 mtu 1500 Sun Apr 1 17:21:27 2012 GID set to openvpn Sun Apr 1 17:21:27 2012 UID set to openvpn Sun Apr 1 17:21:27 2012 UDPv4 link local (bound): [AF_INET]127.0.0.1:1194 Sun Apr 1 17:21:27 2012 UDPv4 link remote: [undef] Sun Apr 1 17:21:27 2012 Initialization Sequence Completed This creates a tun0 interface that looks like this: tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:10.8.0.1 P-t-P:10.8.0.2 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) And the netstat command still indicates the state is not set to LISTEN. 
On the client side I've installed the .p12 certs onto two devices (one is an Android tablet, the other is an Ubuntu desktop). I don't see port 1194 as open either. Both clients install the cert files and then ask me for the L2TP secret (which was set in the file), but then they oddly ask me for a username and a password, and I have no idea where those would come from. I tried all of my existing logins and some wild guesses, grasping at straws. If there's any more information I can provide, let me know.
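Two things stand out in the output above. First, UDP sockets never show LISTEN in netstat, so that part is normal; what matters is that the socket is bound to 127.0.0.1:1194, which means nothing outside the VPS can ever reach it, and that points at the 'local' line in the server config. Second, clients prompting for an L2TP secret and then a username/password sounds like they are being set up as L2TP/IPsec connections rather than OpenVPN ones, which this server will never answer; an OpenVPN-capable client would be needed instead. A sketch of the server-side check, assuming the config lives at /etc/openvpn/myserver.conf and a Debian-style init script:
    grep -n '^local' /etc/openvpn/myserver.conf   # binding to 127.0.0.1 here explains the netstat line
    # after removing that line (or pointing it at the venet0:0 address) and restarting:
    sudo /etc/init.d/openvpn restart
    sudo netstat -ulnp | grep 1194                # should now show 0.0.0.0:1194 or the public address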

    Read the article

  • How to debug solve 500 Internal error aws micro ec2 with suexec, Apache and php CGi

    - by Oudin
    I'm running WordPress multi-site on an amazon micro ec2 with suexec, Apache and php CGi On Ubuntu 12.04 However I've been experiencing a lot of Internal server 500 errors and I'm in the process of debugging it to find a solution. I've posted my error logs below example.com error.log: [Fri Oct 26 10:10:08 2012] [warn] [client 23.23.xxx.xx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server [Fri Oct 26 10:10:08 2012] [error] [client 23.23.xxx.xx] Premature end of script headers: wp-cron.php [Fri Oct 26 10:50:04 2012] [warn] [client 190.213.xxx.xxx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server, referer: https://www.example.com/wp-admin/ [Fri Oct 26 10:50:04 2012] [error] [client 190.213.xxx.xxx] Premature end of script headers: admin.php, referer: https://www.example.com/wp-admin/ [Fri Oct 26 10:58:14 2012] [warn] [client 190.213.xxx.xxx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server, referer: https://www.example.com/wp-admin/network/index.php [Fri Oct 26 10:58:15 2012] [error] [client 190.213.xxx.xxx] Premature end of script headers: admin-ajax.php, referer: https://www.example.com/wp-admin/network/index.php [Fri Oct 26 10:58:56 2012] [warn] [client 190.213.xxx.xxx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server, referer: https://www.example.com/wp-admin/network/index.php [Fri Oct 26 10:58:57 2012] [error] [client 190.213.xxx.xxx] Premature end of script headers: plugins.php, referer: https://www.example.com/wp-admin/network/index.php [Fri Oct 26 10:59:18 2012] [warn] [client 190.213.xxx.xxx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server, referer: https://www.example.com/wp-admin/network/index.php [Fri Oct 26 10:59:18 2012] [error] [client 190.213.xxx.xxx] Premature end of script headers: admin-ajax.php, referer: https://www.example.com/wp-admin/network/index.php [Fri Oct 26 11:01:49 2012] [warn] [client 190.213.xxx.xxx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server, referer: https://www.example.com/wp-admin/ [Fri Oct 26 11:01:49 2012] [warn] [client 190.213.xxx.xxx] (104)Connection reset by peer: mod_fcgid: ap_pass_brigade failed in handle_request_ipc function, referer: https://www.example.com/wp-admin/ Apache Log: php (pre-forking): Cannot allocate memory php (pre-forking): Cannot allocate memory Recipient names must be specified Recipient names must be specified php (pre-forking): Cannot allocate memory php (pre-forking): Cannot allocate memory php (pre-forking): Cannot allocate memory [Fri Oct 26 10:49:33 2012] [warn] mod_fcgid: cleanup zombie process 2852 [Fri Oct 26 10:49:33 2012] [warn] mod_fcgid: cleanup zombie process 2851 [Fri Oct 26 10:49:33 2012] [warn] mod_fcgid: cleanup zombie process 2853 [Fri Oct 26 10:58:22 2012] [warn] mod_fcgid: process 2892 graceful kill fail, sending SIGKILL php (pre-forking): Cannot allocate memory [Fri Oct 26 10:59:21 2012] [warn] mod_fcgid: process 2894 graceful kill fail, sending SIGKILL [Fri Oct 26 10:59:25 2012] [warn] mod_fcgid: process 2866 graceful kill fail, sending SIGKILL suexec.log: [2012-10-25 16:05:36]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-25 18:09:38]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-25 18:09:51]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-25 18:14:03]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-25 18:14:06]: uid: (1002/username) 
gid: (1002/username) cmd: php-fcgi [2012-10-25 18:14:35]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-25 20:20:27]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-25 20:20:29]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-25 20:20:31]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-25 21:42:12]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-25 22:56:50]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-26 02:34:43]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-26 04:25:07]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-26 06:35:19]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-26 06:40:05]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-26 07:22:45]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-26 10:10:05]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-26 10:49:24]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi [2012-10-26 10:49:24]: uid: (1002/username) gid: (1002/username) cmd: php-fcgi Based on the logs, can anyone determine what might be the cause of this? Suspecting that it might be the micro instance itself, I'm considering upgrading to a small instance. Any help would be greatly appreciated.
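Those "php (pre-forking): Cannot allocate memory" lines in the Apache log point at plain memory exhaustion, which a micro instance (roughly 600 MB, and typically no swap at all on an EBS-backed Ubuntu image) hits quickly under mod_fcgid. Before resizing, a swap file plus a cap on FastCGI children often stops the intermittent 500s. A minimal sketch follows; the swap file path and the mod_fcgid values are guesses for this instance type, not verified settings:
    sudo dd if=/dev/zero of=/var/swapfile bs=1M count=1024
    sudo chmod 600 /var/swapfile
    sudo mkswap /var/swapfile
    sudo swapon /var/swapfile
    echo '/var/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab   # persist across reboots
    free -m                                                          # the Swap row should now be non-zero
    # In the mod_fcgid configuration (directive names from mod_fcgid; values are guesses):
    #   FcgidMaxProcesses 3
    #   FcgidMaxProcessesPerClass 2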

    Read the article

  • Making sense of S.M.A.R.T

    - by James
    First of all, I think everyone knows that hard drives fail a lot more than the manufacturers would like to admit. Google did a study that indicates that certain raw data attributes that the S.M.A.R.T status of hard drives reports can have a strong correlation with the future failure of the drive. We find, for example, that after their first scan error, drives are 39 times more likely to fail within 60 days than drives with no such errors. First errors in re- allocations, offline reallocations, and probational counts are also strongly correlated to higher failure probabil- ities. Despite those strong correlations, we find that failure prediction models based on SMART parameters alone are likely to be severely limited in their prediction accuracy, given that a large fraction of our failed drives have shown no SMART error signals whatsoever. Seagate seems like it is trying to obscure this information about their drives by claiming that only their software can accurately determine the accurate status of their drive and by the way their software will not tell you the raw data values for the S.M.A.R.T attributes. Western digital has made no such claim to my knowledge but their status reporting tool does not appear to report raw data values either. I've been using HDtune and smartctl from smartmontools in order to gather the raw data values for each attribute. I've found that indeed... I am comparing apples to oranges when it comes to certain attributes. I've found for example that most Seagate drives will report that they have many millions of read errors while western digital 99% of the time shows 0 for read errors. I've also found that Seagate will report many millions of seek errors while Western Digital always seems to report 0. Now for my question. How do I normalize this data? Is Seagate producing millions of errors while Western digital is producing none? Wikipedia's article on S.M.A.R.T status says that manufacturers have different ways of reporting this data. Here is my hypothesis: I think I found a way to normalize (is that the right term?) the data. Seagate drives have an additional attribute that Western Digital drives do not have (Hardware ECC Recovered). When you subtract the Read error count from the ECC Recovered count, you'll probably end up with 0. This seems to be equivalent to Western Digitals reported "Read Error" count. This means that Western Digital only reports read errors that it cannot correct while Seagate counts up all read errors and tells you how many of those it was able to fix. I had a Seagate drive where the ECC Recovered count was less than the Read error count and I noticed that many of my files were becoming corrupt. This is how I came up with my hypothesis. The millions of seek errors that Seagate produces are still a mystery to me. Please confirm or correct my hypothesis if you have additional information. 
Here is the smart status of my western digital drive just so you can see what I'm talking about: james@ubuntu:~$ sudo smartctl -a /dev/sda smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen Home page is http://smartmontools.sourceforge.net/ === START OF INFORMATION SECTION === Device Model: WDC WD1001FALS-00E3A0 Serial Number: WD-WCATR0258512 Firmware Version: 05.01D05 User Capacity: 1,000,204,886,016 bytes Device is: Not in smartctl database [for details use: -P showall] ATA Version is: 8 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Thu Jun 10 19:52:28 2010 PDT SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0 3 Spin_Up_Time 0x0027 179 175 021 Pre-fail Always - 4033 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 270 5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0 7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0 9 Power_On_Hours 0x0032 098 098 000 Old_age Always - 1468 10 Spin_Retry_Count 0x0032 100 100 000 Old_age Always - 0 11 Calibration_Retry_Count 0x0032 100 100 000 Old_age Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 262 192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 46 193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 223 194 Temperature_Celsius 0x0022 105 102 000 Old_age Always - 42 196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0 197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0 200 Multi_Zone_Error_Rate 0x0008 200 200 000 Old_age Offline - 0
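Your hypothesis is plausible as far as read errors go: on many Seagate models the raw Raw_Read_Error_Rate and Seek_Error_Rate values are composite counters that also encode the total number of operations, so huge raw numbers there do not mean huge numbers of unrecovered errors, while WD simply reports the (usually zero) unrecovered count. The attributes that tend to be directly comparable across vendors are the reallocation/pending/uncorrectable ones (5, 196, 197, 198). A small helper to pull just those raw values side by side, assuming smartmontools is installed and using two example device names:
    for dev in /dev/sda /dev/sdb; do
        echo "== $dev =="
        sudo smartctl -A "$dev" | awk '$1 ~ /^(1|5|7|187|195|196|197|198)$/ { printf "%-26s raw=%s\n", $2, $NF }'
    done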

    Read the article

< Previous Page | 869 870 871 872 873 874 875 876 877 878 879 880  | Next Page >