Search Results

Search found 12484 results on 500 pages for 'host'.


  • lsof not showing what port a proc is listening on

    - by ericslaw
    I have many processes on a box listening on several ports, and I am trying to map ports to pids. The problem is that lsof is not telling me which ports belong to which process. Given an apache listening on port 80, I can see it listening via netstat:

      user@host% netstat -an | grep LISTEN | grep 80
      *.80 *.* 0 0 49152 0 LISTEN

    But when I try to map port 80 to a pid, I get nothing:

      user@host% lsof -iTCP:80 -t

    When I try seeing what sockets that specific pid is using, I get:

      user@host% lsof -lnP -p31 -a -i
      COMMAND   PID USER FD  TYPE DEVICE        SIZE/OFF NODE NAME
      libhttpd.  31    0 15u IPv4 0x6002d970b80 0t0      TCP  *:65535 (LISTEN)

    Notice the *:65535 in the NAME column. Does anyone know why lsof is not reporting the port in use? I am running as root. I am using a mix of lsof and OS versions: lsof v4.77 on Solaris 10 SPARC, lsof v4.72 on RedHat 4.2, etc. I know that Linux solutions can use "netstat -p", so I guess I'm only looking for why Solaris isn't working, but I find lsof is frequently silent and not showing me the expected data.
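    On Solaris specifically, pfiles(1) can serve as a cross-check when lsof stays silent. A minimal sketch, assuming a root shell on Solaris 10; the port number is the one from the question:

      # Walk /proc and ask pfiles which process holds a socket on port 80.
      for p in /proc/[0-9]*; do
          pid=${p#/proc/}
          if pfiles "$pid" 2>/dev/null | grep -q 'port: 80$'; then
              echo "PID $pid has a socket bound to port 80"
          fi
      done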

  • Problem in accessing Windows shared folder on Ubuntu using terminal

    - by vikramtheone
    Hi Guys,

    Description: I have 2 systems with me, one running Windows (the host) and one running Ubuntu, both on a LAN. On the Windows host I develop software intended for the Linux system, and because the Linux system has little external memory, my idea is to overcome this by making the project folder on the host a shared folder with full access and accessing it on Ubuntu over the network. To achieve this, I have installed Samba on Ubuntu; when I go to Places -> Network I can see the shared project folder and I simply mount it. A link appears on the desktop, and using Nautilus I can open the link and access the contents of the shared folder.

    Problem: Even though I mount the shared project folder, I don't see it appearing in /media or /mnt, and as a result I don't know what path to use to access this folder from the terminal. For example: when I mounted my USB stick, as expected, a link for the device appeared on the desktop and I also saw a folder under /media. Similarly, a mounted shared folder should have appeared under /mnt too. Can anyone suggest what I should do now? There are many posts around, but no solid solution for this problem. Help!!! :)

    Vikram
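    Two hedged pointers, based on how GNOME usually handles Samba shares: mounts made through Nautilus typically live under the GVFS virtual filesystem (often ~/.gvfs on Ubuntu of that era) rather than /media or /mnt, and a cifs mount made by hand gets whatever path you choose. A minimal sketch; "winhost", "project", and the user name are placeholders:

      # Mount the Windows share at a fixed, scriptable path.
      sudo mkdir -p /mnt/project
      sudo mount -t cifs //winhost/project /mnt/project \
          -o username=youruser,uid=$(id -u)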

  • DNS Issue - Nameserver Issue

    - by Master-Man
    I set up a new server on CentOS 5.3 and configured DNS and the hostname using WHM. I also registered my new nameservers with my domain registrar as ns1.example.com and ns2.example.com. But I am unable to ping the hostname or the nameservers:

      ping pc2.example.com
      ping ns1.example.com

    I received the below email from the server:

      IMPORTANT: Do not ignore this email. Your hostname (pc2.example.com) could not be resolved to an IP address. This means that /etc/hosts is not set up correctly, and/or there is no dns entry for pc2.example.com. Please be sure that the contents of /etc/hosts are configured correctly, and also that there is a correct 'A' entry for the domain in the zone file. Some or all of these problems can be caused by /etc/resolv.conf being setup incorrectly. Please check that file if you believe everything else is correct. You may be able to automatically correct this problem by using the 'Add an A entry for your hostname' option under 'Dns Functions' in your Web Host Manager.

    When I issue the command

      root@pc[~]# host pc2.example.com

    I receive the error

      Host pc2.example.com not found: 3(NXDOMAIN)

    I added A entries for the hostname and nameservers, but nothing replies. It has been more than 72 hours since setting up and registering the nameservers and the DNS configuration. Thanks,
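    A hedged diagnostic sketch, using only stock BIND utilities; it checks each link in the resolution chain separately (the names are the ones from the question):

      dig pc2.example.com A +short        # what the default resolver sees
      dig pc2.example.com A @localhost    # what the server's own DNS answers
      dig example.com NS +trace           # follow the delegation from the roots
      cat /etc/resolv.conf                # which resolver the box actually uses

    If the @localhost query answers but the others don't, the problem is delegation/glue at the registrar rather than the local zone file.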

  • Unable to access newly created web site in IIS 7.5

    - by Animesh
    Configuration: 32-bit Windows 7 development machine with IIS 7.5.

    I created a new web site in IIS to host only MVC sites, called MVCHOST. The physical path to this website is set to C:\inetpub\mvcroot. I created a new v4.0 pool called mvcpool for this purpose, and I have given Modify rights to the IIS_WPG, IIS_IUSRS, and ASPNET accounts. I created this web site with a host header "mvchost" and port 80, in the hope of browsing MVC sites as:

      mvchost/mvcapp1
      mvchost/mvcapp2

    instead of:

      localhost/mvcapp1
      localhost/mvcapp2

    The only binding I set is the default one: http:*:80:mvchost. I have also copied the files iisstart.htm, web.config, welcome.png and the aspnet_client folder from wwwroot over to mvcroot. Now when I try to browse this site from IIS Manager, I get the following error:

      This webpage is not available

    If I leave out the host header and give some port, say 99, I can access the website at localhost:99. What am I missing here? Why am I unable to access the web site at http://mvchost/?
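    One hedged observation: a host-header binding only works if the name "mvchost" actually resolves to the machine, and a bare host header provides no name resolution by itself. A minimal sketch for a local-only dev box (run from an elevated command prompt):

      rem Map the host header name to loopback so "mvchost" resolves locally.
      echo 127.0.0.1 mvchost >> "%SystemRoot%\System32\drivers\etc\hosts"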

  • Gathering buslogic SCSI hardware and virtual machine operating system

    - by Julian
    I'm trying to use PowerShell to get the SCSI hardware from several virtual servers and the operating system of each specific server. I've managed to find the specific SCSI hardware that I want with my code, however I'm unable to figure out how to properly get the operating system of each of the servers. Also, I'm trying to send all the data that I find into a CSV log file, but I'm unsure how to make a PowerShell script create multiple columns. Here is my code (almost works, but something's wrong):

      $log = "C:\Users\me\Documents\Scripts\ScsiLog.csv"
      Get-VM | Foreach-Object {
          $vm = $_
          Get-ScsiController -VM $vm | Where-Object { $_.Type -eq "VirtualBusLogic" } |
              Foreach-Object { Get-VMGuest -VM $vm } |
              Foreach-Object { Write-Output $vm.Guest.VmName >> $log }
      }

    I don't receive any errors when I run this code, but I'm only getting the names of the servers and not the OS. I'm also not sure what I need to do to make the OS appear in a different column from the server name in the CSV log that I'm creating. What do I need to change in my code to get the OS version of each virtual machine and output it in a different column in my CSV log file?

    EDIT: Here's a more in-depth look at things I've tried that have all failed:

      Get-VM | Foreach-Object {
          $vm = $_
          $svm = Get-ScsiController -VM $vm | Where-Object { $_.Type -eq "VirtualBusLogic" }
          Foreach-Object { Get-VMGuest -VM $svm } | Foreach-Object { Write-Output $svm >> $log }
      }

      #Get-VM | Foreach-Object {
      #    $vm = $_
      #    Get-ScsiController -VM $vm | Where-Object { $_.Type -eq "VirtualBusLogic" } #| write-host $vm
      #    | Foreach-Object {
      #        #get-VMGuest -VM $_ |
      #        #write-host $vm
      #        #get-VMGuest -VM $vm } | Foreach-Object{
      #        #write-output $vm.VmName >> $log
      #        #write-output $vm.guest.VmName, get-VmGuest -VM $vm >> $log NO GOOD
      #        Write-host $vm.Guest.VmName #+ get-vmGuest -vm $VM >> $log
      #    }
      #}

    I'm not sure why Get-VMGuest fails, though. I'm getting the SCSI hardware, filtering it down to BusLogic controllers, and then wanting the operating system of just the filtered VMs. I don't see where my code fails.
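    A hedged sketch of one way to get both columns, assuming VMware PowerCLI (Guest.OSFullName is the guest OS string PowerCLI exposes, and Export-Csv takes care of the columns):

      Get-VM | Where-Object {
          # Keep only VMs that have at least one BusLogic controller.
          Get-ScsiController -VM $_ | Where-Object { $_.Type -eq "VirtualBusLogic" }
      } | ForEach-Object {
          [PSCustomObject]@{
              VMName  = $_.Name
              GuestOS = $_.Guest.OSFullName
          }
      } | Export-Csv -Path "C:\Users\me\Documents\Scripts\ScsiLog.csv" -NoTypeInformation

    Emitting objects and letting Export-Csv serialize them is what produces separate columns, instead of redirecting strings with >>.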

  • Hadoop is not able to find JAVA_HOME properly

    - by Shekhar
    I am trying to run Hadoop on my Ubuntu OS. I have set the JAVA_HOME variable in my ~/.bashrc file to /usr/lib/jvm/jdk1.7.0_01/, but when I run the "hadoop namenode -format" command it fails with the following errors:

      shekhar@ubuntu:/usr$ hadoop namenode -format
      Warning: $HADOOP_HOME is deprecated.
      /host/Shekhar/Softwares/hadoop-1.0.0/bin/hadoop: line 321: /usr/jdk1.7.0_01/bin/java: No such file or directory
      /host/Shekhar/Softwares/hadoop-1.0.0/bin/hadoop: line 387: /usr/jdk1.7.0_01/bin/java: No such file or directory

    Hadoop tries to locate the java command at the /usr/jdk1.7.0_01/bin/ path; clearly it has somehow lost the /lib/jvm part. I am not able to understand why and how this is happening. My "echo $PATH" command gives the following output:

      shekhar@ubuntu:/usr$ echo $PATH
      /usr/lib/jvm/jdk1.7.0_01/bin:/usr/lib/lightdm/lightdm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/lib/jvm/jdk1.7.0_01/bin:/host/Shekhar/Softwares/hadoop-1.0.0/bin

    If I run "which java" I get:

      shekhar@ubuntu:/usr$ which java
      /usr/lib/jvm/jdk1.7.0_01/bin/java

    and "echo $JAVA_HOME" returns:

      shekhar@ubuntu:/usr$ echo $JAVA_HOME
      /usr/lib/jvm/jdk1.7.0_01

    I would like to know why Hadoop is picking up the JAVA_HOME path incorrectly. Please help...
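    A hedged sketch of the usual fix: Hadoop 1.x sources conf/hadoop-env.sh on startup, and a stale JAVA_HOME set there overrides the shell's value, which would explain the /usr/jdk1.7.0_01 path. The install path below is the poster's:

      # Point hadoop-env.sh at the real JDK; this wins over ~/.bashrc.
      echo 'export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_01' >> /host/Shekhar/Softwares/hadoop-1.0.0/conf/hadoop-env.sh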

  • How to bypass vpn talking to VMWare Guest?

    - by marc esher
    Greetings. Network/VPN n00b question here. I'm running VMware Workstation with a guest Windows 2003 Server that has SQL Server 2000 installed. The sole purpose of this guest is to house SQL Server; it needn't have internet access or access to any other resources on the network other than the host. When I launch the Check Point VPN software, the host routes through the company network before it connects to the guest, i.e. it's no longer a direct connection. I assume this is just how things are supposed to work. However, what's happening is that the connection between my host and the SQL Server instance on the guest intermittently drops. It's not consistent, and some databases on the server will be responsive while others aren't. It appears that the databases with the most traffic on the guest (the ones I'm hitting with load tests) are the ones that become intermittently unresponsive. The problem only manifests when the VPN is on; when it's off, I can pound away on the database with no trouble. Thanks for any advice!
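    One hedged idea: put the guest on a host-only network, whose traffic never leaves the machine, and if the VPN client still claims that route, pin a static host route to the guest via the VMware virtual adapter. A sketch with placeholder addresses, since the question doesn't give the actual subnets:

      rem VMnet1 (host-only) usually sits on its own private subnet.
      rem Pin the guest's address to the local virtual adapter, bypassing the VPN.
      route -p add 192.168.126.128 mask 255.255.255.255 192.168.126.1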

  • Subdomain not hitting new site?

    - by Abe Miessler
    I have an existing site at my.site.com and I would like to set up a staging subdomain (staging.my.site.com). My DNS is set up and directs staging.my.site.com to my server, but for some reason the web site I created for it in IIS is not being hit; instead, going to staging.my.site.com takes me to the original site. The site I created in IIS has a home directory that is totally different from the regular site's home directory, and I have added one host header with the following information:

      IP Address: (All Unassigned)
      TCP port: 80
      Host Header Value: staging.my.site.com

    I was under the impression that with the setup described above, hitting staging.my.site.com in a web browser would bring up the staging site, but it does not. Can anyone see what I am doing wrong?

    UPDATE: One thing I noticed is that my A record maps to the IP address (let's say 1.1.1.1 for this example), and the host headers for the main site (my.site.com) also have a 1.1.1.1 entry. Is this normal? Could it cause the problem I am talking about?
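    A hedged way to take DNS out of the picture while testing: send the Host header explicitly at the server's address with curl (or any client that lets you set it) and see which site answers. 1.1.1.1 is the placeholder IP from the question:

      curl -I -H "Host: staging.my.site.com" http://1.1.1.1/
      curl -I -H "Host: my.site.com" http://1.1.1.1/

    If both requests return the main site, the problem is in the IIS bindings rather than in DNS.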

  • How to point any *.mydomain variation to localhost (for development)?

    - by user41339
    Hi all. I am developing a site which will make use of any given [variation of] subdomain name part (that is, the part prefixed before the host name and, optionally, the TLD part). I would imagine that in production this would be an easy feat: make sure the DNS for the second-level domain name points to an IP, set up an Apache 2 virtual host to listen on that (or any) IP on port 80, and just use PHP to make decisions based on the "Host" request header. However, currently the site is on localhost, since I am developing it on my workstation, so first I patched /etc/hosts to include:

      127.0.0.1 mydomain

    I only used one name part (arguably a custom TLD) so as to not interfere with Internet domain names. Then I set up a VirtualHost directive for Apache 2.2 like:

      <VirtualHost *:80>
          ServerName mydomain

    But now I can see that e.g. example.mydomain does not point to localhost, meaning the /etc/hosts addition is not effective for "something.mydomain". It appears the entries are taken verbatim, and I have also checked that wildcards like *.mydomain are not allowed. Is there a solution for this?
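    That matches how /etc/hosts works: it holds literal names only. A hedged sketch of the common workaround on a Linux workstation, using dnsmasq as a tiny local resolver (its address= directive matches a domain and every subdomain under it):

      sudo apt-get install dnsmasq
      echo 'address=/mydomain/127.0.0.1' | sudo tee -a /etc/dnsmasq.conf
      sudo /etc/init.d/dnsmasq restart
      # Then make 127.0.0.1 the first nameserver in /etc/resolv.conf.

    On the Apache side, a ServerAlias *.mydomain line (a stock Apache directive) lets the one virtual host answer for every variation.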

  • Nagios send mail when server is down

    - by tzulberti
    I am using Nagios 3.06 to monitor the servers. When a service is critical it sends a mail, but when a host is down no mail is sent; even if all the services go to the critical state, no mail is sent. I have the following configuration:

      define command {
          command_name notify-host-by-email
          command_line python /etc/nagios3/send_mail.py "[Nagios] $HOSTNAME$" "******** Nagios ****\n\n Host: $HOSTNAME$\n Description: the server is down"
      }

      define command {
          command_name notify-service-by-email
          command_line python /etc/nagios3/send_mail.py "[Nagios] $HOSTNAME$: $SERVICEDESC$ ($NOTIFICATIONTYPE$)" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\nDate/Time: $LONGDATETIME$\nAdditional Info:$SERVICEOUTPUT$"
      }

    The Python script sends a mail. It works if I execute it from the command line, but no email is sent from Nagios. What am I doing wrong?

    UPDATE: The contact data is:

      define contact {
          contact_name root
          alias Root
          service_notification_period 24x7
          host_notification_period 24x7
          service_notification_options w,u,c,r
          host_notification_options d,r
          service_notification_commands notify-service-by-email
          host_notification_commands notify-host-by-email
          email [email protected]
      }

      define contactgroup {
          contactgroup_name admins
          alias Nagios Administrators
          members root
      }
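    A hedged debugging sketch: run the exact notification command by hand as the nagios user, since host notifications often fail on permissions or environment rather than on configuration (paths are the ones from the question, the log path is the Debian default):

      sudo -u nagios python /etc/nagios3/send_mail.py "[Nagios] test-host" "test body"
      grep -i notif /var/log/nagios3/nagios.log | tail   # did Nagios even attempt it?

    It is also worth confirming that the host definitions themselves reference this contact (for example via contact_groups in the host template), since host and service notifications are wired up independently.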

  • Mongo daemon restarts after replica set issue

    - by Matt Beckman
    We had a recent election in our replica set (two read nodes; one write node) that changed the primary node. Curious as to why this occurred, I started looking through the logs to find out what happened. It appears that mongoNode2 could not communicate with mongoNode3, and that when both nodes could not communicate, the services on mongoNode2 and mongoNode3 restarted, eventually resulting in a new primary after the services had started again.

      Thu Jun 23 08:27:28 [ReplSetHealthPollTask] DBClientCursor::init call() failed
      Thu Jun 23 08:27:28 [ReplSetHealthPollTask] replSet info mongoNode3:27017 is down (or slow to respond): DBClientBase::findOne: transport error: mongoNode3:27017 query: { replSetHeartbeat: "myReplSet", v: 3, pv: 1, checkEmpty: false, from: "mongoNode2:27017" }
      Thu Jun 23 08:27:29 got kill or ctrl c or hup signal 15 (Terminated), will terminate after current cmd ends
      Thu Jun 23 08:27:29 [interruptThread] now exiting
      Thu Jun 23 08:27:29 dbexit:

    Is there any reason the mongo service would restart due to a DBClientCursor::init call() failure? Is this a known bug? It should be noted that mongoNode2 and mongoNode3 are VMs on the same VMware host; mongoNode1 is not on the same host and did not have any issues with the service. However, I have not had any other reports of issues with other VMs on that VMware host.

  • Best Practice - SQL 2012 & IIS in VMWare

    - by Dan Ribar
    We are pretty new to VMware and looking for some thoughts on our environment. We have a VMware cluster that has, on one host:

      VM#1: MS Windows 2008 R2 Enterprise & SQL Server 2012
      VM#2: MS Windows 2008 R2 Standard & IIS

    The IIS ASP.NET app talks directly to the SQL Server. We had a similar environment on physical servers a few months ago and just recently moved to the virtualized environment. Regarding the setup, we have not tweaked any of the VM resource parameters; everything is set as standard and everything works. What we observe is that the VMs seem to spool down and we get lags in response. Of course this isn't as fast as the old physical environment, but I am wondering:

      - Is it a good idea to run the SQL Server and the IIS server on the same host? They are the only two VMs on it. The host is a new Dell R620 with 192 GB of memory.
      - Does it make sense to change any CPU or memory reservations when it doesn't seem like there is any contention?
      - Is there a way to keep the VMs spooled up to eliminate delays? This is a brand new, squeaky clean vanilla install.

    What are your thoughts?

  • How to connect to a remote server and run some code on that particular server?

    - by seedeg
    I am implementing an automated backup scheme, so I created a shell script which first creates SQL dumps for all MySQL databases, then retrieves all the websites from /var/www on a remote server. The latter works, as I am using rsync to get the remote files. However, the MySQL dumps being retrieved are obviously the ones on the local server, which is not what I want: I want the SQL dumps from the remote server as well. I have a tunnel between the local and remote server which I can connect through without a password (I added the public key to authorized_keys), so I tried adding the following to the script:

      ssh [email protected]

    Then I tried to retrieve the SQL dumps and exit from the remote server. However, this does not work, as I still have to enter "exit" manually in the terminal for the SQL dumps to be retrieved from the remote host, and I don't know why this is happening. Basically this is what the script is trying to do:

      # connect to the remote server
      ssh [email protected]
      # retrieve the SQL dumps
      # ...code to retrieve them...
      # exit from the remote server
      exit
      # use rsync to get the remote /var/www files to the local server (working)

    Is there a way to connect to the remote host AND run the script's code ON THAT remote host? Many thanks in advance
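    That behaviour is expected: a bare "ssh user@host" opens an interactive shell and pauses the script until that shell exits; the commands typed after it in the script never reach the remote side. Passing the commands as an argument runs them remotely and then returns. A minimal sketch, with the dump path as a placeholder and the scrubbed address kept as in the question:

      # Run the dump on the remote machine, then pull the result locally.
      ssh [email protected] 'mysqldump --all-databases > /tmp/all-databases.sql'
      rsync -avz [email protected]:/tmp/all-databases.sql /var/backups/

    For longer remote sequences, a quoted heredoc keeps the script readable:

      ssh [email protected] bash <<'EOF'
      mysqldump --all-databases > /tmp/all-databases.sql
      EOF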

  • Two servers, two domains, one ip. mod_proxy beginner

    - by Gutsav
    I run two virtual web servers (both running Apache 2 on Debian). I have just one external IP but two domains, and I want one domain going to each of the servers. I've understood that I need a reverse proxy, and I enabled both the mod_proxy and mod_proxy_http modules on the "primary server". Do I need to enable anything on the "secondary server"? I also understood that I need to write some things in a virtual host file, but what? On the primary server I have a virtual host file for one of the domains, and some for subdomains. I want domain1.tld to go to the primary server (port 80 is forwarded to it, so that works) and domain2.tld to go to the other server (internal IP 192.168.0.x); no ports need to be forwarded to it, right? So, what should I add, and in which virtual host file? Or a new one? Other questions suggest adding ProxyPass and ProxyPassReverse, but I'm lost anyway and I just don't understand the Apache documentation. Thanks in advance
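    A hedged sketch of the usual shape, as a new name-based virtual host on the primary server; 192.168.0.x is the placeholder internal IP from the question, and nothing extra needs enabling on the secondary server:

      <VirtualHost *:80>
          ServerName domain2.tld
          # Forward everything for this name to the internal box and
          # rewrite Location headers in its responses on the way back.
          ProxyPass / http://192.168.0.x/
          ProxyPassReverse / http://192.168.0.x/
          ProxyPreserveHost On
      </VirtualHost>

    The existing domain1.tld virtual host stays as it is; Apache picks the right one from the Host header.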

  • Windows 8 x64 with VMWare Workstation or inside ESXi

    - by Dommer
    I need to run several virtual machines on a Core i7-920 box with 12 GB of RAM and a 256 GB SSD to host the VMs. It also has a HighPoint RocketRAID 2720SGL RAID controller with a 12 TB RAID 5 array. I want one of my VMs to run Windows 8 x64, to have access to the RAID array as a native disk (not as networked drives; it needs to run at full speed), and to be able to send files quickly across the network. Initially I thought I'd try to do this using ESXi 5, but I have been unable to find any working RAID drivers for the RR2720SGL, and it is not on the HCL for ESXi 5. In light of this, I have installed Windows 8 x64 on the hardware and am thinking of installing VMware Workstation and running my VMs inside that. I guess my questions are these:

      1. How does VMware Workstation 9 perform compared to ESXi 5 in the real world? Presumably installing Win 8 as the host OS will give me way better performance for that Win 8 machine than Win 8 running under ESXi?
      2. I should stick with Windows 8 x64 as the host OS, right?
      3. If I install a domain controller VM inside my Win 8 box and join the Win 8 machine to that domain, am I insane? (I would guess the Win 8 machine wouldn't see the domain controller until it finished starting everything up, but I don't think that matters.)
      4. Is it feasible to give metrics like "Win 8 under ESXi runs x% as fast as Win 8 installed bare metal", and if so, what is the likely value of x? 25%? 50%? 75%?

  • Identifying Service Error in Fedora 16

    - by Cerin
    How do you find the cause of a failed service start in Fedora 16? The new systemctl command in Fedora 16 seems to horribly obscure any useful logging info.

      [root@host ~]# systemctl start httpd.service
      Job failed. See system logs and 'systemctl status' for details.
      [root@host ~]# systemctl status httpd.service
      httpd.service - The Apache HTTP Server (prefork MPM)
          Loaded: loaded (/lib/systemd/system/httpd.service; enabled)
          Active: failed since Thu, 21 Jun 2012 16:26:56 -0400; 1min 23s ago
          Process: 2119 ExecStop=/usr/sbin/httpd $OPTIONS -k stop (code=exited, status=0/SUCCESS)
          Process: 2215 ExecStart=/usr/sbin/httpd $OPTIONS -k start (code=exited, status=1/FAILURE)
          Main PID: 1062 (code=exited, status=0/SUCCESS)
          CGroup: name=systemd:/system/httpd.service

    So the first command fails...and it tells me to run another command...which simply tells me that the command returned an error code. Where's the actual error? Even more frustrating is that nothing seems to have been written to the logs:

      [root@host ~]# ls -lah /var/log/httpd/
      total 8.0K
      drwx------.  2 root root 4.0K Jun 21 16:19 .
      drwxr-xr-x. 21 root root 4.0K Jun 20 16:33 ..
      -rw-r----- 1 root root 0 Jun 21 16:19 modsec_audit.log
      -rw-r----- 1 root root 0 Jun 21 16:19 modsec_debug.log
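    A hedged sketch of where the message usually hides on Fedora 16 (which predates journalctl, so a unit's stderr typically lands in syslog), plus Apache's own config check, which often reproduces a startup error directly:

      grep httpd /var/log/messages | tail -n 20   # stderr from the failed start
      apachectl configtest                        # or: httpd -t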

  • Performance tweaks and upgrades for VMWare Server 2

    - by sjohnston
    Our software department has a server running VMware Server 2. We typically have 8-10 VMs running as test environments (Win XP and Server 08) for various versions of our software, and one VM that is used as a build server (Win XP). The host is running Server 2003 R2; it has 32 GB of RAM, an 8-core Xeon 3.16 GHz CPU, one disk for the host OS, and two RAID disks for VMs. The majority of the time this setup behaves very well and there are no complaints. Other times the VMs can be very laggy. This is sometimes, but not always, correlated with heavy load on the build server. I'm a software developer, not an IT pro, but it seems to me that this machine should be beefy enough to handle this many VMs. Is this occasional performance hit likely just because we're hitting the limits of the hardware, or should I be looking for another culprit? From what I've read, I'm guessing that if there's a bottleneck, it's probably disk I/O with all these VMs running off two disks (especially the build server). Would spreading the VMs over more disks, and/or switching to SSDs, give us a significant performance boost? Other things I've read may increase performance:

      - a single virtual processor per VM
      - removing/disabling unused virtual hardware
      - preallocated disk space
      - not using snapshots
      - setting a reserved memory limit on the host and disabling VM memory swapping

    Can anyone confirm or deny whether any of these improve performance? What other good tweaks have I missed?
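    A hedged way to confirm the disk-I/O theory before buying hardware: watch the host's physical-disk queue and latency while the build server is busy (typeperf is stock on Server 2003):

      rem Sample disk queue length and per-transfer latency every 5 seconds.
      typeperf "\PhysicalDisk(*)\Avg. Disk Queue Length" "\PhysicalDisk(*)\Avg. Disk sec/Transfer" -si 5

    Sustained queue lengths well above the spindle count, or latencies in the tens of milliseconds, would point at the two shared disks rather than CPU or RAM.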

  • Windows VirtualBox failed to attach USB device to Linux Guest

    - by joltmode
    I have a Windows 7 64-bit host system, and I am using VirtualBox 4.1.18 (r78361) with an Arch Linux guest OS. I have installed the VirtualBox Extension Pack (to enable USB 2.0 support) and added my USB device filter to the VM. I have also installed the guest additions provided by Arch, virtualbox-archlinux-additions (but I have no idea whether they are actually needed for my environment). I can see my USB device from the VirtualBox Devices menu, but whenever I try to access it I end up with:

      Failed to attach the USB device Kingston DT 100 G2 [0100] to the virtual machine Archlinux.
      USB device 'Kingston DT 100 G2' with UUID {a836ec33-0f41-4ca7-a31d-09cceaf5d173} is busy with a previous request. Please try again later.

      Details:
      Result Code:    E_INVALIDARG (0x80070057)
      Component:      HostUSBDevice
      Interface:      IHostUSBDevice {173b4b44-d268-4334-a00d-b6521c9a740a}
      Callee:         IConsole {1968b7d3-e3bf-4ceb-99e0-cb7c913317bb}

    From what I have googled, most guides show how to solve this the other way around: Linux host to Windows guest. How do I resolve this?

    Update: I have tried to eject the device from my Windows host (virtually, not physically) and then access it from the guest. Same error.
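    A hedged sketch using the VBoxManage CLI, which sometimes succeeds (or at least gives a clearer error) than the Devices menu; the VM name and UUID are the ones from the error message:

      VBoxManage list usbhost
      VBoxManage controlvm Archlinux usbattach a836ec33-0f41-4ca7-a31d-09cceaf5d173

    "Busy" also commonly means the Windows host still owns the device, so closing Explorer windows on the stick and stopping any backup or indexing touching it is worth trying before reattaching.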

  • Apache: How to enable Directory Index browsing at the Doc Root level?

    - by Brian Lacy
    I have several web development projects running on Fedora 13. I generally set up Apache to serve my larger projects as virtual hosts, but I've got several small projects cycling through that I don't really care to set up a VirtualHost for individually. Instead, I'd like them all under a subdirectory of the main VirtualHost entry; I just want Apache to serve me the directory index when I browse to the host name. For example, the hostname projects.mydomain.com refers to /var/www/projects, and that directory contains only subdirectories (no index file). Unfortunately, when I browse to the host directly I get:

      Forbidden
      You don't have permission to access / on this server.
      Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.

    But my virtual host entry in my Apache config looks like this:

      <VirtualHost *>
          ServerName projects.mydomain.com
          DocumentRoot /var/www/projects
          <Directory "/var/www/projects">
              Options +FollowSymlinks +Indexes
              AllowOverride all
          </Directory>
      </VirtualHost>

    What am I missing here?
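    A hedged Fedora-specific suspect: SELinux will return exactly this Forbidden even when the Apache config is right, if the directory tree lacks the httpd content label. A quick check-and-fix sketch, plus the plain filesystem permissions Apache also needs:

      ls -Zd /var/www/projects                 # inspect the SELinux context
      sudo chcon -R -t httpd_sys_content_t /var/www/projects
      # The apache user needs read+execute down the whole tree as well.
      sudo chmod -R o+rX /var/www/projects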

  • Nginx fails upon proxying PUT requests

    - by PartlyCloudy
    Hi. I have an arbitrary web server that supports the full range of HTTP methods, including PUT for uploads. The server runs fine in all tests with different clients. I now wanted to set this server behind an nginx reverse proxy; however, each PUT request fails. The entity body is not forwarded to the backend web server: the header fields are sent, but not the body. I searched the nginx proxy documentation and found several hints that PUT might not be supported, but I also found people running svn / WebDAV stuff behind nginx, so it should work. Any ideas? Here is my config:

      server {
          listen 80;
          server_name my.domain.name;

          location / {
              proxy_set_header X-Forwarded-Host $host;
              proxy_set_header X-Forwarded-Server $host;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_pass http://127.0.0.1:8000;
          }
      }

      Client == HTTP PUT ==> Nginx == HTTP Proxy ==> Backend Server

    The error.log shows no entries concerning this behaviour. Thanks in advance!
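    Two hedged things to check, both stock nginx directives: the request body size cap (bodies over client_max_body_size are rejected, default 1m) and where buffered bodies get written, since an unwritable temp path can break uploads with little in the logs:

      client_max_body_size 64m;                      # default is 1m
      client_body_temp_path /var/lib/nginx/body;     # must be writable by the worker user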

  • Nginx terminate SSL for wordpress

    - by Mike
    I have a bit of a problem. We run a WordPress blog behind an nginx proxy and are looking to terminate the SSL on the nginx side. Our current nginx config is:

      upstream admin_nossl {
          server 192.168.100.36:80;
      }

      server {
          listen 192.168.71.178:443;
          server_name host.domain.com;
          ssl on;
          ssl_certificate /etc/nginx/wild.domain.com.crt;
          ssl_certificate_key /etc/nginx/wild.domain.com.key;
          ssl_session_timeout 5m;
          ssl_protocols SSLv2 SSLv3 TLSv1;
          ssl_prefer_server_ciphers on;
          ssl_session_cache shared:SSL:10m;
          ssl_ciphers RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;

          location / {
              proxy_read_timeout 2000;
              proxy_next_upstream error;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_set_header Host $http_host;
              proxy_redirect off;
              proxy_max_temp_file_size 0;
              proxy_pass http://admin_nossl;
              break;
          }
      }

    It just does not seem to work. I can hit https://host.domain.com, but it quickly switches back to non-secured from what I can see. Any pointers?
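    A hedged explanation of the "switches back" symptom: WordPress only sees the plain-HTTP hop from the proxy, so it rewrites links and redirects back to http. The usual pairing is to pass the original scheme from nginx and honour it in WordPress; the wp-config.php snippet is the commonly used one, placed before wp-settings.php is loaded:

      # in the nginx location block:
      proxy_set_header X-Forwarded-Proto $scheme;

      # in wp-config.php:
      # if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https')
      #     $_SERVER['HTTPS'] = 'on';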

  • One dns server in different subnets

    - by hofmeister
    I have installed a small Linux server; the server is in a different subnet than the internet hosts. I added a route to my NAT router to create a connection between both subnets, and in each subnet I use a separate DHCP server.

      Subnet A: 192.168.0.0/26
      Subnet B: 192.168.1.0/26
      Router: 192.168.0.1, Server in A: 192.168.0.62, Server in B: 192.168.1.62

      internet ____ nat router ___ (Sub A)___ internet hosts
                         |
                         |____(Sub B)___ other hosts

    I can ping every host, and the hosts connected to subnet B have an internet connection. But sadly I have a problem with the DNS server. I use the DNS server of my NAT router: I set the DNS server for subnet B to the IP 192.168.0.1, but every DNS entry is equal to the hostname of my Linux server. For example, if the hostname of the server is "test":

      Test       192.168.0.62   // Server subnet a
      Test-2     192.168.1.62   // Server subnet b
      Test-2-2   192.168.1.1    // host a
      Test-2-2-2 192.168.1.2    // host b

    Any idea what went wrong? The internet DNS resolution works fine.

  • Disable or remove filter driver for single HID device

    - by snoopen
    Running Windows XP in a corporate setting here. I have an issue where a filter driver is interfering with the functionality of different USB HIDs. For example, graphics tablets do not respond while the filter driver is in place. I've also had the issue with foot pedals used with transcription software. My question is really twofold: (a) what makes Windows use a filter driver on one HID but not another? and (b) when a filter driver is causing conflicts, how can I disable it on the affected devices?

    Background: I've previously narrowed the issue down to the filter driver by uninstalling the software responsible for it (Funk Proxy Host). The software is a type of RDP we use here at work. (I might have even booted into safe mode and renamed the file, I forget.) I believe the filter driver is present to disable or modify the use of the local keyboard and mouse while admin staff are assisting users. Either way, I don't have the authority to just go uninstalling this software. As far as I can tell the software versions are the same, however I'm not sure whether the device driver definitions are all the same, as I don't know where these things would be located. To check for the presence of the filter driver, I locate the hardware device in Device Manager and click Properties -> Driver tab -> Driver Details...; it shows up as ph32ihid.sys. Even though all machines are meant to have the same SOE and do have Funk Proxy Host installed, I don't always have issues with the same HIDs. A few machines here run the foot pedals without any issues; I've not had any machine work with the graphics tablet without uninstalling the Funk software.

    Driver details: I've just read up a bit more about filter drivers and found the driver's description in the registry under

      HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\ProxyHostHIDFilter

    There it's called "Kernel-mode HID filter driver for the Proxy Host". Presumably I could also disable it here, but that would be system-wide, which is probably not desirable?
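    A hedged sketch of where HID filter attachments normally live, for inspection: class-wide filters sit on the HID class key, and per-device filters sit on the device instance key, so comparing the two shows whether the filter can be pulled from a single device (the instance path below is a placeholder; find the real one via Device Manager):

      rem Class-wide filters for all HID devices:
      reg query "HKLM\SYSTEM\CurrentControlSet\Control\Class\{745a17a0-74d3-11d0-b6fe-00a0c90f57da}" /v UpperFilters
      rem A specific device instance:
      reg query "HKLM\SYSTEM\CurrentControlSet\Enum\HID\VID_056A&PID_0014\6&abc123" /v UpperFilters

    Removing the driver's name from a device-level UpperFilters value (then reconnecting the device) would, in principle, disable the filter for that device only while leaving the class-wide behaviour alone.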

  • nginx doesn't find the directory but apache does

    - by Jack Spairow
    I use Apache as the backend server and nginx on the frontend; Apache listens on port 8080 and nginx on port 80. What I do is have the root point to the public folder for each virtual host:

      <VirtualHost *:8080>
          ServerAdmin webmaster@localhost
          ServerName site.com
          ServerAlias site.com *.site.com
          DocumentRoot /var/www/site.com/public
          <Directory />
              Options FollowSymLinks
              AllowOverride None
          </Directory>
          <Directory /var/www/site.com/public/>
              Options Indexes FollowSymLinks MultiViews
              AllowOverride All
              Order allow,deny
              allow from all
          </Directory>
      </VirtualHost>

    And here's the nginx config:

      server {
          listen 80;
          access_log /var/log/nginx.access.log;
          error_log /var/log/nginx.error.log;
          root /var/www/site.com/public;
          index index.php index.html;
          server_name site.com *.site.com;

          location / {
              location / {
                  proxy_set_header X-Real-IP $remote_addr;
                  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                  proxy_set_header Host $host;
                  proxy_pass http://127.0.0.1:8080;
                  proxy_cache one;
                  proxy_cache_use_stale error timeout invalid_header updating;
                  proxy_cache_key $scheme$host$request_uri;
                  proxy_cache_valid 200 301 302 20m;
                  proxy_cache_valid 404 1m;
                  proxy_cache_valid any 15m;
              }
          }

          location ~ /\.(ht|git) {
              deny all;
          }
      }

    The problem is that Apache resolves the domain just fine (site.com:8080), but nginx shows a 502 Bad Gateway instead (site.com:80). I tried looking at the error_log and access_log, but I can't find any hint as to why nginx can't work.

    EDIT: The problem was that this isolated config was not actually being included by nginx.

  • VMWare Workstation 8 Disk I/O & Hard Faults

    - by Scott
    I have VMware Workstation 8 installed on a host machine with the following specs:

      Intel i5 2500k CPU
      16GB DDR3 1600 RAM
      1TB Western Digital Caviar Black HD

    I have two Windows 7 virtual machines configured (currently running one at a time, but I will be operating both at once when my 32GB RAM kit arrives in a couple of days). Each one is configured with 8GB of RAM and no tweaks or performance customizations; all of the VMware settings are the defaults. When I boot into these machines and run various programs (Visual Studio, Outlook, etc.), I can hear the disk thrashing quite a bit, and checking Resource Monitor I can see that I'm getting anywhere between 300-800 hard faults per second. From the host machine, it shows they're coming from the VMware image; inside the virtual machine, whatever app I'm currently loading is the image causing the hard faults. As I understand it, hard faults are (simply) when an address in memory has been swapped out to the page file and has to be read from the page file instead of from memory. I don't understand why this is happening, though. With 8GB of RAM on the guest machine and 6.5GB available, what could be causing this? I know Windows 7 supposedly improved page file management over XP, but this kind of slowdown, disk thrashing, and high hard fault count seems excessive when I have that much free RAM. Is there anything I can do to improve the performance in my guest machines? On the host machine I can open and run any applications at all, and hard faults stay around 0 with low disk I/O.
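    A hedged sketch of the classic Workstation memory tweaks, which stop the host from backing guest RAM with on-disk files; both are documented .vmx settings, added per-VM to the .vmx file:

      MemTrimRate = "0"                  # stop trimming/reclaiming guest pages
      mainMem.useNamedFile = "FALSE"     # back guest memory with host RAM, not a .vmem file

    In Workstation's global memory preferences, choosing to fit all virtual machine memory into reserved host RAM aims at the same thing from the UI side.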
