Search Results

Search found 12283 results on 492 pages for 'tcp port'.

Page 411 of 492

  • Proxy to either Rails app or Node.js app depending on HTTP path w/ Nginx

    - by Cirrostratus
    On Ubuntu 11, I have Nginx correctly serving either CouchDB or Node.js depending on the path, but am unable to get Nginx to access a Rails app via its port.

        server {
            rewrite ^/api(.*)$ $1 last;
            listen 80;
            server_name example.com;
            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_pass http://127.0.0.1:3005/;
            }
            location /ruby {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_pass http://127.0.0.1:9051/;
            }
            location /_utils {
                proxy_pass http://127.0.0.1:5984;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_buffering off; # buffering would break CouchDB's _changes feed
            }
            gzip on;
            gzip_comp_level 9;
            gzip_min_length 1400;
            gzip_types text/plain text/css image/png image/gif image/jpeg application/x-javascript text/xml application/xml application/xml+rss text/javascript;
            gzip_vary on;
            gzip_http_version 1.1;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";
        }

    / and /_utils are working but /ruby gives me a 403 Forbidden.
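
    A 403 on /ruby usually comes from the Rails application (or whatever front-end serves it on port 9051) rather than from Nginx itself, or from the /ruby prefix not being stripped the way the app expects. A minimal sketch of one way to write that block, assuming the Rails app really is listening on 127.0.0.1:9051 (the upstream name is made up):

        # in the http block
        upstream rails_app {
            server 127.0.0.1:9051;
        }

        # inside the server block
        location /ruby/ {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://rails_app/;   # trailing slash strips the /ruby prefix
        }

    If the 403 persists, fetching http://127.0.0.1:9051/ directly with curl on the server will show whether the Rails side is the one refusing the request.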

    Read the article

  • What is the correct iptables rule when NATing multiple private subnets?

    - by Jose Mendez
    I have a CentOS 6.5 minimal install acting as a router. eth0 is connected to a Cisco switch trunk port, allowing VLANs 200-213. I have several VLAN interfaces, just as this link suggests: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/s2-networkscripts-interfaces_802.1q-vlan-tagging.html, and IPv4 forwarding is enabled, so all my network devices on any of the networks 200-213 can communicate with each other using this Linux box as their router. The problem is, I need them to access the Internet, so I added the following rule:

        iptables -t nat -A POSTROUTING -s 192.168.0.0/16 -j SNAT --to 1.1.1.56

    1.1.1.56 is the "outside" address. This works fine: devices connected to the internal networks can ping Internet addresses. BUT they stop being able to talk to each other across subnets, so 192.168.211.55 can ping 8.8.8.8 but can't talk to 192.168.213.5. As soon as I do a service iptables restart to remove the rule, I can talk across internal subnets again. What would be the correct way to set up NAT for multiple private subnets? Or maybe the correct way to set up forwarding?
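
    One common pattern is to exclude internal destinations from the SNAT rule (or to match only the Internet-facing interface), so inter-VLAN traffic is routed untouched while Internet-bound traffic is still translated. A minimal sketch reusing the addresses from the question:

        # translate only traffic that is NOT heading to another internal subnet
        iptables -t nat -A POSTROUTING -s 192.168.0.0/16 ! -d 192.168.0.0/16 -j SNAT --to-source 1.1.1.56

        # equivalent idea if the Internet uplink is a dedicated interface (interface name assumed):
        # iptables -t nat -A POSTROUTING -s 192.168.0.0/16 -o eth1 -j SNAT --to-source 1.1.1.56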

    Read the article

  • ProCurve network expansion

    - by Blue Warrior NFB
    I've hit a bit of a wall with our network scale-out. As it stands right now: we have five ProCurve 2910al switches connected as above, but with 10GbE connections (two CX4, two fiber). This fully populates the central switch above; there will be no more 10GbE connections from that device. This group of switches is not stacked (no stack directive). Sometime in the next two or three months I'll need to add a sixth, and I'm not sure how deep of a hole I'm in. Ideally I'd replace the core switch with something more capable that has more 10GbE ports. However, that's a major outage and requires special scheduling. The two edge switches connected via fiber have dual-port 10GbE cards in them, so I could physically put another switch on the far end of one of those. I don't know how much of a good or bad idea that would be, though. Is that too many segments between end-points? Some config excerpts:

        Running configuration:

        ; J9147A Configuration Editor; Created on release #W.14.49
        hostname "REDACTED-SW01"
        time timezone 120
        module 1 type J9147A
        module 2 type J9008A
        module 3 type J9149A
        no stack
        trunk B1 Trk3 Trunk
        trunk B2 Trk4 Trunk
        trunk A1 Trk11 Trunk
        trunk A2 Trk12 Trunk
        vlan 15
           name "VM-MGMT"
           untagged Trk2,Trk5,Trk7
           ip helper-address 10.1.10.4
           ip address 10.1.11.1 255.255.255.0
           tagged 37-40,Trk3-Trk4,Trk11-Trk12
           jumbo
           ip proxy-arp
           exit

    Read the article

  • iptables (NAT/PAT) setup for SSH & Samba

    - by IanVaughan
    I need to access a Linux box via SSH & Samba that is hidden/connected behind another one. Setup:

        A (eth0) ---- switch ---- (eth0) B (eth1) ---- (eth1) C

    e.g. SSH/Samba from A to C. How does one go about this? I was thinking that it cannot be done via IP alone? Or can it? Could B say "hi on eth0, if you're looking for 192.168.0.2, it's here on eth1"? Is this NAT? This is a large private network, so what if another PC has that IP?! More likely it would be PAT? A would say "hi 192.168.109.15:1234", and B would say "hi on eth0, traffic for port 1234 goes on here, eth1". How could that be done? And would the SSH/Samba daemons see the correct packet header info and work? IP info:

        A - eth0 - 192.168.109.2
        B - eth0 - 192.168.109.15
            eth1 - 192.168.0.1
        C - eth1 - 192.168.0.2

    A, B & C are RHEL (Red Hat), but Windows computers can be connected to the switch. I configured the 192.168.0.* IPs; they are changeable. Any help?
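
    What is being described is destination NAT (port forwarding) on B, plus IP forwarding. A minimal iptables sketch using the addresses above; the front-end ports (2222 for SSH, 1445 for SMB) are arbitrary examples:

        # on B: enable routing between eth0 and eth1
        echo 1 > /proc/sys/net/ipv4/ip_forward

        # forward SSH: A connects to 192.168.109.15:2222 and lands on C:22
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2222 -j DNAT --to-destination 192.168.0.2:22

        # forward SMB/CIFS the same way (TCP 445 on C)
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 1445 -j DNAT --to-destination 192.168.0.2:445

        # optional: make replies return via B even if C's default gateway is not B
        iptables -t nat -A POSTROUTING -o eth1 -d 192.168.0.2 -j MASQUERADE

    SSH is perfectly happy behind this kind of NAT; plain SMB file access over TCP 445 works too, though NetBIOS browsing/name resolution (ports 137-139) is a separate matter.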

    Read the article

  • LAN network (switch?)

    - by guywhoneedsahand
    I am working on setting up the network for a small LAN party (fewer than 16 people). Most of them do not have wireless cards in their rigs, so I need to set up some way for everyone to a) play LAN games and b) access the internet. The LAN party will probably take place in my basement, where I have enough space. However, the basement is not wired up with the router, which is actually on the floor above. I made a cantenna a while back that can boost the wireless performance of my computer significantly. How can I use this to provide internet and LAN to guests? My hope was that I could use a switch like this http://www.newegg.com/Product/Product.aspx?Item=N82E16833181166 for the LAN - but how can I give people access to the internet? Is there such a thing as a network extender / 16-port switch? Obviously, the internet performance doesn't need to be super stellar, because the games will be using the LAN - so I am looking to provide some usable internet for web browsing, and very high speed LAN for games. Thanks!

    Read the article

  • 1000Base-X layer 2/MAC address details

    - by user69971
    A layer 2 Ethernet frame is sent with a source and destination MAC address. Given a 100Base-TX (copper) trunk between two Cisco switches, I can do a "show interface fa 0/0" on S1 to see the MAC address assigned to the trunking interface, then go to Switch2 and do a "show mac address-table" and find the MAC address of the S1 fa 0/0 interface as a dynamically learned MAC in the table. Given a similar setup with a 1000Base-X (fiber GBIC) trunk, the MAC address shown in "show interface gi 0/0" on S1 does not show up in the MAC address-table of S2. Everything I can find online indicates that 1000Base-X uses largely the same layer 2 format as copper connections. There are some slight alterations - the minimum frame size is slightly larger - but the fundamentals of the frame structure appear to be the same, including transmission with a source and destination L2 address. Why doesn't the address of gi 0/0 show up in the MAC address table of the connected switch? The only thing which seems to make sense would be that the GBIC has its own MAC address, almost as if it's acting as a mini 2-port switch or hub, with the switch-assigned MAC address showing up on the interface connection and a different MAC address assigned to the fiber side. If this is the case, is there any way to see the GBIC MAC address on the switch? (I've tried to look up the details in IEEE 802.3z but it doesn't seem to be available without an IEEE membership or purchasing the standard. I can find the base 802.3 PDFs for download, but not 802.3z.)

    Read the article

  • Windows Server 2003 DHCP not handing out IPs

    - by SnOrfus
    I'm trying to set up a home server (to tinker with) as a domain controller. I've set up the domain, installed DHCP and set up a scope without any exclusions (with the default range of 192.168.0.1-254). My client machine is a Windows 7 (RC) machine and it has a connection but can't get an IP address. Even if I set the IP to a static 192.168.0.2, there is still no connectivity. I can ping it from the server, but pinging the server from the client just times out. The only thing between the server and the client is a 24-port switch (D-Link DES-1024D). edit: OK, it turned out that the interfaces were set up backwards in the NAT settings (the internal NIC connection was set to public and the external NIC connection was set to private). I changed this and all was OK... sort of. The problem now: if I set a static IP on the client (where I am typing this from), all is fine. BUT when I set it to get one from DHCP, I get a correct IP from the server (192.168.0.2) but there is no internet on the client; I can still ping the server fine from the client (which makes sense, because I was able to get an IP from it). edit: I ended up just removing the Routing and DHCP server roles and going with ICS for the time being, until I get my hands on some better learning tools.

    Read the article

  • How to remove request blocking on an Apache reverse proxy after a backend failure, before asking the backend again

    - by matnagel
    I am working on an apache2 reverse proxy vhost. When the server behind apache is down, the first request to apache shows the error page, of course. But on subsequent requests it seems apache delays for some time before asking the backend server again. During all this time (which is short, but in development I don't want a delay at all) only the apache error page is shown to the browser, although the backend server is already up. Where is this setting in apache, what is this behaviour, and how can I set the delay time to zero? Edit: I am not trying to change the timeout for a single request. I want to change the blocking time. It is my experience that apache blocks further requests for a certain time before asking a backend server again that has failed once. Edit 2: This is what apache delivers:

        Service Temporarily Unavailable
        The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
        Apache/2.2.8 (Ubuntu) PHP/5.2.4-2ubuntu5.7 with Suhosin-Patch proxy_html/3.0.0 Server at localhost Port 80

    After hitting Ctrl-R in Firefox for 60 seconds, the page finally appears.
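
    The behaviour described matches mod_proxy's per-worker retry window: after a connection failure the backend worker is placed in an error state and is not contacted again for retry seconds (60 by default, which fits the observation above). A minimal sketch of the relevant directive; the path and backend address are made up:

        # retry=0 disables the error-state hold-off, so Apache asks the backend on every request
        ProxyPass        /app http://127.0.0.1:3000/ retry=0
        ProxyPassReverse /app http://127.0.0.1:3000/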

    Read the article

  • No Outbound Internet on Windows Home Server

    - by Kyle B.
    Could someone provide some steps for me to check my internet connection on my Windows Home Server? It seems to have intermittent connectivity issues and I am unsure of how to diagnose the problem, because it is a headless (no monitor, no keyboard) machine, so the only way to get to the device is via remote desktop (which works fine). When connected to the machine, it doesn't pull up any microsoft.com sites; some other sites it does pull up (e.g. gmail.com) and some it doesn't (stackoverflow.com). To make matters more complicated, it has worked intermittently in the past for reasons unknown. Are there tools I can use to properly diagnose the reason for the connection failure? I can ping 127.0.0.1 just fine, and I have internet working on my other router-connected machines, so I'm not sure why this one would fail. Any suggestions would be much appreciated and up-voted :) edit - thanks for the suggestions guys, I'm going to try these tonight and will update my post. edit #2 - I'm hoping this is a more permanent fix, but I have both changed the port on the router and restarted the router at the same time. The internet (for the moment) appears to be working. I will be sure to try everything we have discussed should this problem persist. Thanks, Kyle
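
    Since the box is reachable over Remote Desktop, the built-in command-line tools are enough to narrow the failure down to DNS, routing or something further out. A short checklist of standard Windows commands (the gateway address and host names are only examples):

        rem check the assigned IP, default gateway and DNS servers
        ipconfig /all
        rem can we reach the router? (replace with your gateway address)
        ping 192.168.1.1
        rem raw connectivity past the router, no DNS involved
        ping 8.8.8.8
        rem does name resolution work at all?
        nslookup microsoft.com
        rem where along the path do packets stop?
        tracert microsoft.com

    If the pings by IP succeed but nslookup fails or resolves inconsistently, the intermittent behaviour is a DNS problem rather than a connectivity one.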

    Read the article

  • Apache2 MPM-prefork MPM-worker multiple instances on same Ubuntu host machine possible?

    - by user60985
    I have a live Apache2/MPM-worker instance running Django. I want to also run an Apache2/MPM-prefork instance to run some Drupal 6 applications on the same host machine and utilize a vast selection of PHP modules that run on the prefork model. I plan to use my MPM-worker instance to reverse proxy to the Apache2-prefork instance for URLs starting with myhost.com/drupal6/. It seems theoretically doable/configurable by having the second Apache2-prefork instance listen on an internal port, say 127.0.0.1:8080, and having my current Apache2-worker configured with ProxyPass and ProxyPassReverse to it for the 'drupal6' URLs. However, how do I compile or install the apache2-prefork version so it has a different executable name than /usr/sbin/apache2, for example /usr/sbin/apache2p, so that apache2ctl has a different name, say apache2pctl, and apache2pctl invokes /usr/sbin/apache2p instead of /usr/sbin/apache2... and so on down the line (e.g. /etc/apache2p), so I can start and restart my two instances independently? As I understand it, no single 'apache2' executable can be compiled with both the MPM-prefork and MPM-worker modules, so it seems I need two separate builds of the two apache2 MPM flavors. But then I need to invoke and control them by separate names, I assume. I looked at the configuration options for apache2 and I am a bit queasy about compiling a second apache2 version with prefork, because I am not sure I can set all the options so that none of my current apache2 files is overwritten. Is there a way? Is there a standard solution to separately installing and controlling prefork and worker apache2 executables on the same machine without them stepping on each other during installation or operation?
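
    One approach that leaves the distribution's /usr/sbin/apache2 completely alone is to build a second Apache from source with the prefork MPM into its own prefix, give it its own config listening only on 127.0.0.1:8080, and control it with its own apachectl. A rough sketch; the version number and paths are illustrative, not a tested recipe:

        # build a separate prefork Apache under /opt; nothing under /usr or /etc/apache2 is touched
        tar xzf httpd-2.2.x.tar.gz && cd httpd-2.2.x
        ./configure --prefix=/opt/apache2-prefork --with-mpm=prefork --enable-so
        make && make install

        # edit /opt/apache2-prefork/conf/httpd.conf: Listen 127.0.0.1:8080, plus the PHP module built against this Apache
        /opt/apache2-prefork/bin/apachectl start

    The stock worker instance then only needs ProxyPass /drupal6/ http://127.0.0.1:8080/drupal6/ (and the matching ProxyPassReverse), which is the arrangement described above.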

    Read the article

  • Unable to login to Amazon EC2 compute server

    - by MasterGaurav
    I am unable to log in to the EC2 server. Here's the log of the connection attempt:

        $ ssh -v -i ec2-key-incoleg-x002.pem [email protected]
        OpenSSH_5.6p1, OpenSSL 0.9.8p 16 Nov 2010
        debug1: Reading configuration data /home/gvaish/.ssh/config
        debug1: Applying options for *
        debug1: Connecting to ec2-50-16-0-207.compute-1.amazonaws.com [50.16.0.207] port 22.
        debug1: Connection established.
        debug1: identity file ec2-key-incoleg-x002.pem type -1
        debug1: identity file ec2-key-incoleg-x002.pem-cert type -1
        debug1: identity file /home/gvaish/.ssh/id_rsa type -1
        debug1: identity file /home/gvaish/.ssh/id_rsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3
        debug1: match: OpenSSH_5.3 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.6
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host 'ec2-50-16-0-207.compute-1.amazonaws.com' is known and matches the RSA host key.
        debug1: Found key in /home/gvaish/.ssh/known_hosts:8
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: ec2-key-incoleg-x002.pem
        debug1: read PEM private key done: type RSA
        debug1: Authentications that can continue: publickey
        debug1: Trying private key: /home/gvaish/.ssh/id_rsa
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    What can be the possible reason? How do I fix the issue?
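
    The key exchange completes and only the public-key authentication is rejected, so this is almost always either a key pair that doesn't match the one the instance was launched with, or a login name that doesn't match the AMI. The user names below are examples only (they are not taken from the question); which one is right depends on the image:

        # Amazon Linux AMIs typically use ec2-user, Ubuntu AMIs use ubuntu, some others use root
        ssh -i ec2-key-incoleg-x002.pem ec2-user@ec2-50-16-0-207.compute-1.amazonaws.com
        ssh -i ec2-key-incoleg-x002.pem ubuntu@ec2-50-16-0-207.compute-1.amazonaws.com

    If the user name is right, the next thing to verify in the AWS console is that the instance was actually launched with the key pair that ec2-key-incoleg-x002.pem belongs to.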

    Read the article

  • DHCP not responding from laptop or router, but works on directly plugged PC?

    - by Matt H
    I'm at my sister-in-law's place in Singapore. I'm not from Singapore but am here for a few months. She has some sort of cable modem made by Motorola (SB5101 Surfboard). I think it goes through StarHub or a similar provider. Anyhow, her PC is directly attached by cable (not wireless) and she can access the internet. There is no wireless router connected to it. The PC is configured with DHCP and appears to be working. However, the moment I unplug her PC and plug in my laptop, it doesn't get an address. The interesting thing here is that I also see this Teredo tunnel adapter etc. I'm not familiar with what that is. It appears to be assigned an IPv6 address and an IPv4 address. I thought perhaps it was my laptop, but when I plug in my DD-WRT based router, it also fails to get a DHCP-assigned address on the WAN port. I also can't seem to connect to any web configuration interface on the Motorola modem. Any ideas? What kind of setup is this? All I'd like to do is plug in my wireless router so I can roam around the house and also access the internet.

    Read the article

  • CouchDB crashes at startup when path to config file has space(s)

    - by Barry Wark
    I'm hoping to run CouchDB as a per-user Launch Agent on OS X. I'm using the couchdbx-core folder from the CouchDB Server.app as the base of my CouchDB deployment. I'd like each user to have their own couch instance (on a different port), necessitating separate config files for each instance. The logical place to put these files is in ~/Library/Application Support/ for each user. I can put the entire distribution in ~/Library/Application Support/my-app/couchdbx, and put the .ini at ~/Library/Application Support/my-app/local.ini. Starting couchdb as bin/couchdb -a ../local.ini (from ~/Library/Application Support/my-app/couchdbx) works great. But I'd like to save every user the ~50MB couchdbx and install the couchdbx-core in a shared location (e.g. within my app's .app bundle). When I do this, the path to the per-user config file contains a space, and I get the following error when starting CouchDB:

        $ bin/couchdb -n -a ~/Library/Application\ Support/us.physion.ovation/default.ini
        {"init terminating in do_boot",{{badmatch,{error,{bad_return,{{couch_app,start,[normal,["/Users/hs/prj/build-couchdb/build/etc/couchdb/default.ini","/Users/hs/prj/build-couchdb/build/etc/couchdb/local.ini"]]},{'EXIT',{{badmatch,{error,{error,enoent}}},[{couch_server_sup,start_server,1,[{file,"/Users/hs/prj/build-couchdb/dependencies/couchdb/src/couchdb/couch_server_sup.erl"},{line,56}]},{application_master,start_it_old,4,[{file,"application_master.erl"},{line,274}]}]}}}}}},[{couch,start,0,[{file,"/Users/hs/prj/build-couchdb/dependencies/couchdb/src/couchdb/couch.erl"},{line,18}]},{init,start_it,1,[]},{init,start_em,1,[]}]}}

    Is there any way to provide a config file at the command line if that config file's path includes space(s)? Despite my best efforts in the mailing list archives, wiki and Google, I haven't been able to find a solution or a definitive "it can't work". Any help greatly appreciated.
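
    One workaround that sidesteps the question entirely (it does not fix CouchDB's handling of the -a argument) is to give each per-user config location a space-free alias, for example a symlink the app creates on first run, and point couchdb at that. A sketch with made-up paths:

        # a space-free path pointing at the real per-user config directory
        ln -s "$HOME/Library/Application Support/my-app" "$HOME/.my-app"

        # start couchdb against the symlinked path, which contains no spaces
        bin/couchdb -a "$HOME/.my-app/local.ini"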

    Read the article

  • Cannot connect to TFS 2012 server through SSL with invalid certificate

    - by DaveWut
    I saw the problem on some forums and even here, but not as specific as mine. So here's the thing: I've configured a TFS 2012 server on one of my personal servers at home, and now I'm trying to make it available through the internet, with the help of apache2 on a different, UNIX-based physical server. It is working: I don't have any problem accessing the address https://tfs.something.com/tfs through my browser. The address can be pinged and I do have access to the TFS control panel through it. How does it work? Well, with apache2 you can set up a virtual host with the ProxyPass and ProxyPassReverse settings, so the traffic comes in externally over a secure SSL connection through a specified domain or sub-domain, but is redirected locally to a plain HTTP session on a different port. This is my current setup. As I already said, I can access the web interface, but when I try to connect with Visual Studio 2012, it can't be done. Here's the error I receive: http://i.imgur.com/TLQIn.png The technical information tells me: "The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel." My SSL certificate is invalid and was automatically generated on my UNIX server. Even if I add it to the Trusted Root Certification Authorities, either on my TFS server or on my local workstation, it doesn't work; I still receive the same error. Is there a way to completely ignore certificate validation? If not, what have I done wrong? I mean, I've added the certificate to the trusted root certificates; it should work, as mentioned on some forums... If you need more information, please ask me, I'll be pleased to provide more. Dave
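
    For what it's worth, this particular .NET error is raised both for an untrusted issuer and for a host-name mismatch, and importing the certificate into the trusted roots only fixes the former: if the self-signed certificate was generated for the UNIX box's own host name rather than for tfs.something.com, Visual Studio will keep rejecting it. A hedged sketch of regenerating a self-signed certificate for the public name and terminating SSL in the Apache vhost; host names, file paths and the backend address are illustrative, and mod_ssl/mod_proxy_http are assumed to be enabled:

        # self-signed certificate issued for the name Visual Studio actually connects to
        openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
            -subj "/CN=tfs.something.com" \
            -keyout /etc/apache2/ssl/tfs.key -out /etc/apache2/ssl/tfs.crt

        <VirtualHost *:443>
            ServerName tfs.something.com
            SSLEngine on
            SSLCertificateFile    /etc/apache2/ssl/tfs.crt
            SSLCertificateKeyFile /etc/apache2/ssl/tfs.key
            ProxyRequests Off
            ProxyPass        /tfs http://192.168.1.20:8080/tfs
            ProxyPassReverse /tfs http://192.168.1.20:8080/tfs
        </VirtualHost>

    The regenerated certificate still has to be imported into the workstation's Trusted Root Certification Authorities, since it remains self-signed.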

    Read the article

  • Strange performance differences in read/write from/to USB flash drive

    - by Mario De Schaepmeester
    When copying files from my 8GB USB 2.0 flash drive to a traditional hard drive under Windows 7, the average speed is between 25 and 30 MB/s. When doing the reverse, copying to the USB drive, the average speed is 5 MB/s. I have tested this with about 4.5GB of files, a mixture of smaller and larger ones. The observations were the same with both FAT32 and exFAT file systems on the USB drive, and NTFS on the internal hard disk. I don't think I can be mistaken in saying that flash memory has much higher performance than a spinning hard drive in terms of both reading and writing. For both memory types, reading should be faster than writing too. Now I wonder: how can it be that copying files from a fast-reading memory to a faster-writing memory is actually slower than copying files from a fast-reading memory to a slow-writing memory? I think that the files are stored in RAM before being copied over, and there's caching as well, but I don't see how even that could tip the balance. It could only work to the advantage of writing to the USB drive, since it is "closer" to the SATA system than the USB port and will receive data from the internal SATA HDD faster. Perhaps my way of thinking is all wrong, or it just depends on the manufacturer of the USB pen. But I am curious.

    Read the article

  • Cisco ASA and static IPv6 tunnel endpoint?

    - by Martijn Heemels
    I recently installed a Cisco ASA 5505 firewall on the edge of our LAN. The setup is simple: Internet <-- ASA <-- LAN. I would like to provide the hosts in the LAN with IPv6 connectivity by setting up a 6in4 tunnel to SixXS. It would be nice to have the ASA as the tunnel endpoint so it can firewall both IPv4 and IPv6 traffic. Unfortunately the ASA apparently can't create a tunnel itself, and can't port-forward protocol 41 traffic, so I believe I would have to do one of the following instead: 1) set up a host with its own IP outside the firewall and have that function as the tunnel endpoint; the ASA can then firewall and route the v6 subnet to the LAN. 2) Set up a host inside the firewall that functions as the endpoint, separated via VLAN or whatever, and loop the traffic back into the ASA where it can be firewalled and routed. This seems contrived, but would allow me to use a VM instead of a physical machine as the endpoint. Any other way? What would you suggest is the optimal way to set this up? P.S. I do have a spare public IP address available if needed, and can spin up another VM in our VMware infrastructure.
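
    In case it helps to see what either option amounts to on the endpoint host itself: a static 6in4 tunnel on a small Linux box is just a sit interface. A minimal iproute2 sketch; the PoP address, local address and IPv6 prefixes are placeholders to be replaced with the values from the SixXS tunnel details:

        # protocol-41 tunnel to the tunnel broker's PoP (addresses are placeholders)
        ip tunnel add sixxs mode sit local 198.51.100.10 remote 192.0.2.1 ttl 64
        ip link set sixxs up
        ip -6 addr add 2001:db8:1234::2/64 dev sixxs     # your side of the tunnel subnet
        ip -6 route add default dev sixxs

        # the routed prefix then goes on the LAN-facing interface and is advertised, e.g. with radvd

    If passing protocol 41 through or around the ASA turns out to be the sticking point, SixXS's AYIYA tunnel type (via the aiccu client) is the usual NAT- and firewall-friendly alternative.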

    Read the article

  • .htaccess error "not allowed here" for all instructions

    - by andres descalzo
    I am using Debian Lenny and Apache 2. I changed the default site configuration with: AllowOverride AuthConfig. But I always get the error message "not allowed here" when putting any instructions in the .htaccess file. EDIT: the default site file:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/
            <Directory />
                Options FollowSymLinks
                Order allow,deny
                Allow from all
                AllowOverride All
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks Includes
                #AllowOverride All
                #AllowOverride Indexes AuthConfig Limit FileInfo
                AllowOverride AuthConfig
                Order allow,deny
                Allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    .htaccess:

        #Options +FollowSymlinks
        # Prevent directory listing
        Options -Indexes
        # Prevent direct access to files
        <FilesMatch "\.(tpl|ini)">
            Order deny,allow
            Deny from all
        </FilesMatch>
        # SEO URL Settings
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)\?*$ index.php?_route_=$1 [L,QSA]

    PHP info:

        apache2handler
        Apache Version = Apache/2.2.9 (Debian) PHP/5.2.6-1+lenny10 with Suhosin-Patch
        Apache API Version = 20051115
        Server Administrator = webmaster@localhost
        Hostname:Port = hw-linux.homework:80
        User/Group = www-data(33)/33
        Max Requests = Per Child: 0 - Keep Alive: on - Max Per Connection: 100
        Timeouts = Connection: 300 - Keep-Alive: 15
        Virtual Server = Yes
        Server Root = /etc/apache2
        Loaded Modules = core mod_log_config mod_logio prefork http_core mod_so mod_alias mod_auth_basic mod_authn_file mod_authz_default mod_authz_groupfile mod_authz_host mod_authz_user mod_autoindex mod_cgi mod_deflate mod_dir mod_env mod_mime mod_negotiation mod_php5 mod_rewrite mod_setenvif mod_status
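
    "Not allowed here" is Apache's way of saying that a directive in .htaccess belongs to an override group that AllowOverride does not grant. The .htaccess above uses Options (which needs the Options group), mod_rewrite directives (which need FileInfo) and an Order/Deny block (which needs Limit), while only AuthConfig is allowed for /var/www/. A sketch of a directory block that would cover everything shown, to be trimmed to taste:

        <Directory /var/www/>
            Options Indexes FollowSymLinks Includes
            AllowOverride AuthConfig FileInfo Limit Options
            Order allow,deny
            Allow from all
        </Directory>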

    Read the article

  • HTML Redirect issue with Apache2

    - by Vijit Jain
    I am facing an issue with ProxyPass on my Apache server on Ubuntu. I have configured Apache to deal with virtual hosts on my server. There is an application that runs on the server and uses ports 8001 and 8002. I need something like www.example.com/demo/origin to display the contents that I would see when I visit www.example.com:8000. The contents to be displayed are a host of HTML pages. This is the section of the virtual host config that has issues:

        ProxyPass /demo/vader http://www.example.com:8001/
        ProxyPassReverse /demo/vader http://www.example:8001/
        ProxyPass /demo/skywalker http://www.example.com:8002/
        ProxyPassReverse /demo/skywalker http://www.example.com:8002/

    Now when I visit example.com/demo/skywalker, I see the first page served on port 8002, say the login.html page. The second page should have been www.example.com/demo/skywalker/userAction.html; instead the server shows www.example.com:8000/login.html. In the error logs I see something like:

        [Mon Nov 11 18:01:20 2013] [debug] mod_proxy_http.c(1850): proxy: HTTP: FILE NOT FOUND /htdocs/js/demo.72fbff3c9a97f15a4fff28e19b0de909.min.js

    I do not have any folder htdocs on the system. This is only an issue while viewing .html pages; otherwise, no such issue occurs. When I visit localhost:8001 it will show any and all contents without any errors or issues. www.example.com/demo/skywalker displays a separate webpage, www.example.com/demo/origin displays a different webpage and www.example.com/demo/vader displays a different webpage. I have also tried one more combination:

        <Location /demo/origin/>
            ProxyPass http://localhost:8000/
            ProxyPassReverse http://localhost:8000/
            ProxyHTMLURLMap http://localhost:8000/ /
        </Location>

    This fails as well. I would greatly appreciate it if anyone can help me resolve this issue.
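
    ProxyPass and ProxyPassReverse only rewrite the request line and the response headers; links, form actions and redirects generated inside the HTML still point at the backend's own root, which is the symptom described. Rewriting the HTML body is what mod_proxy_html is for. A hedged sketch of the usual arrangement, assuming a mod_proxy_html version that supports ProxyHTMLEnable (3.x does; older versions use SetOutputFilter proxy-html instead); ports and paths follow the question:

        <Location /demo/skywalker/>
            ProxyPass http://127.0.0.1:8002/
            ProxyPassReverse http://127.0.0.1:8002/
            ProxyHTMLEnable On
            ProxyHTMLURLMap http://127.0.0.1:8002/ /demo/skywalker/
        </Location>

    Root-relative links inside the HTML (href="/login.html" and the like) need their own map, e.g. ProxyHTMLURLMap / /demo/skywalker/, ordered with some care so already-rewritten URLs are not rewritten twice. Note also that mod_proxy_html only touches HTML attributes, so URLs assembled in JavaScript (which may explain the /htdocs/js/... lookups) can still require the application to emit relative paths.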

    Read the article

  • LogMeIn Hamachi for Linux

    - by tlunter
    So far most of my work using LogMeIn Hamachi has been from either a Mac OS X or Windows system to a Windows or Linux computer. Recently I purchased a mini computer and have been running Ubuntu Server on it as my little server. I knew LogMeIn had a Linux client that is command line only, but I often do all my work via the command line anyway, so that wasn't an issue. I added my user to the correct local file so that I could run the hamachi daemon without sudo, and was able to connect to LogMeIn's service. I decided to set up my Linux server as a git server as well, and set it up correctly. The thing is, the server is behind my school's firewall and I need to use Hamachi to get around that. Since most of the time I was using either Mac or Windows, I never had an issue SSHing onto any of my computers, since LogMeIn is fully featured for these OSs. From Linux (Arch) though, it seems like the client cannot correctly route to the LogMeIn IPs. I know from Windows I can connect to the Linux computers, both of them. From Linux (Arch) though, I can't connect to my Mac, Windows, or Linux server; it keeps just dropping the connection. I was wondering if there was some configuration that I would need to make for this to work. I understand that it is most likely going to be a static configuration, since I assume it has to do with the computer not understanding that 5.*.*.* actually refers to another IP:Port. Has anyone had any experience getting this to work?

    Read the article

  • IDE/PATA high-speed hard drive dock

    - by wfaulk
    I frequently need to access bare drives for backups and need a quick, high-speed way to deal with them. There are a multitude of SATA hard drive docks (for example), but I have a lot of IDE/PATA (hereafter "IDE") drives that I would like to be able to use similarly. There are IDE-to-SATA adapters so you can plug your IDE hard drive into a SATA port, so I don't see any reason why you couldn't use the same technology to make a native dock, yet none seems to exist. Now, I'm aware that 3.5" IDE drives do not have a specification for the layout of the connector, and therefore can't be slapped into a dock the same way a SATA drive could, but 2.5" PATA drives do. In fact, I'm not terribly interested in supporting 3.5" drives. It would be nice, but I deal with them far less frequently than 2.5" drives. Also, I'd very much like the connection to the computer to be faster than USB, preferably eSATA; I don't want to be spending time mounting a drive inside an enclosure, I don't want bare drives lying around with a cable hanging off of them, and I'd prefer a single dock rather than two. What seems like the ideal solution to me would be a regular SATA→eSATA dock and some sort of screwless adapter for IDE drives, but I'm open to any suggestions, regardless of my stated preferences, which are, in some sort of order of preference:

        - high-speed (faster than USB, at least)
        - holder for the drive (not just a cable)
        - no complicated enclosure
        - support for 3.5" IDE drives
        - single dock

    Updates: Here's a 3.5" IDE to 3.5" SATA docking adapter that could be part of the solution. Weird. I figured that would be the impossible part. I was hoping to find something like this 2.5" to 3.5" SATA chassis that would take a 44-pin IDE drive internally. It looks like the Vantec EZ Swap EX comes awfully close. It has its own bay dock, but it looks like the SATA ports on the back are spaced properly, even if they're not aligned quite properly. Unfortunately, the proper position is at the very edge of the drive, which means that the dock's connectors are at the very edge of their recesses, which means there's no way to fit it in there.

    Read the article

  • Rename a Network Printer in Win 7

    - by Alex
    Seems this is a common question but the answers regularly miss the point. I have a server. The server has some printers connected. The server has all drivers for x32 & x64 OS PLUS ALL DEFAULTS set. The server also manages the print queue. I have many workstations, and all need to use the printers. All NEED to have drivers, print queue and DEFAULTS propagated from the server. Now.... When I add the printers on the workstations, I get: "ABC Printer on SERVER123". I need something less long - just "ABC Printer". So how? Tip: Please don't show me how to change the name of a locally installed printer. I know how to do this - I am particularly interested in shared printers that look like "ABC Printer on SERVER123". Tip: Installing the driver with a local port won't cut it, because then I lose the server-propagated defaults and the driver updates, and I need to run around with driver disks / confuse trembling users with hard things like choosing drivers. I am happy with a hack if there is no official way to do this in group policy.... I tried looking in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Print\Printers on the workstation machines, but those are only local printers :( I can see the network printer details on the workstations here: HKEY_USERS\[Some GUID]\Printers\Connections. But there is nothing obvious like a description string... Can anyone help with this?

    Read the article

  • Issue with InnoDB engine while re-enabling after [ skip-innodb ] - [ERROR] Plugin 'InnoDB' init function returned error

    - by Ahn
    How do I enable InnoDB, which was previously disabled with the skip-innodb option? Case 1: I disabled InnoDB with the skip-innodb option, and show engines gives the following:

        Engine | Support ...
        InnoDB | NO ......

    Case 2: As I want to enable InnoDB, I commented out the skip-innodb option and restarted. But now show engines does not even show the InnoDB engine in the list. Why?

        MySQL version: 5.1.57-community-log
        OS: CentOS release 5.7 (Final)

    Log:

        120622 13:06:36 InnoDB: Initializing buffer pool, size = 8.0M
        120622 13:06:36 InnoDB: Completed initialization of buffer pool
        InnoDB: No valid checkpoint found.
        InnoDB: If this error appears when you are creating an InnoDB database,
        InnoDB: the problem may be that during an earlier attempt you managed
        InnoDB: to create the InnoDB data files, but log file creation failed.
        InnoDB: If that is the case, please refer to
        InnoDB: http://dev.mysql.com/doc/refman/5.1/en/error-creating-innodb.html
        120622 13:06:36 [ERROR] Plugin 'InnoDB' init function returned error.
        120622 13:06:36 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        120622 13:06:36 [Note] Event Scheduler: Loaded 0 events
        120622 13:06:36 [Note] /usr/sbin/mysqld: ready for connections.
        Version: '5.1.57-community-log' socket: '/data/mysqlsnd/mysql.sock1' port: 3307 MySQL Community Server (GPL)
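
    "No valid checkpoint found" generally means the redo log files (ib_logfile*) in the datadir don't match the data files or the configured log size, so the InnoDB plugin fails to initialise and simply disappears from show engines. A commonly suggested recovery path is to move the redo logs aside and let InnoDB recreate them; the datadir and service name below are assumptions, and a backup should come first:

        service mysqld stop                  # service name may differ
        cd /var/lib/mysql                    # datadir assumed; check datadir in my.cnf
        mv ib_logfile0 ib_logfile0.bak
        mv ib_logfile1 ib_logfile1.bak
        service mysqld start                 # InnoDB recreates the redo logs at the configured size

    The MySQL error log should then show InnoDB creating new log files and registering as a storage engine.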

    Read the article

  • Cannot establish a network connection to VMWare Fusion VM

    - by twolfe18
    I am running Snow Leopard 10.6.2 (not the server edition) with VMWare Fusion 3.0.0 and I am trying to get my Ubuntu 9.10 x86_64 VM working. I am using a bridged connection, and I have an internet connection FROM the Ubuntu VM (I can download updates, ping websites, etc.), but I cannot connect TO the Ubuntu box from any other device on my network. I am trying to get Mongrel up on the Ubuntu VM for some Rails stuff, but it's not working. I know Mongrel/Rails is not the problem, because if I start the server on the Ubuntu VM, background the process, and then wget the index page, it works. I just cannot connect to the site from another IP. I have tried using a static IP and a DHCP configuration on the Ubuntu VM; neither works for incoming connections (both work for outgoing). I have port-scanned the Ubuntu VM, and it appears that all ports are closed. However, the Ubuntu VM does respond to pings. I noticed a similar question here: http://serverfault.com/questions/99757/setting-up-a-bridged-network-with-vmware-fusion, but no answer. Any ideas?
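
    Given that the local wget works while a port scan from outside shows everything closed, the usual suspects are a firewall inside the guest or the service binding only to 127.0.0.1. A couple of hedged checks on the VM, assuming Mongrel's default port 3000:

        # is anything filtering traffic inside the guest?
        sudo iptables -L -n

        # is Mongrel listening on 0.0.0.0 (all interfaces) or only on 127.0.0.1?
        sudo netstat -tlnp | grep 3000

        # if it is bound to localhost only, bind it to all interfaces explicitly
        mongrel_rails start -b 0.0.0.0 -p 3000 -d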

    Read the article

  • NetApp FAS270 head doesn't see disks

    - by wfaulk
    I have an FAS270C. For months, I've been running it in a split-head manner (that is, with each head serving data totally independently, and without any clustering even being enabled) in order to facilitate moving some data around. I finally got everything situated, moved all the data to one of the heads, and was trying to get clustering set back up. Now when I try to install OnTap onto the "new" head, it cannot see any of the disks in the head shelf. (That is, the shelf into which the heads are inserted.) I've booted into maintenance mode, and it shows me that the 0b adapter, which should be the adapter that that shelf and its disks should be presented on, is in "OFFLINE (physical)" state. If I try to enable it with either "storage enable adapter 0b" or "fcadmin online 0b", it waits for about 30 seconds and then says:

        [fci.initialization.failed:error]: Initialization failed on Fibre Channel adapter 0b.
        [fci.adapter.online.failed:error]: Fibre Channel adapter 0b failed to come online.

    There is currently nothing attached to its external 0b port. I've tried it with and without an SFP plugged into it, and with and without its internal termination switch on. The currently active head can see those disks, and can see that two of them are assigned to the other head. Before I started reconfiguring, the "new" head could see disks on that shelf. They may even be the same disks that OnTap was installed on previously. Does anyone have any idea how to proceed?

    Read the article
