Search Results

Search found 19788 results on 792 pages for 'remote host'.

  • Nginx won't send POST to fastcgi backend, but GET works fine?

    - by xyld
    Not sure why, but it is happy sending a GET to the fastcgi backend (Mercurial hgwebdir in this case), but simply resorts to the filesystem if the request is a POST. Relevant parts of nginx.conf: location / { root /var/www/htdocs/; index index.html; autoindex on; } location /hg { fastcgi_pass unix:/var/run/hg-fastcgi.socket; include fastcgi_params; if ($request_uri ~ ^/hg([^?#]*)) { set $rewritten_uri $1; } limit_except GET { allow all; deny all; auth_basic "hg secured repos"; auth_basic_user_file /var/trac.htpasswd; } fastcgi_param SCRIPT_NAME "/hg"; fastcgi_param PATH_INFO $rewritten_uri; # for authentication fastcgi_param AUTH_USER $remote_user; fastcgi_param REMOTE_USER $remote_user; #fastcgi_pass_header Authorization; #fastcgi_intercept_errors on; } GET's work fine, but POST delivers this error to the error_log: 2010/05/17 14:12:27 [error] 18736#0: *1601 open() "/usr/html/hg/test" failed (2: No such file or directory), client: XX.XX.XX.XX, server: domain.com, request: "POST /hg/test HTTP/1.1", host: "domain.com" What could possibly be the issue? I'm trying to allow read-only access via GET's to the page, but require authorization when using hg push to the same url which sends a POST request.
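
    A possible restructuring, sketched below and not verified against this setup: move the PATH_INFO capture into a regex location instead of an "if" block, keep fastcgi_pass at the location level so every method reaches the backend, and put only the auth directives inside limit_except. That way GET stays anonymous while the POST issued by hg push has to authenticate.

        location ~ ^/hg(.*)$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_NAME "/hg";
            fastcgi_param PATH_INFO $1;
            fastcgi_param AUTH_USER $remote_user;
            fastcgi_param REMOTE_USER $remote_user;
            fastcgi_pass unix:/var/run/hg-fastcgi.socket;

            # GET and HEAD remain read-only and anonymous; anything else must authenticate
            limit_except GET HEAD {
                auth_basic "hg secured repos";
                auth_basic_user_file /var/trac.htpasswd;
            }
        }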

    Read the article

  • Load balancing with nginx and Tomcat

    - by London
    Hello, this should be fairly easy to answer for any system admin. The problem is that I'm not a server admin but I have to complete this task, and I'm very close but still not managing to do it. Here is what I mean: I have two Tomcat instances running on machine1 and machine2. People usually access them by visiting these URLs: http://machine1:8080/appName http://machine2:9090/appName The problem is that when I set up nginx with the domain name, i.e. domain.com, nginx sends requests to http://machine1:8080/ and http://machine2:9090/ instead of http://machine1:8080/appName and http://machine2:9090/appName Here is my configuration (very basic, as can be noted): upstream backend { server machine1:8080; server machine2:9090; } server { listen 80; server_name www.mydomain.com mydomain.com; location / { # needed to forward user's IP address to rails proxy_set_header X-Real-IP $remote_addr; # needed for HTTPS proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_max_temp_file_size 0; proxy_pass http://backend; } #end location } #end server What changes must I make so that when a user visits mydomain.com, he is sent to either machine1:8080/appName or machine2:9090/appName? Thank you
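
    One way to map / on the domain onto the /appName context, sketched here and untested against this application: give proxy_pass a URI part so nginx substitutes /appName/ for the matched prefix before forwarding. Whether the application's self-generated links and cookie paths still work when served from / would need checking (proxy_redirect or cookie rewriting may be needed); deploying the application as the ROOT context on both Tomcats would avoid the rewriting altogether.

        upstream backend {
            server machine1:8080;
            server machine2:9090;
        }

        server {
            listen 80;
            server_name www.mydomain.com mydomain.com;

            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                # "/" is replaced by "/appName/" when the request is passed upstream
                proxy_pass http://backend/appName/;
            }
        }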

    Read the article

  • Azure Virtual Machines - what fault tolerance do they provide?

    - by Borek
    We are thinking about moving our virtual machines (Hyper-V VHDs) to Windows Azure but I haven't found much about what kind of fault tolerance that infrastructure provides. When I run VHD in Azure, I've got two questions: Is my VHD and all the data in it safe? I think that uploaded VHDs use the "Storage" infrastructure so they should be automatically replicated to multiple disks and geographically distributed but should I still make a full-image backup just to be safe? (Note that of course I will be backing up the actual data inside VMs that I care about; I just want to know if there is a chance greater than 0.0000001% that one day I will receive an email from Microsoft telling me that my VM is gone and that I should create or restore it from scratch). Do I need to worry about other things regarding the availability of my VMs? I mean, when I have an on-premise server I need to worry about the hardware itself, about the host operating system, what would happen if my router failed, if my Hyper-V's C: drive failed etc. Am I right in thinking that with Azure, their infrastructure takes care of all of this? Thanks.

    Read the article

  • How to tell nginx to honor backend's cache?

    - by ChocoDeveloper
    I'm using php-fpm with nginx as http server (I don't know much about reverse proxies, I just installed it and didn't touch anything), without Apache nor Varnish. I need nginx to understand and honor the http headers I send. I tried with this config (taken from the docs) but didn't work: /etc/nginx/nginx.conf: fastcgi_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=website:10m inactive=10m; fastcgi_cache_key "$scheme$request_method$host$request_uri"; /etc/nginx/sites-available/website: server { fastcgi_cache website; #fastcgi_cache_valid 200 302 1h; #fastcgi_cache_valid 301 1d; #fastcgi_cache_valid any 1m; #fastcgi_cache_min_uses 1; #fastcgi_cache_use_stale error timeout invalid_header http_503; add_header X-Cache $upstream_cache_status; } I always get "MISS" and the cache dir is empty. If I uncomment the other directives, I get hit, but I don't want those "dumb" settings, I need to control them within my backend. For example, if my backend says "public, s-maxage=10", the cache should be considered stale after 10 secs. Instead, nginx will store it for 1h, because of these directives. I was thinking whether I should try proxy_cache, not sure what's the difference. In both fastcgi and proxy modules docs it says this: The cache honors backend's Cache-Control, Expires, and etc. since version 0.7.48, Cache-Control: private and no-store only since 0.7.66, though. Vary handling is not implemented. nginx version: nginx/1.1.19 Any thoughts? pd: I also have the reverse proxy that is offered by Symfony2 (which I turn off to use nginx's). The headers are interpreted correctly by it, so I think I'm doing it right.
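
    For what it's worth, fastcgi_cache_valid is documented as a fallback: caching parameters sent by the backend (X-Accel-Expires, Expires, Cache-Control max-age) take priority over it, so uncommenting those lines should not override a response header. A sketch follows, with two caveats: responses carrying Set-Cookie are never cached unless that header is ignored, and s-maxage appears to have gained support only in much later nginx releases than 1.1.19, so the backend may need to send max-age as well.

        server {
            fastcgi_cache website;
            # Fallback TTLs, used only when the backend sends no caching headers at all
            fastcgi_cache_valid 200 302 10m;
            fastcgi_cache_valid any 1m;
            # Do NOT set "fastcgi_ignore_headers Cache-Control Expires" here;
            # that is the directive that would make nginx disregard the backend.
            add_header X-Cache $upstream_cache_status;
        }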

    Read the article

  • Safe use of Update-FormatData?

    - by Steve B
    In a custom PowerShell module, I have this code at the top of my module definition: Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml") This works fine, as all .ps1xml files are loaded. However, the module is sometimes loaded using Import-Module MyModule -Force (actually, this is in the install script of the module). In this case, the call to Update-FormatData fails with this error: Update-FormatData : There were errors in loading the format data file: Microsoft.PowerShell, c:\pathto\myfile.Types.ext.ps1xml : File skipped because it was already present from "Microsoft.PowerShell". At line:1 char:18 + Update-FormatData <<<< -AppendPath "c:\pathto\myfile.Types.ext.ps1xml" + CategoryInfo : InvalidOperation: (:) [Update-FormatData], RuntimeException + FullyQualifiedErrorId : FormatXmlUpateException,Microsoft.PowerShell.Commands.UpdateFormatDataCommand Is there a way to safely call this command? I know I can call Update-FormatData with no parameters and it will update any known .ps1xml file, but that only works for files that have already been loaded. Is there somewhere I can list the loaded format data files? Here is a bit of background: I'm building a custom module that is installed using a script. The install script looks like: [CmdletBinding(SupportsShouldProcess=$true,ConfirmImpact="High")] param() process { $target = Join-Path $PSHOME "Modules\MyModule" if ($pscmdlet.ShouldProcess("$target","Deploying MyModule module")) { if(!(Test-Path $target)) { new-Item -ItemType Directory -Path $target | Out-Null } get-ChildItem -Path (Split-Path ((Get-Variable MyInvocation -Scope 0).Value).MyCommand.Path) | copy-Item -Destination $target -Force Write-Host -ForegroundColor White @" The module has been installed. You can import it using : Import-Module MyModule Or you can add it in your profile ($profile) "@ Write-Warning "To refresh any open PowerShell session, you should run ""Import-Module MyModule -Force"" to reload the module" Import-Module MyModule -Force Write-Warning "This session has been refreshed." } } MyModule defines, as its first statement, this line: Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml") Since I updated my $profile to always load this module, Update-FormatData has already been called by the time I run the install script. The install script then force-imports the module, which runs the module code, and therefore the Update-FormatData call, a second time, and that second call is the one that fails.
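
    A defensive sketch, assuming it is acceptable to treat the "already present" failure as non-fatal: append each format file individually and swallow that specific error on a forced re-import. Listing the files in the module manifest's FormatsToProcess entry instead of calling Update-FormatData manually is another route, since the module loader then handles repeated imports itself.

        # Sketch: tolerate format files that are already loaded when the module is re-imported with -Force
        Get-ChildItem -Path $PSScriptRoot -Filter *.ps1xml | ForEach-Object {
            try {
                Update-FormatData -AppendPath $_.FullName -ErrorAction Stop
            }
            catch {
                Write-Verbose "Skipping format file: $($_.Exception.Message)"
            }
        }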

    Read the article

  • Why can't a PC with 2 network cards be accessed by hostname?

    - by lewis
    I set up a PC with 2 network cards, both connected to the same LAN. I can connect to this PC (e.g. by remote desktop) only via IP addresses; accessing it by hostname does not work. Why is this the case? UPDATE: Full environment: 1. PC with 2 hardware network adapters. 2. VMware Workstation is installed on this PC, with 3 VMs networked using the "bridged" network setting in VMware. 3. All IP addresses on the LAN are assigned by DHCP. 4. Win2k8 on all hosts (both physical and virtual). As a result: 1. The PC has 2 IP addresses (e.g. 192.168.1.71 and 192.168.1.72). The PC is reachable on the LAN by IP address, but not by hostname. 2. Each VM has its own IP address (e.g. 192.168.1.73, *74, *75 etc). They are reachable from the LAN by their IPs, but not by their hostnames. How can I access the PC and the VMs by hostname?
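
    Some quick checks that usually narrow this down, sketched below (the hostname is a placeholder): confirm which adapters register themselves in DNS and what names the machine actually announces over NetBIOS. With two adapters on the same subnet it is also worth checking whether both register the same name, which can leave a stale or wrong record in DNS or WINS.

        :: per adapter, check "Register this connection's addresses in DNS"
        ipconfig /all
        :: force dynamic DNS registration on all adapters
        ipconfig /registerdns
        :: list the NetBIOS names this host has registered
        nbtstat -n
        :: from another LAN machine: does the DNS server return any record for the PC?
        nslookup pc-hostname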

    Read the article

  • PuTTY inserts random characters during a session

    - by Zachary Polikarpus
    I recently started renting space on a remote server so that I could work on a project. I found that a relatively painless way to access it on a windows machine is through PuTTY. However, there is one thing that has always irked me when using it: for seemingly no reason random characters are sometimes inserted at the cursor. Most of the time it is just a single tilde, but rarely it spits out what looks like some escape sequence ([[^8 or the like). It will only occur when I am focused on the window, whether I am typing or 20 feet away from the keyboard. If left for long enough, it will spit tildes at random intervals (average is about 1 minute). Finally, this behavior seems to be inconsistant when running programs such as nano or the mysql interface: in nano, instead of inserting tildes, it will set marks (ctrl-^); in mysql, lines will become un-editable. My question is this: Has anyone else experienced this sort of behavior in PuTTY? And if so, what can be done to prevent/correct this behavior?

    Read the article

  • Jailkit not locking down SFTP, working for SSH

    - by doublesharp
    I installed jailkit on my CentOS 5.8 server, and configured it according to the online guides that I found. These are the commands that were executed as root: mkdir /var/jail jk_init -j /var/jail extshellplusnet jk_init -j /var/jail sftp adduser testuser; passwd testuser jk_jailuser -j /var/jail testuser I then edited /var/jail/etc/passwd to change the login shell for testuser to be /bin/bash to give them access to a full bash shell via SSH. Next I edited /var/jail/etc/jailkit/jk_lsh.ini to look like the following (not sure if this is correct) [testuser] paths= /usr/bin, /usr/lib/ executables= /usr/bin/scp, /usr/lib/openssh/sftp-server, /usr/bin/sftp The testuser is able to connect via SSH and is limited to only view the chroot jail directory, and is also able to log in via SFTP, however the entire file system is visible and can be traversed. SSH Output: > ssh testuser@server Password: Last login: Sat Oct 20 03:26:19 2012 from x.x.x.x bash-3.2$ pwd /home/testuser SFTP Output: > sftp testuser@server Password: Connected to server. sftp> pwd Remote working directory: /var/jail/home/testuser What can be done to lock down SFTP access to the jail? FWIW, I mostly used this as a guide: http://digitalpatch.blogspot.com.ar/2010/03/openssh-daemon-hardening-part-3-setup.html
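
    If the sshd on the box is recent enough (OpenSSH 4.9 or later; stock CentOS 5 ships an older version, so this is an assumption), the SFTP side can be chrooted by sshd itself rather than by jailkit, for example with a Match block as sketched below; note that ChrootDirectory requires /var/jail and its parent directories to be root-owned and not writable by anyone else. Otherwise, double-check the sftp-server path in jk_lsh.ini: on CentOS it is normally /usr/libexec/openssh/sftp-server rather than the Debian-style /usr/lib/openssh/sftp-server.

        # /etc/ssh/sshd_config (sketch): chroot SFTP sessions for members of a "jailed" group
        Match Group jailed
            ChrootDirectory /var/jail
            ForceCommand internal-sftp
            AllowTcpForwarding no
            X11Forwarding no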

    Read the article

  • Setting up apache vhost for Icinga

    - by DKNUCKLES
    It's been a while since I've worked with Apache so please be kind - I'm also aware of this question but it hasn't been much help to me. I'd like to set up a simple vHost w/ Apache for my Icinga instance. Icinga is up and running and I can access it from x.x.x.x/icinga, however would like to be able to access it externally as well as internally. I have set up the /etc/hosts file and the following is my barebones vhost statement in httpd.conf <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot /usr/share/icinga ServerName icinga.domain.com ErrorLog logs/icinga.com-error_log CustomLog logs/dummy-host.example.com-access_log common </VirtualHost> I also have the following in my .htaccess file <Directory> Allow From All Satisfy Any </Directory> An entry has been made for the instance in the Windows DNS server on my network, however when I try to access the site by URL I am greeted with Internal Server Error. Reviewing the /var/log/icinga.com-error_log I see the following entry. [Thu Dec 13 16:04:39 2012] [alert] [client 10.0.0.1] /usr/share/icinga/.htaccess: <Directory not allowed here Can someone help me spot the error of my ways?
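
    The log line itself points at the cause: <Directory> blocks are not allowed in .htaccess files, only in the server configuration. One fix, sketched from the paths in the question, is to drop the .htaccess and put the access rules in the vhost; alternatively, keep the .htaccess but remove the <Directory> wrapper, leaving only the Allow and Satisfy lines (and make sure AllowOverride permits them).

        <VirtualHost *:80>
            ServerName icinga.domain.com
            DocumentRoot /usr/share/icinga
            <Directory /usr/share/icinga>
                Order allow,deny
                Allow from all
                Satisfy Any
            </Directory>
            ErrorLog logs/icinga.com-error_log
            CustomLog logs/icinga.com-access_log common
        </VirtualHost>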

    Read the article

  • Apache Virtual Hosts behind Cisco Router

    - by Theo
    I'm setting up an Apache 2.2 Ubuntu web server for internal services that is also supposed to be accessed from outside our LAN. Our LAN has a single external IP that is the external IP of our RV042 Cisco router. We have set up several A records on our external DNS server that point to this IP. Our internal DNS server resolve the same records to the internal IP of our web server, so computers from inside the network can access them using the same address as if they were outside. We forwarded the router's external 80 port to our web server's 80 port. I have set up one Virtual Host for each domain name in our list, and my httpd.conf is something like this: ServerName web.domain.com NameVirtualHost *:80 <VirtualHost *:80> ServerName alfresco.domain.com <Proxy *> Order deny,allow Allow from all </Proxy> ProxyPass /alfresco http://localhost:8080/alfresco ProxyPassReverse /alfresco http://localhost:8080/alfresco ProxyPass /share http://localhost:8080/share ProxyPassReverse /share http://localhost:8080/share </VirtualHost> <VirtualHost *:80> ServerName crm.domain.com DocumentRoot /var/www/sugarcrm </VirtualHost> Now, this works if we are in our LAN. However, if we are outside of our LAN we reach our web server's default page saying: It Works! This is the default web page for this server. But we can't reach the virtual hosts, as if the domain name is not being preserved when the router forward the packets to the web server. Am I doing something wrong? How can I check what is going on? What should be the settings to make this work from outside?
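
    Two checks usually isolate this kind of problem (a sketch; the IP is a placeholder): list the name-based vhosts Apache actually registered, and test from outside the LAN with an explicit Host header to see whether name-based matching works when the request arrives through the router. If the forced Host header returns the right site, the vhost configuration is fine and the external DNS records are the place to look; if it still returns the default page, the NameVirtualHost/ServerName matching is where the problem lies.

        # On the server: list the loaded vhosts and which one is the default for *:80
        apachectl -S

        # From a machine outside the LAN: force the Host header against the external IP
        curl -v -H "Host: alfresco.domain.com" http://203.0.113.10/share/
        curl -v -H "Host: crm.domain.com" http://203.0.113.10/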

    Read the article

  • Apache suddenly very slow on http and faster on https

    - by hsnm
    Background: I have Apache 2 running on ubuntu. There is a low usage on it and mostly being accessed for a web service URL from mobile apps. It was working fine until I installed SSL certificates. I now have both http and https. When I access the server using https, I get a fairly quick response (but probably not as fast as before). When I use http, it's so slow. What I tried: From this post: I curl localhost from the host and it takes some time, meaning there is no routing issue. The server runs on Amazon EC2 instance and is managed by me only. Also: I see that Apache once running, creates the maximum number of processes it is allowed to, which was not the case before. I lowered the MaxClients to 20 and I think I'm getting faster responses but it still takes over a minute and I always have MaxClients Apache processes. dmesg returns many [ 1953.655703] TCP: Possible SYN flooding on port 80. Sending cookies. When I netstat I get many entries with SYN_RECV. Possibly a DDoS attack? From EC2's monitoring diagrams I see a pattern of high "Maximum Network In (Bytes)" since 2 days ago. By the way the server is still being tested, the actual traffic is very low and not consistent. I tried to go with this solution to limit incoming connections using iptables, still no luck, but I'm trying. Question: What could be the problem? Is this a DDoS attack?
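
    Given the SYN_RECV entries and the kernel message, it may be worth confirming SYN cookies are enabled and rate-limiting new connections per source address while investigating; a sketch follows, with arbitrary thresholds that would need tuning (and the EC2 security group is another place to restrict the offending sources).

        # Enable SYN cookies and enlarge the SYN backlog (runtime only; persist in /etc/sysctl.conf)
        sysctl -w net.ipv4.tcp_syncookies=1
        sysctl -w net.ipv4.tcp_max_syn_backlog=2048

        # Drop a source that opens more than 20 new connections to port 80 within 10 seconds
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set --name HTTP
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --update --seconds 10 --hitcount 20 --name HTTP -j DROP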

    Read the article

  • 404 with serving static files in a custom nginx configuration

    - by code90
    In my nginx configuration, I have the following: location /admin/ { alias /usr/share/php/wtlib_4/apps/admin/; location ~* .*\.php$ { try_files $uri $uri/ @php_admin; } location ~* \.(js|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air)$ { expires 7d; access_log off; } } location ~ ^/admin/modules/([^/]+)(.*\.(html|js|json|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air))$ { alias /usr/share/php/wtlib_4/modules/$1/admin/$2; } location ~ ^/admin/modules/([^/]+)(.*)$ { try_files $uri @php_admin_modules; } location @php_admin { if ($fastcgi_script_name ~ /admin(/.*\.php)$) { set $valid_fastcgi_script_name $1; } fastcgi_pass $byr_pass; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /usr/share/php/wtlib_4/apps/admin$valid_fastcgi_script_name; fastcgi_param REDIRECT_STATUS 200; include /etc/nginx/fastcgi_params; } location @php_admin_modules { if ($fastcgi_script_name ~ /admin/modules/([^/]+)(.*)$) { set $byr_module $1; set $byr_rest $2; } fastcgi_pass $byr_pass; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /usr/share/php/wtlib_4/modules/$byr_module/admin$byr_rest; fastcgi_param REDIRECT_STATUS 200; include /etc/nginx/fastcgi_params; } Following is the requested url which ends up with "404": http://www.{domainname}.com/admin/modules/cms/styles/cms.css Following is the error log: [error] 19551#0: *28 open() "/usr/share/php/wtlib_4/apps/admin/modules/cms/styles/cms.css" failed (2: No such file or directory), client: xxx.xxx.xxx.xxx, server: {domainname}.com, request: "GET /admin/modules/cms/styles/cms.css HTTP/1.1", host: "www.{domainname}.com" Following urls works fine: http://www.{domainname}.com/admin/modules/store/?a=manage http://www.{domainname}.com/admin/modules/cms/?a=cms.load Can anyone see what the problem could be? Thanks. PS. I am trying to migrate existing sites from apache to nginx.
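
    One possible explanation, an educated guess rather than a verified diagnosis: the request matches the /admin/ prefix location, and the nested static-file regex inside it wins before the separate modules regex is ever considered, so the outer alias is applied and the file is looked up under apps/admin/. A sketch of a fix along those lines is to handle module assets inside the /admin/ block, ahead of the generic static rule:

        location /admin/ {
            alias /usr/share/php/wtlib_4/apps/admin/;

            # Module assets must be matched before the generic static-file rule below
            location ~* ^/admin/modules/([^/]+)(.*\.(html|js|json|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air))$ {
                alias /usr/share/php/wtlib_4/modules/$1/admin/$2;
                expires 7d;
                access_log off;
            }

            location ~* .*\.php$ {
                try_files $uri $uri/ @php_admin;
            }

            location ~* \.(js|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air)$ {
                expires 7d;
                access_log off;
            }
        }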

    Read the article

  • Windows network routing

    - by fabianvilers
    Hi! I'm working at my customer's premises and they let me connect my private laptop to a dedicated Wi-Fi for internet access, which is nice for external consultants. The only issue is that we can't connect to any remote server on port 25. I suppose this policy is set up to avoid infected computers sending spam from their network. As you can guess, this means I can't send mail at all. Fortunately, I have a 3G cell phone that I can connect to my laptop by Bluetooth. So when I want to send an e-mail, I have to disconnect from Wi-Fi, connect my phone, send the e-mail, disconnect the phone and reconnect Wi-Fi. Quite an overhead. My question is: how can I tell Windows 7 to use the Wi-Fi for every outgoing connection, but to use the cell phone network for connections on port 25? With this solution, I could leave my phone connected all day without having to switch again and again. Thanks a lot for your answers. Fabian
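
    Windows routes by destination address, not by port, so the built-in tools cannot express "port 25 goes via 3G". If the SMTP server's address is fixed, though, a host route over the 3G link achieves the same effect; a sketch with placeholder addresses and interface index follows. If the mail provider offers submission on port 587, using that instead would sidestep the block entirely, since only port 25 appears to be filtered.

        :: Find the interface index and gateway of the Bluetooth/3G connection
        route print

        :: Route only the mail server (placeholder 203.0.113.25) over the 3G link (placeholder gateway and IF)
        route add 203.0.113.25 mask 255.255.255.255 192.168.42.129 metric 1 if 23

        :: Same route, but persistent across reboots
        route -p add 203.0.113.25 mask 255.255.255.255 192.168.42.129 metric 1 if 23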

    Read the article

  • default domain and first domain in apache2 causing trouble

    - by acidzombie24
    I have 3 sites and a default/test site using mono's test page. I created aFirst, c, d, e, zLast. zLast has rewrite rules that should be evaluated last. Since the first VirtualHost seen is the default i set it to this --aFirst-- <VirtualHost *:80> ServerName www.domain.tld ServerAdmin webmaster@localhost DocumentRoot /var/www/test DirectoryIndex index.html index.aspx index.php MonoDocumentRootDir "/var/www/test" MonoServerPath rootsite "/usr/local/bin/mod-mono-server2" MonoApplications rootsite "/:/var/www/test" <Directory /var/www/test> MonoSetServerAlias rootsite SetHandler mono AddHandler mod_mono .aspx .ascx .asax .ashx .config .cs .asmx </Directory> </VirtualHost> The problem is my default page (the ip address of my server) and the first website (csite.ddomain.net) have problems (even though csite is defined in c and is not the first virtual host). The ip address of my server and csite.ddomain.net ALWAYS load the same site. Either monos test page or the csite. It flips every time i restart apache. Why isnt the server ip address always loading the default page (mono test page) and why isnt csite.ddomain.net always loading the site i want!?! Heres the config for --csite-- <VirtualHost *:80> ServerName csite.testdomain.net ServerAdmin webmaster@localhost ServerAlias s.csite.testdomain.net DocumentRoot /var/www/prjname DirectoryIndex index.html index.aspx MonoDocumentRootDir "/var/www/prjname" MonoServerPath rootsite "/usr/local/bin/mod-mono-server2" MonoApplications rootsite "/:/var/www/prjname" <Directory /var/www/prjname> MonoSetServerAlias rootsite SetHandler mono AddHandler mod_mono .aspx .ascx .asax .ashx .config .cs .asmx </Directory> </VirtualHost> aFirst, c, d, e, zLast are all enabled.

    Read the article

  • What's the best cloud backup solution for a small-scale server environment?

    - by nbv4
    I have a server that runs a postgres database that contains about 200MB of data. Currently I have a cron job setup on my home computer which: ssh's into my server runs a remote script which makes a backup of the database scp's that dump over to my local hard drive for storage. Each dump is 20MiB. does this every six hours (one months of backups is roughly 2GiB) The problem with this setup is that if my local machine goes down for whatever reason, no backups will be made. Also, I can't have the cron run from the server, because I can't have it scp'd to my local machine from my server (firewalls and all that crap). My local machine is running Ubuntu 10.04, and my server is Ubuntu 9.10 server edition. I looked into Ubuntu One, but currently it's gui-only. I also looked into dropbox, but it's a pain in the ass to get setup in linux without gui support. Amazon S3 looks good but it's not free (yet dirt cheap). Is there any other alternative that I should look into? I'd prefer something where I can just have my script dump the database into a directory, and have the backup service 'watch' that folder and sync accordingly. I can maybe also have my local machine sync to the cloud backup so I have even more redundancy, plus easy access to my backups for use in testing. Edit: My server is a VPS, so what solution I end up using has to be 100% software based.
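
    Since the VPS can push outward even though nothing can reach the home machine, a server-side cron job that dumps and uploads to object storage is one option; a sketch using s3cmd follows (bucket and database names are placeholders, and duplicity or a similar tool would follow the same pattern with encryption on top).

        #!/bin/sh
        # Sketch: dump the database, push it to S3, and keep 30 days of dumps locally
        STAMP=$(date +%Y%m%d-%H%M)
        DUMP=/var/backups/mydb-$STAMP.dump

        pg_dump -Fc mydb > "$DUMP"
        s3cmd put "$DUMP" s3://my-backup-bucket/postgres/
        find /var/backups -name 'mydb-*.dump' -mtime +30 -delete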

    Read the article

  • Best method to redirect internal DNS to external website?

    - by ProfessionalAmateur
    We host several web based applications outside of intranet. The URL's to these applications are long, complex and overall not user friendly. Ex: http://hostingsite:port/approot/folder/folder/login.aspx <-- (production) http://hostingsite:port22/approot/folder/folder/login.aspx <-- (dev) http://hostingsite:port33/approot/folder/folder/login.aspx <-- (test) I'd like to create an internal DNS entry to allow users to access these sites with ease. Ex: http://prod --> http://hostingsite:port/approot/folder/folder/login.aspx http://dev --> http://hostingsite:port22/approot/folder/folder/login.aspx I'm not familiar with the DNS process and setup, as far as I know a DNS can only be redirected to an IP, but not to subdomains for directory paths as described above? Is this a correct assumption? I am thinking for throwing up an internal webserver that will listen to the internal DNS entries and redirect to the external sites. http://prod --> [internal webserver] --> redirect --> http://hostingsite:port/approot/folder/folder/login.aspx Is there a better way to do this?
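
    A DNS record can indeed only point at a host, never at a port or path, so the small internal redirector is a reasonable pattern: CNAME the short names (prod, dev) to that box in the internal DNS and have it issue redirects. A sketch using nginx purely for brevity (IIS or Apache can do the same with a redirect rule); the target URLs are copied verbatim from above:

        server {
            listen 80;
            server_name prod;
            return 302 http://hostingsite:port/approot/folder/folder/login.aspx;
        }

        server {
            listen 80;
            server_name dev;
            return 302 http://hostingsite:port22/approot/folder/folder/login.aspx;
        }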

    Read the article

  • I just got a virus 6 minutes ago. How?

    - by acidzombie24
    -Edit- For the people who say it isn't a virus: Norton does detect it as a virus, an icon was placed in my system tray, and rkpg.exe is in my C: drive. It was placed there 6 minutes ago, around the time my computer rebooted on its own, causing me to lose data :@. Situation: I'm on Windows XP, behind a Linksys router, and I don't have DMZ on, so nothing should be connecting to me. I had Firefox, MSN and Visual Studio open. In C# I programmed a quick application to scan some pages with Internet Explorer. The site it was scanning was deviantART (which is pretty trustworthy); I doubt any banners there would hold a virus. I went to a suspicious site called freetxt.com, but that was in Firefox and it didn't load the site. As an extra check I pinged it and got the message "Ping request could not find host freetxt.com." The virus seems to be called braviax. Right now it brought up a message saying my computer may be infected. How on earth did it get in? I don't have uTorrent installed or any torrent or p2p applications. Nothing is installed on my computer that I haven't installed before, and I know the exact time it was installed because I see rkpg.exe on my C drive and my computer restarted on its own around the same time. For the previous 30 minutes, actually the previous hour, all I did was talk on MSN, not click any links (I went to freetxt on my own) and have that Internet Explorer thing running (which I programmed). How did it get in? I really doubt it came from a banner on deviantART and installed itself when I loaded the page with the webbrowser control, so something else may have happened. Are there any system defaults I should turn off? I have remote assistance off, but even if it was on I shouldn't be infected, since the router isn't forwarding any ports.

    Read the article

  • Strange ssh login

    - by Hikaru
    I am running a Debian server and I have received a strange email warning about an SSH login. It says that the user mail logged in using SSH from a remote address: Environment info: USER=mail SSH_CLIENT=92.46.127.173 40814 22 MAIL=/var/mail/mail HOME=/var/mail SSH_TTY=/dev/pts/7 LOGNAME=mail TERM=xterm PATH=/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games LANG=en_US.UTF-8 SHELL=/bin/sh KRB5CCNAME=FILE:/tmp/krb5cc_8 PWD=/var/mail SSH_CONNECTION=92.46.127.173 40814 my-ip-here 22 I looked in /etc/shadow and found that no password is set for mail: mail:*:15316:0:99999:7::: I found these lines for the login in auth.log: Jun 3 02:57:09 gw sshd[2090]: pam_winbind(sshd:auth): getting password (0x00000388) Jun 3 02:57:09 gw sshd[2090]: pam_winbind(sshd:auth): pam_get_item returned a password Jun 3 02:57:09 gw sshd[2091]: pam_winbind(sshd:auth): user 'mail' granted access Jun 3 02:57:09 gw sshd[2091]: Accepted password for mail from 92.46.127.173 port 45194 ssh2 Jun 3 02:57:09 gw sshd[2091]: pam_unix(sshd:session): session opened for user mail by (uid=0) Jun 3 02:57:10 gw CRON[2051]: pam_unix(cron:session): session closed for user root There are also lots of auth failures for this user. There are no lines with a COMMAND string for this user. Nothing was found with "rkhunter" or with "ps aux" process inspection, and no suspicious connections were found with "netstat" (as far as I can see). Can anyone tell me how this is possible and what else should be done? Thanks in advance.
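
    Judging by the pam_winbind lines, it looks as if domain (winbind) authentication accepted a password for the local system account. A containment sketch while investigating might look like the following, on the assumption that nothing legitimate logs in as mail over SSH:

        # Lock the account and remove its shell
        usermod -L -s /usr/sbin/nologin mail

        # /etc/ssh/sshd_config: keep system accounts out of sshd entirely
        DenyUsers mail
        # or, stricter, allow only known admin logins:
        # AllowUsers youradminuser

        # Reload sshd afterwards (Debian)
        /etc/init.d/ssh reload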

    Read the article

  • How do I troubleshoot nginx not recognizing passenger?

    - by Jade
    Issue: nginx does not seem to recognize my rails application Symptoms: When the server starts up, it shows the "Welcome to nginx!" message instead of my Rails application. Nginx seems to be using the local nginx path instead of the Rails root I specified: 2010/04/18 06:29:06 [error] 783#0: *1 "/usr/local/nginx/html/blog/index.html" is not found (2: No such file or directory), client: 1.2.3.4, server: www.farmerjade.com, request: "GET /blog/ HTTP/1.1", host: "www.farmerjade.com" I used [RVM and Passenger Setup on NGINX][1] to install nginx and passenger on a virtual machine. Here is my nginx configuration: user farmerjade; worker_processes 1; ... http { include mime.types; default_type application/octet-stream; passenger_ruby /home/farmerjade/.rvm/bin/passenger_ruby; passenger_root /home/farmerjade/.rvm/gems/ree-1.8.7-head/gems/passenger-2.2.11; ... server { listen 80; server_name www.farmerjade.com; root /home/farmerjade/farmerjade/public; passenger_enabled on; rails_env development; ... I'd appreciate any help anyone has to offer -- I'm quite new to nginx.
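
    Two checks that often explain the "Welcome to nginx!" symptom, sketched below with the paths taken from the error log: confirm that this nginx binary was actually built with the Passenger module and is reading the edited configuration file, and make sure no stock default server block answers on port 80 ahead of the www.farmerjade.com block.

        # Which configure arguments (including the passenger module) and which nginx.conf?
        /usr/local/nginx/sbin/nginx -V
        /usr/local/nginx/sbin/nginx -t

        # If the stock welcome-page server block is still present, remove it or make
        # the Rails vhost the explicit default:
        #
        #   server {
        #       listen 80 default;
        #       server_name www.farmerjade.com;
        #       root /home/farmerjade/farmerjade/public;
        #       passenger_enabled on;
        #   }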

    Read the article

  • Limited connections to Ubuntu 12.04 server

    - by Luis M. Valenzuela
    I'm having a weird problem with my server. The server is inside my network, connected to a 3com switch which is connected to the router that handles the internet connection. The main purpose of the server is to host a php application. What's happening is that user 1 to 15 in the private network have no problems connecting to the server, when user 16 tries to connect a time out comes out and is unable to connect to the server. It's not just to the php application, but to any service from the server. When the 15 users are using the application, the server doesn't even answer to ping. I haven't set any special limit in Apache's ini file or MySql and the firewall is being turned off because the server is only to give service to the internal network. Is there a parameter in any of the network's card conf. files that might me causing this ? Or should I suspect from the router's or switches configuration ? UPDATE. Tomorrow, I'm gonna do some test on the server modifying two kernel params in : /etc/sysctl.conf The settings are: net.core.somaxconn which has the limit on simultaneous network connections to the server and kernel.shmmax which controls the amount of memory the system can use for managing connections.
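
    The sysctl plan is easy to test; one caveat is that net.core.somaxconn caps the pending listen backlog rather than the total number of simultaneous users, so if exactly the 16th client always fails, a limit on the switch, the router, or in the application stack is also worth ruling out. A sketch of the test (values are arbitrary):

        # Apply at runtime
        sysctl -w net.core.somaxconn=1024
        sysctl -w net.core.netdev_max_backlog=2048
        # Make it persistent
        echo "net.core.somaxconn = 1024" >> /etc/sysctl.conf
        sysctl -p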

    Read the article

  • Ubuntu - wireless connection works great but wired is totally dead

    - by Dan
    I am running Ubuntu 10.04 on my Acer Aspire One netbook. The wireless connection works great, but the wired is totally dead. When I plug the Ethernet wire, the little led next to the port doesn't blink. If I do ifconfig, this is the output: lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:1659 errors:0 dropped:0 overruns:0 frame:0 TX packets:1659 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:132304 (132.3 KB) TX bytes:132304 (132.3 KB) wlan0 Link encap:Ethernet HWaddr 18:f4:6a:65:48:1f inet addr:192.168.1.7 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::1af4:6aff:fe65:481f/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:94823 errors:0 dropped:0 overruns:0 frame:0 TX packets:81390 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:93028474 (93.0 MB) TX bytes:18002558 (18.0 MB) There is no eth0. Is that normal? In the "Network Connections" GUI there is an entry "Wired connection 1", its "MAC address" field is blank. How can I make the wired connection work?
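
    A missing eth0 is not normal; it usually means no driver bound to the wired NIC (on some Aspire One models that is an Atheros chip whose driver needs a newer kernel or module, though that is only a guess here). Some diagnostics, sketched below:

        # Does the kernel see the wired NIC at all?
        lspci -nn | grep -i ethernet
        sudo lshw -C network        # an "UNCLAIMED" entry means no driver is bound
        dmesg | grep -i -E "eth|atl|r8169"
        # Interfaces that exist but are down still show up with -a:
        ifconfig -a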

    Read the article

  • Windows 2008 Server can't connect to FTP

    - by stivlo
    I have Windows 2008 Server R2, and I am trying to install FTP services. My problem is I can't connect from outside, FileZilla complains with: Error: Connection timed out Error: Could not connect to server Here is what I did. With the Server Manager, I've installed the Roles FTP Server, FTP Service and FTP Extensibility. In Internet Information Services version 7.5, I've chosen Add FTP Site, enabled Basic Authentication, Allow a user to connect Read and Write. In FTP Firewall support on the main server, just after start page, I've set Data Channel Port Range to 49100-49250 and set the external IP Address as the one I see from outside. If I click on FTP IPv4 Address and Domain Restrictions, and click on Edit Feature Settings, I see that access for unspecified clients is set to Allow, so I click OK without changing those defaults. In FTP SSL Policy, I've set to Require SSL connection, certificate is self signed. I tried to connect with FileZilla from the same host and it works, however it doesn't work remotely, as I said above. I've enabled pfirewall.log, but apparently nothing gets logged. The server is in Amazon EC2, and on the security group inbound firewall rules, I've set that ports 21 and ports 49100-49250 accepts connections from everywhere. What else should I be checking to solve the problem?
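
    A few things worth verifying, sketched below with assumed service and rule names: the FTP firewall settings (external IP, passive range) only take effect once the Microsoft FTP service is restarted; port 21 and the passive range must be open in Windows Firewall as well as in the EC2 security group; and because "Require SSL" encrypts the control channel, the firewall cannot inspect FTP commands, so only an explicitly opened passive range will work.

        :: Restart the FTP service so the external IP / passive port range are picked up
        net stop ftpsvc
        net start ftpsvc

        :: Open the control port and the configured passive data range in Windows Firewall
        netsh advfirewall firewall add rule name="FTP control" dir=in action=allow protocol=TCP localport=21
        netsh advfirewall firewall add rule name="FTP passive data" dir=in action=allow protocol=TCP localport=49100-49250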

    Read the article

  • Memory overcommitment on VmWare ESXi 5.0

    - by Tibor
    I would like to understand better the possibilities of VmWare ESXi memory overcommitment. I've read this paper from VmWare, so I am familiar with general concepts, such as hypervisor swapping, memory balooning and page sharing. It seems that a combination of these techniques allows for quite a large degree of overcommitment. However, I am not sure. I am deploying a virtual test lab comprising of 4 identical sets of virtual servers and workstations and a couple of virtual router instances. Overall, I expect to be running around 20 virtual machines with Windows XP, Windows 7 and Ubuntu for workstation hosts as well as CentOS and Windows 2008 Server instances for servers. The problem is, however, that the host machine only has 12GB of RAM and I don't have an option to stuff in some more. I would like to know what is the best option to configure hosts in order to achieve reasonable performance within the constrains. I have these two options: Allocate as little as possible of RAM to each virtual machine. Allocate an extraordinary amount (such as 4 GB per instance) and let the baloon driver do the rest. Something else? Which would work better? Machines will mostly be idle, so I don't have any major performance expectations, but they should run reasonably smoothly nevertheless.

    Read the article

  • What should the memory configuration be?

    - by AngryHacker
    We have a server (ProLiant DL585 G1 by HP), which hosts Windows 2003 x64 R2 with SQL Server 2005 x64 and a host of other apps. It currently has 6GB of RAM. We are currently very memory constrained and it's clear that we need to get more memory. 8GB will probably do the trick, however, we are not sure as to what memory configuration will give us the biggest performance buck. Currently all 8 memory slots are filled (4 slots have 1GB chip, while the other 4 slots have 512MB chips). Should we throw the 512MB sticks away and just replace them all with 1GB sticks? If we decided to go with a higher memory configuration (e.g. 10GB or 12GB or 16GB), is it advisable to keep all the sticks of the same size or it does not matter? I was once told that interleaved memory requires (for better performance) that memory should be in multiples (e.g. 2 or 4 or 8 or 16, etc...). I am not even sure that the server has an interleaved configuration (and don't know how to find out), but is this true? Thanks.

    Read the article

  • How to detect/list rogue computers connected to a WIFI network without access to the Wifi Router interface? [migrated]

    - by JJarava
    This is what I believe to be an interesting challenge :) A relative (who lives a bit too far away to go there in person) is complaining that her WIFI/Internet network performance has gone down abysmally lately. She'd like to know if some of the neighbors are using her wifi network to access the internet, but she's not too technically savvy. I know that the best way to prevent issues would be to change the router password, but it's a bit of a PITA having to re-configure all the wifi devices... and if the uninvited guest broke the password once, they can do it again... Her wifi router/internet connection is provided by the telco and remotely managed, so she can log on to her telco account's page and remotely change the router's wifi password, but she doesn't have access to the router status page/config/etc unless she opts out of the telco's remote support and maintenance service... So, how could she check whether there are guests on the wifi, with these restrictions and in the most "point and click" way? In this case I'd probably use nmap to look for other devices on the network, but I'm not sure if that's the easiest way to do it. I'm not a wifi expert, so I don't know if there are any wifi-scanning utils that can tell us who's talking to the router... Lastly, she's a Windows user, and I guess that'll influence the choice of tools available. Any suggestions more than welcome. Regards!
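
    Since she is on Windows, the least painful route is probably a point-and-click scanner, but the same information is available from a command prompt once nmap (or its Zenmap GUI) is installed; a sketch, with the subnet as a placeholder:

        :: Ping-scan the wifi subnet and list every responding device
        nmap -sn 192.168.1.0/24

        :: Low-tech alternative: dump the ARP cache of hosts this machine has seen recently
        arp -a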

    Read the article
