Search Results

Search found 13404 results on 537 pages for 'george host'.

Page 461 of 537

  • Oracle Linux screen freezes during installation

    - by Fearless
    I was installing Oracle Linux 6.4 on a server, and the screen suddenly froze. Here are the steps leading up to it: I put in the disk, clicked install, checked the disk (no errors), did pre-install setup (clock, root password, host+domain name, etc.), configured two 40GB hard drives in a RAID1 array (no swap, 3100 MB encrypted RAID partitions, ~100 MB ext4 partition mounted to /boot, encrypted ext4 RAID device mounted to /), selected packages, hit continue. The system did its short preinstall processes, then went to the main installation screen with the long status bar. The installer proceeded like always, but around package 250 out of ~1000, the screen suddenly went black with a text cursor in the upper left corner of the screen and the mouse cursor in its previous place. Neither cursor moved, and the only thing that triggered a response was a Ctrl-Alt-Delete that rebooted it. I have run this in VMs before without this issue. Memtest hasn't reported anything, and the media check went smoothly. The machine has supported Ubuntu Server without issues before. Any ideas? I have tried booting after that, but the GRUB bootloader tries to find fd0 for some reason (I have no idea why it would search for the floppy disk). UPDATE: My server successfully installed, but won't boot up. I think that, for some reason, it is still using the old bootloader from the previous installation. Any ideas on how to fix that?
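
    A minimal sketch of reinstalling the legacy GRUB bootloader from the installer's rescue mode, on the assumption that the MBR left behind by the previous installation is what is being loaded (device names are assumptions; adjust to your disks):

        # Boot the Oracle Linux 6.4 DVD, choose "Rescue installed system",
        # let it mount the installed system under /mnt/sysimage, then:
        chroot /mnt/sysimage

        # Reinstall GRUB (legacy, 0.97) into the MBR of both RAID1 members
        # so either disk can boot; /dev/sda and /dev/sdb are assumptions.
        grub-install /dev/sda
        grub-install /dev/sdb

        # Sanity-check that /boot/grub/device.map doesn't reference fd0
        cat /boot/grub/device.map

        exit    # leave the chroot and reboot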

    Read the article

  • disable specific PCI device at boot

    - by Rhymoid
    I've just reinstalled Debian on my Sony VAIO laptop, and my dmesg and virtual consoles all get spammed with the same messages over and over again. [ 59.662381] hub 1-1:1.0: unable to enumerate USB device on port 2 [ 59.901732] usb 1-1.2: new high-speed USB device number 91 using ehci_hcd [ 59.917940] hub 1-1:1.0: unable to enumerate USB device on port 2 [ 60.157256] usb 1-1.2: new high-speed USB device number 92 using ehci_hcd I believe these messages are coming from an internally connected USB device, most likely the webcam (since that's the only thing that doesn't work). The only way I can seem to have it shut up (without killing my actually useful USB ports) is to disable one of the USB host controllers: # echo "0000:00:1a.0" > /sys/bus/pci/drivers/ehci_hcd/unbind This also takes down my Bluetooth interface, but I'm fine with that. I would like this setting to persist, so that I can painlessly use my virtual console again in case I need it. I want my operating system (Debian amd64) to never wake it up, but I don't know how to do this. I've tried to blacklist the module alias for the PCI device, but it seems to be ignored: $ cat /sys/bus/pci/devices/0000\:00\:1a.0/modalias pci:v00008086d00003B3Csv0000104Dsd00009071bc0Csc03i20 $ cat /etc/modprobe.d/blacklist blacklist pci:v00008086d00003B3Csv0000104Dsd00009071bc0Csc03i20 How do I ensure that this specific PCI device is never automatically activated, without disabling its driver altogether? -edit- The module was renamed recently, now the following works from userland: echo "0000:00:1a.0" > /sys/bus/pci/drivers/ehci-pci/unbind Still, I'm looking for a way to stop the kernel from binding that device in the first place.
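
    A minimal sketch for making the unbind persistent, assuming a simple boot-time hook is acceptable (with rc.local the driver still binds once during boot and is then released):

        #!/bin/sh -e
        # /etc/rc.local (runs at the end of boot on Debian; must be executable)
        # Unbind the problematic EHCI controller; PCI address taken from the question.
        echo "0000:00:1a.0" > /sys/bus/pci/drivers/ehci-pci/unbind
        exit 0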

    Read the article

  • rDNS for SMTP server locally with Mail hosted by third party

    - by Zleviticus
    OK, we have a difference of opinion on something and wanted to get some expert advice. We host mail for our main domain "OurDomain.net" with a third-party mail provider. We have an in-house application that has to be able to send mail out to our clients. The problem is that sometimes the mail sending is flaky and will block users in the program for 30 seconds or more, so it appears to lock up. We have determined that the issue is with the mail piece. One solution is to use Database Mail to queue up outbound emails. The other is to set up an internal SMTP server and send mail out through it. My fear is that we will not be able to get rDNS to work properly and most of the mail will be blocked by our various clients' spam filters. Is it possible to set up the DNS for the servers so that we can send mail out like [email protected] using the in-house SMTP server and still pass the rDNS checks that spam filters normally apply? Enquiring minds want to know.
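
    A hedged sketch of the records this usually takes, assuming you control the forward zone and your ISP delegates the reverse zone for the outbound IP (all names and addresses below are placeholders):

        ; Forward zone: give the outbound SMTP host its own A record
        mail.ourdomain.net.        IN A    203.0.113.10

        ; Reverse zone: the PTR must match that same hostname
        10.113.0.203.in-addr.arpa. IN PTR  mail.ourdomain.net.

        ; SPF TXT record so receivers accept that IP as a sender for the domain
        ourdomain.net.             IN TXT  "v=spf1 mx ip4:203.0.113.10 include:thirdpartyprovider.example ~all"

    The internal SMTP server's HELO name should also match the forward/reverse pair (mail.ourdomain.net in this sketch).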

    Read the article

  • Is it logical that file system acls would be corrupted in a way that adds permission for another user?

    - by wilbbe01
    I was having issues on a shared hosting provider with the host's web server instance not serving some files. I asked the company's support about the issue and they responded with the results of getfacl on my home directory, and added the necessary line to allow their web server to obtain the necessary permissions. All is working happily now, but I noticed a line in the getfacl output for what appeared to be another username to which I had no relation. I asked them about this and their response was that it was likely some minor corruption and that I could remove the unwanted line with the setfacl -x option. I know I never added the user to my home directory, and I also find it weird that this could truly happen due to corruption. So now that it is fixed, I'm a little bit wary of whether they were trying to cover up a mistake where they accidentally gave someone permissions to my account, or if this kind of thing can really be corrupted in that way. Especially when that user is a real user on the same server. Any thoughts? Thanks.
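
    For reference, a minimal sketch of inspecting and removing the stray entry (the username is a placeholder):

        # Show the current ACL on the home directory
        getfacl ~

        # Remove the named-user entry that shouldn't be there
        setfacl -x u:strangeuser ~

        # Verify it is gone
        getfacl ~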

    Read the article

  • Deployment and monitoring tools for java/tomcat/linux environment

    - by Ran
    I've been a developer for many years, but don't have tons of experience in ops, so apologies if this is a newbie question. In my company we run a web service written in Java, mainly based on a Tomcat web server. We have two datacenters with about 10 hosts each. Hosts are of several types: database, Tomcats, some offline Java processes, memcached servers. All hosts run Linux (CentOS). Up until now, when releasing a new version to production we've been using a set of in-house shell scripts that copy jars/wars and restart the Tomcats. The company has gotten bigger, so it has become more and more difficult operating all this and taking code from development, through QA and staging, to production. A typical release often involves human errors that cost us precious uptime. Sometimes we need to revert to the last known good version, and this isn't easy, to say the least... We're looking for a tool, a framework, a solution that would provide the following: Supports the given list of technology (Java, Tomcat, Linux etc) Provides easy deployment through different stages, including QA and production Provides configuration management. E.g. setting server properties (what's the connection URL of each host etc), server.xml or context configuration etc Monitoring. If we can get monitoring in the same package, that'll be nice. If not, then yet another tool we can use to monitor our servers. Preferably, open source with tons of documentation ;) Can anyone share their experience? Suggest a few tools? Thanks!

    Read the article

  • How to delete massive files via ftp or ssh?

    - by spotlightsnap
    On my servers, one of the scripts I have been using keeps creating blank files at the root and I didn't notice for more than 6 months; by now more than 500,000 files have been created. I cannot access that directory through the control panel because there are too many files, and I can only access it with FTP. Even with FTP, the listing is truncated to 8,000 files at a time, so I have to keep deleting them 8,000 at a time. I tried to ask my host to delete them for me, but they say they can't because of liability issues. So what I want to know is: how can I delete all of those 500,000 files through FTP? Since it's shared hosting, I don't have SSH access either. The hosting provider says I can request SSH access, but it needs to be verified and their office is closed until next week. So I am stuck with FTP for now. Please kindly let me know how I can delete massive numbers of files via FTP. And in case I can get SSH access, please kindly let me know how I can delete the files efficiently via SSH. The filenames are like this: closecp.139619 closecp.139619.1 closecp.139620 closecp.139620.1 Thank you.
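
    A minimal sketch of the SSH route once access is granted, assuming the junk files all sit in one directory and match the closecp.* pattern shown (the path is a placeholder):

        # Count them first, to make sure the pattern only matches the junk files
        find /home/youruser/public_html -maxdepth 1 -type f -name 'closecp.*' | wc -l

        # Then delete them; -delete avoids spawning one rm process per file
        find /home/youruser/public_html -maxdepth 1 -type f -name 'closecp.*' -delete

        # If only FTP is available, lftp can glob-delete without a control panel
        # (may need to be re-run several times with a listing this large):
        # lftp -u USER,PASS ftp.example.com -e 'mrm closecp.*; quit'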

    Read the article

  • How can I password protect the site and still let cgi-bin work?

    - by jaaaaaaax
    This is taken from sites-available directory. It's a virtual host setting for apache. Accessing myiphere/cgi-bin/ throws 403. The directory setting for /var/www2/ drwxrwxrwx 8 www-data www-data NameVirtualHost myiphere <VirtualHost myiphere> ServerAdmin webmaster@localhost DocumentRoot /var/www2/ <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www2/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined ServerSignature On Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory>
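
    Not the author's config, just a hedged sketch of adding HTTP basic auth to the cgi-bin directory under the same virtual host (the htpasswd path is an assumption); note that requesting /cgi-bin/ itself with no script name will still return a 403, because directory indexes are not enabled there:

        # Create the password file once (outside the web root):
        #   htpasswd -c /etc/apache2/.htpasswd someuser

        <Directory "/usr/lib/cgi-bin">
            AllowOverride None
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            Order allow,deny
            Allow from all

            AuthType Basic
            AuthName "Restricted scripts"
            AuthUserFile /etc/apache2/.htpasswd
            Require valid-user
        </Directory>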

    Read the article

  • Skipping nginx PHP cache for certain areas of a site?

    - by DisgruntledGoat
    I have just set up a new server with nginx (which I am new to) and PHP. On my site there are essentially 3 different types of files: static content like CSS, JS, and some images (most images are on an external CDN) main PHP/MySQL database-driven website which essentially acts like a static site dynamic PHP/MySQL forum It is my understanding from this question and this page that the static files need no special treatment and will be served as fast as possible. I followed the answer from the above question to set up caching for PHP files and now I have a config like this: location ~ \.php$ { try_files $uri =404; fastcgi_cache one; fastcgi_cache_key $scheme$host$request_uri; fastcgi_cache_valid 200 302 304 30m; fastcgi_cache_valid 301 1h; include /etc/nginx/fastcgi_params; fastcgi_pass unix:/var/run/php-fastcgi/php-fastcgi.socket; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /srv/www/example$fastcgi_script_name; fastcgi_param HTTPS off; } However, now I want to prevent caching on the forum (either for everyone or only for logged-in users - haven't checked if the latter is feasible with the forum software). I've heard that "if is evil" inside location blocks, so I am unsure how to proceed. With the if inside the location block I would probably add this in the middle: if ($request_uri ~* "^/forum/") { fastcgi_cache_bypass 1; } # or possible this, if I'm able to cache pages for anonymous visitors if ($request_uri ~* "^/forum/" && $http_cookie ~* "loggedincookie") { fastcgi_cache_bypass 1; } Will that work fine, or is there a better way to achieve this?
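
    One hedged way to do this without an if block is a map, declared at the http level, feeding fastcgi_cache_bypass and fastcgi_no_cache; a sketch:

        # In the http {} context: 1 = skip the cache, 0 = use it
        map $request_uri $skip_cache {
            default        0;
            ~^/forum/      1;
        }

        # Then inside the existing "location ~ \.php$" block:
        #   fastcgi_cache_bypass $skip_cache;   # don't serve these requests from cache
        #   fastcgi_no_cache     $skip_cache;   # don't store the responses either

    If the forum sets a login cookie, the same idea works for logged-in users only, with a map on $http_cookie matching the cookie name instead of $request_uri.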

    Read the article

  • iptables rules to allow HTTP traffic to one domain only

    - by Zenet
    I need to configure my machine to allow HTTP traffic to/from serverfault.com only. All other websites, services, and ports should not be accessible. I came up with these iptables rules: #drop everything iptables -P INPUT DROP iptables -P OUTPUT DROP #Now, allow connection to website serverfault.com on port 80 iptables -A OUTPUT -p tcp -d serverfault.com --dport 80 -j ACCEPT iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT #allow loopback iptables -I INPUT 1 -i lo -j ACCEPT It doesn't quite work: after I drop everything and move on to rule 3: iptables -A OUTPUT -p tcp -d serverfault.com --dport 80 -j ACCEPT I get this error: iptables v1.4.4: host/network `serverfault.com' not found Try `iptables -h' or 'iptables --help' for more information. Do you think it is related to DNS? Should I allow it as well? Or should I just put IP addresses in the rules? Do you think what I'm trying to do could be achieved with simpler rules? How? I would appreciate any help or hints on this. Thanks a lot!
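
    Yes, this looks DNS-related: iptables resolves a hostname only once, at the moment the rule is added, and with the OUTPUT policy already set to DROP that lookup itself is blocked. A hedged sketch of one way around it (resolve first, allow DNS, then add one rule per address, and only then drop everything else):

        #!/bin/sh
        # Allow loopback and established/related replies first
        iptables -I INPUT 1 -i lo -j ACCEPT
        iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

        # Allow outbound DNS so names can be resolved at all (assumes a UDP resolver)
        iptables -A OUTPUT -p udp --dport 53 -j ACCEPT

        # Resolve the site once and add an ACCEPT rule per returned address
        for ip in $(dig +short serverfault.com | grep -E '^[0-9.]+$'); do
            iptables -A OUTPUT -p tcp -d "$ip" --dport 80 -j ACCEPT
        done

        # Only now flip the default policies to DROP
        iptables -P INPUT DROP
        iptables -P OUTPUT DROP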

    Read the article

  • Nginx: Rewriting directory path to file

    - by Doug
    I'm a little new to Nginx here so bear with me - I want to rewrite a URL like foo.bar.com/newfoo?limit=30 to foo.bar.com/newfoo.php?limit=30. It seems pretty simple to do with something like this: rewrite ^([a-z]+)(.*)$ $1.php$2 last; The part that I am confused about is where to put it - I've tried my hand at some location directives but I'm doing it wrong. Here's my existing virtual host config; where should I implement my rewrite? server { listen 80; listen [::]:80; server_name foo.bar.com; root /home/foo; index index.php index.html index.htm; location / { try_files $uri $uri/ /index.html; } location /doc/ { alias /usr/share/doc/; autoindex on; allow 127.0.0.1; deny all; } location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_pass unix:/tmp/php5-fpm.sock; fastcgi_index index.php; } } Thanks!
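
    Not the author's final answer, just a hedged sketch: let the existing location / fall through to a named location that appends .php and re-enters location matching, so the existing PHP block picks it up (rewrite carries the query string over automatically):

        location / {
            # fall back to the extensionless handler instead of /index.html
            try_files $uri $uri/ @extensionless;
        }

        location @extensionless {
            # /newfoo?limit=30  ->  /newfoo.php?limit=30, then re-run location matching
            rewrite ^(.*)$ $1.php last;
        }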

    Read the article

  • How do I serve Ruby on Rails applications on Windows Server 2008?

    - by Adam Lassek
    I have spent the last several hours attempting to get Ruby on Rails running on a Windows server with no luck. At first I tried configuring a test application through IIS7's FastCGI support, but the documentation for this is not very good. I've been following this blog entry, and this one, and this one, and this one, but everything seems to be missing major steps or is out of date. And every article keeps linking back to this Howto from rubyonrails.org that doesn't exist. The sense that I'm getting is that even if I manage to make this work, IIS' FastCGI isn't good enough to use in a production environment anyway. So it looks like my best bet is to set up a reverse proxy in IIS that points to Apache & Mongrel/Passenger using ARR and UrlRewrite. Is there anybody else out there stuck deploying a Rails application on a Windows stack? Am I on the right track? Can you give me a better idea of how to configure this? I believe Plesk already installed an instance of Apache/Tomcat running on this server using a different port, so adding another virtual host shouldn't be difficult; the hardest part seems to be setting up the reverse proxy through IIS.
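
    A hedged sketch of the IIS side of that reverse proxy, assuming ARR and URL Rewrite are installed and ARR's proxy feature is enabled, and assuming the Rails stack listens on port 3000 on the same box (the port and site layout are assumptions):

        <?xml version="1.0" encoding="UTF-8"?>
        <!-- web.config in the IIS site root: forward everything to the Rails backend -->
        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="Rails reverse proxy" stopProcessing="true">
                  <match url="(.*)" />
                  <action type="Rewrite" url="http://localhost:3000/{R:1}" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>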

    Read the article

  • Using multiple USB webcams in Linux

    - by rachelderp
    Running more than one USB webcam in Debian/Linux results in the following error: libv4l2: error turning on stream: No space left on device VIDIOC_STREAMON: No space left on device What initially seemed to be a programming issue in OpenCV turned into a quest for a mysterious hardware/software problem after the same errors were produced by running cheese and xawtv. Apparently it's caused by webcams requesting all the available bandwidth on the USB host controller. With that in mind I decided to run wireshark and capinfos to see just how much bandwidth a single camera used. 4 megabits per second at 320x240 14 megabits per second at 640x480 32 megabits per second at 1920x1080 Interesting! That might explain why two cameras at 320x240 work but any higher resolution fails. It's as if my USB controller is only operating at USB 1 speeds, yet lsusb shows both webcams belonging to a device which supposedly supports 480 megabits per second. One solution proposed forcing the webcams to calculate their bandwidth usage instead of requesting their maximum by running the following commands: sudo rmmod uvcvideo sudo modprobe uvcvideo quirks=128 Unfortunately that made no difference, so I decided to try another solution. A post on StackOverflow suggested telling my webcams to use a lower FPS or compressed video format like MJPEG, but after running v4lctl list it doesn't appear that either of my webcams supports changing their video mode. And that's where I'm stuck. Why would two webcams operating well below the maximum speed of USB 2 produce this error? ps: It's not a disk space issue, df displays no change when the webcams are started. pps: If it makes a difference, here's the output of lsusb
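
    As a hedged aside, v4l2-ctl (from the v4l-utils package) can double-check what formats and frame rates the cameras actually expose, and try to request a compressed mode explicitly (the device node is an assumption):

        # List every pixel format, resolution and frame interval the camera claims to support
        v4l2-ctl --device=/dev/video0 --list-formats-ext

        # Ask for a compressed mode at a modest resolution, if the camera offers one
        v4l2-ctl --device=/dev/video0 --set-fmt-video=width=320,height=240,pixelformat=MJPG
        v4l2-ctl --device=/dev/video0 --set-parm=15     # request 15 fps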

    Read the article

  • Is NFS capable of preserving order of operations?

    - by JustJeff
    I have a diskless host 'A' that has a directory NFS mounted on server 'B'. A process on A writes to two files F1 and F2 in that directory, and a process on B monitors these files for changes. Assume that B polls for changes faster than A is expected to make them. Process A seeks the head of the files, writes data, and flushes. Process B seeks the head of the files and does reads. Are there any guarantees about how the order of the changes performed by A will be detected at B? Specifically, if A alternately writes to one file, and then the other, is it reasonable to expect that B will notice alternating changes to F1 and F2? Or could B conceivably detect a series of changes on F1 and then a series on F2? I know there are a lot of assumptions embedded in the question. For instance, I am virtually certain that, even operating on just one file, if A performs 100 operations on the file, B may see a smaller number of changes that give the same result, due to NFS caching some of the actions on A before they are communicated to B. And of course there would be issues with concurrent file access even if NFS weren't involved and both the reading and the writing process were running on the same real file system. The reason I'm even putting the question up here is that it seems like most of the time, the setup described above does detect the changes at B in the same order they are made at A, but that occasionally some events come through in transposed order. So, is it worth trying to make this work? Is there some way to tune NFS to make it work, perhaps cache settings or something? Or is fine-grained behavior like this just too much to expect from NFS?
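
    Not a guarantee of ordering, but a hedged sketch of the usual knobs for tightening it on the only NFS client here (host A): synchronous writes and no attribute caching on the mount, plus an fsync() from the writer after each logical update, reduce the client-side caching that can reorder what B's local process observes (export and mount point are placeholders):

        # On diskless host A: mount the shared directory with synchronous writes
        # and attribute caching disabled, so each write is pushed to B as it happens
        mount -t nfs -o sync,noac B:/export/shared /mnt/shared

        # In the writer on A, call fsync(fd) after each write before moving to the other file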

    Read the article

  • Cisco FWSM -> ASA upgrade broke our mail server

    - by Mike Pennington
    We send mail with unicode asian characters to our mail server on the other side of our WAN... immediately after upgrading from a FWSM running 2.3(2) to an ASA5550 running 8.2(5), we saw failures on mail jobs that contained unicode. The symptoms are pretty clear... using the ASA's packet capture utility, we snagged the traffic before and after it left the ASA... access-list PCAP line 1 extended permit tcp any host 192.0.2.25 eq 25 capture pcap_inside type raw-data access-list PCAP buffer 1500000 packet-length 9216 interface inside capture pcap_outside type raw-data access-list PCAP buffer 1500000 packet-length 9216 interface WAN I downloaded the pcaps from the ASA by going to https://<fw_addr>/pcap_inside/pcap and https://<fw_addr>/pcap_outside/pcap... when I looked at them with Wireshark Follow TCP Stream, the inside traffic going into the ASA looks like this EHLO metabike AUTH LOGIN YzFwbUlciXNlck== cZUplCVyXzRw But the same mail leaving the ASA on the outside interface looks like this... EHLO metabike AUTH LOGIN YzFwbUlciXNlck== XXXXXXXXXXXX The XXXX characters are concerning... I fixed the issue by disabling ESMTP inspection: wan-fw1(config)# policy-map global_policy wan-fw1(config-pmap)# class inspection_default wan-fw1(config-pmap-c)# no inspect esmtp wan-fw1(config-pmap-c)# end The $5 question... our old FWSM used SMTP fixup without issues... mail went down at the exact moment that we brought the new ASAs online... what specifically is different about the ASA that it is now breaking this mail? Note: usernames / passwords / app names were changed... don't bother trying to Base64-decode this text.

    Read the article

  • Our client's site is redirecting to a scammy pill site [closed]

    - by Alex Demchak
    Possible Duplicate: My server's been hacked EMERGENCY We usually host our clients' sites, but we aren't hosting this one. The website itself (weddle-funeral.com) works just fine. If you load Google and search for weddle funeral stayton oregon - and click that link, the site links to a scammy pill site. I went through the site and there were some PHP files in the WordPress plugins that got quarantined by my antivirus. I removed ALL non-essential files, and uploaded fresh versions of all the plugins, but it's STILL redirecting from Google. I tried logging in to the cPanel (on a virtual private server), and the cPanel flashed a red warning screen The site's security certificate is not trusted! You attempted to reach XXXXX.com, but the server presented a certificate issued by an entity that is not trusted by your computer's operating system. This may mean that the server has generated its own security credentials, which Google Chrome cannot rely on for identity information, or an attacker may be trying to intercept your communications. You should not proceed, especially if you have never seen this warning before for this site. (Keep in mind, that's for the HOSTING account's cPanel) Is there probably something on the SERVER that's causing the redirect? EDIT: .htaccess file contents # BEGIN WordPress <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteRule ^index\.php$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] </IfModule> # END WordPress
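
    The .htaccess shown is the stock WordPress block, so a conditional redirect that only fires on Google referrals is more likely injected PHP in the theme/plugins or in the database; a hedged sketch of hunting for the usual obfuscation patterns over SSH (paths are assumptions):

        cd /path/to/wordpress

        # Look for the common obfuscation/eval patterns attackers inject into PHP files
        grep -rIl --include='*.php' -e 'base64_decode' -e 'eval(' -e 'gzinflate' \
            wp-content/ wp-includes/ wp-admin/ .

        # Recently modified PHP files are also worth a look
        find . -name '*.php' -mtime -30 -ls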

    Read the article

  • How to set up a server without a hosting control panel

    - by A4J
    I have always used a control panel on my dedicated servers - from cPanel to Plesk to Virtualmin, and I am now considering ditching a CP altogether and manually editing config files. My requirements are fairly simple, I will host multiple sites on the server; some Apache with PHP & Mysql and some Passenger with Rails & Postgres. All will require email smtp/pop. FTP/Stats will not be required. Could someone please give me a quick run-down of what I would need to do - in terms of installing software and configuration? My server will come with a base install of CentOS 6.4 minimal. My thoughts so far: Install/update latest versions of MySQL & Postgres (are they 'safe' out of the box? Or do I need to do anything else like set up root passwords etc?) Install Apache & PHP (again, are the base installs good to go or do they require security tweaks?) Set up nameservers/hostnames/reverse DNS etc (Any guides on how to do this please?) Install Rubygems Install and configure Dovecot and Postfix (any tips on doing this? Or links to how-tos that cover it please?) Set up each website - any links to guides on how to do this? Install/configure firewall (or is the default install good to go?) Any other tips or advice would be greatly appreciated, as would links to guides or how-tos.
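
    A hedged sketch of the initial package installation on CentOS 6.4 minimal, as a starting point for the list above (these are the stock CentOS 6 package names; every service still needs its own configuration and hardening afterwards):

        # Base packages for the Apache/PHP/MySQL sites
        yum install httpd php php-mysql mysql-server

        # PostgreSQL plus the toolchain Passenger/Rails will need to build gems
        yum install postgresql-server postgresql-devel ruby rubygems gcc gcc-c++ make

        # Mail stack
        yum install postfix dovecot

        # Set the MySQL root password etc., then enable services at boot
        service mysqld start && mysql_secure_installation
        for svc in httpd mysqld postfix dovecot; do chkconfig $svc on; service $svc start; done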

    Read the article

  • Create new VMs from a template with a CSV. Possible?

    - by EdConde
    I am new to PowerShell and PowerCLI, but I manage a few ESX environments and would really like to do as much as possible via PowerShell. On with the help I need: I used this one-liner to create VMs from templates, but the problem is there has to be some user input after each new VM is created. New-VM name -Template template -VMHost VMHost -Datastore Datastore What I would like to do is import via CSV the name of the new VM, the template to use, the host to put the new VM on, and the datastore. I don't know if it is as easy as below, but I kept getting errors. Import-Csv "C:\powershell\Data\VM2Create.csv" | Foreach-object{ New-VM $.name -Template $.template -VMHost $.VMHost -Datastore $.Datastore} I know there are some () or {} or possibly | that are needed... I just don't know where to put them... The CSV, I think, would look like this: name, template, vmhost, datastore Any help or thoughts would be much appreciated...
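
    A hedged sketch of the corrected pipeline: inside ForEach-Object the automatic variable is $_ (not $.), the CSV columns are reached as properties, and the template/host/datastore names are resolved to objects first (the CSV path and column names are taken from the question):

        Import-Csv "C:\powershell\Data\VM2Create.csv" | ForEach-Object {
            New-VM -Name $_.name `
                   -Template  (Get-Template  -Name $_.template) `
                   -VMHost    (Get-VMHost    -Name $_.vmhost) `
                   -Datastore (Get-Datastore -Name $_.datastore)
        }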

    Read the article

  • How can I ensure that my static ip address is read from /etc/network/interfaces rather than dhcp?

    - by jonderry
    This is a follow up to the following question. I'm trying to set a static IP by changing /etc/network/interfaces to the following: # interfaces(5) file used by ifup(8) and ifdown(8) auto lo iface lo inet loopback auto eth0 iface eth0 inet static address 192.168.2.133 netmask 255.255.255.0 gateway 192.168.2.1 dns-nameservers 8.8.8.8 and then running /sbin/ifdown eth0; /sbin/ifup eth0. However, the change in IP address doesn't appear to take effect without editing /etc/dhcp/dhclient.conf and commenting out the following before running ifdown; ifup: request subnet-mask, broadcast-address, time-offset, routers, domain-name, domain-name-servers, domain-search, host-name, dhcp6.name-servers, dhcp6.domain-search, netbios-name-servers, netbios-scope, interface-mtu, rfc3442-classless-static-routes, ntp-servers, dhcp6.fqdn, dhcp6.sntp-servers; Strangely, after commenting out this line, running ifdown; ifup works, but when I uncomment it, the behavior does not revert to the previous behavior of ignoring changes to my settings in /etc/network/interfaces (this doesn't seem like a problem, but I really need to be able to repeat this problem so that I can be confident that my solution is robust) Also, I'd rather not have to edit /etc/dhcp/dhclient.conf to change my static IP since it seems I should be able to do this by only editing interfaces. Can anyone explain the issues I'm seeing above and suggest the best way of making changes to static IP addresses take effect that admits reproducibility so that I can be sure that my approach works?
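
    One hedged thing worth trying, on the theory that the dhclient instance already running for eth0 re-applies its lease over the static settings whenever ifup brings the interface back: release the lease and clear the interface before switching to the static config (interface name taken from the question):

        # Release the DHCP lease and stop the dhclient managing eth0
        sudo dhclient -r eth0

        # Drop whatever addresses are still on the interface, then bring it up statically
        sudo ip addr flush dev eth0
        sudo ifdown eth0
        sudo ifup eth0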

    Read the article

  • Varnish going sick

    - by junke1990
    I'm having trouble with Varnish: it works for a couple of views and then just goes sick... The weird thing is that it does work for about 20 or 30 requests. If I call Apache directly it works fine. I'm running Varnish version 3.0.3-1 on Debian Squeeze and, for now, Apache on port 80 and Varnish on port 8080 on the same server. I'm using https://github.com/mattiasgeniar/varnish-3.0-configuration-templates as the base for my VCLs and modified the VCLs to support Concrete5. Does anyone have a clue how I should debug this? backend default { .host = "127.0.0.1"; .port = "80"; .connect_timeout = 1.5s; .first_byte_timeout = 45s; .between_bytes_timeout = 30s; .probe = { .url = "/"; .timeout = 1s; .interval = 10s; .window = 10; .threshold = 8; } } LOG 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1353791312 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1353791315 1.0 0 Backend_health - default Still sick 4--X-R- 0 8 10 0.000689 0.000000 HTTP/1.1 301 Moved Permanently (the 301 is because I check for www.)
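
    A hedged reading of that log excerpt: the health probe fetches "/" and gets the 301 (the www redirect), and by default a probe only counts a 200 as healthy, so the backend keeps being marked sick even though it responds. A sketch of two ways to adjust the probe (the URL is a placeholder):

        .probe = {
            # Option 1: probe a URL that actually returns 200
            .url = "/health-check.html";
            .timeout = 1s;
            .interval = 10s;
            .window = 10;
            .threshold = 8;

            # Option 2 (instead of changing the URL): accept the redirect as healthy
            # .expected_response = 301;
        }

    In Varnish 3, varnishadm debug.health shows the probe verdicts live, which helps confirm whether this is the reason the backend goes sick.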

    Read the article

  • Migrate active directory to Google apps for business

    - by dewnix
    I've got a problem migrating Active Directory to Google Apps. I'm stuck on Google Apps Directory Sync (GADS), where it just gives the error "java.lang.NullPointerException" after testing the connection during the LDAP configuration step. I checked the logs and I've pretty much determined that nothing is listening on port 389 (the standard LDAP port) on the Exchange server. I've tried telnetting to it (from another machine in the same network) with no luck, but I can successfully telnet to other ports that I know are open. I know they're open because I used portqry and netstat to see them. I suspect that Active Directory isn't even installed/running on this machine, because there are no Active Directory services running on it at all. There are no Active Directory services listed as NOT running either, though. Is it possible AD is installed somewhere else? Does it have to be on a machine inside the same network? I found the domain controller and its host name, and when I telnet to port 389 there it works; however, GADS still gives me the exact same error when I substitute that server in. Actually, no matter what ridiculous settings I put into GADS, I still get that same NullPointer error. If I could get a different error than that NullPointer, I'd call that a successful day.
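
    A hedged way to confirm which machines are actually domain controllers (and therefore answer LDAP on 389) is to ask for the SRV records that AD publishes in DNS; the domain name below is a placeholder:

        :: From any domain-joined Windows machine: list the DCs registered in DNS
        nslookup -type=SRV _ldap._tcp.dc._msdcs.yourdomain.local

        :: Or ask the Netlogon service directly
        nltest /dclist:yourdomain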

    Read the article

  • Setting up DNS using BIND

    - by dupdupdup
    I'm having trouble setting up my db files. Please kindly point me in the right direction! I need to define a nameserver that manages the domain example.org.au. Then I need it to have two records: one called server, which is the IP address of the current machine, and the other called www, where www.example.org.au points to another IP address. I can't seem to get my system to work. This is my db.example.org.au file example.org.au. IN SOA server.example.org.au. ( 1; 3; 1h; 1w; 1h ) ; ; ;Host addresses localhost.example.org.au IN A 127.0.0.1 www.example.org.au. IN A 192.168.1.200 ; another virtual machine server.example.org.au IN A 192.168.1.199 ; current virtual machine If possible please correct my errors! Thanks! Any good guides out there? Thanks in advance! :)
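
    A hedged corrected version of the zone file: the SOA needs both a primary nameserver and a responsible-mailbox field, the zone needs at least one NS record, and names without a trailing dot get the origin appended automatically (the serial and timers here are illustrative):

        $TTL 1h
        example.org.au. IN SOA server.example.org.au. admin.example.org.au. (
                2024010101 ; serial - bump on every change
                3h         ; refresh
                1h         ; retry
                1w         ; expire
                1h )       ; negative-cache TTL

        example.org.au. IN NS server.example.org.au.

        ; Host addresses
        localhost       IN A  127.0.0.1
        server          IN A  192.168.1.199   ; current virtual machine
        www             IN A  192.168.1.200   ; another virtual machine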

    Read the article

  • nginx + IIS + GET

    - by Eralde
    I have nginx on PC "A" and IIS with ASP.NET on PC "B". nginx is configured like this: ... location ~ ((Web|Script)Resource.*)$ { proxy_pass "B"/$1; proxy_redirect off; proxy_set_header REMOTE_ADDR $remote_addr; proxy_set_header REQUEST_URI $request_uri; proxy_set_header HTTP_REFERER $http_referer; #proxy_set_header REQUEST_URI $request_uri; proxy_set_header QUERY_STRING $query_string; proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $http_host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; }... but requests to "B"/WebScript?a=b&c=d aren't able to deliver the GET data (a=b&c=d) to the IIS side. Could anyone help with this? Edit: some additional info: nginx is also configured to proxy other data to Apache running on "A"; everything is fine there (at least GET is OK). The configuration is the same as above, but for a different location.
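
    A hedged explanation and sketch: when proxy_pass contains a variable or a regex capture (the $1 here), nginx does not append the query string automatically, so it has to be added explicitly ("B" below stands for the backend host, as in the question):

        location ~ ((Web|Script)Resource.*)$ {
            # $is_args is "?" when a query string exists, empty otherwise
            proxy_pass http://B/$1$is_args$args;
            proxy_redirect off;
            proxy_set_header Host            $http_host;
            proxy_set_header X-Real-IP       $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }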

    Read the article

  • using nginx with proxy_pass on a subdomain

    - by marcus3006
    I have a Rails app that should listen on the subdomain redmine.example.com (using proxy_pass). All other requests for *.example.com should just serve a normal index.html. Here is my configuration: server { server_name www.example.com example.com; root /home/deploy/static/example; } upstream redmine { server unix:/tmp/redmine.socket fail_timeout=0; } server { # you could put a list of other domain names this application answers server_name redmine.example.com; root /home/deploy/rails/redmine/public; access_log /var/log/nginx/redmine_access.log; rewrite_log on; location * { proxy_pass http://redmine; } location ~ ^/(assets)/ { root /home/deploy/rails/redmine/public; gzip_static on; # to serve pre-gzipped version expires max; add_header Cache-Control public; } } Does anyone know what's going wrong here? Requests to example.com and www.example.com are handled correctly; when I try to access redmine.example.com I get "couldn't resolve host".
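
    Two hedged observations, with a sketch: "couldn't resolve host" comes from the client before nginx is ever reached, so redmine.example.com most likely needs its own DNS record (or a wildcard) pointing at the same IP; and "location *" is treated as a literal prefix that never matches, so a plain "location /" is the usual catch-all for the proxied app:

        location / {
            proxy_pass http://redmine;
            # Pass the original host and client address through to the Rails app
            proxy_set_header Host              $http_host;
            proxy_set_header X-Real-IP         $remote_addr;
            proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        }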

    Read the article

  • Is my current htaccess setting hurting SEO?

    - by user656002
    I have a site that I redirect to https. I do this to leverage wildcard SSL for my password-protected pages. Everything seems to work fine with testing. For example, whether you type in http or www, you always get redirected to the SSL https... That said, I have about 200-300 external backlinks -- many high quality, yet Google Webmaster Tools (along with SEOmoz) shows I have just 4... Huh? I'm embarrassed to say I just discovered this. This has led me to hypothesize that maybe my settings in htaccess are messed up, so Google isn't recognizing a link because it's recorded on another site as http instead of https. Maybe? At any rate, here is my simple htaccess setting for the 301 from www to http (the https redirect must be done inside the virtual host file--I think). I don't have anything in the htaccess file for https: RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC] RewriteRule ^(.*)$ http://example.com/$1 [L,R=301] Like I said, everything works fine for the redirect to https, so I'd rather not screw up what works. On the other hand, something is very wrong with Google finding my backlinks, so I need to fix something... I'm just wondering whether Google isn't picking up my backlinks because other websites record me as http while I'm at https. Maybe Google doesn't care and it's some other issue. Am I barking up the right tree? If so, any quick fixes? Thanks as always!
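
    One hedged observation and sketch: the rule shown sends www traffic to http://example.com, which then has to bounce a second time to https, creating a redirect chain; collapsing both cases into a single 301 straight to the canonical https host avoids that (this assumes mod_ssl sets %{HTTPS} on the same server, and example.com is the placeholder from the question):

        RewriteEngine On
        # Anything that is not already https://example.com/... gets one 301 to it
        RewriteCond %{HTTPS} off [OR]
        RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
        RewriteRule ^(.*)$ https://example.com/$1 [L,R=301]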

    Read the article

  • Email Proxy Ideas

    - by jtnire
    Hi everyone, I wish to host some managed email servers for some customers. Each customer will have their own email server, which will be an all-in-one virtual machine running Postfix, Dovecot and some webmail suite. Even though each customer will have their own server, I do not wish to give each email server its own public-facing IP. I wish to make use of proxy servers so all customers share the same public IP. As for the "smtp-in" from the public internet, this isn't a problem, as I can set up several MX servers (using Postfix) which will store-and-forward the mail to the correct server (using transport maps). As for the IMAP access from the customer, I was thinking of using perdition, which is an IMAP proxy - I believe that this will suit my needs. However, I am confused about what to use for the "smtp-out" proxy. The customers will have to authenticate with their respective email server, but they will have to go via a proxy of some sort, as they won't have direct access to their server instance. It probably can't be a store-and-forward proxy either. Does anyone have any idea what I could use here? Many thanks.
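
    One hedged option to sketch: nginx's mail proxy module (assuming nginx is built with it) can front SMTP submission for many backends behind one IP, asking a small auth_http service which backend a given login belongs to (ports, hostnames and the auth service are all assumptions):

        mail {
            # A tiny HTTP service you write: it checks the login and replies with
            # Auth-Server/Auth-Port headers pointing at that customer's VM
            auth_http 127.0.0.1:8080/mailauth;

            server {
                listen    587;          # submission port shared by all customers
                protocol  smtp;
                smtp_auth login plain;  # require authentication before proxying
                xclient   off;
            }
        }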

    Read the article
