
  • What's the best way to run Drupal and Django sites behind the same Varnish server?

    - by Alexis Bellido
    I have a high-traffic website running Drupal and Apache, with five web servers behind a Varnish server for load balancing. Let's say this site is example.com. I'm using five backends and a director like this in my default.vcl:

        director balancer round-robin {
          { .backend = web1; }
          { .backend = web2; }
          { .backend = web3; }
          { .backend = web4; }
          { .backend = web5; }
        }

    Now I'm working on a new Django project that will be a new section of this site, running on example.com/new-section. After checking the documentation I found I can do something like this:

        sub vcl_recv {
          if (req.url ~ "^/new-section/") {
            set req.backend = newbackend;
          } else {
            set req.backend = default;
          }
        }

    That is, using a different backend for the subdirectory /new-section under the same domain. My question is: how do I make something like this work with my director and load-balancing setup? I'm probably going to run two or more web servers (backends) for the new Django project, each with a mix of Gunicorn, Nginx, and a few Python packages, and I would like to put all of those in their own Varnish director for load balancing. Is it possible to use the above approach to decide which director to use, like this:

        sub vcl_recv {
          if (req.url ~ "^/new-section/") {
            set req.director = newdirector;
          } else {
            set req.director = balancer;
          }
        }

    All suggestions welcome. Thanks!
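
    A note for readers trying the same thing: in Varnish 2.x VCL there is no req.director variable; once defined, a director is assigned exactly like a backend via set req.backend. A minimal sketch under that assumption (the newsection director and its django1/django2 backends are hypothetical names):

        director newsection round-robin {
          { .backend = django1; }
          { .backend = django2; }
        }

        sub vcl_recv {
          if (req.url ~ "^/new-section/") {
            set req.backend = newsection;
          } else {
            set req.backend = balancer;
          }
        }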

  • Adding a GET parameter to URL causes 404 error

    - by Adrian Grigore
    I'm trying to install the SyntaxHighlighter Evolved plugin on my WordPress blog. I've uploaded and activated the plugin, but it did not work. I looked into the page source code and found that the plugin style is loaded from the following URL: http://devermind.com/wp-content/plugins/syntaxhighlighter/syntaxhighlighter/styles/shCore.css?ver=2.0.320 This causes a 404 error (page not found). The strange thing, though, is that when I remove the GET parameters, the CSS loads fine: http://devermind.com/wp-content/plugins/syntaxhighlighter/syntaxhighlighter/styles/shCore.css What could be causing this problem and how can I fix it? Unfortunately I don't know how to make WordPress drop the GET parameters when loading the stylesheet. EDIT: As I just found out, this happens only in Firefox (3.0.11). IE loads both URLs above just fine. Not that this is of much help, though, so any suggestions would be appreciated. SECOND EDIT: I tried this on my laptop and it works fine with Firefox 3.0.8. So this really seems to be a browser problem after all.
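
    For anyone who needs WordPress to stop appending the version argument: one commonly suggested workaround is to strip ver= from enqueued stylesheet URLs via the style_loader_src filter in the theme's functions.php. A sketch, not specific to this plugin (the function name is arbitrary):

        // functions.php -- drop the ?ver= query argument from stylesheet URLs
        function strip_css_version( $src ) {
            if ( false !== strpos( $src, 'ver=' ) ) {
                $src = remove_query_arg( 'ver', $src );
            }
            return $src;
        }
        add_filter( 'style_loader_src', 'strip_css_version', 9999 );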

  • Exchange 2010 DAG + VMware HA = no support?

    - by Dan
    We currently have an Exchange 2003 clustered environment (a two-machine cluster) that we're looking to upgrade to 2010. We recently purchased a VMware virtualization environment (three Dell R710s with an EMC NS-120 serving up NFS datastores - iSCSI is available) that we wish to use for this new environment. I'm seeing that Microsoft does not support Exchange 2010 DAGs combined with a virtualization high-availability solution (see links below). I would like to use the DAG to ensure the data stays available if one host goes down, and HA to ensure that if the physical host goes down, the VM will come back up on the other available host. Does anybody know why MS does not support this? VMware HA will only restart the VM if it is hung/down - I don't see any difference between this and restarting the physical box if someone pulled the power... Will we only run into issues with support if it has something to do with HA/DAG failover, or will they see we have HA and tell us to put it on a physical box even if it has nothing to do with HA? If we disable HA for these VMs, will that satisfy them on a support case? Has anybody set up an Exchange 2010 DAG on VMware with HA enabled? Will they have any issues with using an NFS datastore? We have much greater flexibility on the EMC with NFS vs. iSCSI, so I would prefer to continue using that. Thanks for any input! http://www.vmwareinfo.com/2010/01/verifying-microsoft-exchange-2010.html Take a look at the second image under "Not Supported". http://technet.microsoft.com/en-us/library/aa996719.aspx "Microsoft doesn't support combining Exchange high availability solutions (database availability groups (DAGs)) with hypervisor-based clustering, high availability, or migration solutions. DAGs are supported in hardware virtualization environments provided that the virtualization environment doesn't employ clustered root servers."

  • Multiple static WAN IP addresses to single LAN subnet

    - by Jessy Houle
    Below is my home network topology. I currently have 5 static IP addresses, 3 of which are in use by 3 routers. These routers in turn subnet internal networks and port forward. I use my SSL VPN appliance to remote home from work or on the road. At this point I can remotely administer my Windows Server. I know the network is set up wrong; I was matching existing hardware the best I knew how. http://storage.jessyhoule.com.s3.amazonaws.com/network_topology.jpg OK, this said, here is the problem... One of the websites on my Windows Server now needs to be secure (SSL using port 443). However, I'm already forwarding port 443 to my VPN appliance. Furthermore, if I'm going to have to reconfigure the network, I would really like to be able to use the SSL VPN to remotely administer all machines. I mentioned this to a friend of mine, who said that what I was looking for was a firewall. He explained that a firewall would take in multiple static (WAN) IP addresses and still allow all internal devices to be on the same network. So, basically, I could give my SSL VPN appliance its very own static (WAN) IP address, and yet have it on the same internal network (192.168.1.x) as all my other devices. The first question is... does this sound right? Secondly, would you suggest anything different? And, finally, what is the cheapest way to do this? I have started down the road of downloading/installing Untangle and Smoothwall to see if they will do the job, hoping they support multiple static (WAN) IP addresses. Thank you in advance for your answers. -Jessy Houle
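
    For readers wondering what that firewall setup looks like in practice, the usual building block is 1:1 NAT: each public IP is mapped to one internal host while every host stays on the shared 192.168.1.x subnet. A minimal Linux/iptables sketch, using documentation addresses (203.0.113.x) and hypothetical internal hosts:

        # public 203.0.113.2 <-> SSL VPN appliance at 192.168.1.2
        iptables -t nat -A PREROUTING  -d 203.0.113.2 -j DNAT --to-destination 192.168.1.2
        iptables -t nat -A POSTROUTING -s 192.168.1.2 -j SNAT --to-source 203.0.113.2

        # public 203.0.113.3 <-> Windows Server at 192.168.1.3
        iptables -t nat -A PREROUTING  -d 203.0.113.3 -j DNAT --to-destination 192.168.1.3
        iptables -t nat -A POSTROUTING -s 192.168.1.3 -j SNAT --to-source 203.0.113.3

    Distributions such as Untangle, Smoothwall, or pfSense expose the same idea through their 1:1 NAT / virtual IP configuration screens.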

  • Apache reports a 200 status for non-existent WordPress URLs

    - by Jonah Bishop
    The WordPress .htaccess generally has the following rewrite rules:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>

    When I access a non-existent URL at my website, this rewrite rule gets hit, rewrites to index.php, and serves up my custom 404.php template file. The status code that gets sent back to the client is the correct 404, as shown in this HTTP Live Headers output example:

        http://www.borngeek.com/nothere/
        GET /nothere/ HTTP/1.1
        Host: www.borngeek.com
        {...}
        HTTP/1.1 404 Not Found

    However, Apache reports the entire exchange with a 200 status code in my server log, as shown here in a log snippet (trimmed for simplicity):

        {...} "GET /nothere/ HTTP/1.1" 200 2155 "-" {...}

    This makes some sense to me, seeing as the original request was rewritten to a page that exists (index.php). Is there a way to force Apache to report the exchange as a 404? My problem is that bogus requests coming from Bad Guys show up as "successful requests" in the various server statistics software I use (AWStats, Analog, etc). I'd love to have them show up on the Apache side as 404s so that they get filtered out of the stat reports that get generated. I tried adding the following line to my .htaccess, but it had no effect (I'm guessing for the same reason as the previous rewrite rules):

        ErrorDocument 404 /index.php?error=404

    Does anyone have a clever way to fix this annoyance? Additional info: the OS is Debian 6.0.4, and the Apache version looks to be 2.2.22-3 (hosted on DreamHost). The 404 being sent back to the client is being set by WordPress (i.e. I'm not manually calling header() anywhere).
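
    A quick way for anyone debugging the same mismatch to confirm both sides of it (the commands are generic, not DreamHost-specific, and the log path is illustrative):

        # what status does the client actually receive?
        curl -s -o /dev/null -w "%{http_code}\n" http://www.borngeek.com/nothere/

        # what did Apache just write to the access log for that hit?
        tail -n 1 /var/log/apache2/access.log

    If the first command prints 404 while the log line says 200, the discrepancy is arising between the PHP/FastCGI layer and Apache's logging rather than inside WordPress itself, which narrows down where to look.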

  • IIS 7 doesn't do a 301 redirect

    - by rohde
    Hi there! I have just migrated some sites to IIS 7 from IIS 6, and am experiencing problems with one of the sites. When a request comes in for the site (www.ourdomain.com/site1/) with a trailing slash, everything is fine. But if the trailing slash is left out (www.ourdomain.com/site1), then the request fails. Apparently IIS doesn't do a 301 redirect on the slash-less URL, resulting in ASP.NET throwing an exception. This is not happening with the other sites on the same domain. What can be the cause of this? EDIT: The exception I get is this:

        System.Web.HttpException: Failed to Execute URL.
        at System.Web.Hosting.ISAPIWorkerRequestInProcForIIS6.BeginExecuteUrl(String url, String method,
           String childHeaders, Boolean sendHeaders, Boolean addUserInfo, IntPtr token, String name,
           String authType, Byte[] entity, AsyncCallback cb, Object state)
        at System.Web.HttpResponse.BeginExecuteUrlForEntireResponse(String pathOverride,
           NameValueCollection requestHeaders, AsyncCallback cb, Object state)
        at System.Web.DefaultHttpHandler.BeginProcessRequest(HttpContext context, AsyncCallback callback,
           Object state)
        at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
        at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
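
    Not an explanation of the root cause, but a workaround some readers may want: if the URL Rewrite module is installed, IIS 7 can issue the missing 301 itself. A sketch of the standard "append trailing slash" rule for the site's web.config (the rule name is arbitrary):

        <rewrite>
          <rules>
            <rule name="Add trailing slash" stopProcessing="true">
              <match url="(.*[^/])$" />
              <conditions>
                <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
              </conditions>
              <action type="Redirect" redirectType="Permanent" url="{R:1}/" />
            </rule>
          </rules>
        </rewrite>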

  • Webapp in Jetty can't find properties file after running a couple days

    - by Cuga
    I have a webapp running in Jetty on Mac OS 10.6. After a few days of running, without the server losing power or rebooting, it seems to stop working, saying it can't find a properties file. This properties file is included inside the .war file deployed to the /webapps directory. If I restart Jetty as the superuser, the web service works again just fine. Can anyone lend any advice on what's going on and how I can fix it? The error shown when it isn't working is:

        Problem accessing /my-web-service. Reason: INTERNAL_SERVER_ERROR
        Caused by: java.lang.NullPointerException
          at com.company.service.Dao.readFromPropertiesFile(BwDao.java:35)
          at com.company.service.ServletHandler.doGet(ProxyClass.java:66)
          ...
          at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
          at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
          at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

    (A screenshot showing where the properties file sits inside the .war was attached here.) This is how the properties are being read from the classpath:

        Properties properties = new Properties();
        properties.load(Thread.currentThread().getContextClassLoader().getResourceAsStream(
                "app.properties"));

    Again, this does work just fine if I have just restarted the server, but it seems to fail after running a few days.
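
    One observation that may help readers: the NullPointerException pattern above is exactly what getResourceAsStream() produces when it returns null and that null is passed straight into Properties.load(). A defensive sketch that makes the failure explicit and releases the stream (assuming Java 7+ for try-with-resources; imports from java.io and java.util assumed):

        Properties properties = new Properties();
        ClassLoader cl = Thread.currentThread().getContextClassLoader();
        try (InputStream in = cl.getResourceAsStream("app.properties")) {
            if (in == null) {
                // distinguishes "resource not on classpath" from other failures
                throw new FileNotFoundException("app.properties not found on classpath");
            }
            properties.load(in);
        }

    If the explicit exception starts appearing after a few days, the classloader is genuinely losing sight of the resource (for example, a hot redeploy or a cleaned temp/work directory) rather than the code failing at random.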

  • Can any Postfix guru help me determine how emails are still being sent via my server from unauthorized sources?

    - by Dave
    Hi all, I'm getting a little concerned, as I run a small server hosting a number of websites and manage the email for a few dozen people. Just recently, though, I've had a couple of notifications from SpamCop alerting me that spam has been sent from my server, and when I look over the logs from time to time I can indeed see many repeated attempts of mail being sent from my server. Most of the time it gets knocked back by the destination servers, but sometimes it gets through. Unfortunately I'm no Linux or Postfix expert; I can get by, but had thought I had my machine locked down quite securely. I don't allow relaying, and when I check the online DNS/MX tools they tend to report my server as being OK, so I'm not sure where to take it now and am hoping someone might be able to throw me a few pointers. I get lots of entries like this in my MAIL.INFO log:

        Jan 2 08:39:34 Debian-50-lenny-64-LAMP postfix/qmgr[15993]: 66B88257C12F: from=<>, size=3116, nrcpt=1 (queue active)
        Jan 2 08:39:34 Debian-50-lenny-64-LAMP postfix/qmgr[15993]: 614C2257C1BC: from=<[email protected]>, size=2490, nrcpt=3 (queue active)

    and

        Jan 7 16:09:37 Debian-50-lenny-64-LAMP postfix/error[6471]: 0A316257C204: to=<[email protected]>, relay=none, delay=384387, delays=384384/3/0/0.01, dsn=4.0.0, status=deferred (delivery temporarily suspended: host mx.fakemx.net[46.4.35.23] refused to talk to me: 421 mx.fakemx.net Service Unavailable)
        Jan 7 16:09:37 Debian-50-lenny-64-LAMP postfix/error[6470]: 5848C257C20D: to=<[email protected]>, relay=none, delay=384373, delays=384370/3/0/0.01, dsn=4.0.0, status=deferred (delivery temporarily suspended: host mx.fakemx.net[46.4.35.23] refused to talk to me: 421 mx.fakemx.net Service Unavailable)

    Then there tend to be connection timeouts. From what I see, even though I have relaying disabled, something is getting by and trying to send. If you can help, that will be greatly appreciated, and I can supply any further logging/config info. Thanks
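
    For others triaging the same symptom, a few standard Postfix commands narrow down where the mail is entering the queue (the queue ID below is taken from the log lines above; substitute your own):

        # what is queued right now?
        postqueue -p

        # inspect one queued message, headers first - the Received: lines
        # show whether it arrived via SMTP, an authenticated session, or a local script
        postcat -q 66B88257C12F | head -40

        # was it injected locally (pickup) or via SASL-authenticated SMTP?
        grep 'postfix/pickup' /var/log/mail.info | tail
        grep -i 'sasl_username' /var/log/mail.info | tail

    A from=<> envelope deferred to hosts like mx.fakemx.net is also the classic signature of backscatter - bounces generated for forged sender addresses - which can occur even with relaying correctly disabled.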

  • append $myorigin to localpart of 'from', append different domain to localpart of incomplete recipient address

    - by PJ P
    We have been having some trouble getting Postfix to behave in a very specific fashion, in which sender and recipient addresses with only a localpart (i.e. no @domain) are handled differently. We have a number of applications that use mailx to send messages. We would like to know the username and hostname of the sending party. For example, if root sends an email from db001.company.local, we would like the email to be addressed from root@db001.company.local. This is accomplished by ensuring $myorigin is set to $myhostname. We also want unqualified recipients to have a different domain appended. For example, if a message is sent to 'dbadmin' it should qualify to 'dbadmin@company.com'. However, by the nature of Postfix and $myorigin, an unqualified recipient would instead qualify to dbadmin@db001.company.local. We do not want to adjust the aliases on all servers to forward appropriately (in fact, every possible recipient doesn't have an entry in /etc/passwd). All company employees have mailboxes on Exchange, which Postfix eventually routes to, and no local Linux/Unix mailboxes are used or accessed. We would love to tell our application owners to ensure they use a fully qualified email address for all recipients, but the powers that be dictate that any negligence must be accommodated. If we were to keep $myorigin equal to $myhostname, we could resolve this issue by having an entry such as the following in 'recipient_canonical_maps': @$myorigin @company.com However, unfortunately, we cannot use variables in these map files. We also want to avoid having to manually enter and maintain the actual hostname in 'recipient_canonical_maps' for each server. Perhaps once our servers are 'puppetized' we can dynamically adjust this file, but we're not there yet. After an afternoon of fiddling I've decided to reach out. Any thoughts? Thanks in advance.
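
    One shape of solution readers may want to evaluate: a regexp-based recipient canonical map needs no per-host editing if every server shares a common internal suffix (assumed here to be .company.local), because the hostname can be matched by pattern instead of written out. A sketch:

        # main.cf
        myorigin = $myhostname
        recipient_canonical_maps = regexp:/etc/postfix/recipient_canonical

        # /etc/postfix/recipient_canonical
        # rewrite user@<host>.company.local to user@company.com, which is
        # what an unqualified recipient expands to via $myorigin
        /^([^@]+)@[^.]+\.company\.local$/   ${1}@company.com

    Sender addresses are left alone (canonical recipient maps only touch recipients), so root's mail still shows root@db001.company.local while unqualified recipients end up at company.com, and the identical file can be shipped to every host.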

  • 403.4 won't redirect in IE7

    - by Jeremy Morgan
    I have a secured folder that requires SSL. I have set it up in IIS 6 to require SSL. We don't want visitors to be greeted with the "must be secure connection" error, so I have modified the 403.4 error page to contain the following:

        function redirectToHttps() {
            var httpURL = window.location.hostname + window.location.pathname;
            var httpsURL = "https://" + httpURL;
            window.location = httpsURL;
        }
        redirectToHttps();

    And this solution works great for every browser but IE7. In any other browser, if you type in http://www.mysite.com/securedfolder it will automatically redirect you to https://www.mysite.com/securedfolder with no message or anything (the intended action). But in Internet Explorer 7 ONLY it will bring up a page that says:

        The website declined to show this webpage
        Most likely causes: This website requires you to log in

    This is something we don't want, of course. I have verified that JavaScript is enabled, and the security settings have no effect: even when I set them to the lowest level I get the same error. I'm wondering, has anyone else seen this before?
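
    A possibility worth ruling out for anyone seeing this: Internet Explorer substitutes its own "friendly" page for error responses whose body is smaller than a threshold (commonly cited as 512 bytes), in which case the script in the custom 403.4 page never runs at all. Under that assumption, the workaround is simply to pad the error page past the threshold:

        <script type="text/javascript">
            // redirect the browser to the HTTPS version of the same path
            window.location = "https://" + window.location.hostname + window.location.pathname;
        </script>
        <!-- Padding to push the response body past IE's friendly-error
             threshold (~512 bytes). Repeat or lengthen as needed:
             xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
             xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
             xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx -->

    The corresponding browser-side setting is "Show friendly HTTP error messages" under Tools > Internet Options > Advanced, which would also explain the behavior differing between machines.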

  • Postfix (delivery temporarily suspended: conversation with mydomain.net [private/lmtp] timed out while receiving the initial server greeting)

    - by Paul
    I'm running Debian 7.1, Postfix 2.9.6, Dovecot 2.1.7. To set it up I followed mostly this (without the spamass-clamav-greylist bit). I have also set up smart-host relaying via Gmail. postconf -n reveals:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        config_directory = /etc/postfix
        inet_interfaces = all
        inet_protocols = ipv4
        mailbox_size_limit = 0
        milter_default_action = accept
        mydestination = MyDomain, localhost.net, localhost
        myhostname = MyDomain.net
        mynetworks = 127.0.0.0/8
        myorigin = /etc/mailname
        readme_directory = no
        recipient_delimiter = +
        relay_domains = mysql:/etc/postfix/mysql_relay_domains.cf
        relayhost = [smtp.gmail.com]:587
        smtp_connect_timeout = 120s
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/relay_passwd
        smtp_sasl_security_options = noanonymous
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtp_use_tls = yes
        smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
        smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_non_fqdn_hostname, reject_non_fqdn_sender, reject_non_fqdn_recipient, reject_unauth_destination, reject_unauth_pipelining, reject_invalid_hostname
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_path = private/auth
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_type = dovecot
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
        virtual_alias_maps = mysql:/etc/postfix/mysql_virtual_alias_maps.cf
        virtual_gid_maps = static:3000
        virtual_mailbox_base = /home/vmail
        virtual_mailbox_domains = mysql:/etc/postfix/mysql_virtual_mailbox_domains.cf
        virtual_mailbox_maps = mysql:/etc/postfix/mysql_virtual_mailbox_maps.cf
        virtual_transport = lmtp:unix:private/lmtp
        virtual_uid_maps = static:3000

    I am able to send emails to the outside world, but all emails sent to me are getting stuck. mailq shows numerous lines like:

        A69C2414C4     2621 Fri Dec 27 14:57:03  [email protected]
        (conversation with MyDomain.net[private/lmtp] timed out while receiving the initial server greeting)
                                                 [email protected]

        AB78B414BE     3796 Fri Dec 27 14:56:50  name2@gmail.com
        (delivery temporarily suspended: conversation with MyDomain.net[private/lmtp] timed out while receiving the initial server greeting)
                                                 [email protected]

    /var/log/mail.log shows:

        Dec 28 09:50:09 hostname postfix/lmtp[10828]: E947C414CD: to=, relay=localhost[private/lmtp], delay=64012, delays=63712/0.25/300/0, dsn=4.4.2, status=deferred (conversation with localhost[private/lmtp] timed out while receiving the initial server greeting)

    Any help would be greatly appreciated. Thank you
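
    A note for readers hitting this exact error: with virtual_transport = lmtp:unix:private/lmtp, Postfix (whose delivery agents run chrooted in /var/spool/postfix on Debian) expects Dovecot to create the LMTP socket at /var/spool/postfix/private/lmtp. A greeting timeout is the usual symptom when Dovecot's LMTP listener is absent or created elsewhere. A sketch of the corresponding Dovecot stanza (conf.d/10-master.conf):

        service lmtp {
          unix_listener /var/spool/postfix/private/lmtp {
            mode  = 0600
            user  = postfix
            group = postfix
          }
        }

    After a Dovecot restart, ls -l /var/spool/postfix/private/lmtp should show the socket, and postqueue -f will retry the stuck mail.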

  • What's the best way to update Ubuntu 9.04?

    - by Fu86
    I have an Ubuntu 9.04 server which has no package support anymore. If I try to update my package lists, I get the following errors:

        Err http://de.archive.ubuntu.com jaunty-security/multiverse Packages 404 Not Found [IP: 141.30.13.10 80]
        W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/jaunty/main/binary-amd64/Packages 404 Not Found [IP: 141.30.13.10 80]
        ....

    I read on the official Ubuntu support page that there is an update-manager-core package for upgrading to a new release. Unfortunately I don't have this package installed, and I am unable to install it because of the lack of package sources. EDIT: Installing the update-manager-core package from another release doesn't work because it depends on a higher version of python-apt (tried with 10.04):

        $ dpkg -i update-manager-core_0.134.7_amd64.deb
        Selecting previously deselected package update-manager-core.
        (Reading database ... 28743 files and directories currently installed.)
        Unpacking update-manager-core (from update-manager-core_0.134.7_amd64.deb) ...
        dpkg: dependency problems prevent configuration of update-manager-core:
         update-manager-core depends on python-apt (>= 0.7.13.4ubuntu3); however:
          Version of python-apt on system is 0.7.9~exp2ubuntu10.
         update-manager-core depends on python-gnupginterface; however:
          Package python-gnupginterface is not installed.
        dpkg: error processing update-manager-core (--install):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         update-manager-core

    So, what's the best way to upgrade to the current release without reinstalling the complete (virtual) server?
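
    For readers in the same position: releases that have left standard support are archived on old-releases.ubuntu.com, so pointing sources.list there usually restores package installation and unblocks the release upgrade. A sketch (the German mirror hostname matches the error messages above):

        sudo sed -i 's/de\.archive\.ubuntu\.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
        sudo sed -i 's/security\.ubuntu\.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get install update-manager-core
        sudo do-release-upgrade

    Note that 9.04 can only step to 9.10 this way; reaching a modern release means repeating the upgrade chain, which is why a fresh install is often the pragmatic choice for a server this old.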

  • Tomcat Solr times out

    - by user568458
    (Plesk 10.4, CentOS 5.8, Apache 2 server, with Tomcat 5 on port 8080 and Apache Solr.) I get "The connection has timed out" on requesting domain.com:8080 or www.domain.com:8080 or ip.ad.dr.ess:8080. Every reason I can find why this might be seems not to be the case: Plesk thinks Tomcat is running fine and lists it as an active service. The firewall currently has an accept-all rule on port 8080. There's nothing relevant in the Catalina Tomcat logs (/var/log/tomcat5) - just some stuff from the last time Tomcat was started. There's no record at all of the requests that fail. netstat -lnp | grep 8080 gives the following, which I believe means Tomcat is listening for requests to port 8080 on all IP addresses, from any IP and any port (please correct me if I'm wrong):

        tcp    0    0 0.0.0.0:8080    0.0.0.0:*    LISTEN    4018/java

    This covers every cause of this timeout that I can find - so I must be missing something fundamental. It seems Tomcat is running, listening on the right port, getting an appropriate IP address, not obstructed by a firewall, and not failing after receiving a request in a way which would be recorded in the logs (so I believe it can't be out of memory, or anything like that). I'm all out of ideas on how to continue debugging this. I must have overlooked something obvious. Can anyone help?
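
    A short elimination sequence other readers can run from a shell on the server itself (the interface name is illustrative):

        # 1. does Tomcat answer locally? if yes, Tomcat itself is fine
        curl -v http://127.0.0.1:8080/

        # 2. list every firewall rule that could touch 8080, including chains
        #    Plesk manages - an early DROP overrides a later ACCEPT
        iptables -L -n -v | grep -n 8080

        # 3. watch whether outside packets even arrive
        tcpdump -i eth0 -n tcp port 8080

    If step 1 succeeds and step 3 shows SYN packets arriving with no replies, the blocker is on the host (rule ordering, most often); if no packets arrive at all, it is upstream of the server - provider-level filtering of nonstandard ports is common.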

  • Problems configuring logstash for email output

    - by user2099762
    I'm trying to configure Logstash to send email alerts and to log output in Elasticsearch/Kibana. I have the logs successfully syncing via rsyslog, but I get the following error when I run /opt/logstash-1.4.1/bin/logstash agent -f /opt/logstash-1.4.1/logstash.conf --configtest:

        Error: Expected one of #, {, ,, ] at line 23, column 12 (byte 387) after filter {
          if [program] == "nginx-access" {
            grok {
              match = [ "message" , "%{IPORHOST:remote_addr} - %{USERNAME:remote_user} [%{HTTPDATE:time_local}] %{QS:request} %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}” ]
            }
          }
        }
        output {
          stdout { }
          elasticsearch {
            embedded = false
            host = "

    Here is my logstash config file:

        input {
          syslog {
            type => syslog
            port => 5544
          }
        }

        filter {
          if [program] == "nginx-access" {
            grok {
              match => [ "message" , "%{IPORHOST:remote_addr} - %{USERNAME:remote_user} \[%{HTTPDATE:time_local}\] %{QS:request} %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}” ]
            }
          }
        }

        output {
          stdout { }
          elasticsearch {
            embedded => false
            host => "localhost"
            cluster => "cluster01"
          }
          email {
            from => "logstash.alert@nowhere.com"
            match => [ "Error 504 Gateway Timeout", "status,504",
                       "Error 404 Not Found", "status,404" ]
            subject => "%{matchName}"
            to => "you@example.com"
            via => "smtp"
            body => "Here is the event line that occured: %{@message}"
            htmlbody => "<h2>%{matchName}</h2><br/><br/><h3>Full Event</h3><br/><br/><div align='center'>%{@message}</div>"
          }
        }

    I've checked line 23, which is referenced in the error, and it looks fine... I've tried taking out the filter, and everything works without changing that line. Please help.
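
    One detail worth highlighting for anyone comparing the pastes character by character: in both of them the grok pattern ends with a typographic closing quote (”) instead of an ASCII double quote ("). A stray smart quote - often introduced by editing the config in a word processor or copying from a blog - leaves the string unterminated and produces exactly this kind of "Expected one of #, {, ,, ]" parse error at the reported position. If that character is present in the real file, the fixed line would read (same pattern, straight quotes only):

        match => [ "message", "%{IPORHOST:remote_addr} - %{USERNAME:remote_user} \[%{HTTPDATE:time_local}\] %{QS:request} %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}" ]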

  • Why is the Volume Shadow Copy service stopping?

    - by David Mackintosh
    I am running Windows 7 Professional, 64-bit. I am running a backup-over-the-internet software client which depends on the Volume Shadow Copy service running. Since I installed Service Pack 1 (or rather, didn't object when Windows Update forced Service Pack 1 on me), the backup service is failing to back everything up because VSC isn't running. Most of the time it fails to back up such noise as the Security Essentials database or the Messenger Live contact list - stuff I really don't care about - but I don't want to fall into the trap of accepting an Error-state backup as "normal". At the recommendation of the backup software, I have set the VSC service startup mode to Automatic. When I look in the Event Log, System channel, I can see at boot time:

        The Volume Shadow Copy service entered the running state.

    ...and then two or three minutes later:

        The Volume Shadow Copy service entered the stopped state.

    How do I figure out why VSC is stopping? At the suggestion of the backup vendor, I have already followed the suggestions from http://support.microsoft.com/default.aspx/kb/940184:

        net stop SENS
        net stop EventSystem
        net start EventSystem
        net start SENS
        net stop COMSysApp
        net stop SwPrv
        net stop VSS
        cd /d C:\Windows\system32
        regsvr32 ole32.dll /s
        regsvr32 oleaut32.dll /s
        regsvr32 vss_ps.dll /s
        vssvc /register /s
        regsvr32 /i swprv.dll /s
        regsvr32 /i eventcls.dll /s
        regsvr32 es.dll /s
        regsvr32 stdprov.dll /s
        regsvr32 vssui.dll /s
        regsvr32 msxml.dll /s
        regsvr32 msxml3.dll /s
        regsvr32 msxml4.dll /s
        net start SwPrv
        net start VSS
        net start ProtectedStorage

    ...and per http://support.microsoft.com/kb/940184 I have deleted the key tree HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\EventSystem\{26c409cc-ae86-11d1-b616-00805fc79216}\Subscriptions. I have also run chkdsk /F and chkdsk /R on both permanent hard disks. (I had a similar problem with another computer - same OS, same failure, same starting point after the SP1 install - but the problem went away when I forced the Volume Shadow Copy service to Automatic startup rather than Manual. I did not have to resort to following the Microsoft KB instructions.)
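
    Two checks that may save other readers some time (all standard Windows tools). First, VSS is normally a demand-started service that exits when idle, so a stop a few minutes after boot is not necessarily a fault by itself - what matters is whether it starts when the backup client requests a snapshot. Second, the service's configuration, state, and writer health can be pulled directly:

        rem how is the service configured, and what state is it in?
        sc qc VSS
        sc query VSS

        rem list the VSS writers; errors here often explain failed snapshots
        vssadmin list writers

        rem recent Service Control Manager events, newest first
        wevtutil qe System /q:"*[System[Provider[@Name='Service Control Manager']]]" /c:20 /f:text /rd:true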

  • Facing application redirection issue on nginx+tomcat

    - by Sunny Thakur
    I am facing a strange issue with an application deployed on Tomcat, with nginx in front of Tomcat to access the application from a browser. I deployed the application on Tomcat and set up a virtual host on nginx under the conf.d directory (the file I created is virtual.conf), with the content below:

        server {
            listen 81;
            server_name domain.com;
            error_log /var/log/nginx/domain-admin-error.log;
            location / {
                proxy_pass http://localhost:100;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
            }
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }

    Now the issue: when I use rewrite ^(.*) http://$server_name$1 permanent; in the server section and access the URL, it redirects to https://domain.com, and I am able to log in to the app and access the links [I am not using SSL redirection in this host file and I don't know why this is happening]. When I remove this from the server section, I am able to access the application from :81 and log in, but when I click on any link in the app it redirects me back to the login page. I am not getting anything in the application logs or the Tomcat logs. Please help if this is a redirection issue with nginx. Thanks, Sunny
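
    A hedged pointer for readers with the same symptom: being bounced back to the login page on every click is usually a session problem rather than a routing problem - the app issues its session cookie or its redirects for one host/port while the browser is talking to another (domain.com:81 vs localhost:100). Two nginx directives worth experimenting with inside the location / block:

        # rewrite Location: headers the backend issues for itself
        proxy_redirect http://localhost:100/ http://domain.com:81/;

        # forward the scheme so the app builds correct absolute URLs
        proxy_set_header X-Forwarded-Proto $scheme;

    Tomcat can also be told its public hostname and port via the proxyName and proxyPort attributes on its Connector, which addresses the same mismatch from the other side.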

  • get-eventlog issue

    - by Jim B
    I wanted to get a quick report of some log entries I saw on a server, so I ran:

        Get-EventLog -logname system -newest 10 -computer fs1 | fl

    I got events back; however, the descriptions were all wrong. Here's an example:

        Index              : 1260055
        EntryType          : Warning
        InstanceId         : 2186936367
        Message            : The description for Event ID '-2108030929' in Source 'W32Time' cannot be found. The local computer may not have the necessary registry information or message DLL files to display the message, or you may not have permission to access them. The following information is part of the event:'time.windows.com,0x1'
        Category           : (0)
        CategoryNumber     : 0
        ReplacementStrings : {time.windows.com,0x1}
        Source             : W32Time
        TimeGenerated      : 1/25/2010 10:43:31 AM
        TimeWritten        : 1/25/2010 10:43:31 AM
        UserName           :

    Note that if I pull the event ID property it's correct (in this case 38). Is this a known issue, or is something wrong? The messages resolve fine via Event Viewer, both locally and remotely. Here is the PowerShell version info:

        Name             : ConsoleHost
        Version          : 2.0
        InstanceId       : bc58fcf8-bba3-4ca8-8972-17dbd5d9ff08
        UI               : System.Management.Automation.Internal.Host.InternalHostUserInterface
        CurrentCulture   : en-US
        CurrentUICulture : en-US
        PrivateData      : Microsoft.PowerShell.ConsoleHost+ConsoleColorProxy
        IsRunspacePushed : False
        Runspace         : System.Management.Automation.Runspaces.LocalRunspace

    Here is the revised version info:

        Name                      Value
        ----                      -----
        CLRVersion                2.0.50727.3603
        BuildVersion              6.0.6002.18111
        PSVersion                 2.0
        WSManStackVersion         2.0
        PSCompatibleVersions      {1.0, 2.0}
        SerializationVersion      1.1.0.1
        PSRemotingProtocolVersion 2.1
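
    For anyone else hitting this: the error text itself hints at the cause - Get-EventLog renders the Message on the machine where the cmdlet runs, using locally registered message DLLs, which is why Event Viewer (which can render via the remote machine) shows it fine. One hedged workaround is Get-WinEvent, which uses the newer Event Log API and resolves descriptions through the remote event log service:

        # requires .NET 3.5 on the querying machine
        Get-WinEvent -ComputerName fs1 -FilterHashtable @{ LogName = 'System' } -MaxEvents 10 |
            Format-List TimeCreated, Id, ProviderName, Message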

  • IIS 8 URL redirect at the site level

    - by jackncoke
    I am trying to do a simple 301 permanent redirect to another URL in IIS 8. The end result would be that if I navigated to domain2.com I would end up on domain1.com. We are moving from IIS 6 to a new server and have approx. 600+ sites that will be configured on this IIS 8 box. All of these sites run a proprietary CMS and look at the same directory for source code. In IIS 6 I would just go to the Home Directory tab of each site, check the box that says "Permanent Redirect", and provide a URL. With IIS 8 there is "HTTP Redirect", and this looks like it would do the trick, but it is being applied to all the sites in IIS 8, not at the site level as it used to be in IIS 6. I also looked into the URL Rewrite module for IIS 8, but it seems to take rules in the style of a firewall, and I am not sure if I could effectively create rules that would cater to 600+ sites. I am looking for the easiest way to have redirects at the site level, so that customers with multiple domains can have their sites redirect to their main domain for SEO purposes. I feel like this was so easily achieved in IIS 6 that I must be overlooking something in the new version.
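
    One hedged explanation and approach: httpRedirect is per-site in IIS 7+ when scoped to a site, but because all 600+ sites here share one physical directory, a web.config dropped in that directory hits every site at once - which would produce exactly the "applied to all the sites" behavior described. Scoping the redirect in applicationHost.config with a <location> tag avoids the shared docroot (the site name below is hypothetical):

        <location path="Domain2Site">
          <system.webServer>
            <httpRedirect enabled="true" destination="http://domain1.com"
                          httpResponseStatus="Permanent" exactDestination="true" />
          </system.webServer>
        </location>

    The same setting can be applied from the command line, which scripts well across hundreds of sites:

        appcmd set config "Domain2Site" -section:system.webServer/httpRedirect /enabled:true /destination:"http://domain1.com" /httpResponseStatus:Permanent /commit:apphost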

  • nginx and proxy_hide_header

    - by giskard
    When I curl a URL I get this answer back:

        < HTTP/1.1 200 OK
        < Server: nginx/0.7.65
        < Date: Thu, 04 Mar 2010 12:18:27 GMT
        < Content-Type: application/json
        < Connection: close
        < Expires: Thu, 04 Mar 2010 12:18:27 UTC
        < http.context.path: /1/
        < jersey.response: com.sun.jersey.spi.container.ContainerResponse@17646d60
        < http.custom.headers: {Content-Type=text/plain}
        < http.request.path: /2/messages/latest.json
        < http.status: 200
        < Transfer-Encoding: chunked

    I want to remove:

        < http.context.path: /1/
        < jersey.response: com.sun.jersey.spi.container.ContainerResponse@17646d60
        < http.custom.headers: {Content-Type=text/plain}
        < http.request.path: /2/messages/latest.json
        < http.status: 200

    So I used the proxy_hide_header directive in this way:

        location / {
            if ($arg_id) {
                proxy_pass http://authorized;
                break;
            }
            proxy_pass http://anonymous;
            proxy_hide_header http.context.path;
            proxy_hide_header jersey.response;
            proxy_hide_header http.request.path;
            proxy_hide_header http.status;
        }

    But it doesn't work. Any clues?
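
    An observation that may matter to readers comparing the two lists: the question asks for five headers to be removed, but the location block hides only four - there is no proxy_hide_header line for http.custom.headers. A sketch with the full set (the authorized/anonymous upstream names are assumed from the question, whose proxy_pass targets appear garbled in the original post):

        location / {
            # hide the upstream diagnostic headers for every branch
            proxy_hide_header http.context.path;
            proxy_hide_header jersey.response;
            proxy_hide_header http.custom.headers;
            proxy_hide_header http.request.path;
            proxy_hide_header http.status;

            if ($arg_id) {
                proxy_pass http://authorized;
                break;
            }
            proxy_pass http://anonymous;
        }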

  • Sudoers file: allow sudo on a specific file for an Active Directory group

    - by tubaguy50035
    I have Active Directory sign-in working on an Ubuntu 12.04 box. When a user signs in, I have a script that runs that needs sudo permission (since it modifies the Samba config file). How would I specify this in my sudoers file? I've tried:

        %DOMAIN\\AD+Programmers ALL=NOPASSWD: /usr/local/bin/createSambaShare.php

    I've found various resources on the internet stating that this is how it would be done, but I'm not sure I have the first part right. What are they using as the DOMAIN - the workgroup or the realm? I use Samba + winbind for Active Directory integration. Here's my smb.conf:

        [global]
        security = ads
        netbios name = hostname
        realm = COMPANYNAME.COM
        password server = passwordserver
        workgroup = COMPANYNAME
        idmap uid = 1000-10000
        idmap gid = 1000-10000
        winbind separator = +
        winbind enum users = no
        winbind enum groups = no
        winbind use default domain = yes
        template homedir = /home/%D/%U
        template shell = /bin/bash
        client use spnego = yes
        domain master = no

    EDIT: The users that should have access to run that script are all part of the Programmers group, which has an Active Directory Domain Services folder of Company.com/Staff/Security Groups (not sure if that matters or not).
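
    A hedged hint for readers with the same smb.conf: with winbind use default domain = yes, winbind strips the domain prefix, so the group is most likely visible to the system as plain programmers rather than DOMAIN+Programmers. The reliable approach is to check what the box itself reports and then match that string exactly:

        # how does this machine actually name the group?
        getent group | grep -i programmers

        # then, in /etc/sudoers (edited via visudo), match that exact name:
        %programmers ALL=NOPASSWD: /usr/local/bin/createSambaShare.php

    If getent instead shows COMPANYNAME+programmers, the sudoers entry needs that form, with the + written literally.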

  • Why do lighttpd and FastCGI keep sending me the *.scgi file instead of the website content?

    - by e-satis
    I have the following config:

        server.modules = (
            "mod_compress",
            "mod_access",
            "mod_alias",
            "mod_rewrite",
            "mod_redirect",
            "mod_secdownload",
            "mod_h264_streaming",
            "mod_flv_streaming",
            "mod_accesslog",
            "mod_auth",
            "mod_status",
            "mod_expire",
            "mod_fastcgi"
        )

        [...]

        fastcgi.server = ( ".php" =>
            (( "bin-path" => "/usr/bin/php-cgi",
               "socket" => "/var/tmp/lighttpd/php-fastcgi.socket" + var.PID,
               "max-procs" => 1,
               "kill-signal" => 9,
               "idle-timeout" => 10,
               "bin-environment" => (
                   "PHP_FCGI_CHILDREN" => "200",
                   "PHP_FCGI_MAX_REQUESTS" => "1000"
               ),
               "/pyapps/essai/blondes.fcgi" => (
                   "main" => (
                       "socket" => "/var/tmp/lighttpd/django-fastcgi.socket",
                   ),
               ),
               "bin-copy-environment" => ( "PATH", "SHELL", "USER" ),
               "broken-scriptfilename" => "enable"
            ))
        )

        [...]

        $HTTP["host"] =~ "(^|www\.)cam\.com(\:[0-9]*)?$" {
            server.document-root = "/home/cam/web/"
            accesslog.filename = "/home/cam/log/access.log"
            server.errorlog = "/home/cam/log/error.log"
            server.follow-symlink = "enable"
            # files to check for if .../ is requested
            server.indexfiles = ( "index.php", "index.html", "index.htm", "index.rb")
            url.rewrite = ( "^(/blondes/.*)$" => "/pyapps/essai/blondes.fcgi$1" )
        }

    I have the following dir tree:

        /home/tv/web/
        `-- pyapps
            `-- essai
                `-- __init__.py
                `-- blondes.fcgi
                `-- blondes.pid
                `-- django-fcgi.py
                `-- manage.py
                `-- manage.pyo
                `-- plop
                `-- settings.py
                `-- urls.py

    No errors when restarting lighttpd. Then I run:

        ./manage.py runfcgi method=prefork socket=/var/tmp/lighttpd/django-fastcgi.socket daemonize=false pidfile=blondes.pid

    No errors either. I then go to http://cam.com/blondes/. It offers me to download an empty file. I checked permissions, but everything is set to the same user and group, and they work for the PHP site. The file /var/tmp/lighttpd/django-fastcgi.socket exists. When I reload the page, I get no output in the error logs, nor from the manage.py runfcgi command. I probably missed something obvious, but what?
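
    A structural detail readers may spot in the config above: the "/pyapps/essai/blondes.fcgi" entry sits inside the options hash of the ".php" handler, so lighttpd never registers it as its own FastCGI mapping - which would explain the rewritten request being served as a plain file. A sketch of the same config with the Django entry promoted to a sibling of ".php" (check-local disabled because the URL does not correspond to a real file under the docroot):

        fastcgi.server = (
            ".php" => ((
                "bin-path" => "/usr/bin/php-cgi",
                "socket"   => "/var/tmp/lighttpd/php-fastcgi.socket" + var.PID,
                # ... other PHP options as before ...
            )),
            "/pyapps/essai/blondes.fcgi" => ((
                "socket"      => "/var/tmp/lighttpd/django-fastcgi.socket",
                "check-local" => "disable",
            ))
        )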

  • EC2 instance is blocking all outbound connections, how to diagnose/fix?

    - by Fraggle
    My EC2 instance is blocking all outbound connections:

        wget http://www.google.com   ==> hangs
        ping google.com              ==> hangs
        ssh user@anyserver           ==> hangs

    I ran sudo iptables -F to eliminate all rules, to no avail. The AWS Management Console shows that the security group for the instance has inbound rules allowing SSH and port 80; I can't find anything about outbound rules there. Rebooted the instance; no change. If anyone knows how to diagnose or fix this, please help. Adding info:

        [ec2-user@ip-10-112-62-73 ~]$ ifconfig
        eth0      Link encap:Ethernet  HWaddr 12:31:3D:06:31:BB
                  inet addr:10.112.62.73  Bcast:10.112.63.255  Mask:255.255.254.0
                  inet6 addr: fe80::1031:3dff:fe06:31bb/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:1933 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1764 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:164075 (160.2 KiB)  TX bytes:343256 (335.2 KiB)
                  Interrupt:9

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:8 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:672 (672.0 b)  TX bytes:672 (672.0 b)

        [ec2-user@ip-10-112-62-73 ~]$ ip route show
        10.112.62.0/23 dev eth0  proto kernel  scope link  src 10.112.62.73
        default via 10.112.62.1 dev eth0
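
    A triage sequence other readers can apply, separating DNS from routing from filtering:

        # 1. raw IP first - if this works while 'ping google.com' hangs,
        #    the problem is DNS resolution, not outbound connectivity
        ping -c 3 8.8.8.8

        # 2. confirm nothing is left in the other iptables tables
        #    (iptables -F only flushes the filter table)
        sudo iptables -t nat -L -n
        sudo iptables -t mangle -L -n

        # 3. check that /etc/resolv.conf points at a reachable resolver
        cat /etc/resolv.conf

    On the AWS side, classic EC2 security groups only filter inbound traffic (which is why the console shows no outbound rules), so a fully blocked outbound path usually points at DNS, the instance's own firewall, or - in a VPC - a network ACL with restrictive outbound rules.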

  • IP-dependent local port-forwarding on Linux

    - by chronos
    I have configured my server's sshd to listen on a non-standard port 42. However, at work I am behind a firewall/proxy which only allows outgoing connections to ports 21, 22, 80 and 443. Consequently, I cannot ssh to my server from work, which is bad. I do not want to return sshd to port 22. The idea is this: on my server, locally forward port 22 to port 42 if the source IP matches the external IP of my work's network. For clarity, let us assume that my server's IP is 169.1.1.1 (on eth1), and my work's external IP is 169.250.250.250. For all IPs different from 169.250.250.250, my server should respond with the expected 'connection refused', as it does for a non-listening port. I'm very new to iptables. I have briefly looked through the long iptables manual and these related/relevant questions: http://serverfault.com/questions/57872/iptables-question-forwarding-port-x-to-an-ssh-port-of-different-machine-on-the-n http://serverfault.com/questions/140622/how-can-i-port-forward-with-iptables However, those questions deal with more complicated several-host scenarios, and it is not clear to me which tables and chains I should use for local port forwarding, and whether I should have 2 rules (for "question" and "answer" packets) or only 1 rule for "question" packets. So far I have only enabled forwarding via sysctl. I will start testing solutions tomorrow, and will appreciate pointers or maybe case-specific examples for implementing my simple scenario. Is the draft solution below correct?

        iptables -A INPUT [-m state] [-i eth1] --source 169.250.250.250 -p tcp --destination 169.1.1.1:42 --dport 22 --state NEW,ESTABLISHED,RELATED -j ACCEPT

    Should I use the mangle table instead of filter? And/or the FORWARD chain instead of INPUT?
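
    For comparison, the conventional shape of this rule is a REDIRECT in the nat table's PREROUTING chain - the filter table's INPUT chain can only accept or drop, not change the destination port, and ip_forward/FORWARD are not involved because the packet never leaves the host. A sketch using the question's addresses:

        # for work's IP only, rewrite destination port 22 -> 42
        iptables -t nat -A PREROUTING -i eth1 -s 169.250.250.250 \
                 -d 169.1.1.1 -p tcp --dport 22 \
                 -j REDIRECT --to-ports 42

    Only the one rule is needed: connection tracking reverses the translation for reply packets automatically. Everyone else connecting to port 22 bypasses the rule and gets the normal 'connection refused' from the closed port.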

  • IP Blacklists and suspicious inbound and outbound traffic

    - by Pantelis Sopasakis
    I administer a web server, and recently we had our IP banned (!) by our host after they received a notification e-mail for abuse. In particular, our server is allegedly involved in spam attacks over HTTP. The content of the abuse report email we received was not very informative - for example, the IP addresses our server is supposed to have attacked are not included - so I started a Wireshark session checking for suspicious traffic over TCP/HTTP while trying to locate possible security holes on the system. (Let me note that the machine runs a Debian OS.) Here is an example of such a request:

        Source: 89.74.188.233
        Destination: 12.34.56.78 // my ip
        Protocol: HTTP
        Info: GET 'http://www.media.apniworld.com/image.php?type=hv' HTTP/1.0

    I manually blacklisted this host (as well as some other ones), blocking them with iptables, but I can't keep doing this manually all day long... I'm looking for an automated way to block such IPs based on: statistical analysis, pattern recognition, or other AI-based analysis (though I'm reluctant to trust such a solution, if one exists); public blacklists; using DNSBL. I actually found out that 89.74.188.233 is blacklisted. However, other strongly suspicious IPs, like 93.199.112.126 (i.e. http://www.pornstarnetwork.com/account/signin), unfortunately were not blacklisted! What I would like to do is automatically connect my firewall to DNSBL (or some other blacklist database) and block all traffic towards blacklisted IPs, or somehow have my local blacklist automatically updated.
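
    A common way to wire this up on Debian without AI in the loop is ipset plus a periodically refreshed feed: the set gives constant-time lookups, iptables consults it in both directions, and a cron job keeps it current. A sketch (the Spamhaus DROP list is one example feed; any CIDR-per-line source works):

        # one-time setup: a set for networks, hooked into both chains
        ipset create blacklist hash:net
        iptables -I INPUT  -m set --match-set blacklist src -j DROP
        iptables -I OUTPUT -m set --match-set blacklist dst -j DROP

        # cron job: refresh the set from the feed
        curl -s https://www.spamhaus.org/drop/drop.txt \
          | awk '/^[0-9]/ { print $1 }' \
          | while read net; do ipset add blacklist "$net" -exist; done

    Note that this addresses the inbound side; if the host itself is originating spam over HTTP, finding the compromised script (for example with mod_security, or by auditing writable web directories) matters more than any blacklist.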

  • What is the best way to create a failover cluster for my IIS website?

    - by ObligatoryMoniker
    Our eCommerce website www.tervis.com currently runs on two servers: a SQL server (SQL Server 2005 x86 on Windows Server 2003 Standard x86, with a single dual-core processor and 4 GB of memory) and an IIS server (Windows Server 2008 Web Edition x64, with dual quad-core hyper-threaded processors and 32 GB of memory). Tervis.com's revenue has steadily grown to the point where we need redundant servers deployed with a failover mechanism so that we do not have any downtime. Because the SQL server is so underpowered compared to the web server, my thought was to purchase: 2 x SQL Server 2008 R2 Web Edition x64 single-processor licenses, 2 x Windows Server 2008 R2 Web Edition licenses, 1 x new physical dual quad-core 32 GB server, and 1 x F5 load balancer. I need the Windows Server 2008 R2 Web Edition licenses so that I can run SQL and IIS on the same box for both of these servers. The thought is to run this as an active/passive failover cluster that could be upgraded to an active/active cluster if we purchased the additional SQL licensing. The F5 load balancer would serve as the device that monitors the two servers and, if the current active one stops responding, fails over to the other server. To be clear, this is not Windows clustering, but simply using a load balancer to fail over between two computers, so that you now have a cluster in the general sense. Is this really the best way to accomplish what I need? Is there some way to leverage the old Server 2003 SQL server to function as the device that funnels HTTP requests to the appropriate active server and then fails over if a problem occurs? Is there any third-party clustering software that might help me accomplish this in a simpler fashion?
