Search Results

Search found 15403 results on 617 pages for 'request querystring'.


  • nginx + Jetty - thousands of connections stuck in LAST_ACK

    - by virulence
    I have a FreeBSD machine with jails -- two in particular: one runs nginx, and another runs a Java program that accepts requests via Jetty (embedded mode). Jetty receives upwards of 500 requests/sec constantly, and lately I constantly have over 60,000 connections in the LAST_ACK state between nginx and Jetty.

    Distribution of all connections (includes some other services, particularly php-fpm):

        root@host:/root # netstat -an > conns.txt
        root@host:/root # cat conns.txt | awk '{print $6}' | sort | uniq -c | sort -n
             18 LISTEN
            112 CLOSING
            485 ESTABLISHED
            650 FIN_WAIT_2
           1425 FIN_WAIT_1
           3301 TIME_WAIT
          64215 LAST_ACK

    Distribution of nginx-to-Jetty connections:

        root@host:/root # cat conns.txt | grep '10.10.1.57' | awk '{print $6}' | sort | uniq -c | sort -n
              1
              3 CLOSE_WAIT
              3 LISTEN
             18 FIN_WAIT_2
            125 ESTABLISHED
          64193 LAST_ACK

    I'd prefer every request to fully close the connection. Client requests are about 10 minutes apart from each other, so connections must be closed. Some of the connections:

        tcp4  0  0  10.10.1.50.46809  10.10.1.57.9050  LAST_ACK
        tcp4  0  0  10.10.1.50.46805  10.10.1.57.9050  LAST_ACK
        tcp4  0  0  10.10.1.50.46797  10.10.1.57.9050  LAST_ACK
        tcp4  0  0  10.10.1.50.46794  10.10.1.57.9050  LAST_ACK
        tcp4  0  0  10.10.1.50.46790  10.10.1.57.9050  LAST_ACK
        tcp4  0  0  10.10.1.50.46789  10.10.1.57.9050  LAST_ACK
        tcp4  0  0  10.10.1.50.46771  10.10.1.57.9050  LAST_ACK
        etc.

    What I've tried: on Jetty's end I've set maxIdleTime to 2000 -- before this all connections were in ESTABLISHED, but they are now LAST_ACK. I've also set Connection: close on Jetty's end (i.e. response.setHeader(HttpHeaders.CONNECTION, HttpHeaderValues.CLOSE);). Jetty never reports a lot of open connections -- always very few. PF/IPFW is not currently being used, and nginx has reset_timedout_connection on.

    I cannot figure out how to get nginx or Jetty to forcibly close the connection. Is this simply something that needs to be fixed in Jetty so that it fully closes the socket after the request finishes? Thanks a lot in advance.

    EDIT: I forgot my nginx config for the proxy setup:

        proxy_pass http://10.10.1.57:9050;
        proxy_set_header HTTP_X_GEOIP $http_x_geoip;
        proxy_set_header GEOIP_COUNTRY_CODE $geoip_country_code;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header Connection "";
        proxy_http_version 1.1;

    EDIT2: Forcing Jetty to close the connection via request.getConnection().getEndPoint().close() does nothing -- it's obvious the connection IS being closed (as it's in LAST_ACK), but why isn't it getting past this state? Is nginx keeping the connection open to the backend for some reason?
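
    One configuration detail worth testing (an assumption, not a confirmed fix): proxy_http_version 1.1 combined with the cleared Connection header is exactly the pair nginx uses to keep upstream connections alive. The opposite arrangement -- one request per upstream connection -- would look like this sketch:

        location / {
            proxy_pass http://10.10.1.57:9050;
            proxy_http_version 1.0;             # one request per upstream connection
            proxy_set_header Connection close;  # ask the backend to close explicitly
        }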

    Read the article

  • Ubuntu getting wrong hostname from DHCP

    - by sam
    When provisioning new Ubuntu Precise (12.04) servers, the hostname they're getting seems to be generated from the DNS search path, not a reverse lookup on the IP. Take the following configuration.

    BIND is configured with the hostname and reverse name. Normal zone:

        $TTL 600
        $ORIGIN srv.local.net.
        @              IN  SOA  ns0.local.net. hostmaster.local.net. (
                               2014082101 10800 3600 604800 600 )
        @              IN  NS   ns0.local.net.
        @              IN  MX   5 mail.local.net.
        my-new-server  IN  A    10.32.2.30

    And reverse:

        @     IN  SOA  ns0.local.net. hostmaster.local.net. (
                      2014082101 10800 3600 604800 600 )
        @     IN  NS   ns0.local.net.
        $ORIGIN 32.10.in-addr.arpa.
        30.2  IN  PTR  my-new-server.srv.local.net.

    Then dhcpd is configured to hand out static leases based on MAC addresses, like so:

        subnet 10.32.2.0 netmask 255.255.254.0 {
            option subnet-mask 255.255.254.0;
            option routers 10.32.2.1;
            option domain-name-servers 10.32.2.1;
            option domain-name "util.of1.local.net of1.local.net srv.local.net";
            site-option-space "pxelinux";
            option pxelinux.magic f1:00:74:7e;
            if exists dhcp-parameter-request-list {
                option dhcp-parameter-request-list = concat(option dhcp-parameter-request-list,d0,d1,d2,d3);
            }
            group {
                option pxelinux.configfile "pxelinux.cfg/pxeboot";
                host my-new-server {
                    fixed-address my-new-server.srv.local.net;
                    hardware ethernet aa:aa:aa:bb:bb:bb;
                }
            }
        }

    So the hostname should be my-new-server.srv.local.net; however, when building an Ubuntu 12.04 node, the hostname ends up as my-new-server.util.of1.local.net. When building Lucid (10.04) hosts the hostname is correct; it's only on Precise/12.04 nodes that we have the problem. Doing a normal and reverse lookup on the host and IP returns the correct result:

        Sams-MacBook-Pro:~ sam$ host my-new-server
        my-new-server.srv.local.net has address 10.32.2.30
        Sams-MacBook-Pro:~ sam$ host my-new-server.srv.local.net
        my-new-server.srv.local.net has address 10.32.2.30
        Sams-MacBook-Pro:~ sam$ host 10.32.2.30
        30.2.32.10.in-addr.arpa domain name pointer my-new-server.srv.local.net.

    The contents of the hosts file are incorrect too:

        127.0.0.1  localhost
        127.0.1.1  my-new-server.util.of1.local.net of1.local.net srv.local.net my-new-server

    So it looks like when it creates the hosts file, it puts the entire contents of the DNS search path into the local address line: the FQDN according to the server becomes the short hostname as defined, plus the first domain in the search path. Is there a way to get around this behaviour, or fix it so the hostname is picked up correctly?
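
    A hedged guess at the root cause, judging from the dhcpd.conf above: option domain-name is defined (RFC 2132) to carry a single domain, while a search list has its own option, domain-search (RFC 3397), and Precise's dhclient may simply be stricter about this than Lucid's. A sketch of the split, assuming srv.local.net is the hosts' real domain:

        option domain-name "srv.local.net";
        option domain-search "util.of1.local.net", "of1.local.net", "srv.local.net";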

    Read the article

  • Windows Advanced Firewall certificate based IPSEC

    - by Tim Brigham
    I'm working on migrating from IPSEC settings stored under 'IP Security Policies on Active Directory' to 'Windows Firewall with Advanced Security' for my 2008+ boxes. I have successfully set this up using Kerberos authentication; however, the Openswan implementation on my Linux boxes uses certificates. Whenever I change the authentication method to computer certificate (using RSA and my root CA), the connection fails. I've made this change both in a connection request policy and in the IPSEC settings on the root Windows Firewall with Advanced Security node. The Windows event log shows the authentication request taking place but failing to negotiate a mode. What am I missing here?

    Read the article

  • Is realtime validation of username good or bad?

    - by iamserious
    I have a simple form for the user to sign up to my site, with email, username and password fields. We are now trying to implement AJAX validation so the user doesn't have to post the form to find out if the username is already taken. I can do this either on the keyup event or on the blur event. My question is: which of these is really the better approach?

    Keyup: From the user's POV it would be good if validation happens as they type (on keyup) -- of course, I wait half a second to see if the user stops typing before firing off the request, and the user can make adjustments immediately. But this means I am sending far more requests than if I validated on the blur event.

    Blur: The number of requests will be much lower when validation is done on blur. But this means the user has to actually leave the textbox, look at the validation result, and if necessary go back to make changes, repeating the whole process until they get it right.

    I had a quick look at Google, Tumblr and Twitter, and none of them actually validate usernames on keyup (heck, Tumblr waits for the form to be posted), but I can swear I have seen keyup validations in a lot of places too.

    So, coming back to the question: will keyup validations be too many for the server -- an unnecessary overhead? Or is it worth taking these hits to give the user a better experience?

    PS: all my regex validations etc. are already done in JavaScript, and only when a username passes all those other criteria does it send a request to the server to check if the username already exists. (And the server is doing a select count(1) from user where username = '' -- nothing substantial, but still enough to occupy some resource.)

    PPS: I'm on the ASP.NET / MS SQL stack, if that matters.
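
    For reference, the half-second wait described above is a standard debounce. A minimal browser-side sketch (the /check-username endpoint and the res.taken response field are hypothetical):

        var timer;
        document.getElementById('username').addEventListener('keyup', function (e) {
            clearTimeout(timer);                  // reset the wait on every keystroke
            var name = e.target.value;
            timer = setTimeout(function () {      // fire only after 500 ms of silence
                fetch('/check-username?u=' + encodeURIComponent(name))
                    .then(function (r) { return r.json(); })
                    .then(function (res) { /* res.taken -> show availability */ });
            }, 500);
        });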

    Read the article

  • Bridging two sockets

    - by Itehnological
    I wondered if it is possible to bridge two incoming TCP sockets. For example:

        Client A -----> Server <----- Client B

    The server sends its magic to both clients, and then they connect to each other, bypassing the server:

        Client A ----------><---------- Client B

    UPDATE: The idea is that when the clients can't bind to ports and listen, they can still create a connection to each other with the help of the server. For example, Client A and Client B each have a TCP socket open to the server. User A decides to chat with User B and creates a new TCP connection to the server with a request to bridge it with User B. The server sends that request to Client B, which also opens a new TCP connection to the server for that chat line. Now that the server has both chat connections, from A and from B, it bridges them so they can work without the server -- as a result the server won't have to process all the messages and files the two users share. That's the idea.
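
    What this describes is essentially TCP hole punching via simultaneous open. A hypothetical sketch of one peer's side in Java, assuming the peer's public address and port were delivered by the server, and assuming cooperative NATs (many aren't, which is why real systems keep relaying through the server as a fallback):

        import java.net.InetSocketAddress;
        import java.net.Socket;

        Socket s = new Socket();
        s.setReuseAddress(true);
        // Reuse the local port the server connection used, so the NAT mapping
        // the server observed stays valid for the peer's incoming SYN.
        s.bind(new InetSocketAddress(5555));                        // assumed local port
        // Both peers call connect() at roughly the same time; if the SYNs
        // cross, TCP "simultaneous open" completes with no listener at all.
        s.connect(new InetSocketAddress("peer.example.net", 5555), 5000);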

    Read the article

  • Hyperic HQ metrics not working

    - by Robin Weston
    I am having a problem with Hyperic HQ. Several metrics, some in IIS 6.x (Request Execution Time, Request Wait Time) and some in .NET 2.0 (Bytes in all Heaps, Exceptions Thrown per Minute), always show 0. If I view perfmon on the server itself, I can see that these counters have values greater than zero. Some metrics work fine, such as Total Get Requests per Minute and the other IIS defaults. I have looked in the server logs but nothing obvious shows up. Please advise; I'm happy to provide more information if required.
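
    Not a fix, but a way to narrow it down: compare what the raw counters report from the command line with what Hyperic shows. A sketch, assuming the standard counter names on an English-language server:

        :: Sample the suspect counters five times each:
        typeperf "\.NET CLR Memory(_Global_)\# Bytes in all Heaps" -sc 5
        typeperf "\.NET CLR Exceptions(_Global_)\# of Exceps Thrown / sec" -sc 5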

    Read the article

  • HTTP Caching Server that supports POST

    - by Jeroen
    I am hosting a REST service which sends appropriate cache-control headers. I use Varnish as a caching server in front of my webserver. However, a limitation of Varnish is that it doesn't support caching HTTP POST and HTTP PUT. Is there any alternative caching server that can cache these requests? I understand that caching POST is a bit tricky because you cannot cache based on the URL alone as a key, like for GET; it needs to actually inspect the request body. In the case of multipart/form-data requests, there should probably be a limit on the size of the request body for it to be cached (so that big file uploads, etc., won't be cached). Nevertheless, I really want to be able to cache short HTTP POSTs, or at least the application/x-www-form-urlencoded ones.
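
    One hedged candidate (an assumption that it fits this workload, not an endorsement): nginx's proxy cache can be told to cache POST responses and to key on the request body, which covers exactly the short application/x-www-form-urlencoded case. A minimal sketch:

        proxy_cache_path /var/cache/nginx keys_zone=api:10m;

        location /api/ {
            proxy_pass http://backend;
            proxy_cache api;
            proxy_cache_methods POST;                     # GET and HEAD are implied
            proxy_cache_key "$request_uri|$request_body"; # body becomes part of the key
            proxy_cache_valid 200 10s;
            client_max_body_size 16k;                     # keep big uploads out entirely
        }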

    Read the article

  • Which built-in account is used for Anonymous Authentication in IIS 7?

    - by smwikipedia
    We know that Microsoft IIS 7.0 offers a slew of authentication methods, such as Anonymous Authentication, Forms Authentication, Digest Authentication, etc. I read the following in Professional IIS 7, published by Wrox:

        When we use Anonymous Authentication, the end-user does not supply credentials, effectively making an anonymous request. IIS 7.0 impersonates a fixed user account when attempting to process the request (for example, to read the file off the hard disk).

    So, what is the fixed user account impersonated by IIS? Where can I see it? If I don't know what this account is, how can I assign proper permissions for the clients who are authenticated as anonymous users? Thanks.
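
    For what it's worth -- stating the defaults as I understand them, worth verifying on your own box: IIS 7.0's Anonymous Authentication impersonates the built-in IUSR account by default (replacing IIS 6's machine-specific IUSR_MachineName), so that is the identity to grant file permissions to. The configured identity can be checked with appcmd:

        %windir%\system32\inetsrv\appcmd.exe list config /section:anonymousAuthentication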

    Read the article

  • Tomcat access logs - are failed requests included?

    - by Maxim Eliseev
    We have a RESTful web service (Java, hosted in Tomcat on Ubuntu on Amazon EC2). From time to time it fails (not every week). When it fails, Java CPU consumption goes to 100% and it takes all available memory. It does not recover by itself; I have to restart the server. There is nothing suspicious in the Tomcat access logs. I guess one of our users could have submitted a very "heavy" request which brought the server down. Is it possible this request is not in the Tomcat logs because it never finished?
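
    One relevant detail: Tomcat's standard AccessLogValve writes its entry only when a request completes, so a request that never finishes never appears in the access log -- consistent with the guess above. A hedged way to catch the culprit in the act is a thread dump taken while the CPU is pegged:

        # Capture what Tomcat's threads are doing during the spike:
        jstack -l <tomcat-pid> > /tmp/tomcat-threads.txt
        top -H -p <tomcat-pid>    # hottest thread ids map to "nid" values in the dump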

    Read the article

  • Why is domU faster than dom0 on IO?

    - by Paco
    I have installed Debian 7 on a physical machine. This is the configuration of the machine:

    - 3 hard drives using RAID 5; strip element size: 1M; read policy: Adaptive Read Ahead; write policy: Write Through
    - /boot 200 MB ext2
    - / 15 GB ext3
    - swap 10 GB
    - LVM rest (~500 GB)

    I installed PostgreSQL and created a big database (over 1 GB). I have an SQL request that takes a lot of time to run (a SELECT statement, so it only reads data from the database). This request takes approximately 5.5 seconds. Then I installed Xen and created a domU with another Debian distro. On this OS I also installed PostgreSQL, with the same database. The same SQL request takes only 2.5 seconds to run.

    I checked the kernel on both dom0 and domU: uname -a returns "Linux debian 3.2.0-4-amd64 #1 SMP Debian 3.2.41-2+deb7u2 x86_64 GNU/Linux" on both systems. I checked the kernel parameters, which are approximately the same; for those that are relevant, I changed their values to make them match on both systems using sysctl. I saw no changes (the requests still take the same amount of time). After this, I checked the file systems. I used ext3 on domU. Still no changes. I installed hdparm and ran hdparm -Tt on all my partitions on both systems, and I get similar results. Now I am stuck: I don't know what is different, and what could be the cause of such a big difference.

    Additional info: Debian runs on a Dell PowerEdge 2950 server; postgresql 9.1.9 (both dom0 and domU); xen-linux-system 3.2.0; xen-hypervisor 4.1. Thanks.

    EDIT: As Krzysztof Ksiezyk suggested, it might be due to some file caching system. I ran the dd command to test both the read and write speed. Here is domU:

        root@test1:~# dd if=/dev/zero of=/root/dd count=5MB bs=1MB
        ^C2020+0 records in
        2020+0 records out
        2020000000 bytes (2.0 GB) copied, 18.8289 s, 107 MB/s
        root@test1:~# dd if=/root/dd of=/dev/null count=5MB bs=1MB
        2020+0 records in
        2020+0 records out
        2020000000 bytes (2.0 GB) copied, 15.0549 s, 134 MB/s

    And here is dom0:

        root@debian:~# dd if=/dev/zero of=/root/dd count=5MB bs=1MB
        ^C1693+0 records in
        1693+0 records out
        1693000000 bytes (1.7 GB) copied, 8.87281 s, 191 MB/s
        root@debian:~# dd if=/root/dd of=/dev/null count=5MB bs=1MB
        1693+0 records in
        1693+0 records out
        1693000000 bytes (1.7 GB) copied, 0.501509 s, 3.4 GB/s

    What can be the cause of this caching behaviour? And how can we "fix" it? Can we apply it to dom0?

    EDIT 2: I switched my virtual disk type. To do so I followed this article. I did:

        dd if=/dev/vg0/test1-disk of=/mnt/test1-disk.img bs=16M

    Then in /etc/xen/test1.cfg, I changed the disk parameter to use file: instead of phy:. This should have removed the file caching, but I still get the same numbers (domU being much faster for Postgres).
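
    A hedged check that would make the dd comparison fairer: both runs above go through the page cache (the 3.4 GB/s "read" on dom0 is clearly RAM, not disk). Forcing synced writes and dropping the cache before reading takes caching out of the picture:

        # Write test that doesn't report success until data is flushed to disk:
        dd if=/dev/zero of=/root/dd bs=1M count=2000 conv=fdatasync
        # Read test with the page cache dropped first (as root):
        echo 3 > /proc/sys/vm/drop_caches
        dd if=/root/dd of=/dev/null bs=1M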

    Read the article

  • Will new Twitter API 1.1 allow hashtag/tweet/trend queries without any authentication, i.e. for a client that does not use an user's account at all?

    - by P5music
    I see that even when not logged in to Twitter with an account, if I google hashtags or Twitter accounts, Twitter shows them. I think it should also be possible to get those tweets programmatically, but I do not know it for sure, so I ask for confirmation here, especially regarding the future with the new Twitter API restrictions. I mean: will it be possible to get tweets from hashtags or accounts without logging in to a user account -- not wanting to access the user's settings, subscriptions, etc. (because I do not need them), and thus not having to respect any per-token limit? I found these API 1.1 FAQs; do I have to be concerned?

        Will an application have to request user authorization just to make public API calls?
        When API v1.1 is released, user authorization (and access tokens) are required for all API 1.1 requests. In the weeks following release, some methods will require only application-based authentication for certain "userless" contexts.

        Will the Search API require authentication?
        The Search API is now part of the official REST API in version 1.1. In addition to serving results in a format consistent with other Tweet resources, usage will also require authentication.
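
    For context on "application-based authentication": in API 1.1 this became the application-only OAuth 2 bearer-token flow, which allows userless requests (including search) without any end-user login, subject to per-app rate limits. A sketch of the token exchange, with placeholder credentials:

        # Exchange the app's consumer key/secret for a bearer token:
        curl -u "$CONSUMER_KEY:$CONSUMER_SECRET" \
             -d grant_type=client_credentials \
             https://api.twitter.com/oauth2/token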

    Read the article

  • Why is rsync.exe [cwRsync] trying to open a port when in client mode?

    - by hemancuso
    I'm trying to use a Cygwin-compiled version of rsync [the cwRsync package] on Windows, and in seemingly every configuration I test, Windows Firewall prompts the user to allow inbound traffic. If you deny this request, everything works fine -- as expected. I'm doing a vanilla push (rsync.exe localpath user@remotepath:/absolutepath) and it works just fine. I've also attempted this command having removed ssh from the path, and using rsync on local paths -- still a firewall prompt. Why is this listen() happening, and is there a way I can force the client not to attempt to listen, without recompiling and maintaining a patch?

    Read the article

  • nginx automatic failover load balancing

    - by robinmag
    Hi, I'm using nginx and NginxHttpUpstreamModule for load balancing. My config is very simple:

        upstream lb {
            server 127.0.0.1:8081;
            server 127.0.0.1:8082;
        }

        server {
            listen 89;
            server_name localhost;

            location / {
                proxy_pass http://lb;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    But with this config, when one of the two backend servers is down, nginx still routes requests to it, and that results in timeouts half of the time :( Is there any solution to make nginx automatically route requests to another server when it detects a downed server? Thank you.
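
    For reference, a sketch of the knobs that usually cover this -- all standard upstream/proxy directives, with timing values that are assumptions to tune:

        upstream lb {
            server 127.0.0.1:8081 max_fails=2 fail_timeout=10s;
            server 127.0.0.1:8082 max_fails=2 fail_timeout=10s;
        }

        # Inside the location block -- retry the other server instead of
        # surfacing the error, and fail fast on a dead backend:
        proxy_next_upstream error timeout http_502;
        proxy_connect_timeout 2s;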

    Read the article

  • TCP > COM1 for receiving messages and displaying on POS display pole

    - by JakeTheSnake
    I currently have a Java applet running on my web page that communicates with a display pole via COM1. However, since the Java update I can no longer run self-signed Java applets, and I figure it would be easier to send an AJAX request back to the server and have the server send a response to a TCP port on the computer... the computer would then need a TCP-to-COM virtual adapter. How do I install a virtual adapter to go from a TCP port to COM1? I've looked into com0com, and that is just confusing as hell to me -- I don't see how to connect any ports to COM1. I've tried tcp2com, but it doesn't seem to install the service in Windows 7 x64. I've tried com2tcp, and the interface seems like it WOULD work (I haven't tested), but I don't want an app running on the desktop... it needs to be a service that runs in the background. So to summarize how it would work:

    1. Web page on comp1 sends an AJAX request to the server
    2. Server sends a text response to comp1 on port 999
    3. comp1 has a virtual COM port listening on port 999, which sends the data to COM1
    4. Pole displays the data
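
    If the third-party virtual COM drivers keep failing, a small script can stand in for one. A hypothetical sketch in Python (assumes the pyserial package, and that port 999 and COM1's baud rate match the pole); it could be wrapped as a background Windows service with a tool such as NSSM:

        import socket
        import serial  # pyserial

        pole = serial.Serial('COM1', 9600)               # match the pole's settings
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(('127.0.0.1', 999))
        srv.listen(1)
        while True:
            conn, _ = srv.accept()                       # one message per connection
            data = conn.recv(4096)
            if data:
                pole.write(data)                         # forward the text to COM1
            conn.close()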

    Read the article

  • How to configure Apache for PHP on Windows 7?

    - by salim cobra
    I have installed Apache 2.0 (httpd-2.0.64-win32-x86-no_ssl) and it works; then I installed PHP 5.3 and pointed it to the Apache configuration folder.

    Failing scenario:

    1. Create a simple test.php and put it under C:\Apache\Apache2\htdocs
    2. Call "http://localhost:8080/test.php" -- "Bad Request. Your browser sent a request that this server could not understand."

    Proposed solution from a NetBeans blog (also failed): add these two lines to httpd.conf:

        AddType Application/x-httpd-php .php
        LoadModule php5_module "c:/php/sapi/php5apache2_2.dll"

    This doesn't work because there is no php5apache2_2.dll under my PHP installation folder -- I only have DLLs such as php5ts.dll, ssleay32.dll, etc. Does anyone have a suggestion for getting the PHP script to run successfully?
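
    A hedged pointer: php5apache2_2.dll is the module for Apache 2.2, and as far as I recall the official PHP 5.3 Windows builds ship only that module (the Apache 2.0 module, php5apache2.dll, stopped being bundled), so the cleanest path is likely upgrading Apache from 2.0 to 2.2 and using a thread-safe PHP build. The httpd.conf lines would then look like this, assuming PHP lives in C:\php:

        LoadModule php5_module "C:/php/php5apache2_2.dll"
        AddType application/x-httpd-php .php
        PHPIniDir "C:/php"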

    Read the article

  • xinetd vs iptables for port forwarding performance

    - by jamie.mccrindle
    I have a requirement to run a Java-based web server on port 80. The options are:

    - Web proxy (Apache, nginx, etc.)
    - xinetd
    - iptables
    - setuid

    The baseline would be running the app using setuid, but I'd prefer not to for security reasons. Apache is too slow, and nginx doesn't support keep-alives to the backend, so new connections are made for every proxied request. xinetd is easy to set up but creates a new process for every request, which I've seen cause problems in a high-performance environment. The last option is port forwarding with iptables (see the sketch below), but I have no experience of how fast it is. Of course, the ideal solution would be to do this on a dedicated hardware firewall / load balancer, but that's not an option at present.
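
    For reference, the iptables variant is a pair of lines (a sketch, assuming the JVM listens on unprivileged port 8080 on the same host):

        # Redirect inbound port 80 to the port the JVM can bind without root:
        iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
        # Optional: make connections from the host itself work too:
        iptables -t nat -A OUTPUT -p tcp -o lo --dport 80 -j REDIRECT --to-ports 8080

    Since only the first packet of each connection traverses the NAT table (connection tracking handles the rest), the steady-state overhead is negligible compared with a userspace proxy.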

    Read the article

  • Apache Web Server character encoding

    - by OBY
    I've recently transferred my webapp from my localhost (LH) to a VPS, and have had Hebrew character-encoding problems since. Whenever I send a request with a Hebrew character, it results in "?????" saved to the DB. My LH config was Tomcat 6, MySQL and CentOS 6.2, open to the web. In the VPS environment I'm behind an Apache web server, and the rest is much the same (though I haven't done anything to its installation). Please note I have had this problem before, on my LH, when the request was sent from IE/Chrome (not FF!). The solution then was to apply a filter on the context and change the character encoding to UTF-8. My webapp's content encoding is UTF-8, the MySQL server is set to utf8 (using charset utf8;), and my CentOS locale is set to iw_IL.UTF8 using export LANG=iw_IL.UTF8. When I run locale, the bash output seems to be set correctly. Any suggestions?
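
    A hedged checklist for this symptom -- "?????" usually means the bytes were mangled before MySQL ever stored them, so UTF-8 has to be declared at every hop, not just in the DB and locale. Sketches, with the obvious placeholder values:

        # Tomcat server.xml -- decode GET parameters as UTF-8:
        <Connector port="8080" URIEncoding="UTF-8" ... />

        # JDBC URL -- keep the driver from re-encoding on the way in:
        jdbc:mysql://localhost/mydb?useUnicode=true&characterEncoding=UTF-8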

    Read the article

  • Capturing BizTalk 2004 SQLAdapter failures

    - by DanBedassa
    I was recently working on a BizTalk 2004 project where I encountered an issue with capturing exceptions (inside my orchestration) coming from an external source -- a database server being down, a non-existent stored procedure, and so on. I thought I might write this up in case it helps someone. To reproduce the issue, I just renamed the database to something different.

    The orchestration was failing at the point where I make a SQL request via a Response-Request port. The exception handlers were bypassed, but I could see a warning in the event log saying: "The adapter failed to transmit message going to send port."

    After scratching my head for a while (as a newbie to BTS 2004) trying to find a way to catch exceptions from the SQLAdapter in an orchestration, here is the solution I had:

    - Put the Send and Receive shapes inside a Scope shape
    - Set the Scope's transaction type to "Long Running"
    - Add a Catch block expecting type "System.Exception"
    - Set the "Delivery Notification" of the associated port to "Transmitted"
    - Change the "Retry Count" of the associated port to 0 (this makes sure BizTalk raises the exception, instead of a warning, so you can capture it)
    - Now capture and do whatever you need with the exception inside the Catch block

    Read the article

  • Log incoming requests on Ubuntu (ports 80, 443)

    - by Maxim Eliseev
    We have Tomcat running on an Ubuntu server. It runs a web service, open to the internet. Sometimes it has a sudden spike of traffic and goes down, with nothing unusual in the Tomcat access logs. I guess some of the requests are so "heavy" that they never finish, and hence are never recorded in the Tomcat access logs. Is there a way to configure Ubuntu to log incoming requests with the following properties?

    - Date, time, URL (with query-string params), IP address (of the client)
    - One line per request
    - Each request logged before it is executed
    - Only incoming requests to ports 80 and 443 logged
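
    Two hedged observations: a firewall rule can log the connection-level fields (time, client IP, port) before Tomcat ever sees the request, but the URL and query string are application-layer data it cannot see -- for those you need a packet capture (plain HTTP only; 443 is encrypted on the wire) or a logging proxy in front. Sketches of both:

        # Connection-level logging to syslog/kern.log, one line per new connection:
        iptables -A INPUT -p tcp -m multiport --dports 80,443 \
                 -m state --state NEW -j LOG --log-prefix "WEB-IN: "

        # Full request lines including query strings, port 80 only:
        tcpdump -i eth0 -A -s 0 'tcp dst port 80'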

    Read the article

  • Does Apache 2.2 (Windows) have any default bandwidth limit?

    - by igino manfre'
    I'm running Apache on a server in the cloud (Windows Server 2008 R2 on VMware, 1 Gbps of bandwidth, http://95.110.164.61). I'm streaming many live DVB MPEG transport streams (precompressed, played in a loop, not Flash), generated by VLC on ports 640xx and then reverse-proxied by Apache on port 80. The server's firewall is open for VLC and Apache on all ports. Above 1.5 Mbps, playback is affected by continuous stop-and-go. Please note that if you request a stream generated by VLC directly at http://95.110.164.61:64087/mpg2_6.4, you see a correct stream, while if you request http://95.110.164.61/mpg2_6.4 you do not. I know that the Flash streaming server uses Apache to stream on port 80 (and it works). I'm not an expert with Apache -- can anyone tell me if any "special" module is required to increase the bandwidth?

    Read the article

  • IIS: redirect everything to another URL, except for one Directory

    - by DrStalker
    I have an IIS server (IIS 6, Windows 2003) that hosts the site http://www.foo.com. I want any request to http://foo.com (no matter what path/filename is used) to redirect to http://www.bar.org/AwesomePage.html, UNLESS the request is for http://www.foo.com/specialdir, in which case the HTML files in the local directory specialdir should be used.

    The problem I have is that once the redirect is set, it also affects /specialdir -- even if I right-click on that directory and select "content should come from ... local directory", the change does not take effect, and the directory still shows as redirecting to http://www.bar.org/AwesomePage.html. The same thing happens if I try to set individual files to load from the local system instead of redirecting: IIS gives no error, but the change does not take effect and the files still show as being redirected. How can I set specialdir to override the redirection to the new URL?

    Read the article

  • Ping myself, works with ipv6 not ipv4 in Windows 7

    - by user68546
    Hi! I've tried to solve the following problem with no luck, and I need some professional help. The following is possible:

    - Ping all computers (that I tried) in the domain without problems
    - Ping myself with localhost, which uses ::1
    - Ping myself with my given IPv6 address
    - Internet access

    The following is not possible:

    - No one can ping me (request timeout) by computer name/IPv4/IPv6
    - I cannot ping myself with my given IPv4 address or 127.0.0.1 (request timeout)

    I've tried enabling/disabling TCP/IPv4 -- same issue. I turned off the Windows firewall and added an inbound rule to allow ICMP (just in case). Same same... Does anyone have any idea what the issue could be? Any help would be most appreciated!
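
    Since the firewall has already been toggled, it may still be worth adding the ICMPv4 echo rule with the canonical syntax from an elevated prompt (type 8 is echo request) -- though note this would not explain the 127.0.0.1 failure, which the firewall normally never filters:

        netsh advfirewall firewall add rule name="Allow ICMPv4 echo" ^
            protocol=icmpv4:8,any dir=in action=allow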

    Read the article

  • ArchBeat Link-o-Rama for December 12, 2012

    - by Bob Rhubart
    “Cloud Integration in Minutes” – True or False? | Bruce Tierney
    The answer is "True, but..." according to Bruce Tierney. "Connecting on-premise and cloud applications 'in minutes' is true... provided you only consider the connectivity subset of integration and have a small number of cloud integration touch points." Get the rest of the story in Bruce's detailed post.

    Tech World Discovers New Species: The Cloud Architect | Wired Enterprise | Wired.com
    This Wired article by Cade Metz boils down to one essential conclusion: cloud computing is a significant departure from "data center designs of the past," and the demand for the specialized skills of the cloud architect will only increase. But you already knew that, right?

    Oracle B2B - Synchronous Request Reply | A-Team - SOA
    "Beginning with Oracle SOA Suite PS5 (11.1.1.6), B2B supports synchronous request reply over http using the b2b/syncreceiver servlet," says C. D. Wright of the Fusion Middleware A-Team. His post includes a demo and everything you need to run it.

    Thought for the Day
    "Don't worry about what anybody else is going to do… The best way to predict the future is to invent it." -- Alan Kay
    Source: SoftwareQuotes.com

    Read the article

  • Poor backlink profile - search rankings not updated for 2+ months

    - by fistameeny
    I am carrying out some work on a website that is a PR2 with a few good-quality, relevant backlinks (PR4-6). It has a presence on Twitter that is updated regularly, a Google Places listing, and listings in some decent directories (Qype etc.). The site was rebuilt in Drupal 7 two months ago, with all the basics done: URL rewriting, an XML sitemap submitted to Google, and most importantly, good-quality, structured content. I've noticed that Google is still showing "old" URLs from the previous version of the site that was ditched eight weeks ago. I think the site may be penalised under the Penguin update, as a previous SEO company created many low-quality links from link farms/directories. My question is: what is the correct way to deal with this? Bing Webmaster Tools can "disavow" links, and I guess I can attempt to contact the link farms to have the links removed. I've already submitted a request to Google asking to have the penalty removed, as we're trying to tidy up a bad history. We submit updated sitemaps to Google and Bing daily, and have built some further decent-quality, relevant links. Is there anything further I can do?

    Read the article

  • Proxy HTTPS requests to a HTTP backend with NGINX.

    - by Mike
    I have nginx configured to be my externally visible webserver, which talks to a backend over HTTP. The scenario I want to achieve is:

    1. Client makes an HTTPS request to nginx
    2. nginx proxies the request over HTTP to the backend
    3. nginx receives the response from the backend over HTTP
    4. nginx passes this back to the client over HTTPS

    My current config (where backend is configured correctly) is:

        server {
            listen 80;
            server_name localhost;

            location ~ .* {
                proxy_pass http://backend;
                proxy_redirect http://backend https://$host;
                proxy_set_header Host $host;
            }
        }

    My problem is that the response to the client (step 4) is sent over HTTP, not HTTPS. Any ideas?
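
    A hedged observation about the config above: this server block listens on plain port 80, so no leg of the exchange is actually HTTPS. For steps 1 and 4 to be TLS, nginx itself has to terminate it -- a sketch, with placeholder certificate paths:

        server {
            listen 443 ssl;
            server_name localhost;
            ssl_certificate     /etc/nginx/ssl/server.crt;   # placeholders
            ssl_certificate_key /etc/nginx/ssl/server.key;

            location / {
                proxy_pass http://backend;                   # backend stays plain HTTP
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-Proto https;    # let the app know it was TLS
            }
        }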

    Read the article
