Search Results

Search found 14226 results on 570 pages for 'feature requests'.


  • Macvlan based interface pings from host but not from namespace

    - by jtlebi
    My setup:

      - Private network vboxnet1, 10.0.7.0/24
      - 1 host, Ubuntu desktop
      - 1 VM, Ubuntu server (VirtualBox)

    Addressing layout:

      - HOST: 10.0.7.1
      - VM: 10.0.7.101
      - VM MAC NAMESPACE: 10.0.7.102

    On the VM, I ran the following commands:

        ip netns add mac                         # create a new namespace
        ip link add link eth0 mac0 type macvlan  # create a new macvlan interface
        ip link set mac0 netns mac

    In the mac namespace, inside the VM:

        ip link set lo up
        ip link set mac0 up
        ip addr add 10.0.7.102/24 dev mac0

    So we basically end up with this (like Inception?):

        +------------------------+
        | Host: 10.0.7.1         |
        |                        |
        | +--------------------+ |
        | | VM: 10.0.7.101     | |
        | |                    | |
        | | +----------------+ | |
        | | | NS: 10.0.7.102 | | |
        | | +----------------+ | |
        | +--------------------+ |
        +------------------------+

    What works:

      - ping between Host and VM
      - ping between NS and NS
      - dhclient from NS

    What does not work:

      - ping between NS and VM
      - ping between NS and Host

    Where I started to go nuts:

      - tcpdump on the host (the real machine) actually shows ARP requests AND replies
      - tcpdump in the NS shows ARP requests sent to the host
      - tcpdump on the VM makes the whole mess work (!) -- pings start getting answers as soon as tcpdump is started on the VM?!?

    So, I bet you were eager for it, my question is: how do I make it work? I suspect something's wrong with ARP on the macvlan inside the NS, but I can't figure out what exactly. By the way, I did the same experiments with the mac0 interface directly on the VM (no namespace) and it worked flawlessly.
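
    One hedged lead worth testing, given that tcpdump on the VM makes things work: tcpdump flips eth0 into promiscuous mode, and a macvlan needs the parent NIC to receive frames for MAC addresses other than its own. A minimal sketch under that assumption (the adapter number and VM name "vmname" are guesses):

        # on the VM: force the parent interface into promiscuous mode
        ip link set eth0 promisc on

        # on the host: let VirtualBox pass frames for foreign MACs to the VM
        # (run with the VM powered off; adapter number is a guess)
        VBoxManage modifyvm "vmname" --nicpromisc1 allow-all

    If ping from the namespace starts working with these, the ARP replies were being dropped at the virtual NIC, not inside the namespace.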

  • What can I do with a home server?

    - by Joel Coehoorn
    I have an old 700 MHz Pentium III at home running Windows 2000 Server, with a home router set up to pass incoming requests to it and a DynDNS account set up so it's easy to find. Right now I'm using it for a number of things:

      - Shared folders + backup inside the home network
      - Shared printer inside the home network
      - Domain controller, just because I feel like it and because it's useful to me as practice to keep those "enterprise" administration skills
      - Web server
      - FTP remote access for my files. I abandoned this for security reasons, but it's still worth leaving visible.
      - Remote Desktop in to the home network (thinking about adding VPN service)
      - SVN repository
      - MySQL -- will be moving to SQL Server 2008 Standard soon
      - After I upgrade my wife's laptop from Home to Pro later this year it will also become a domain controller
      - It's the only place I still have access to Internet Explorer 6 any more without setting up a new virtual machine, so I use it for testing code with that browser.

    The question is: what else could I be doing with this machine?

    Update -- additional ideas based on the suggestions:

      - Media server/DVR
      - Build server
      - PBX
      - SSH proxy server
      - Continuous integration server
      - Personal OpenID provider

    Update 2: Just a note that this server was recently upgraded to an Atom 330 with 2 GB RAM and a bigger hard drive. For all that's slow for a "modern" CPU, it should still be much faster than the old Pentium III, and the expected power savings should make the upgrade essentially free over the course of the next year or two. Also, it's now running Windows Server 2008.

  • Apache + Tomcat: Which one should handle SSL? IP-based proxy forwarding?

    - by delirial
    We currently have a Tomcat application running with SSL on port 443. Right now we have an Apache server that accepts HTTP requests on port 80 and redirects to the Tomcat instance:

        <VirtualHost *:80>
            ServerName domain.com
            ServerAlias domain.com
            <LocationMatch "/">
                Redirect permanent / https://domain.com/
            </LocationMatch>
        </VirtualHost>

    Tomcat is handling SSL, because there's no proxy, just a simple redirect to the SSL port:

        <Connector port="443" maxThreads="200"
                   scheme="https" secure="true" SSLEnabled="true"
                   keystoreFile="/app/ssl/domain_com.jks" keystorePass="ourpassword"
                   clientAuth="false" sslProtocol="TLS"/>

    We want to begin using the Apache web server as a proxy and, additionally, do per-IP redirects to certain apps that should only be used by hosts in a pre-determined IP range. We would also like to redirect IPs that don't match the pre-determined list to a static HTML page hosted on the Apache server.

    My first question is: should I continue to handle SSL on Tomcat's end, or should I use Apache with SSL while forwarding to an "unprotected" Tomcat port? Is there any way to redirect to different apps (and potentially hosts) depending on the incoming IP?

    thanks, del
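
    Terminating SSL at Apache and proxying to a plain-HTTP Tomcat connector is the more common split, since it keeps certificates in one place and lets Apache make per-IP decisions before traffic ever reaches Tomcat. A minimal sketch, assuming mod_ssl and mod_proxy_http are loaded and Tomcat has a non-SSL connector on port 8080 (the paths and the /restricted-app name are placeholders):

        <VirtualHost *:443>
            ServerName domain.com
            SSLEngine on
            SSLCertificateFile    /app/ssl/domain_com.crt
            SSLCertificateKeyFile /app/ssl/domain_com.key

            # only the pre-approved range may reach this app (Apache 2.4 syntax)
            <Location /restricted-app>
                Require ip 10.1.0.0/16
            </Location>

            ProxyPass        / http://localhost:8080/
            ProxyPassReverse / http://localhost:8080/
        </VirtualHost>

    With Apache 2.2 the equivalent access control is Order allow,deny plus Allow from 10.1.0.0/16, and requests from outside the range can be pointed at the static page with an ErrorDocument 403 directive instead of a bare denial.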

  • EC2 Auto-Scaling with Spot and On-Demand Instances?

    - by platforms
    I'm looking to optimize the cost of our auto-scaling EC2 groups by having them launch spot instances instead of on-demand instances. What I really want is to be able to keep some servers in the group as on-demand instances, regardless of what happens to the spot instance pricing market. Then I want any additional servers in the group, above my configured minimum, to be spot instances. I'm generally OK with the delay in adding servers via spot requests.

    I can't seem to find any way to do this, and I've tried to scour the AWS documentation. It appears that an ASG can either be on-demand or spot, but not a hybrid. I could possibly manually add an on-demand instance to the Elastic Load Balancer assigned to the auto-scaling group, but then the load of that server would not be factored into the auto-scaling measurements and triggers. I suppose I could enter a ridiculously high bid price in order to ensure that I always get the servers I need, but then I look at the pricing history and see occasional large spikes.

    The AWS documentation is at odds with itself, since in one place it says that if you enter a server minimum, that number is "ensured" to be there. But then when you read about spot instances, there are no assurances. The price differential for spot is compelling, so I'd like to leverage that as much as I can while still maintaining an always-on baseline. Is this possible?
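
    One workaround that fits the constraints described here is to run two auto-scaling groups behind the same ELB: a baseline group that only launches on-demand instances, and a burst group whose launch configuration carries a spot bid. A hedged sketch with the AWS CLI (group names, AMI, sizes, and the bid price are all placeholders):

        # baseline: always-on on-demand capacity
        aws autoscaling create-launch-configuration --launch-configuration-name lc-ondemand \
            --image-id ami-12345678 --instance-type m1.small
        aws autoscaling create-auto-scaling-group --auto-scaling-group-name asg-baseline \
            --launch-configuration-name lc-ondemand --min-size 2 --max-size 2 \
            --load-balancer-names my-elb --availability-zones us-east-1a

        # burst: spot capacity above the baseline
        aws autoscaling create-launch-configuration --launch-configuration-name lc-spot \
            --image-id ami-12345678 --instance-type m1.small --spot-price "0.05"
        aws autoscaling create-auto-scaling-group --auto-scaling-group-name asg-burst \
            --launch-configuration-name lc-spot --min-size 0 --max-size 8 \
            --load-balancer-names my-elb --availability-zones us-east-1a

    Both groups register with the ELB, so spot capacity shares the load when the market cooperates and the on-demand floor stays up when it doesn't. Scaling policies have to be attached to each group separately, which is the main bookkeeping cost of the split.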

  • Is there a utility to visualise / isolate and watch application calls

    - by MyStream
    Note: I'm not sure what to search for, so guidance on that may be just as valuable as an answer.

    I'm looking for a way to visually compare the activity of two applications (in this case a web server with PHP communicating with the system, MySQL, network devices, etc.) such that I can compare the performance at a glance. I know there are tools to generate data dumps from benchmarks for Apache, and some available for PHP tracing that you can dump and analyse, but what I'm looking for is something that can report performance metrics visually from data on calls (what called what, how long it took, how much memory it consumed, how that can be represented visually in a call stack) and present it graphically, as if it were a topology or layered visual with different elements of system calls occupying different layers. A typical visual might consist of something like this (using swim diagrams as just one analogy):

        Network   (details here relevant to network diagnostics)
          | v        ^ back out
        Linux     (details here related to firewall/routing diagnostics)
          | v        ^ back to network
        Apache    (details here related to the web request)    ^ back to system
          | v        ^ response to apache
        PHP ----------> other accesses to PHP files/resources
          | v        ^
        MySQL     (total time)
                  (each call listed + time + tables hit/records returned)

    My aim would be to be able to 'inspect' a request (or a range of requests) over a period of time, to see what constituted the activity at that point in time and trace it from beginning to end as a diagnostic tool. Is there any such work in this direction? I realise it would be intensive on the server, but the intention is to benchmark and analyse processes against each other for both educational and professional reasons, and a visual aid is a great eye-opener compared to raw statistics or dozens of discrete activity-vs-time graphs. It's hard to show the full cycle. Any pointers welcome. Thanks!

    From comments: XHProf in conjunction with other programs such as the Percona Toolkit (percona.com/doc/percona-toolkit/2.0/pt-pmp.html) for MySQL; run Apache with httpd -X & (single-threaded debug mode, in the background), then attach with strace and view in kcachegrind.
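
    A hedged sketch of the workflow from the comments -- single-threaded Apache plus strace, with MySQL stack samples from the Percona Toolkit (the output paths are assumptions):

        # run Apache single-threaded in debug mode, in the background
        httpd -X &
        APACHE_PID=$!

        # attach strace to watch every system call a request makes, with timestamps
        # -T prints time spent inside each syscall, giving per-layer timing
        strace -f -tt -T -p "$APACHE_PID" -o /tmp/apache-request.trace

        # meanwhile, sample MySQL stack traces with the Percona Toolkit
        pt-pmp --pid "$(pgrep -x mysqld)" > /tmp/mysqld-stacks.txt

    This captures the raw per-layer data the question asks about; stitching the layers into one visual still has to happen in a viewer such as kcachegrind (for XHProf output converted to callgrind format).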

  • Best way to run site through https on server which can't add additional certs

    - by penguin
    So I'm in a curious situation, in that I am using a particular server to host things which I can't host anywhere else (it has access to user databases etc. which can't otherwise be accessed). I've been in quite a bit of discussion with the sysadmin, and it looks like the only way to run our site www.foo.com over https may be through some sort of proxy.

    Currently, users go to www.foo.com and are redirected to https:// host-server.com/foo, as there is an SSL cert installed on that. I want users to be on https:// www.foo.com. I'm told that for various reasons it's going to be very difficult to add an additional SSL cert to the host server.

    So I was wondering if it is possible to have the DNS records point to a new server, which then creates the HTTPS connection with the browser. It would then forward requests to https:// host-server.com/foo and feed the replies back to the original requester. Does this make sense? And would it be at all feasible? My experience with SSL is limited at best, so thanks in advance for your help :)

    ps: gaps in hyperlinks as ServerFault was getting unhappy with the number of links I was posting!
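
    What's being described is a TLS-terminating reverse proxy, and it is feasible as long as the new server holds a valid certificate for www.foo.com. A minimal nginx sketch under those assumptions (certificate locations are placeholders):

        server {
            listen 443 ssl;
            server_name www.foo.com;

            ssl_certificate     /etc/ssl/www.foo.com.crt;
            ssl_certificate_key /etc/ssl/www.foo.com.key;

            location / {
                # re-encrypt to the host server; the browser only ever sees www.foo.com
                proxy_pass https://host-server.com/foo/;
                proxy_set_header Host host-server.com;
            }
        }

    The one thing to watch is absolute links: if the backend emits URLs pointing at host-server.com/foo, they either need rewriting at the proxy or the app has to be told its public hostname.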

  • Why isn't Apache Basic authentication working?

    - by Brad
    I just upgraded Apache from its 2003 build to a squeaky-clean, brand-new 2.4.1 build. All seems pretty good except for one glaring thing: in my httpd.conf file I have the following:

        <Directory />
            AllowOverride none
            Options FollowSymLinks
            AuthType Basic
            AuthName "Enter Password"
            AuthUserFile /var/www/.htpasswd
            Require valid-user
        </Directory>

    This should allow only users in the specified auth file to access the server -- just as it had under the older version of Apache. (Right?) However, it's not working. Requests are granted with no authentication provided. When I switch logging to LogLevel Debug, for the accesses it says:

        [Sat Mar 24 21:32:00.585139 2012] [authz_core:debug] [pid 10733:tid 32771] mod_authz_core.c(783): [client 192.168.1.181:57677] AH01626: authorization result of Require all granted: granted
        [Sat Mar 24 21:32:00.585446 2012] [authz_core:debug] [pid 10733:tid 32771] mod_authz_core.c(783): [client 192.168.1.181:57677] AH01626: authorization result of <RequireAny>: granted

    I really don't know what this means -- and I (to the best of my knowledge) don't have any "Require all granted" or "<RequireAny>" statements in any of my files. Any ideas why this isn't working, or where to debug?
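
    The log lines point at a common 2.4 upgrade trap: a more specific <Directory> block (the stock 2.4 httpd.conf ships one for the DocumentRoot containing Require all granted) merges after <Directory /> and wins, so authorization succeeds before the password is ever asked for. A hedged sketch of the fix, assuming the DocumentRoot block is the culprit:

        <Directory "/var/www/htdocs">
            # remove the blanket "Require all granted" and authenticate instead
            AuthType Basic
            AuthName "Enter Password"
            AuthUserFile /var/www/.htpasswd
            Require valid-user
        </Directory>

    Finding the stray grant is quick: grep -rn "Require all granted" /etc/httpd/. Once removed, the AH01626 messages should show "Require valid-user" as the deciding rule instead.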

  • virtual web folder served by PHP script

    - by Martin
    I am trying to configure my Apache to be able to display (virtual) pages like:

        mywebpage.com/something1
        mywebpage.com/something2
        mywebpage.com/folder/something3

    I would like these "somethingX" and "folder" folders to be only virtual, not physical directories. For a start it would be great to send all requests to mywebpage to one PHP script, which would somehow receive the original path information (there is some SERVER array as far as I know) and call the necessary PHP functions (so far I use addresses like mywebpage.com/index.php?page=blabla&otherparameters=values...). Is that possible?

    I am struggling with different combinations; currently I have the following file in /etc/apache2/conf.d/something.conf (not working, of course). What is the correct way to proceed? Thanks.

        <Location /myweb>
            SetHandler my-handler
            Action my-handler /srv/www/htdocs/myweb/product.php virtual
        </Location>

    My pages are in /srv/www/htdocs/myweb. I tried with Location, with Directory, with Action and SetHandler, with AddHandler... ;-) Some configurations were ignored, some caused "object not found" with nothing relevant in the error log.
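
    The conventional pattern for this is a mod_rewrite front controller: route every request that doesn't match a real file or directory to one PHP script, and read the original path from $_SERVER. A minimal sketch (the index.php name and the page= key are illustrative, matching the addresses already used above):

        <Directory /srv/www/htdocs/myweb>
            RewriteEngine On
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^(.*)$ index.php?page=$1 [QSA,L]
        </Directory>

    Inside the script, $_SERVER['REQUEST_URI'] still holds the full virtual path (e.g. /folder/something3), and the QSA flag keeps any original query-string parameters alongside the rewritten page= value.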

  • Load balancing with nginx and Tomcat

    - by London
    Hello. This should be fairly easy to answer for any system admin; the problem is that I'm not a server admin, but I have to complete this task, and I'm very close but still not managing to do it. Here is what I mean: I have two Tomcat instances running on machine1 and machine2. People usually access them by visiting the URLs:

        http://machine1:8080/appName
        http://machine2:9090/appName

    The problem is that when I set up nginx with the domain name, i.e. domain.com, nginx sends requests to http://machine1:8080/ and http://machine2:9090/ instead of http://machine1:8080/appName and http://machine2:9090/appName. Here is my configuration (very basic, as can be noted):

        upstream backend {
            server machine1:8080;
            server machine2:9090;
        }

        server {
            listen 80;
            server_name www.mydomain.com mydomain.com;

            location / {
                # needed to forward user's IP address to rails
                proxy_set_header X-Real-IP $remote_addr;
                # needed for HTTPS
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_max_temp_file_size 0;
                proxy_pass http://backend;
            } #end location
        } #end server

    What changes must I make so that when a user visits mydomain.com, he is transferred to either machine1:8080/appName or machine2:9090/appName? Thank you.
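
    When proxy_pass carries a URI part, nginx replaces the matched location prefix with it, so pointing the existing location / at /appName/ on the upstream should be enough. A hedged sketch (trailing slashes matter here):

        location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            # "/" is swapped for "/appName/" before the request reaches Tomcat
            proxy_pass http://backend/appName/;
        }

    One caveat: both upstream servers must serve the app under the same context path for this to balance cleanly, and redirects emitted by Tomcat may need explicit proxy_redirect rules rather than proxy_redirect off.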

  • Choosing gateway router/firewall for small datacenter network [closed]

    - by rvs
    I'm choosing a gateway router/firewall for a small internal network for a medium-sized web service. Currently there are 5 servers in the internal network, up to 50 http(s) requests/second, up to 1000 simultaneous connections, and the uplink is 100 Mbit. So the network is relatively small and not very busy, and we'd rather not buy some pricey monster like Cisco or Juniper for this site. Instead we'd like to buy two affordable devices (one as a spare) which can handle our workload now and for some time in the future (it might be up to 2x more in 1 year).

    I had some experience with the SonicWall NSA, but it seems to be too complex for this site (we don't need most of its features) and even too pricey when buying two of them. So, after some research I've come up with the following options:

      - Netgear ProSecure UTM series (probably the UTM25)
      - Zyxel ZyWALL series (USG100 or USG200)
      - SonicWall TZ 210

    Is this a good idea? All of the above seem to be more office products, not datacenter ones. Or should we stick with the SonicWall NSA? Does anyone have any hands-on experience with these models? Maybe some other advice? Thanks.

  • TortoiseSVN client slows Explorer to a crawl in Windows XP running in Parallels

    - by Cory Larson
    I thought I'd make my first SuperUser question relatively simple, though it's the kind of question that may not get many responses, as I'm not directly involved with the issue.

    A colleague does his development in Windows XP running in Parallels on his Mac. We've just migrated our VSS repository to SVN, and we've gone with TortoiseSVN as our client of choice, with the AnkhSVN plugin for Visual Studio. On his XP instance, after installing TortoiseSVN, browsing through folders using Explorer is extremely slow; about 15-30 seconds before the contents of the next folder display. It's slowest when opening My Computer. Once he reaches a folder that contains the working content of an SVN project, Explorer behaves quickly again as expected. It seems that TortoiseSVN may be spending a bunch of time searching subfolders for stuff so it can do its icon-overlay thing, but that's just a guess.

    I've used TortoiseSVN for years on both XP and Vista on far less powerful machines without any issues with Explorer, so I'm attributing the slowness to it being run in a VM, though that may not be the actual issue. So has anyone encountered similar performance issues, and/or do you know of a fix?

    Keep in mind that any requests to make changes to his configuration will need to be communicated, and thus my response time might be slow. Thanks, everyone!

  • F5 BIG-IP persistence iRules applied but not affecting selected member

    - by zoli
    I have a virtual server, and I have 2 iRules (see below) assigned to it as resources. From the server log it looks like the rules are running and they select the correct member from the pool after persisting the session (as far as I can tell, based on my log messages), but the requests are ultimately directed somewhere else. Here's what both rules look like:

        when HTTP_RESPONSE {
            set sessionId [HTTP::header X-SessionId]
            if {$sessionId ne ""} {
                persist add uie $sessionId 3600
                log local0.debug "Session persisted: <$sessionId> to <[persist lookup uie $sessionId]>"
            }
        }

        when HTTP_REQUEST {
            set sessionId [findstr [HTTP::path] "/session/" 9 /]
            if {$sessionId ne ""} {
                persist uie $sessionId
                set persistValue [persist lookup uie $sessionId]
                log local0.debug "Found persistence key <$sessionId> : <$persistValue>"
            }
        }

    According to the log messages from the rules, the proper balancer members are selected. Note: the two rules cannot conflict -- they are looking for different things in the path, and those two things never appear in the same path.

    Notes about the server:

      - The default load balancing method is RR.
      - There is no persistence profile assigned to the virtual server.

    I'm wondering if this should be adequate to enable the persistence, or alternatively, do I have to combine the 2 rules and create a persistence profile with them for the virtual server? Or is there something else that I have missed?
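
    On BIG-IP, persist commands in an iRule only take effect when the virtual server has a persistence profile of type Universal attached, so the question at the end is on the right track. A hedged tmsh sketch (profile, rule, and virtual-server names are placeholders, and the exact syntax may vary by TMOS version):

        # wrap the iRule in a universal persistence profile
        tmsh create ltm persistence universal session_persist rule my_session_irule

        # attach it to the virtual server as its default persistence profile
        tmsh modify ltm virtual my_virtual persist replace-all-with { session_persist }

    With that in place, the HTTP_REQUEST/HTTP_RESPONSE events keep doing the lookup and the add, but the LTM now honours the resulting persistence record when picking the pool member.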

  • optimize mod_rewrite in htaccess

    - by clarkk
    I've got some mod_rewrite conditions in a .htaccess file which I have extended from time to time, but I don't think it's very well written (I'm still quite new to mod_rewrite). Sometimes requests end up in infinite loops, and just now I added SSL to the file. When requesting https:// I get a 404 error:

        The requested URL /_secure/_secure/ was not found on this server.

    Somehow it adds an extra _secure to the path?

    .htaccess:

        # set language
        RewriteCond %{HTTP_HOST} ^www\. [NC]
        RewriteCond %{REQUEST_URI} ^/(da|en)/(.*)(\?%{QUERY_STRING})?$ [NC]
        RewriteRule ^(.*)$ /%2?%{QUERY_STRING}&set_lang=%1 [L]

        # put 'www' as subdomain if none is given
        RewriteCond %{HTTP_HOST} ^([^\.]+\.[^\.]+)$ [NC]
        RewriteRule ^(.*)$ http://www.%1/$1 [L,R=301]

        # rewrite subdomain
        RewriteCond %{HTTP_HOST} ^(admin|files)\.[^\.]+\.[^\.]+$ [NC]
        RewriteCond %{REQUEST_URI} !^/_(admin|files)/ [NC]
        RewriteRule ^(.*)$ /_%1/$1 [L]

        # redirect to subdomain
        RewriteCond %{HTTP_HOST} ^www\.([^\.]+\.[^\.]+)$ [NC]
        RewriteRule ^_([^/]+)/ http://$1.%1/ [L,R=301]

        # start SSL on 'secure' subdomain if not started
        RewriteCond %{HTTPS} !=on
        RewriteCond %{HTTP_HOST} ^(secure)\.([^\.]+\.[^\.]+)$ [NC]
        RewriteRule ^(.*)$ https://%1.%2/$1 [L,R=301]

        # rewrite 'secure' subdomain
        RewriteCond %{HTTP_HOST} ^(demo|secure)\.[^\.]+\.[^\.]+$ [NC]
        RewriteCond %{REQUEST_URI} !^/_secure/ [NC]
        RewriteRule ^(.*)$ /_secure/$1 [L]

        # rewrite 'api' subdomain
        RewriteCond %{HTTP_HOST} ^api\.[^\.]+\.[^\.]+$ [NC]
        RewriteCond %{REQUEST_URI} !^/_api/ [NC]
        RewriteRule ^(?:([^/]+)/)?(?:([^/]+)/)?(?:([^/]+)/)?(?:([^/]+)/)?(?:([^/]+)/)?(?:([^/]+)/)? /_api/?%{QUERY_STRING}&v=$1&i=$2&k=$3&a=$4&t=$5&f=$6 [L]

        # redirect non-active subdomain to 'www'
        RewriteCond %{HTTP_HOST} !^(admin|api|demo|files|secure|www)\.([^\.]+\.[^\.]+)$ [NC]
        RewriteRule ^(.*)$ http://www.domain.com [L,R=301]

        # hide file extensions
        RewriteCond %{HTTP_HOST} ^www\. [NC]
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !\.php$ [NC]
        RewriteCond %{REQUEST_URI} ^/([^/]*)/(?:([^/]*)/)?(?:([^/]*)/)?$ [NC]
        RewriteRule ^(.*)$ /%1.php?%{QUERY_STRING}&subpage=%2&subsection=%3 [L]
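
    The /_secure/_secure/ 404 is consistent with the "rewrite 'secure' subdomain" rule running twice: in .htaccess context mod_rewrite restarts the whole rule set after every internal rewrite, and on the second pass the rule can match the already-prefixed path. A hedged guard that stops re-processing of internally redirected passes (Apache sets the REDIRECT_STATUS environment variable on internal redirects):

        # bail out early on the second pass through the file
        RewriteCond %{ENV:REDIRECT_STATUS} !^$
        RewriteRule ^ - [L]

    Placed at the top of the .htaccess, this makes each external request walk the rules exactly once; it may need refining if any rule deliberately relies on a second pass. On Apache 2.3.9+ the [END] flag on individual rules is an alternative.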

  • How can I stop SipVicious ('friendly-scanner') from flooding my SIP server?

    - by a1kmm
    I run a SIP server which listens on UDP port 5060 and needs to accept authenticated requests from the public Internet. The problem is that occasionally it gets picked up by people scanning for SIP servers to exploit, who then sit there all day trying to brute-force the server. I use credentials that are long enough that this attack will never feasibly work, but it is annoying because it uses up a lot of bandwidth.

    I have tried setting up fail2ban to read the Asterisk log and ban IPs that do this with iptables, which stops Asterisk from seeing the incoming SIP REGISTER attempts after 10 failed attempts (which happens in well under a second at the rate of attacks I'm seeing). However, SipVicious-derived scripts do not immediately stop sending after getting an ICMP Destination Host Unreachable -- they keep hammering the connection with packets. The time until they stop is configurable, but unfortunately it seems that the attackers doing these types of brute-force attacks generally set the timeout to be very high (attacks continue at a high rate for hours after fail2ban has stopped them from getting any SIP response back, once they have seen initial confirmation of a SIP server). Is there a way to make them stop sending packets at my connection?
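
    Nothing run on the receiving host can stop packets that have already crossed the link -- that takes a block upstream at the provider -- but silently dropping instead of rejecting at least stops feeding the scanner responses. A hedged sketch, switching the ban from REJECT (which generates the ICMP the scanner reacts to) to DROP (the address is a documentation-range placeholder):

        # drop scanner traffic silently instead of answering with ICMP
        iptables -I INPUT -s 203.0.113.45 -p udp --dport 5060 -j DROP

    fail2ban can do the same automatically: recent versions let the iptables actions take a blocktype setting, so changing it from the default REJECT line to DROP (in iptables-common.conf or the action's own config, depending on the version) makes every ban silent.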

  • How to set a static route for an external IP address

    - by HorusKol
    Further to my earlier question about bridging different subnets -- I now need to route requests for one particular IP address differently from all other traffic. I have the following routing in my iptables on our router:

        # Allow established connections, and those !not! coming from the public interface
        # eth0 = public interface
        # eth1 = private interface #1 (10.1.1.0/24)
        # eth2 = private interface #2 (129.2.2.0/25)
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -m state --state NEW ! -i eth0 -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth2 -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Allow outgoing connections from the private interfaces
        iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
        iptables -A FORWARD -i eth2 -o eth0 -j ACCEPT

        # Allow the two private connections to talk to each other
        iptables -A FORWARD -i eth1 -o eth2 -j ACCEPT
        iptables -A FORWARD -i eth2 -o eth1 -j ACCEPT

        # Masquerade (NAT)
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

        # Don't forward any other traffic from the public to the private
        iptables -A FORWARD -i eth0 -o eth1 -j REJECT
        iptables -A FORWARD -i eth0 -o eth2 -j REJECT

    This configuration means that users will be forwarded through a modem/router with a public address -- this is all well and good for most purposes, and in the main it doesn't matter that all computers are hidden behind the one public IP. However, some users need to be able to access a proxy at 192.111.222.111:8080 -- and the proxy needs to identify this traffic as coming through a gateway at 129.2.2.126 -- it won't respond otherwise.

    I tried adding a static route on our local gateway with:

        route add -host 192.111.222.111 gw 129.2.2.126 dev eth2

    I can successfully ping 192.111.222.111 from the router. When I trace the route, it lists the 129.2.2.126 gateway, but I just get * on each of the following hops (I think this makes sense, since this is just a web proxy and requires authentication). When I try to ping this address from a host on the 129.2.2.0/25 network, it fails. Should I do this in the iptables chain instead? How would I configure this routing?
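
    If the proxy insists on seeing 129.2.2.126 as the source, a destination-scoped SNAT rule is probably closer to what's needed than the host route alone -- it rewrites only traffic bound for the proxy, leaving everything else untouched. A hedged sketch using the addresses from the question:

        # traffic for the proxy leaves via eth2 stamped with the expected gateway address
        iptables -t nat -A POSTROUTING -d 192.111.222.111 -o eth2 -j SNAT --to-source 129.2.2.126

    Because the existing MASQUERADE rule only matches -o eth0, the two rules don't conflict; just make sure return traffic from the proxy is allowed through the FORWARD chain on eth2.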

  • How to create a GUI that communicates with USB devices using Win32 programming [migrated]

    - by VINAYAK
    I am doing my project using Win32 programming. I am just learning Win32 and am able to create a UI; now I want that UI to communicate with a USB device. How can I go about that? Are there predefined functions for this, or do I need to write the code that talks to the OS, gets the device list, and retrieves the details about each device?

    My purpose is:

      1. Creating a UI that shows basic information about the device (I want to send a control request to the device to get the descriptors).
      2. For that, first of all I want to be notified by the OS about device attachment. That leads to getting information about the device; enumeration takes place, and only then do I request the device information through descriptors, using standard requests.
      3. I also want to create the driver for my device, which will also be needed for communicating with the OS (Windows).

    So, can anyone help me with this? How should I achieve or approach it?

    Note: I am at the entry level now, so a detailed, step-by-step response would be appreciated.

  • Only one domain is not resolving via Windows DNS server at multiple locations, but is at others

    - by Brett G
    I'm having quite a weird issue. We had mail delivery issues to a specific domain. After looking closer, I realized that the DNS for that domain isn't resolving via the in-house Windows 2003 SP2 DNS server:

        C:\>nslookup foodmix.net
        Server:  DC.DOMAIN.com
        Address:  10.1.1.1

        DNS request timed out.
            timeout was 2 seconds.
        DNS request timed out.
            timeout was 2 seconds.
        *** Request to DC.DOMAIN.com timed-out

    (DC.DOMAIN.com and 10.1.1.1 are generic values to replace the actual ones.) Even if I run this nslookup from the DC.DOMAIN.com server itself, I get the same result. However, all other requests are working as they should.

    I had a sysadmin friend try this DNS lookup on servers at several companies that he consults for (which are also Windows 2003 AD servers). The weird thing is that some of these were having the same exact issue. However, using public DNS servers works. I have tried clearing the DNS cache, restarting the server, restarting the services, etc. Nothing has worked.

    One weird event I noticed in the DNS Server event logs that might be related is an event ID of 5504 with the following description:

        The DNS server encountered an invalid domain name in a packet from 192.33.4.12. The packet will be rejected. The event data contains the DNS packet. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    In the data section below, I can see the following mentioned:

        ns2.webhostingstar.com

    which happens to be the nameserver for the domain in question. Several discussion threads and an MS KB have pointed to disabling EDNS. I have done this via "dnscmd /config /enableednsprobes 0" and it has not fixed the issue.
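
    Since 192.33.4.12 is c.root-servers.net, the 5504 event suggests the failure happens while walking the delegation rather than at the target nameserver. A hedged way to see where resolution breaks, run from any machine with dig available:

        # follow the delegation from the roots down and watch where it stalls
        dig +trace foodmix.net A

        # then query the delegated nameserver directly
        dig @ns2.webhostingstar.com foodmix.net A +norecurse

    If the direct query answers but the in-house server still times out, a usual suspect is EDNS/packet-size handling on a firewall between the DC and the Internet (UDP DNS replies over 512 bytes being dropped), which would also fit the same symptom appearing on unrelated resolvers at other companies.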

  • Force request to miss cache but still store the response

    - by Tom Marthenal
    I have a slow web app that I've placed Varnish in front of. All of the pages are static (they don't vary for different users), but they need to be updated every 5 minutes so they contain recent data.

    I have a simple script (wget --mirror) that crawls the entire website every 15 minutes. Each crawl takes about 5 minutes. The point of the crawl is to update every page in the Varnish cache so that a user never has to wait for the page to generate (since all pages have been generated recently thanks to the spider). The timeline looks like this:

      - 00:00:00: Cache flushed
      - 00:00:00: Spider starts crawling to update cache with new pages
      - 00:05:00: Spider finishes crawling, all pages are updated until 1:15

    A request that comes in between 00:00:00 and 00:05:00 might hit a page that hasn't been updated yet and will be forced to wait a few seconds for a response. This isn't acceptable.

    What I'd like to do, perhaps using some VCL magic, is always forward requests from the spider to the backend, but still store the response in the cache. This way, a user will never have to wait for a page to generate, since there is no 5-minute window in which parts of the cache are empty (except perhaps at server startup). How can I do this?
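
    Varnish has a switch for exactly this pattern: req.hash_always_miss makes a request skip the cache lookup but still insert its response, refreshing the object for everyone else. A hedged VCL sketch (Varnish 3 syntax; matching on wget's User-Agent is an assumption about how the spider identifies itself):

        sub vcl_recv {
            # the spider always fetches from the backend, but its response
            # replaces the cached copy rather than bypassing the cache
            if (req.http.User-Agent ~ "Wget") {
                set req.hash_always_miss = true;
            }
        }

    With this in place the cache never needs to be flushed at 00:00:00 at all -- stale pages keep serving instantly until the spider's fresh copy lands, which closes the 5-minute window.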

  • hosting company blocking google bots and crawlers [closed]

    - by Jayapal Chandran
    Hi, I have had a site for the past three years, and it has been very active for the past two. The site was working well until the hosting company started blocking Google bots. Many pages used to appear on the first page of Google search results; after they started blocking, my links no longer appeared on the first page -- they appeared after 5 pages, or did not appear at all.

    Can hosting companies really be so careless that they block crawlers and don't mention it to their users? They protect themselves while putting their customers' websites at stake. I display Google ads, and this month I earned only half as much in the first 10 days. I have made requests to other hosting companies, like Bluehost and HostMonster, to transfer my domain on the condition that they will not block Google bots, since that indirectly stops the business.

    So any kind of help would be appreciated: how can I claim what I lost from the hosting company? And which hosting companies consider their users (by informing them of events like changing the IP or blocking Google bots)? I worked really hard to bring up my site, and these people crashed it in a few days. :-(

  • How to have LiveJournal delegate my OpenID to something else?

    - by T-Boy
    I understand that if I have full control over my domain, I can set it up so that I can delegate the task of authenticating to another OpenID service provider. The problem is, what I'd like to do is to get the LiveJournal server to pass the authentication to someone else, instead of having LJ do it. Preferably what I'd like is to get LiveJournal, when asked by a web site, to say, "No, I don't do it anymore -- go to this address". The plan was that this address would be in a domain I fully control, which would then pass it on to whichever service provider I choose. I don't even know if I've gotten my understanding of OpenID right, if all these shenanigans are necessary, if my question makes sense, or if it's even possible with a service provider like LiveJournal.

    ETA: Doing a little more reading, and examining the source of my LiveJournal user page, I note this particular line in the file's <head> area:

        <link rel="openid.server" href="http://www.livejournal.com/openid/server.bml" />

    I suspect that changing this would allow me to forward OpenID requests to whomever I wish; so far so good. Now comes the hard part -- figuring out how to change all of that using LiveJournal's customization options, if that is at all possible (here's hoping I don't need to pay to get that functionality).
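
    For reference, delegation normally runs in the other direction: the identity URL you type into a site is a page you control, and its markup points at both the provider and the identity it should assert. A minimal sketch of the two tags on a self-hosted page, assuming a LiveJournal account is the provider being delegated to (the username is a placeholder):

        <link rel="openid.server"   href="http://www.livejournal.com/openid/server.bml" />
        <link rel="openid.delegate" href="http://exampleuser.livejournal.com/" />

    Swapping providers later means editing these two lines on the owned domain -- which is also why delegating outward from LiveJournal's own pages is the hard direction: it would require LJ to let users edit that header markup.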

  • VPS Memory Exhausted Even With Light Settings

    - by user101570
    Linux noob here. I have a 256 MB VPS running Ubuntu 11.04 server, and when I run "free -m" the result shows all memory being used (including the second line re: buffers/cache). I found this very strange, considering I only have 5 Apache processes running, each chewing up about 20 MB. MySQL is taking up 30 MB. To my knowledge, and according to "top", I have no other memory hogs running. Settings that may be relevant:

      - PHP memory_limit = 32M
      - MySQL key_buffer = 16M
      - Prefork MPM MaxClients = 10

    So when I reviewed these settings, I naturally thought MaxClients was too high, so I tried switching it to 5. Now not only does my memory still show as being 100% used, my website loads much, much slower, despite not getting any traffic aside from mine at the moment. I don't understand this. I thought a single Apache process handles all requests from a client received within the "KeepAliveTimeout" window, which I've set to 2 seconds. With my initial config of 10 MaxClients, my page load times are around .3 ms, so a single process should handle that no problem, correct?

    So next I went to an extreme level of 1 for MaxClients. My memory is still at 100% usage and my site loads painfully slowly. I'm a noob at a complete loss here. According to the many tutorials I've read on basic server setup, I should be good to go. Help! Please!

    Edit:

                     total       used       free     shared    buffers     cached
        Mem:           256        256          0          0          0          0
        -/+ buffers/cache:        256          0
        Swap:            0          0          0
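
    free reporting zero buffers and zero cache on a 256 MB guest is characteristic of container virtualization (OpenVZ/Virtuozzo) rather than of processes genuinely eating the RAM -- the host accounts for memory differently and the usual free-memory arithmetic doesn't apply. A hedged check, assuming it is an OpenVZ container:

        # present only inside OpenVZ containers; failcnt > 0 means a limit was hit
        cat /proc/user_beancounters

    If that file exists, the numbers to watch are privvmpages and its failcnt column: when failcnt climbs, the provider's limit (not Apache's MaxClients) is what's throttling the site, and lowering MaxClients further won't help.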

  • LightTPD and PHP not working if outside of LightTPD folder

    - by Marco83
    I need to set up a simple web server with PHP on Windows XP that a number of different people will use for local testing. I'm using LightTPD 1.4.30-4-IPv6-Win32-SSL and PHP 5.2. So far I've created this folder structure:

        tools/
            LightTPD/
                htdocs/
            PHP/

    I set up PHP as CGI and the document root as server_root + "/htdocs". It works fine (well, it's slow, but I don't want to bother with FastCGI for now :) ). My problem is when I try to put the htdocs outside of the LightTPD folder, like this:

        htdocs/
        tools/
            LightTPD/
            PHP/

    I update the document root to server_root + "/../../htdocs", and while static HTML pages work fine, PHP pages stop working (they return a "No input file specified"). I literally just change the document root; I didn't change anything in php.ini or anywhere else. Please also note that I left all doc_root, user_dir and cgi.force_redirect at the default values in php.ini, and it works when htdocs is inside LightTPD, but not when I move it outside. Any idea why it's breaking?

    Here's my lightTPD.conf:

        server.modules = (
            "mod_access",
            "mod_accesslog",
            "mod_alias",
            "mod_cgi",
            "mod_status",
        )

        include "variables.conf"
        include "mimetype.conf"

        # THIS WORKS
        server.document-root = server_root + "/htdocs"
        # THIS DOESN'T
        #server.document-root = server_root + "/../../htdocs"

        server.upload-dirs = ( temp_dir )
        index-file.names = ( "index.php", "index.pl", "index.cgi", "index.cml",
                             "index.html", "index.htm", "default.htm" )
        server.event-handler = "libev"
        url.access-deny = ( "~", ".inc" )

        $HTTP["url"] =~ "\.pdf$" {
            server.range-requests = "disable"
        }

        static-file.exclude-extensions = ( ".php", ".pl", ".cgi" )
        server.errorlog = server_root + "/logs/error.log"

        ######### Options that are good to be but not necessary to be changed #######
        dir-listing.activate = "enable"

        #### CGI module
        cgi.assign = ( ".php" => server_root + "/../PHP/php-cgi.exe" )

        status.status-url = "/server-status"
        status.config-url = "/server-config"
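
    "No input file specified" comes from php-cgi itself when the path handed to it in SCRIPT_FILENAME doesn't resolve to a readable file -- and unresolved ".." segments in the document root are a plausible trigger on Windows. A hedged workaround is to give lighttpd an absolute, already-normalized path instead of building one with "/../..":

        # hypothetical sketch: no ".." left for php-cgi to trip over
        var.basedir = "C:/projects"
        server.document-root = var.basedir + "/htdocs"

    If the path must stay relative to server_root, checking what actually arrives is quick: a test page containing <?php var_dump($_SERVER['SCRIPT_FILENAME']); ?> served from the working layout shows the exact string php-cgi receives, and the broken layout can be compared against it.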

  • nginx with fail2ban and mod_security

    - by Mahesh
    I forgot to update my fail2ban config for nginx -- I just moved to nginx from Apache. Today, I got a lot of calls from a single IP:

      - The IP tried to access login pages with POST and GET methods
      - The IP tried to use nginx as a proxy (GET http://...)
      - The IP searched the images, js, and css folders
      - The IP tried to inject -d url_allow_fopen=1 and something similar

    Most of the calls ended with 404. I got approximately 50 requests from that IP in a single second, so I updated my nginx like this:

        http {
            limit_req_zone $binary_remote_addr zone=app:10m rate=5r/s;
            ...
            server {
                ...
                location / {
                    limit_req zone=app burst=50;
                }

    Will that avoid too many connections per second now? I have updated my fail2ban jail.local to support nginx, but I am confused by the nginx-noscript.conf:

        [Definition]
        failregex = ^<HOST> -.*GET.*(\.php|\.asp|\.exe|\.pl|\.cgi|\scgi)
        ignoreregex =

    I am serving PHP with nginx. I checked Apache's noscript.conf, and it has the .php extension in it too. I tested the above settings before restarting fail2ban and got thousands of IPs matched; I removed php and nothing matched. Do I need .php| in nginx-noscript.conf? Does using mod_security and fail2ban together cause any problems? While searching today I came to know that mod_security is available for nginx too, so I am planning to use it as well.
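
    On the noscript question: that filter exists to catch probes for script types a server doesn't serve, so on a box that legitimately serves PHP, keeping \.php| in the pattern is exactly what made thousands of innocent IPs match. A hedged version of the filter with PHP removed:

        [Definition]
        failregex = ^<HOST> -.*GET.*(\.asp|\.exe|\.pl|\.cgi|\scgi)
        ignoreregex =

    The limit_req zone above handles the raw request-rate side independently -- with rate=5r/s and burst=50, a 50-requests-in-a-second spike is queued rather than refused; adding nodelay to the limit_req line makes nginx reject the overflow immediately instead.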

  • Using nginx to rewrite urls inside outgoing responses

    - by Kev
    We have a customer with a site running on Apache. Recently the site has been seeing increased load, and as a stop-gap we want to shift all the static content on the site to a cookieless domain, e.g. http://static.thedomain.com.

    The application is not well understood. So, to give the developers time to amend the code to point their links to the static content server (http://static.thedomain.com), I thought about proxying the site through nginx and rewriting the outgoing responses such that links to /images/... are rewritten as http://static.thedomain.com/images/....

    So for example, in the response from Apache to nginx there is a blob of headers + HTML. In the HTML returned from Apache we have <img> tags that look like:

        <img src="/images/someimage.png" />

    I want to transform this to:

        <img src="http://static.thedomain.com/images/someimage.png" />

    so that the browser, upon receiving the HTML page, then requests the images directly from the static content server. Is this possible with nginx (or HAProxy)? I have had a cursory glance through the docs, but nothing jumped out at me except rewriting inbound URLs.
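
    nginx can do this to response bodies with the sub module (ngx_http_sub_module, compiled in with --with-http_sub_module). A hedged sketch of the stop-gap proxy (the upstream name is a placeholder):

        server {
            listen 80;
            server_name thedomain.com;

            location / {
                proxy_pass http://apache-backend;
                # sub_filter only sees plain text, so stop Apache gzipping first
                proxy_set_header Accept-Encoding "";

                sub_filter 'src="/images/' 'src="http://static.thedomain.com/images/';
                sub_filter_once off;   # rewrite every occurrence, not just the first
            }
        }

    The quoted-prefix match keeps it from touching anything but src attributes; it's crude compared to fixing the templates, but as a bridge while the developers amend the code it does the job.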

  • How can I prevent a DDOS attack on Amazon EC2?

    - by cwd
    One of the servers I use is hosted on the Amazon EC2 cloud. Every few months we appear to have a DDOS attack on this server. This slows the server down incredibly. After around 30 minutes, and sometimes a reboot later, everything is back to normal.

    Amazon has security groups and a firewall, but what else should I have in place on an EC2 server to mitigate or prevent an attack? From similar questions I've learned:

      - Limit the rate of requests/minute (or seconds) from a particular IP address via something like iptables (or maybe UFW?)
      - Have enough resources to survive such an attack -- or -- possibly build the web application so it is elastic / has an elastic load balancer and can quickly scale up to meet such high demand
      - If using MySQL, set up MySQL connections so that they run sequentially, so that slow queries won't bog down the system

    What else am I missing? I would love information about specific tools and configuration options (again, using Linux here), and/or anything that is specific to Amazon EC2.

    ps: Notes about monitoring for DDOS would also be welcomed -- perhaps with Nagios? ;)
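
    For the first item on the list, the iptables "recent" module gives per-source rate limiting without any extra software. A hedged sketch that caps new HTTP connections per client IP (the thresholds are arbitrary and worth tuning against real traffic):

        # track each source IP that opens a new connection to port 80
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set --name HTTP

        # drop sources that open more than 20 new connections in 10 seconds
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
            -m recent --update --seconds 10 --hitcount 20 --name HTTP -j DROP

    This only blunts attacks from a modest number of sources; a distributed flood that saturates the instance's network link has to be absorbed upstream (e.g. behind an Elastic Load Balancer), which is why the elasticity item on the list matters as much as the firewall.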
