Search Results

Search found 34016 results on 1361 pages for 'static content'.


  • Need a recommendation for shared storage on auto-scaling ec2 w/ scalr

    - by john h.
    I have come across so many answers to this question that I am completely lost! I am moving our two sites to a load-balanced EC2 system with Scalr as our cloud manager. Now the question is coming up about persistent storage for users' uploaded content and other files. Could someone please give me a suggestion, and possibly a link to a tutorial, for the following setup and goals?

    - 2 websites (1 forum, 1 e-commerce)
    - 1 load balancer
    - 1 app server (to scale out to as many as needed)
    - 1 DB server (to scale out to as many as needed)

    Our sites will need to autoscale, and according to what I am learning about Scalr, that means as new instances load up, I need to run a script to set up the basics on that server (git, PHP mods, pull the site from git, move keys, etc.). What I don't understand is how I should handle user-uploaded content like profile pictures, avatars, product images, themes, etc. Do I mount an EBS or s3fs folder to hold the websites (maybe /var/www/websitefolder), or do I do something like mount just the avatar folders (/var/www/websitefolder/images/avatars)? I am not sure where to go with this. Could someone give me some detailed help? -John
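
    One common approach (a suggestion, not from the question itself) is to keep the application code on each instance and mount only the shared upload directories from S3 via s3fs, so every autoscaled instance sees the same files. A minimal sketch, assuming the s3fs-fuse tool is installed; the bucket name my-site-uploads is hypothetical:

        # mount an S3 bucket over the shared uploads directory (s3fs-fuse)
        s3fs my-site-uploads /var/www/websitefolder/images/avatars \
            -o passwd_file=/etc/passwd-s3fs -o allow_other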


  • Debian, 2 NICs load-balancing or agregating with one same gateway

    - by pouney
    Hi, I have one server with two NICs connected to one switch, both using the same gateway. Behind the switch we have the internet:

        |Debian| - eth0 ---\
        |      |            switch --- internet
        |      | - eth1 ---/

    I don't understand how to load-balance between eth0 and eth1; the inbound/outbound traffic always uses eth1. This is the config:

        # The primary network interface
        allow-hotplug eth0
        auto eth0
        iface eth0 inet static
            address 192.168.248.82
            netmask 255.255.255.240
            network 192.168.248.80
            broadcast 192.168.248.95
            gateway 192.168.248.81

        allow-hotplug eth1
        auto eth1
        iface eth1 inet static
            address 192.168.248.83
            netmask 255.255.255.240
            network 192.168.248.80
            broadcast 192.168.248.95
            gateway 192.168.248.81

    Kernel IP routing table:

        Destination     Gateway         Genmask          Flags Metric Ref Use Iface
        192.168.248.80  0.0.0.0         255.255.255.240  U     0      0   0   eth1
        192.168.248.80  0.0.0.0         255.255.255.240  U     0      0   0   eth0
        0.0.0.0         192.168.248.81  0.0.0.0          UG    0      0   0   eth1
        0.0.0.0         192.168.248.81  0.0.0.0          UG    0      0   0   eth0

    The IPs aren't real; they're just for the example. Does anybody have an idea of the correct routing to make 192.168.248.82 use eth0 and 192.168.248.83 use eth1? I have many examples for multiple gateways, but here the gateway is the same. Thanks all. Regards
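
    The technique that usually addresses this (an assumption on my part, not from the question) is iproute2 source-based policy routing, so replies leave via the interface that owns the source address. A minimal sketch:

        # declare two routing tables (names are arbitrary) in /etc/iproute2/rt_tables:
        #   1 rt_eth0
        #   2 rt_eth1
        ip route add 192.168.248.80/28 dev eth0 src 192.168.248.82 table rt_eth0
        ip route add default via 192.168.248.81 dev eth0 table rt_eth0
        ip route add 192.168.248.80/28 dev eth1 src 192.168.248.83 table rt_eth1
        ip route add default via 192.168.248.81 dev eth1 table rt_eth1
        # select the table by source address
        ip rule add from 192.168.248.82 table rt_eth0
        ip rule add from 192.168.248.83 table rt_eth1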


  • legitimacy of the tasks in the task scheduler

    - by Eyad
    Is there a way to know the source and legitimacy of the tasks in the Task Scheduler in Windows Server 2008 and 2003? Can I check whether a task was added by Microsoft (e.g. by SCCM) or by a third-party application?

    For each task in the Task Scheduler, I want to verify that the task has not been created by a third-party application. I only want to allow standard Microsoft tasks and disable all other non-standard tasks.

    I have created a PowerShell script that goes through all the XML files in the C:\Windows\System32\Tasks directory, and I was able to read all the XML task files successfully, but I am stuck on how to validate the tasks. Here is the script for your reference:

        Function TaskSniper() {
            # Getting all the files in the Tasks folder
            $files = Get-ChildItem "C:\Windows\System32\Tasks" -Recurse | Where-Object {!$_.PSIsContainer};
            [Xml] $StandardXmlFile = Get-Content "Edit Me";

            foreach($file in $files) {
                # Constructing the file path
                $path = $file.DirectoryName + "\" + $file.Name

                # Reading the file as an XML doc
                [Xml] $xmlFile = Get-Content $path

                # DS SEE: http://social.technet.microsoft.com/Forums/en-US/w7itprogeneral/thread/caa8422f-6397-4510-ba6e-e28f2d2ee0d2/
                # (get-authenticodesignature C:\Windows\System32\appidpolicyconverter.exe).status -eq "valid"

                # Display something
                $xmlFile.Task.Settings.Hidden
            }
        }

    Thank you
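
    One way to build on the Get-AuthenticodeSignature hint already in the script (a sketch, reusing the $xmlFile variable from the loop above; the Microsoft-subject check is an assumption, not an official whitelist) is to verify the signer of the binary each task launches via its <Command> element:

        # resolve the task's target binary and check who signed it
        $exe = [Environment]::ExpandEnvironmentVariables($xmlFile.Task.Actions.Exec.Command).Trim('"')
        $sig = Get-AuthenticodeSignature -FilePath $exe
        if ($sig.Status -eq 'Valid' -and $sig.SignerCertificate.Subject -match 'O=Microsoft') {
            "Microsoft-signed: $exe"
        }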


  • Apache reverse proxy with VirtualHost not serving a page

    - by Mr Aleph
    I have an Apache reverse proxy set up to pass requests to a Tomcat applet. The config is similar to:

        <VirtualHost 100.100.100.100:80>
            ProxyPass        /AppName/App http://1.1.1.1/AppName/App
            ProxyPassReverse /AppName/App http://1.1.1.1/AppName/App
        </VirtualHost>

    I also have a page called summary.html that exists on 1.1.1.1 as http://1.1.1.1/AppName/summary.html. When I browse directly to it I have no problem viewing it, but if I try to get there via the reverse proxy I get a blank page. Wireshark shows me a 503, but this one is coming from the Apache reverse proxy (IP 100.100.100.100) and not from Tomcat (IP 1.1.1.1).

    Should I add http://1.1.1.1/AppName/ to the config? How? I tried it, but I get a blank page, and that attempt shows the internal IP of the Tomcat server in the browser's URL bar, so, no go. Help is appreciated. Thanks.

    EDIT: This is the dump from Wireshark:

        GET /AppName/ HTTP/1.1
        Host: 100.100.100.100
        User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/534.52.7 (KHTML, like Gecko) Version/5.1.2 Safari/534.52.7
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Cache-Control: max-age=0
        Accept-Language: en-us
        Accept-Encoding: gzip, deflate
        Connection: keep-alive

        HTTP/1.1 404 Not Found
        Date: Tue, 30 Jan 2012 09:08:51 GMT
        Server: Apache
        Content-Length: 1
        Connection: close
        Content-Type: text/html; charset=iso-8859-1
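
    Since only /AppName/App is proxied, requests for /AppName/summary.html never reach Tomcat and are answered by the proxy's own docroot. A sketch of one way this is often resolved (not a confirmed fix for this setup): proxy the whole application prefix, with ProxyPassReverse keeping the browser on the proxy's address:

        <VirtualHost 100.100.100.100:80>
            # proxy the whole application path, not just /AppName/App,
            # so /AppName/summary.html is forwarded too
            ProxyPass        /AppName/ http://1.1.1.1/AppName/
            ProxyPassReverse /AppName/ http://1.1.1.1/AppName/
        </VirtualHost>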


  • Squid reverse proxy array - siblings not communicating with each other

    - by V. Romanov
    I want to set up two Squid servers to act as a reverse proxy and cache for a webserver on our intranet. The load balancing will be done with DNS round robin, or just different mappings for different clients. The thing is, I want both servers to try to contact each other to see if they have the required object in cache before contacting the webserver for it (the network that serves the webserver is the bottleneck, and I'm trying to eliminate it). Both Squids are configured the same; here are the relevant config lines:

        acl dvr1_cache_it_best_tv_com dstdomain dvr1.cache.it.best-tv.com
        acl squid1_it_best_tv_com dstdomain squid1.it.best-tv.com
        acl squid2_it_best_tv_com dstdomain squid2.it.best-tv.com

        http_access allow dvr1_cache_it_best_tv_com
        http_access allow squid1_it_best_tv_com
        http_access allow squid2_it_best_tv_com
        http_access allow all

        http_port 8081 accel defaultsite=dvr1.cache.it.best-tv.com

        cache_peer dvr1.origin.it.best-tv.com parent 80 0 no-query originserver name=Proxy_dvr1_origin_it_best_tv_com
        cache_peer squid1.it.best-tv.com sibling 8081 3130 weight=10 name=Proxy_Squid1_it_best_tv_com
        cache_peer squid2.it.best-tv.com sibling 8081 3130 weight=10 name=Proxy_Squid2_it_best_tv_com

        cache_peer_access Proxy_dvr1_origin_it_best_tv_com allow dvr1_cache_it_best_tv_com
        cache_peer_access Proxy_squid1_it_best_tv_com allow squid1_it_best_tv_com
        cache_peer_access Proxy_squid1_it_best_tv_com allow squid2_it_best_tv_com
        cache_peer_access Proxy_squid1_it_best_tv_com allow dvr1_cache_it_best_tv_com
        cache_peer_access Proxy_squid2_it_best_tv_com allow squid1_it_best_tv_com
        cache_peer_access Proxy_squid2_it_best_tv_com allow squid2_it_best_tv_com
        cache_peer_access Proxy_squid2_it_best_tv_com allow dvr1_cache_it_best_tv_com

    Just to make it clear: dvr1.cache is the alias for the proxy servers, and dvr1.origin is the web server. Both servers work, both serve and cache content, and all is fine. However, when I clear the cache on one server and then access it, it gets the content from the parent (dvr1.origin) instead of going to the sibling Squid. What did I configure wrong? Or perhaps I don't understand the architecture correctly? I read the Squid manuals, and as far as I can see I did it all by the book, yet it doesn't work right. Any help will be appreciated!
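
    Two things worth checking. First, note that the cache_peer lines define names like Proxy_Squid1_it_best_tv_com while the cache_peer_access lines reference Proxy_squid1_it_best_tv_com with a lowercase "s"; if Squid treats these names case-sensitively, those access rules never match. Second (an assumption, since the excerpt doesn't show it), sibling lookups over port 3130 require each Squid to actually run an ICP listener and accept ICP queries from its peer. A minimal sketch for both squid.conf files:

        # enable the ICP listener the sibling cache_peer entries point at
        icp_port 3130
        # allow ICP queries while testing; tighten to the peer's address in production
        icp_access allow all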


  • Default IPv6 route on debian squeeze does not come up after boot

    - by Georg Bretschneider
    I have a problem with my default IPv6 route not coming up after boot on a Debian Squeeze system. This is my config (/etc/network/interfaces):

        # Loopback device:
        auto lo
        iface lo inet loopback
        iface lo inet6 loopback

        # device: br0
        auto br0
        iface br0 inet static
            bridge_ports eth0
            bridge_fd 0
            address 88.198.62.xx
            broadcast 88.198.62.63
            netmask 255.255.255.224
            gateway 88.198.62.33
            up route add -net 88.198.62.32 netmask 255.255.255.224 gw 88.198.62.33 br0

        iface br0 inet6 static
            address 2a01:4f8:131:10x::2
            netmask 64
            gateway 2a01:4f8:131:100::1
            up route -A inet6 add 2a01:4f8:131:100::1/59 dev br0

    My inet side comes up alright, but I have to run the route command manually after boot to make IPv6 work; otherwise I can't even reach my gateway. This is the output of ip -6 route show after boot:

        2a01:4f8:131:10x::/64 dev br0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 4294967295
        unreachable fe80::/64 dev lo  proto kernel  metric 256  error -101 mtu 16436 advmss 16376 hoplimit 4294967295
        fe80::/64 dev br0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 4294967295
        fe80::/64 dev eth0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 4294967295

    I already tried:

        up ip -6 route add 2a01:4f8:131:100::1 dev br0
        up ip -6 route add default via 2a01:4f8:131:100::1 dev br0

    in /etc/network/interfaces, but with the same results. If I execute those commands manually in my shell, everything starts working nicely. And yes, I tried post-up instead of up, too. The only other change I made was to activate IP forwarding for IPv6, because I want to run some LXC containers on that system.
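
    One plausible cause (an assumption; the output above doesn't prove it) is timing: on a bridge, up/post-up commands can fire while the bridge is still settling and IPv6 duplicate address detection is running, so the route via the off-subnet gateway fails once and is never retried, while the same commands succeed later from a shell. A sketch that simply waits first:

        iface br0 inet6 static
            address 2a01:4f8:131:10x::2
            netmask 64
            # give the bridge time to finish coming up / DAD before routing
            post-up sleep 5
            post-up ip -6 route add 2a01:4f8:131:100::1 dev br0
            post-up ip -6 route add default via 2a01:4f8:131:100::1 dev br0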


  • OpenVZ with bridged interfaces and VLAN

    - by Deimosfr
    Hi, I've got a problem with OpenVZ with a bridged VLAN. Here is my configuration:

        WAN ---> OpenBSD Router/Firewall ---> Debian OpenVZ host
                                              br0 --------> VE101
                                              br1 --------> VE102
                                              br0.110 ----> VE103.110 (VLAN 110)

    I can't make the VLAN work on br0 (br0.110), and I would like to understand why. I don't have any switch, so this is not a problem with an unmanageable switch. I've configured a VLAN interface on OpenBSD in /etc/hostname.vlan110:

        inet 192.168.110.254 255.255.255.0 NONE vlan 110 vlandev sis1

    and it seems to be working fine. I've also adapted my PF configuration to work with the VLAN, but I don't see any incoming traffic. On my Debian Lenny host, here is my interfaces configuration:

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # br0
        auto br0
        iface br0 inet static
            address 192.168.100.1
            netmask 255.255.255.0
            gateway 192.168.100.254
            network 192.168.100.0
            broadcast 192.168.100.255
            bridge_ports eth0
            bridge_fd 9
            bridge_hello 2
            bridge_maxage 12
            bridge_stp off

        # VLAN 110
        auto br0.110
        iface br0.110 inet static
            address 192.168.110.1
            netmask 255.255.255.0
            network 192.168.110.0
            gateway 192.168.110.254
            broadcast 192.168.110.255
            pre-up vconfig add br0 110
            post-down vconfig rem br0.110

    It looks OK, but when I start my VE, here is the message:

        ...
        Configure veth devices: veth103.0
        Adding interface veth103.0 to bridge br0.110 on CT0 for VE103
        can't add veth103.0 to bridge br0.110: Operation not supported
        VE start in progress...

    So I've got one error here. I've followed this documentation: http://wiki.openvz.org/VLAN but it doesn't work. I've certainly missed something, but I don't know what. Could someone help me, please? Thanks
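
    The "Operation not supported" hints at the layering: br0.110 is a VLAN sub-interface of a bridge, not a bridge itself, so a veth port can't be enslaved to it. The usual layout (a sketch, assuming the vlan package and 8021q module are available; br110 is a hypothetical name) is to tag on the physical NIC and bridge the tagged interface instead:

        # VLAN 110: tag on the NIC, then bridge the tagged interface;
        # point the VE's veth at br110 rather than br0.110
        auto br110
        iface br110 inet static
            address 192.168.110.1
            netmask 255.255.255.0
            bridge_ports eth0.110
            bridge_stp off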


  • Varnish with multiple sites/boxes

    - by jerhinesmith
    Is it possible for Varnish to direct traffic to different IPs based on the URL? For example, is the following setup feasible (and if so, what would the VCL look like)?

    - *.example.com points to the Varnish IP address
    - When a request is made to foo.example.com, Varnish checks the cache and sends the request to Server1's IP address on a cache miss.
    - When a request is made to bar.example.com, Varnish checks the cache and sends the request to Server2's IP address on a cache miss.

    foo and bar are (for the most part) completely unrelated sites. They use the same engine, but have different content and their own distinct databases. Since there was previously no penalty for doing so (other than cost), we split them onto two separate boxes so that a ton of traffic to foo won't have a negative impact on visitors browsing around bar.

    I could set up two instances of Varnish and have one serve foo's static content and the other serve bar's, but as there doesn't seem to be much overhead to running Varnish, I think (perhaps mistakenly) that it would make more sense to go with one Varnish server that directs the traffic to the appropriate box on a cache miss.
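
    A minimal sketch of the host-based routing in Varnish 2/3 VCL (in Varnish 4+ this becomes set req.backend_hint); the backend addresses are placeholders:

        # one backend per box
        backend server1 { .host = "10.0.0.1"; .port = "80"; }
        backend server2 { .host = "10.0.0.2"; .port = "80"; }

        sub vcl_recv {
            # pick the backend from the Host header; cache lookups still
            # happen as normal, so only misses hit the chosen box
            if (req.http.host ~ "^foo\.example\.com$") {
                set req.backend = server1;
            } elsif (req.http.host ~ "^bar\.example\.com$") {
                set req.backend = server2;
            }
        }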


  • How can I get web pages from sub.a.com using url sub.b.com?

    - by Steven
    I have developed www.mysite.com. This site can be "integrated" into my partners' websites. What I do is create partner1.mysite.com and replace my header and footer with my partner's header and footer, and replace some CSS styling. This should make it as transparent as possible for the user, so that they think they are still browsing my partner's website. There are two ways I see to accomplish this:

    1. My partner uses an IFrame to show the content from partner1.mysite.com
    2. My partner creates a sub domain and points it to my sub domain

    Solution 1 is easy, but I'm not sure how search engines like it, so I will try solution 2.

    QUESTION: Can I use mysite.partner1.com but read content from partner1.mysite.com? I don't want to forward/redirect users to partner1.mysite.com. It's important that the URL stays mysite.partner1.com / mysite.partner1.com/some/page. Is this possible?

    For testing, I have an Apache configuration more or less like this:

        NameVirtualHost 10.0.0.17

        <VirtualHost 10.0.0.17>
            DocumentRoot D:/wamp/www/mysite/
            ServerName mysite.com
        </VirtualHost>

        <VirtualHost 10.0.0.17>
            DocumentRoot D:/wamp/www/mysite/
            ServerName site1.mysite.com
        </VirtualHost>

        # Since this is on my localhost, I also configure site1 here
        <VirtualHost 10.0.0.17>
            DocumentRoot D:/wamp/www/site1/
            ServerName site1.com
        </VirtualHost>

        <VirtualHost 10.0.0.17>
            ServerName mysite.site1.com
            # --> DO SOME SORT OF FORWARDING HERE <--
        </VirtualHost>
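
    A sketch of one way to fill in that last vhost, assuming mod_proxy and mod_proxy_http are enabled: a reverse proxy fetches the content from the other subdomain while the browser's address bar keeps mysite.site1.com (no redirect involved):

        <VirtualHost 10.0.0.17>
            ServerName mysite.site1.com
            # fetch content from the partner-branded subdomain transparently
            ProxyPass        / http://site1.mysite.com/
            ProxyPassReverse / http://site1.mysite.com/
        </VirtualHost>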


  • Require a very simple bash-based webserver for logging XML POST [on hold]

    - by Syffys
    As the title says, it's for testing purposes, and I need it to be extremely light (one line to one single light file). Here is a sample XML query:

        XML_QUERY=$(cat <<EOF
        <?xml version='1.0' encoding='UTF-8'?>
        <Test></Test>
        EOF
        )
        curl -H "Content-type: text/xml; charset=utf-8" -H "Soapaction: \"\"" -k -d "${XML_QUERY}" http://localhost:8088

    Here are some of the tracks I have found so far, even if I wasn't able to adapt them to work as I expect:

    - Netcat minimal webserver: the problem is that my nc does not have the -q option, so the connection closes before delivering the XML content
    - Netcat-only webserver: same as above

    Thanks in advance!

    EDIT: As has been asked: I'm running Linux Red Hat, even if the distro does not really matter and the OS is implied, since I'm asking for a bash-based solution... Also, about my topic being on hold ("Instead, describe your situation and the specific problem you're trying to solve"): I thought that was exactly what I was doing, but OK, I'll reword. My situation: a bash environment (which can also include some standard Linux tools: netcat, python or whatever). My specific problem: please see the title: I require a very simple bash-based webserver for logging XML in HTTP POST, for testing purposes.
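
    Since the EDIT allows standard tools like python, here is a minimal sketch of an HTTP sink in Python 2 (the stock interpreter on older Red Hat) that sidesteps the nc -q problem entirely; the file name and log path are arbitrary:

        # save as xmlsink.py and run: python xmlsink.py
        import BaseHTTPServer

        class Sink(BaseHTTPServer.BaseHTTPRequestHandler):
            def do_POST(self):
                # read exactly the POSTed body and append it to the log
                length = int(self.headers.getheader('Content-Length') or 0)
                body = self.rfile.read(length)
                open('/tmp/xml_post.log', 'a').write(body + '\n')
                # answer with an empty 200 so curl completes cleanly
                self.send_response(200)
                self.end_headers()

        BaseHTTPServer.HTTPServer(('', 8088), Sink).serve_forever()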


  • Configuring network route between two routers on home network

    - by Paul
    I have a home network. The main router connected to the internet (with wifi) is a Netopia box; connected to it is a Linksys router. Everything currently works: I can connect via the wireless network and get to the internet, and machines connected to the Linksys can communicate with each other and reach the internet. Both routers are configured to serve addresses via DHCP (Netopia: 192.168.1.1 - 192.168.1.99; Linksys: 192.168.0.1 - 192.168.0.100). Here's how they are connected:

        Internet <-> Netopia w/wifi (192.168.1.254) <-> Linksys (192.168.0.1)

    I decided I really need to allow wireless connections to also communicate with machines behind the Linksys router. Currently the Linksys is configured to obtain an IP address via DHCP. I thought this would be straightforward. I configured the Linksys to have a static IP address:

        IP:   192.168.1.100
        Mask: 255.255.255.0
        GW:   192.168.1.254

    Then I configured a static route on the Netopia:

        Network: 192.168.0.0
        Mask:    255.255.255.0
        GW:      192.168.1.100

    So it should now look like this:

        Internet <-> Netopia w/wifi (192.168.1.254) <-> (192.168.1.100) Linksys (192.168.0.1)

    I reset both routers. I cannot ping the Netopia (192.168.1.254) from inside the Linksys network, and if I attempt to ping 192.168.0.1 from a wifi connection I get a "Destination host not available" error. Obviously I'm missing something, but I'm not sure what. Any ideas on what I'm missing?


  • Apache caching with mod_headers mod_expires

    - by Aaron Moodie
    Hi, I'm working on homework for uni and was hoping someone could clarify something for me. I need to set up the following:

    1. Configure the response header "Cache-Control" to have a "max-age" value of 7 days since access for all image files.
    2. Configure the response header "Cache-Control" to have a "max-age" value of 5 days since modification for all static HTML files.
    3. Configure the response header "Cache-Control" to have a value of "public" for all static HTML and image files.
    4. Configure the response header "Cache-Control" to have a value of "private" for all PHP files.

    My question is whether it is better to use FilesMatch or the mod_expires ExpiresByType to best achieve this. I've so far used the following:

        <FilesMatch "\.(gif|jpe?g|png)$">
            ExpiresDefault "access plus 7 days"
            Header set Cache-Control "public"
        </FilesMatch>

        <FilesMatch "\.(html)$">
            ExpiresDefault "modification plus 5 days"
            Header set Cache-Control "public"
        </FilesMatch>

        <FilesMatch "\.(php)$">
            Header set Cache-Control "private"
        </FilesMatch>

    Thanks.
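
    For comparison, a sketch of the ExpiresByType form of the same rules. Note that mod_expires only acts when ExpiresActive On is set, which the excerpt above doesn't show, so it is needed for either approach:

        # mod_expires must be switched on for ExpiresDefault/ExpiresByType to work
        ExpiresActive On

        # per-MIME-type variant of the same policy
        ExpiresByType image/gif  "access plus 7 days"
        ExpiresByType image/jpeg "access plus 7 days"
        ExpiresByType image/png  "access plus 7 days"
        ExpiresByType text/html  "modification plus 5 days"

    The practical difference: FilesMatch keys on the file extension, while ExpiresByType keys on the MIME type Apache assigns, so the latter also covers files served with those types under other extensions.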


  • TOR Proxy / Vidalia "New Identity" button not working

    - by Yisman
    I need to hide my IP from time to time. In Vidalia, I click on "New Identity", then check http://myip.ozymo.com/ to see if my IP address has changed. But no, it hasn't. Why is that? And how can this be fixed?

    I tried waiting until the button gets re-enabled, to make sure it has finished processing the command, but the IP address is still the same. In Fiddler each request is tracked, so it's not a cached response; it's re-requested, but simply does not change. Fiddler does show one interesting thing, though. Here is the raw response of many of the requests:

        HTTP/1.1 200 OK
        Content-Length: 13
        Date: Mon, 23 May 2011 12:02:57 GMT
        Server: Apache
        X-Powered-By: PHP/5.2.14
        Content-Type: text/html; charset=UTF-8
        Age: 1
        Connection: keep-alive
        Warning: 110 localhost:8118 Object is stale

        26.32.120.106

    What is this warning? And is this the cause?
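
    Port 8118 is the default Privoxy port, so the "Object is stale" warning suggests the reply is being served by that local HTTP proxy rather than fetched over a fresh Tor circuit (an inference from the Warning header, not a certain diagnosis). A quick check that bypasses Privoxy by talking to Tor's SOCKS listener directly:

        # fetch through Tor's SOCKS port (default 9050), skipping Privoxy and its cache
        curl --socks5 127.0.0.1:9050 http://myip.ozymo.com/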


  • Redirect 301 fails with a path as destination

    - by Martijn Heemels
    I'm using a large number of Redirect 301s, which are suddenly failing on a new webserver. We're in pre-production tests on the new webserver, prior to migrating the sites, but some sites are failing with 500 Internal Server Error. The content, both databases and files, is mirrored from the old to the new server, so we can test whether all sites work properly.

    I traced this problem to mod_alias' Redirect statement, which is used from .htaccess to redirect visitors and search engines from old content to new pages. Apparently the Apache server requires the destination to be a full URL, including protocol and hostname:

        Redirect 301 /directory/ /target/                          # Not Valid
        Redirect 301 /main.html /                                  # Not Valid
        Redirect 301 /directory/ http://www.example.com/target/   # Valid
        Redirect 301 /main.html http://www.example.com/            # Valid

    This contradicts the Apache documentation for Apache 2.2, which states:

        The new URL should be an absolute URL beginning with a scheme and hostname,
        but a URL-path beginning with a slash may also be used, in which case the
        scheme and hostname of the current server will be added.

    Of course I verified that we're using Apache 2.2 on both the old and the new server. The old server is a Gentoo box with Apache 2.2.11, while the new one is a RHEL 5 box with Apache 2.2.3. The workaround would be to change all paths to full URLs, or to convert the statements to mod_rewrite rules, but I'd prefer the documented behaviour. What are your experiences?
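
    If the documented path-only form stays broken on the older RHEL build, a sketch of the mod_rewrite workaround mentioned above (patterns assume .htaccess context, where the leading slash is stripped from the matched path):

        RewriteEngine On
        # equivalents of the two path-only Redirect statements
        RewriteRule ^directory/(.*)$ /target/$1 [R=301,L]
        RewriteRule ^main\.html$ / [R=301,L]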


  • nginx - proxy_pass is working - Apache isn't doing what it should...

    - by matthewsteiner
    So, I've got this in my nginx.conf:

        location ~* ^.+.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ {
            root /var/www/vhosts/example.com/public/;
            access_log off;
            expires 30d;
        }

        location / {
            proxy_pass http://127.0.0.1:8080/;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

    So any "static file" that exists will just be served by nginx; otherwise, the request should be passed to Apache. Right now, static files are working correctly. However, if a request for example.com or subdomain.example.com is passed to Apache, Apache just spits out the "Apache 2 Test Page" that you get if there's nothing there. Apache worked fine before, so I'm guessing it has to do with the way nginx is "asking". I'm not sure though. Any ideas?
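
    A guess at the missing piece (an assumption; the Apache side isn't shown): since nginx forwards the Host header correctly, Apache must define name-based virtual hosts on the port nginx proxies to, or every Host falls through to the default test page. A sketch for Apache 2.2, with placeholder paths and names:

        NameVirtualHost *:8080
        <VirtualHost *:8080>
            ServerName example.com
            ServerAlias *.example.com
            DocumentRoot /var/www/vhosts/example.com/public
        </VirtualHost>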


  • What are the "least legally restrictive" well-connected countries to host a website?

    - by monster
    NB: I am aware that this question is subjective, as it can't be defined precisely, but the answers should still be "objective": a country name, and what makes it legally safer. EDIT: A) I am located in Germany. B) I am NOT looking for a place to offer pirated software/media; there are no binaries on my site except "profile icons".

    Hello! I want to start publishing "social" websites/apps, and I found that the biggest initial problem is this: any and all services I have to depend on, including domain registrar, DNS provider, server/cloud provider, CDN provider... even my insurance agent, basically say that they can "throw me out" if my website contains "unacceptable" content. It's always phrased in such a way that basically anything can fall under "unacceptable" content. This is very frustrating, because you just can't fully control what users post on your "social website", and so you basically have to expect, when you go to bed, that your site will be gone when you wake up. I've heard a lot of horror stories about this.

    Since the "Terms of Service" of all those providers exist foremost to protect themselves from legal action, and those legal actions depend on the country where they are located, it seems like the first step is to find out which country is the "safest" in which to locate a site. "Safest" being defined as: where I am least likely to get in legal trouble with the local authorities if some user posts something unacceptable in some way. The main restriction is that it should also be a "well-connected" country, because there is no point in being "safe" if my users can't get to my sites or the latency is unacceptable. I am targeting English-speaking people in any country as my future users.


  • Migrating to AWS Cloud with auto-scaling - where to put Redis and ElasticSearch?

    - by RobMasters
    I've been trying to research this topic but haven't found anywhere that recommends where to install services such as Redis and ElasticSearch when migrating to a cloud framework. I'm currently running a Symfony2 application on two static servers: one is running MySQL, and the other is the public-facing web server, which also has Redis and ElasticSearch running on it. Both of these servers are virtualised, but they're static in the sense of not being able to replicate at present (various aspects are still dependent on the local filesystem).

    The goal is to migrate to AWS and use auto-scaling to be able to spin up and kill web servers as required, but I'm not clear on what I should put on each EC2 instance. Should they be single-responsibility only? I.e., set up individual instances for the web server(s), Redis, and ElasticSearch, most likely an RDS instance for MySQL, and only set up auto-scaling on the web server(s)?

    I don't foresee having to scale the ElasticSearch server any time soon, as it's only driving the search functionality, but it's possible that Redis may need to be replicated at some point. Should this be done manually? I'm not sure how it could be done automatically, as each instance needs to be configured to know about its master/slave(s), as far as I know.

    I'd appreciate advice on this. One more quick question while I'm here: how would I be able to deploy code changes when there are X web servers currently active? I'm using a Capifony deployment script (the Symfony2 version of Capistrano), which I think can handle multiple servers easily enough by specifying an array of :domain addresses... but how should this be handled when the number of web servers can vary?
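
    For the last question, one pattern (a suggestion, not from the post) is to build the deploy list at runtime by asking AWS which web instances are currently running, for example by tag, and feed that list to the deployment tool. A sketch using the modern AWS CLI; the role=web tag is hypothetical:

        # list public DNS names of running instances tagged role=web
        aws ec2 describe-instances \
            --filters "Name=tag:role,Values=web" \
                      "Name=instance-state-name,Values=running" \
            --query 'Reservations[].Instances[].PublicDnsName' \
            --output text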


  • How to debug modsecurity_audit_log

    - by max87
    I was accessing www.example.com/RestAPI/index.php/tweets.json on my server. The modsec_audit.log showed the following entry, but there are no related errors/warnings in modsec_debug.log. I can see the Internal Server Error logged in example-error_log. How can I debug this Internal Server Error?

        --8560e90b-A--
        [21/Mar/2012:07:01:52 +0000] T2l84H8AAAEAAGxPZ@QAAAAG x.x.x.x 33101 x.x.x.x 80
        --8560e90b-B--
        GET /RestAPI/index.php/tweets.json HTTP/1.1
        Host: www.example.com
        User-Agent: Mozilla/5.0 (X11; Linux i686; rv:11.0) Gecko/20100101 Firefox/11.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,/;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip, deflate
        DNT: 1
        Cookie: __utma=159129855.1463065063.1331789485.1331789485.1331789485.1; __utmz=159129855.1331789485.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); 8cb6a414cf5ec1919864de0e80bea4da=0es7dcu0p10cocfpferb2lddi0; 8926e4f3c475bb6fcacb409299f1bd27=53cf8c5e6bf78ea45096945377e6d609
        Connection: keep-alive
        Cache-Control: max-age=0
        --8560e90b-F--
        HTTP/1.0 500 Internal Server Error
        X-Powered-By: PHP/5.3.5
        Content-Length: 0
        Connection: close
        Content-Type: text/html; charset=UTF-8
        --8560e90b-H--
        Apache-Handler: php5-script
        Stopwatch: 1332313312358005 130428 (- - -)
        Producer: ModSecurity for Apache/2.5.12 (http://www.modsecurity.org/); core ruleset/2.0.5.
        Server: Apache
        --8560e90b-Z--
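
    One observation (mine, not from the post): the X-Powered-By: PHP header in section F suggests the 500 was generated by the PHP application itself, with ModSecurity merely recording it, so the PHP error log may be the more useful trail. To rule ModSecurity in or out, its debug log verbosity can be raised; a sketch:

        # in the ModSecurity config; level 9 logs everything (very verbose)
        SecDebugLog /var/log/httpd/modsec_debug.log
        SecDebugLogLevel 9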


  • How to route between 2 networks with a server with 2 network cards?

    - by LumenAlbum
    This is the first time I have been faced with routing, and it seems I have hit a dead end. I have the following scenario:

        client1: 192.168.1.10 / 255.255.255.0
            gateway:    192.168.1.100
            DNS server: 192.168.1.100

        client2: 192.168.1.20 / 255.255.255.0
            gateway:    192.168.1.100
            DNS server: 192.168.1.100

        server (Windows Server 2008 R2 with RAS & Routing services enabled):
            network card 1 (connected to a switch along with the clients):
                192.168.1.100 / 255.255.255.0, DNS server: 127.0.0.1
            network card 2 (connected to the router):
                192.168.2.100 / 255.255.255.0, gateway: 192.168.2.1,
                DNS server: 127.0.0.1 (DNS forwarding to 192.168.2.1)

        ISP router (with connection to internet): 192.168.2.1

    In this scenario I have tried to route traffic from the 192.168.1.0/24 network with the clients to the 192.168.2.0/24 network with the router, to connect the clients to the internet. However, no matter what I do, I get no positive ping to the router at 192.168.2.1.

    Ping from 192.168.1.10:
        to 192.168.1.20:  success
        to 192.168.1.100: success
        to 192.168.2.100: success
        to 192.168.2.1:   not reachable

    The routing table contains the two routes 192.168.1.0 and 192.168.2.0 as directly connected. Does anyone know where the routing fails? I have searched different forums but mostly found nothing relevant. One post, however, pointed out that in a similar situation the problem was that the router didn't know the way back, and the internet router needed a static route back to the first router. If that really is the case, I take it there is no solution with my equipment, because the standard ISP router doesn't allow setting any static routes.
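
    The forum post's diagnosis fits the symptoms: 192.168.2.1 likely receives the ping but sends its reply to 192.168.1.10 via its default route (the internet), because it has no route back to 192.168.1.0/24. For illustration only, the missing return route expressed in iproute2 notation (a consumer ISP router would need an equivalent static-route setting in its UI):

        # the return route the ISP router lacks; hypothetical, for illustration
        ip route add 192.168.1.0/24 via 192.168.2.100

    If the ISP router genuinely cannot hold static routes, the usual fallback (an assumption, not tested here) is to enable NAT on the Windows server's RRAS, so traffic from 192.168.1.0/24 leaves NIC 2 with the 192.168.2.100 source address and needs no return route.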


  • How to handle files that don't need version control in mercurial

    - by richardh
    I am new to Mercurial, and for the most part I do LaTeX reports and statistical calculations in R using .csv and/or .sqlite files. Re LaTeX, all I really care about is the .tex file. Re R, I don't need version control on the .csv or .sqlite files because they are static. When I do 'hg add' for a repo with a .csv and/or .sqlite file, I get a warning like:

        rev2.sqlite: up to 3070 MB of RAM may be required to manage this file
        (use 'hg revert rev2.sqlite' to cancel pending addition)

    So I revert, and subsequently use adds like 'hg add -X *.sqlite'. I guess I really have two questions:

    1. Should I ignore these warnings? Because these large files are static, can I just add them to the repo, knowing that the diffs will always be empty, and not worry about wasted resources?
    2. If I should keep excluding these files from the repo, is there a way I can fix this option? I.e., add to my .hgrc file something that always appends an option like '-I *.tex -I *.R' to my 'hg add' commands?

    Thanks!
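
    On the second question, rather than patching 'hg add' options in .hgrc, the usual mechanism (a suggestion, not from the post) is a per-repository .hgignore file, which keeps the static files out of every add and status listing without extra flags:

        # .hgignore in the repository root
        syntax: glob
        *.csv
        *.sqlite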


  • How can I "filter" postfix-generated bounce messages?

    - by Flimzy
    We are using Postfix 2.7 and a custom SMTPD (based on qpsmtpd), in a highly customized configuration, for spam filtering. We have a new requirement to filter Postfix-generated bounces through our custom qpsmtpd process (not so much for content filtering as to process these bounces accordingly). Our current configuration looks (in part) like this (master.cf service entries, only customizations shown):

        2526      inet  n       -       -       -       0       cleanup
        pickup    fifo  n       -       -       60      1       pickup
            -o content_filter=smtp:127.0.0.2

    Our smtpd injects messages into Postfix on port 2526, by speaking directly to the cleanup daemon. And the custom pickup command instructs Postfix to hand off all locally generated mail (from cron, nagios, or other custom scripts) to our custom smtpd.

    The problem is that this configuration does not affect Postfix-generated bounce messages, since they do not go through the pickup daemon. I have tried adding the same content_filter option to the bounce daemon commands, but it does not seem to have any effect:

        bounce    unix  -       -       -       -       0       bounce
            -o content_filter=smtp:127.0.0.2
        defer     unix  -       -       -       -       0       bounce
            -o content_filter=smtp:127.0.0.2
        trace     unix  -       -       -       -       0       bounce
            -o content_filter=smtp:127.0.0.2

    For reference, here is my main.cf file as well:

        biff = no
        # TLS parameters
        smtpd_tls_loglevel = 0
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${queue_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${queue_directory}/smtp_scache
        smtp_tls_security_level = may
        mydestination = $myhostname
        alias_maps = proxy:pgsql:/etc/postfix/dc-aliases.cf
        transport_maps = proxy:pgsql:/etc/postfix/dc-transport.cf
        # This is enforced on incoming mail by QPSMTPD, so this is simply
        # the upper possible bound (also enforced in defaults.pl)
        message_size_limit = 262144000
        mailbox_size_limit = 0
        # We do our own message expiration, but if we set this to 0, then postfix
        # will try each mail delivery only once, so instead we set it to 100 days
        # (which is the max postfix seems to support)
        maximal_queue_lifetime = 100d
        hash_queue_depth = 1
        hash_queue_names = deferred, defer, hold

    I also tried adding the internal_mail_filter_classes option to main.cf, but also to no effect:

        internal_mail_filter_classes = bounce,notify

    I am open to any suggestions, including handling our current content-filtering loop in a different way. If it's not clear what I'm asking, please let me know and I can try to clarify.


  • Mail not piping in postfix

    - by user220912
    I have set up a Postfix server and wanted to test piping mail to my Perl script, where I can make use of it and filter the mails. I wrote a test script for that, which just logs the information in a txt file, but I don't see any changes on sending a mail. My postconf -n output:

        alias_database = hash:/etc/aliases
        append_dot_mydomain = no
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5
        html_directory = no
        inet_interfaces = all
        inet_protocols = all
        mail_owner = postfix
        mailbox_size_limit = 0
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = yantratech.co.in, localhost.localdomain, localhost
        myhostname = tcmailer8.in
        mynetworks = 103.8.128.62, 103.8.128.69/101, 168.100.189.0/28, 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = $mydomain
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        recipient_delimiter = +
        relayhost =
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_tls_cert_file = /etc/pki/tls/certs/tcmailer8.in.cert
        smtpd_tls_key_file = /etc/pki/tls/private/localhost.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
        transport_maps = hash:/etc/postfix/transport
        virtual_alias_maps = hash:/etc/postfix/virtual
        virtual_gid_maps = static:5000
        virtual_mailbox_base = /home/vmail
        virtual_mailbox_domains = /etc/postfix/vhosts
        virtual_mailbox_maps = hash:/etc/postfix/vmaps
        virtual_minimum_uid = 1000
        virtual_uid_maps = static:5000

    Here's my transport file:

        [email protected]    email_route

    My main.cf declaration:

        transport_maps = hash:/etc/postfix/transport

    My master.cf declaration:

        email_route unix - n n - - pipe
            flags=FR user=nobody argv=/etc/postfix/test.php -f $(sender) -- $(recipient)

    And my PHP script:

        #!/usr/bin/php
        <?php
        $fh = fopen('/etc/postfix/testmail.txt','a');
        fwrite($fh, "Hello it works\n");
        fclose($fh);
        ?>

    I am sending mails through telnet on localhost.
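
    Two routine checks that often explain silent pipe failures (assumptions on my part, not diagnostics from the post): a hash: transport map must be compiled after editing, and the script must be executable by the pipe user (nobody), which also needs write access to the log file's directory:

        postmap /etc/postfix/transport    # rebuild transport.db after any edit
        chmod 755 /etc/postfix/test.php   # 'nobody' must be able to execute it
        postfix reload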


  • Why is IIS 7.5 seeing some requests as HTTP/1.0?

    - by Zhaph - Ben Duguid
    While trying to work out why static file compression wasn't working on one of our IIS servers, the error was coming back as "NO_COMPRESSION_10", which translates to:

        Server not configured to compress 1.0 requests

    Looking at the requests in Fiddler, I can see that I'm requesting HTTP 1.1, but everything is being sent back as HTTP 1.0.

    Request (from Chrome, captured via Fiddler):

        GET /css/reset.css HTTP/1.1
        Host: [-----].com
        Connection: keep-alive
        Cache-Control: max-age=0
        If-Modified-Since: Tue, 16 Oct 2012 15:04:34 GMT
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.95 Safari/537.11
        Accept: text/css,*/*;q=0.1
        Referer: http://[-----].com/
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-GB,en;q=0.8,en-US;q=0.6
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    Response from IIS:

        HTTP/1.0 200 OK
        Cache-Control: no-cache, no-store
        Pragma: no-cache
        Content-Type: text/html; charset=utf-8
        Expires: -1
        Server: Microsoft-IIS/7.5
        X-AspNet-Version: 4.0.30319
        X-Powered-By: ASP.NET
        Date: Tue, 11 Dec 2012 11:57:03 GMT
        Connection: close
        Content-Length: 108837

    Other servers with the same host that I'm running this site on all respond with HTTP/1.1. How can I persuade IIS to respond with HTTP/1.1 rather than HTTP/1.0?

    Edit to add: Digging deeper, I can see that some responses from the server are indeed being returned compressed, so I guess really I'm trying to work out why talking to this particular server from our office seems to result in it seeing 1.0 requests, while other servers at the same colo don't?
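
    If the 1.0 requests themselves can't be eliminated (for instance, if an intervening office proxy is downgrading them, as the edit suspects), IIS can at least be told to compress them anyway via the noCompressionForHttp10 attribute of the httpCompression section. A sketch:

        %windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/httpCompression /noCompressionForHttp10:"False" /commit:apphost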


  • Computer Won't Boot Properly, unless in safe mode?

    - by Mr_CryptoPrime
    I bought a computer today and booted it up, but when I did, I only got a blank screen. I checked to make sure it wasn't the monitor by connecting it to my old computer... it worked. I then tried connecting my monitor to both DVI ports and found that the bottom one did work. However, now it just boots up and says "loading windows", and when the login screen is supposed to come up, the screen goes blank and the monitor says "no input, check cord" (or something like that).

    I tried reinstalling Windows, and then I was able to log on normally. I used the CDs and reinstalled all the drivers, then rebooted... now I am stuck right back where I started. I tried taking the RAM out and inserting it into different slots; that didn't fix anything. I was able to boot into Windows using safe mode. I suspected that my ATI Radeon 6950 was the issue and downloaded the drivers, but I can't install them in safe mode. Someone said to install the C++ redistributable, and I tried that to fix the driver installation problem ("failed to load detection driver"), but it wouldn't let me do that either.

    Please, someone help me. I don't want to have to deal with the evil red tape of sending it back for a replacement!

    My computer: http://www.newegg.com/Product/Product.aspx?Item=N82E16883229236
    Driver detection problem: http://www.hardwareheaven.com/hardwareheaven-tools-discussion/174912-failed-load-detection-driver-installation-error.html
    Driver download page: http://sites.amd.com/us/game/downloads/Pages/radeon_win7-64.aspx#1

    I am using Windows 7. Thanks again.


  • How to create a mysql database that can contain any character, also different languages

    - by Jakke
    I'm trying to create a database that has to contain articles in different languages. I'm using MariaDB as my server, and I know bits of SQL. My knowledge doesn't really cover details like the differences between engines (MyISAM, InnoDB, etc.) or character sets (utf8/16/32, latin5/7, etc.). I do know that the character set matters; I guess what I'm looking for is an all-encompassing character set, and an engine that best deals with this type of content.

    Also, is there an advantage in storing articles in multiple data rows (the equivalent of different pages) to make things a little faster, or would you store a whole article in a single data row? Or does that depend on the size of the articles?

    Sorry for my noobish question. I know the information is all out on the internet, but it would take me quite a long time to research and get a grip on everything. It would be cool if someone with experience could give me a little head start and point me in the right direction. This is for an intranet site; consider the content to be somewhat like a blog (and no, I don't want WordPress or something similar at this point). Not sure if it matters, but I tend to create and manipulate my tables with phpMyAdmin, I use Apache as my web server, and it all runs on Linux.
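
    A common starting point for "any character, any language" (a sketch with hypothetical table and column names, assuming a MariaDB/MySQL version recent enough to offer utf8mb4): InnoDB plus the utf8mb4 character set, which covers the full Unicode range, whereas the plain utf8 charset in MySQL/MariaDB is limited to 3 bytes per character:

        CREATE TABLE articles (
            id       INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            language CHAR(5)      NOT NULL,   -- e.g. 'en', 'pt-BR'
            title    VARCHAR(255) NOT NULL,
            body     MEDIUMTEXT   NOT NULL    -- a whole article in one row is fine
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;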

