Search Results

Search found 17420 results on 697 pages for 'static urls'.


  • Squid - Selective reverse proxy and forward proxy

    - by Dean Smith
    I'd like to set up a Squid instance to do selective reverse proxying for a configured list of URLs whilst acting as a normal forward proxy for everything else. We are building new infrastructure, parallel live as it were, and I want a proxy that people can use that will force selected traffic onto the new platform whilst acting as a plain forward proxy for anything else. This makes it very easy for people and systems to test the portions of the new platform we want, without having to change too much: they just use a proxy address. Is such a setup possible?
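
    A rough, untested squid.conf sketch of one way this might be wired up (Squid 3.x syntax; the host names, ACL names and port are placeholders to adapt):

        http_port 3128

        # origin server for the new platform
        cache_peer newplatform.internal parent 80 0 no-query originserver name=newplatform

        # the domains whose traffic should be forced onto the new platform
        acl new_platform dstdomain www.example.com app.example.com
        cache_peer_access newplatform allow new_platform
        cache_peer_access newplatform deny all

        # matched requests must go via the peer; everything else goes direct (plain forward proxy)
        never_direct allow new_platform
        always_direct allow !new_platform

        acl localnet src 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16
        http_access allow localnet
        http_access deny all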


  • nginx rewrite rule for using domain host to redirect to specific internal directory

    - by user85836
    I'm new to Nginx rewrites and looking for help putting together a minimal, working rewrite. We would like to use URLs like 'somecity.domain.com' on campaign materials and have the result go to city-specific content within the 'www' site. So, here are the use cases, if the customer enters:

        www.domain.com (stays) www.domain.com
        domain.com (goes to) www.domain.com
        www.domain.com/someuri (stays the same)
        somecity.domain.com (no URI, goes to) www.domain.com/somecity/prelaunch
        somecity.domain.com/landing (goes to) www.domain.com/somecity/prelaunch
        somecity.domain.com/anyotheruri (goes to) www.domain.com/anyotheruri

    Here's what I've come up with so far, and it partially works. What I can't understand is how to check whether there is no path/URI after the host, and I'm guessing there is probably a far better way to do this:

        if ($host ~* ^(.*?)\.domain\.com) { set $city $1; }
        if ($city ~* www) { break; }
        if ($city !~* www) {
            rewrite ^/landing http://www.domain.com/$city/prelaunch/$args permanent;
            rewrite (.*) http://www.domain.com$uri$args permanent;
        }
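
    One possible shape for this without the if-chains is a separate server block per host class, sketched below (untested; the named capture and the exact redirect targets are assumptions to adjust):

        server {
            listen 80;
            server_name domain.com;
            # bare domain goes to www
            return 301 http://www.domain.com$request_uri;
        }

        server {
            listen 80;
            server_name www.domain.com;
            # ... normal site configuration; URIs stay as they are ...
        }

        server {
            listen 80;
            # any other subdomain; exact names above take priority, so www never lands here
            server_name ~^(?<city>.+)\.domain\.com$;

            # no path/URI after the host
            location = / {
                return 301 http://www.domain.com/$city/prelaunch$is_args$args;
            }
            location = /landing {
                return 301 http://www.domain.com/$city/prelaunch$is_args$args;
            }
            # anything else keeps its URI
            location / {
                return 301 http://www.domain.com$request_uri;
            }
        }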


  • I am trying to rewrite a few links with htaccess

    - by Thorpe Obazee
    I have a few URLs and I need them rewritten to the ones below. These:

        http://domain.net/blog/posts
        http://domain.net/blog/posts/index
        http://domain.net/blog/posts/view/uri/non-working-holiday
        http://domain.net/blog/posts/view/uri/we-no-longer-offer
        http://domain.net/blog/posts/view/uri/festivals
        http://domain.net/blog/posts/view/uri/christmas-is-just-around-the-corner

    should map to:

        http://domain.net/posts/
        http://domain.net/posts/index
        http://domain.net/posts/view/uri/non-working-holiday
        http://domain.net/posts/view/uri/we-no-longer-offer
        http://domain.net/posts/view/uri/festivals
        http://domain.net/posts/view/uri/christmas-is-just-around-the-corner

    I was hoping that my .htaccess would fix this, but it doesn't:

        Options +FollowSymLinks
        IndexIgnore */*
        RewriteEngine on
        RewriteRule ^blog\/(.*)$ posts\/$1 [NC]
        # if a directory or a file exists, use it directly
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # otherwise forward it to index.php
        RewriteRule . index.php
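
    A slightly cleaned-up sketch of the same file (untested; the main changes are dropping the backslash-escaped slashes and adding [L] flags, with [R=301] as an option if a visible redirect rather than an internal rewrite is wanted):

        Options +FollowSymLinks
        IndexIgnore */*
        RewriteEngine on

        # map /blog/... onto /posts/... internally; add R=301 for a visible redirect
        RewriteRule ^blog/(.+)$ posts/$1 [NC,L]

        # if a directory or a file exists, use it directly,
        # otherwise forward the request to index.php
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . index.php [L]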


  • Does aria2 support writing small files in batches?

    - by Jon
    I'm using aria2 to download 8 million JPGs from Flickr. Each image is about 100 KB. I have a list of the URLs of these images in a txt file; the format is: http://farm2.staticflickr.com/1070/1151334893_5a8e7f77f4.jpg I'm wondering whether aria2 supports writing small files in batches, say writing 100 images to disk once all of them have been downloaded into memory, rather than writing every single file the moment its download finishes, because I think writing in batches would be easier on my hard disk. Or can you recommend other software or open-source code for this?
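
    aria2 itself doesn't batch completed files, but its --disk-cache option (available in reasonably recent versions) buffers downloaded data in memory up to a given size before it is flushed to disk, which is close to what is being asked for. A sketch of an invocation; the sizes and paths are placeholder assumptions:

        # -i reads the URL list (one URL per line); --disk-cache buffers data in RAM before writing
        aria2c --input-file=urls.txt \
               --dir=/data/flickr \
               --max-concurrent-downloads=16 \
               --disk-cache=64M \
               --file-allocation=none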


  • Migrating to AWS Cloud with auto-scaling - where to put Redis and ElasticSearch?

    - by RobMasters
    I've been trying to research this topic but haven't found anywhere that recommends where to install services such as Redis and ElasticSearch when migrating to a cloud framework. I'm currently running a Symfony2 application on 2 static servers: one is running MySQL and the other is the public-facing web server, which also has Redis and ElasticSearch running on it. Both of these servers are virtualised, but they're static in the sense that they can't be replicated at present (various aspects are still dependent on the local filesystem).

    The goal is to migrate to AWS and use auto-scaling to be able to spin up and kill web servers as required, but I'm not clear on what I should put on each EC2 instance. Should they be single-responsibility only? I.e. set up individual instances for the web server(s), Redis and ElasticSearch, most likely an RDS instance for MySQL, and only set up auto-scaling on the web server(s)? I don't foresee having to scale the ElasticSearch server any time soon as it's only driving the search functionality, but it's possible that Redis may need to be replicated at some point. Should this be done manually? I'm not sure how it could be done automatically, as each instance needs to be configured to know about its master/slave(s) as far as I know. I'd appreciate advice on this.

    One more quick question while I'm here: how would I deploy code changes when there are X web servers currently active? I'm using a Capifony deployment script (the Symfony2 version of Capistrano), which I think can handle multiple servers easily enough by specifying an array of :domain addresses... but how should this be handled when the number of web servers can vary?
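
    For the last point, one pattern is to have the deploy script ask AWS for the currently running, appropriately tagged instances instead of hard-coding hosts. A rough Capistrano 2 / Capifony deploy.rb sketch; the tag name, region and CLI tooling are assumptions, not a recipe:

        # Query AWS for running instances tagged as web servers and register them as :web roles.
        hosts = `aws ec2 describe-instances --region eu-west-1 \
          --filters "Name=tag:Role,Values=web" "Name=instance-state-name,Values=running" \
          --query "Reservations[].Instances[].PublicDnsName" --output text`.split

        role :web, *hosts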


  • How can we monitor an HTTPS URL?

    - by Animesh
    A couple of our recent customers have had their applications configured for HTTPS only. Currently we are using a tool which does a good job of monitoring customers' app-server state. For the existing customers, HTTP URLs also work, so the tool can monitor their health. But the recent ones have only HTTPS enabled, so the tool automatically fails. To this end, I am looking for a tool which would monitor the app-server state and send email to the group. Simple monitoring, like checking to see whether the app server is up or not, is all I need, but more features would definitely be helpful too. Thanks!
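
    If nothing off the shelf fits, even a small cron job can cover the "is it up, mail the group if not" case. A minimal sketch, assuming curl and a working local mail command are available; the URL and address are placeholders:

        #!/bin/sh
        # HTTPS health check: with -f, curl exits non-zero on connection errors or HTTP >= 400
        URL="https://app.example.com/health"
        if ! curl -sSf --max-time 30 -o /dev/null "$URL"; then
            echo "App server check failed for $URL at $(date)" \
                | mail -s "ALERT: $URL appears to be down" ops-group@example.com
        fi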


  • Mail not piping in postfix

    - by user220912
    I have set up a Postfix server and wanted to test piping mail into a script where I can process and filter the messages. I wrote a test script that just logs some text to a file, but I don't see any change when I send mail. My postconf -n output:

        alias_database = hash:/etc/aliases
        append_dot_mydomain = no
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5
        html_directory = no
        inet_interfaces = all
        inet_protocols = all
        mail_owner = postfix
        mailbox_size_limit = 0
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = yantratech.co.in, localhost.localdomain, localhost
        myhostname = tcmailer8.in
        mynetworks = 103.8.128.62, 103.8.128.69/101, 168.100.189.0/28, 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = $mydomain
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        recipient_delimiter = +
        relayhost =
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_tls_cert_file = /etc/pki/tls/certs/tcmailer8.in.cert
        smtpd_tls_key_file = /etc/pki/tls/private/localhost.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
        transport_maps = hash:/etc/postfix/transport
        virtual_alias_maps = hash:/etc/postfix/virtual
        virtual_gid_maps = static:5000
        virtual_mailbox_base = /home/vmail
        virtual_mailbox_domains = /etc/postfix/vhosts
        virtual_mailbox_maps = hash:/etc/postfix/vmaps
        virtual_minimum_uid = 1000
        virtual_uid_maps = static:5000

    Here's my transport map entry:

        [email protected] email_route

    My main.cf declaration:

        transport_maps = hash:/etc/postfix/transport

    My master.cf declaration:

        email_route unix - n n - - pipe
          flags=FR user=nobody argv=/etc/postfix/test.php -f $(sender) -- $(recipient)

    And my PHP script:

        #!/usr/bin/php
        <?php
        $fh = fopen('/etc/postfix/testmail.txt','a');
        fwrite($fh, "Hello it works\n");
        fclose($fh);
        ?>

    I am sending the mails through telnet on localhost.
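
    A couple of things worth checking, sketched below: the pipe command runs as user "nobody", so the script and the file it writes have to be usable by that user, and invoking the PHP binary explicitly avoids depending on the script's execute bit and shebang. Paths are the ones from the question; the rest is an assumption to verify against pipe(8):

        # master.cf: call the interpreter explicitly and pass sender/recipient through
        email_route unix - n n - - pipe
          flags=FR user=nobody argv=/usr/bin/php -q /etc/postfix/test.php -f ${sender} -- ${recipient}

        # after editing the transport map or master.cf:
        #   postmap /etc/postfix/transport
        #   postfix reload
        # and make sure "nobody" can write the log target:
        #   touch /etc/postfix/testmail.txt && chown nobody /etc/postfix/testmail.txt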


  • CloudFront with Custom Origin and ELB

    - by kmfk
    We are using CloudFront for our static assets but also wanted to allow for gzip. We set up a new distribution with a custom origin pointing back to our application servers, which are behind an Elastic Load Balancer. We manually keep the files in sync across the cluster and update them when we publish. However, with this setup we get nothing but Miss and RefreshHits from CloudFront, which so far has defeated the purpose. Are there any additional settings needed in order to use an ELB as your custom origin? In the docs it is referenced as a viable solution. It appears that when we point the distribution to a single server in our production cluster, CloudFront caches our assets properly. Is it possible that the sticky-sessions cookie and the subsequent header that the load balancer adds could be the issue?

        Cache-Control: no-cache="set-cookie"    // added by the load balancer

    Any ideas? FYI: currently we have our custom origin pointing to a single EC2 instance, so caching is working correctly, in case you try to curl the file below. Example headers:

        curl -I http://static.quick-cdn.com/css/9850999.css
        HTTP/1.0 200 OK
        Accept-Ranges: bytes
        Cache-Control: max-age=3700
        Cache-Control: no-cache="set-cookie"
        Content-Length: 23038
        Content-Type: text/css
        Date: Thu, 12 Apr 2012 23:03:52 GMT
        Last-Modified: Thu, 12 Apr 2012 23:00:14 GMT
        Server: Apache/2.2.17 (Ubuntu)
        Vary: Accept-Encoding
        X-Cache: RefreshHit from cloudfront
        X-Amz-Cf-Id: K_q7Zy3_jdzlEJ85ukELVtdx1GmuXqApAbZZ7G0fPt0mxRMqPKX5pQ==,RzJmPku-rEIO9WlvuSoKa8hiAaR3dLk5KC4cQMWWrf_MDhmjWe8n6A==
        Via: 1.0 28c34f9fbf559a21ee16594849e4fc9c.cloudfront.net (CloudFront)
        Connection: close


  • How to handle files that don't need version control in mercurial

    - by richardh
    I am new to Mercurial, and for the most part do LaTeX reports and statistical calculations in R using .csv and/or .sqlite files. Re LaTeX, all I really care about is the .tex file. Re R, I don't need version control on the .csv or .sqlite files because they are static. When I do 'hg add' in a repo with a .csv and/or .sqlite file, I get a warning like:

        rev2.sqlite: up to 3070 MB of RAM may be required to manage this file
        (use 'hg revert rev2.sqlite' to cancel pending addition)

    So I revert and subsequently use adds like hg add -X *.sqlite. I guess I really have two questions: (1) Should I ignore these warnings? Because these large files are static, can I just add them to the repo, knowing that the diffs will always be empty, and not worry about wasted resources? (2) If I should keep excluding these files from the repo, is there a way I can make this the default? I.e., add something to my .hgrc that always appends options like -I *.tex -I *.R to my 'hg add' commands? Thanks!
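
    For (2), two hedged possibilities: an .hgignore in the repository root keeps a plain 'hg add' (and 'hg status') from picking those files up at all, and the [defaults] section of ~/.hgrc can append options to a command (supported in Mercurial of this era, though later deprecated in favour of aliases). Patterns here are illustrative:

        # .hgignore in the repo root
        syntax: glob
        *.sqlite
        *.csv

        # ~/.hgrc
        [defaults]
        add = -X **.sqlite -X **.csv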


  • REST client throws timeout exception

    - by shandu
    Hi, I have created a REST client in C# using the example on this page: http://msdn.microsoft.com/en-us/library/aa395208(v=vs.90).aspx. The server is built in PHP. When I send a request to some URLs I get this exception:

        The request channel timed out while waiting for a reply after 00:00:59.9531250.
        Increase the timeout value passed to the call to Request or increase the SendTimeout
        value on the Binding. The time allotted to this operation may have been a portion of
        a longer timeout.

    But sometimes, when I debug the code, I do get a response. How can I solve this?
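
    The exception itself points at raising SendTimeout on the binding the client uses. A sketch, assuming the client is built with WebChannelFactory/WebHttpBinding as in the linked MSDN sample; the service interface name and URL are placeholders:

        // Raise the WCF timeouts before creating the channel
        var binding = new WebHttpBinding();
        binding.SendTimeout = TimeSpan.FromMinutes(5);      // covers the whole request/reply exchange
        binding.ReceiveTimeout = TimeSpan.FromMinutes(5);

        var factory = new WebChannelFactory<IMyRestService>(binding, new Uri("http://example.com/service"));
        var client = factory.CreateChannel();

    If only some URLs are slow, it is also worth checking whether the PHP side is genuinely taking close to a minute for those requests, since a longer timeout only hides that.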



  • Save a single web page (with background images) with Wget

    - by mikael
    I want to use Wget to save single web pages (not recursively, not whole sites) for reference, much like Firefox's "Web Page, complete". My first problem is: I can't get Wget to save background images specified in the CSS. Even if it did save the background image files, I don't think --convert-links would convert the background-image URLs in the CSS file to point to the locally saved background images. Firefox has the same problem. My second problem is: if there are images on the page I want to save that are hosted on another server (like ads), these won't be included. --span-hosts doesn't seem to solve that problem with the line below. I'm using:

        wget --no-parent --timestamping --convert-links --page-requisites --no-directories --no-host-directories -erobots=off http://domain.tld/webpage.html
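
    For the second problem, the wget manual's own suggestion for a single complete page whose requisites live on other hosts is the -E -H -k -K -p combination; restricting the foreign hosts with --domains keeps it from wandering off. A sketch (the host list is a placeholder):

        # combination suggested in the wget manual for one complete page with cross-host requisites
        wget -E -H -k -K -p http://domain.tld/webpage.html

        # or, limiting which foreign hosts may be fetched from:
        wget --page-requisites --convert-links --span-hosts \
             --domains=domain.tld,ads.example.com \
             --no-directories --no-host-directories -erobots=off \
             http://domain.tld/webpage.html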


  • Protect apache pages by URL

    - by Thomas
    Is it possible to allow access to specific URLs only from certain networks? Basically, I would like to restrict access to the admin area to the local network only. This area's pages are prefixed by /admin; essentially, I would like all /admin/* to be forbidden from public access. Can Apache handle such a case? Thanks.

    UPDATE: Using your suggestions I came up with

        <LocationMatch admin>
            Order allow,deny
            deny from all
            Allow From 192.168.11.0/255.255.255.0
        </LocationMatch>

    However, I get a 403 even though I am on the network. Additionally, if I put Apache behind HAProxy, is this still going to work? Because the traffic will then reach Apache from 127.0.0.1.
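
    One likely culprit in that block is the Order line: with "Order allow,deny" anything matched by a Deny wins, so "deny from all" overrides the Allow and produces the 403. A sketch of the Apache 2.2-style block with the order flipped and the match anchored; behind HAProxy the client address Apache sees will indeed be the proxy's, so the real source IP would have to be restored (e.g. via something like mod_rpaf / X-Forwarded-For handling) or the restriction enforced in HAProxy itself:

        <LocationMatch "^/admin">
            Order deny,allow
            Deny from all
            Allow from 192.168.11.0/24
        </LocationMatch>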


  • How to use SSL on AWS EC2

    - by Aubada Taljo
    Hello, I have an AWS EC2 account and I am running an instance that serves as a web host for my PHP website. This is a private website that has no UI, only URLs that are requested by my other software to get responses from the server. I want the requests I send to the server to be secured, so I want to use https instead of http. What should I do to achieve that? PS: I found this link while searching, but I don't know how useful it is in my situation: http://matt-darby.com/posts/690-aws-ec2-and-ssl Thanks in advance.
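
    Since the consumer is your own software rather than browsers, a self-signed certificate is usually enough. A rough sketch, assuming Apache with mod_ssl on the instance and port 443 opened in the EC2 security group; all paths and names are placeholders:

        # generate a self-signed certificate valid for a year
        openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
            -subj "/CN=example.com" \
            -keyout /etc/ssl/private/mysite.key -out /etc/ssl/certs/mysite.crt

        # minimal HTTPS vhost
        <VirtualHost *:443>
            ServerName example.com
            DocumentRoot /var/www/mysite
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/mysite.crt
            SSLCertificateKeyFile /etc/ssl/private/mysite.key
        </VirtualHost>

    The calling software then has to be told to trust (or at least accept) the self-signed certificate.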


  • How to route between 2 networks with a server with 2 network cards?

    - by LumenAlbum
    This is the first time I am faced with routing and it seems I have hit a dead end. I have the following scenario:

        client1:  192.168.1.10   255.255.255.0   gateway: 192.168.1.100   DNS server: 192.168.1.100
        client2:  192.168.1.20   255.255.255.0   gateway: 192.168.1.100   DNS server: 192.168.1.100

        server (Windows Server 2008 R2 with RAS & Routing services enabled):
          network card 1 (connected to a switch along with the clients):
            192.168.1.100   255.255.255.0   DNS server: 127.0.0.1
          network card 2 (connected to the router):
            192.168.2.100   255.255.255.0   gateway: 192.168.2.1   DNS server: 127.0.0.1 (DNS forwarding to 192.168.2.1)

        ISP router (with connection to the internet): 192.168.2.1

    In this scenario I have tried to route traffic from the 192.168.1.0/24 network with the clients to the 192.168.2.0/24 network with the router, to connect them to the internet. However, no matter what I do, I get no successful ping to the router 192.168.2.1. Ping from 192.168.1.10:

        to 192.168.1.20:   success
        to 192.168.1.100:  success
        to 192.168.2.100:  success
        to 192.168.2.1:    not reachable

    The routing table contains the two routes 192.168.1.0 and 192.168.2.0 as directly connected. Does anyone know where the routing fails? I have searched different forums but mostly found nothing relevant. One post, however, pointed out that in a similar situation the problem was that the router doesn't know the way back, and the internet router would need a static route back to the first router. If that really is the case, I take it there is no solution with my equipment, because the standard ISP router doesn't allow setting any static routes.
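
    That diagnosis matches the symptoms: the ISP router receives the echo request but has no route back to 192.168.1.0/24, so the replies never return. Conceptually the missing return route would look like the line below (shown in Cisco-style syntax purely as an illustration; many ISP boxes cannot be configured this way at all). The usual fallback is to enable NAT on the Windows server's RRAS instead, so 192.168.1.x traffic leaves card 2 with a 192.168.2.100 source address:

        ! return route the ISP router would need
        ip route 192.168.1.0 255.255.255.0 192.168.2.100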


  • Apache rewrite rule to remove index.php and direct certain areas to https

    - by Stephen Martin
    I have a CodeIgniter application running on Apache 2. I have managed to remove index.php from the URLs with this .htaccess:

        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule .* index.php/$0 [PT,L]

    Now I want to make certain parts of the site redirect to https. I tried this:

        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule .* index.php/$0 [PT,L]
        RewriteRule ^/?cpanel/(.*) https://%{SERVER_NAME}/cpanel/$1 [R,L]
        RewriteRule ^/?login/(.*) https://%{SERVER_NAME}/cpanel/$1 [R,L]

    But it doesn't work. I have to say that when it comes to Apache rewrites I'm a noob, and I can't find any tutorials on how to remove index.php and also rewrite/redirect certain parts of the site to https. Any ideas? Thanks.
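
    One thing that stands out is the rule order: the index.php rule carries [L] and matches everything, so the HTTPS rules after it are never reached. A sketch with the redirects first (untested; it assumes the cpanel/login paths should keep their own URI and only switch scheme):

        RewriteEngine on

        # force HTTPS for the cpanel and login areas first
        RewriteCond %{HTTPS} off
        RewriteRule ^(cpanel|login)(/.*)?$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

        # then the usual CodeIgniter front-controller rule
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule .* index.php/$0 [PT,L]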


  • Remove trailing slash using redirect directive in vhost

    - by Choy
    I have an issue where URLs that end in a "/" after a file name cause CSS/JS to break, i.e.:

        http://www.mysite.com/index.php/   <-- breaks
        http://www.mysite.com/             <-- OK, it only breaks after file names

    To fix this, I tried adding a Redirect 301 directive in the vhost file, checking whether there is an extension followed by a slash:

        <VirtualHost *:80>
            ServerName mysite.com
            Redirect 301 ^(.*?\..+)/$ http://mysite.com/$1
        </VirtualHost>

    The redirect appears to do nothing. Is this an issue with my implementation, or is what I'm trying to accomplish not possible with a Redirect 301 in the vhost file?
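
    Plain Redirect only matches a URL-path prefix and doesn't understand regular expressions; RedirectMatch (also in mod_alias) does. A sketch of the same idea with RedirectMatch, untested and assuming "file name" means anything with a dot-extension (adjust the target host as needed):

        <VirtualHost *:80>
            ServerName mysite.com
            # strip a trailing slash that follows something that looks like a file name
            RedirectMatch 301 ^(/.*\.[A-Za-z0-9]+)/$ http://www.mysite.com$1
        </VirtualHost>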


  • Unable to connect to server after a certain amount of time

    - by Troy
    I am a business FIOS subscriber with 5 static IPs. I have the following network setup:

        Verizon-provided ONT
        D-Link switch
        Dell server running Ubuntu 12.04 with iptables enabled and a static IP address

    The makes/models of the hardware are:

        FIOS ONT:  Alcatel-Lucent I-211M-H ONT
        Switch:    D-Link Web Smart Switch DES-1228P
        Server:    Dell Optiplex 755 (Ubuntu 12.04 Server)

    I have iptables running on the server with the HTTP, HTTPS and SSH ports open. I can connect to a website on the server from an external computer, but after a certain amount of time (minutes to hours), I can no longer connect. All I have to do to re-enable connectivity is connect to the server via SSH from a computer INSIDE the network. I don't have to actually log in, I just have to establish a connection. I can then access the website externally again. I did some googling and it seems some of Verizon's equipment had an ARP bug where the ARP entries would expire after a certain time period, but those issues all seem to be from back in 2009-2010. I know the switch has an 'auto learning MAC address' feature, but I'm not sure whether that could be the problem or not. Does anyone have any ideas or advice on how I can troubleshoot this? Thanks!


  • esx5 debian VM vlan setup

    - by Kstro21
    I have a server with ESX5 and a switch with about 20 VLANs. This is how the trunk port is set up:

        interface GigabitEthernet0/1/1
         description ToOper
         port link-type trunk
         undo port trunk allow-pass vlan 1
         port trunk allow-pass vlan 2 to 14
         stp disable
         ntdp enable
         ndp enable
         bpdu enable

    Then I created a standard switch (sw1) using the vSphere Client, with the VLAN ID set to All (4095). I also created a VM with Debian 6, with a NIC connected to sw1. Now I want to configure this NIC for a selected group of VLANs:

        auto vlan10
        iface vlan10 inet static
            address 11.10.1.0
            netmask 255.255.255.224
            mtu 1500
            vlan_raw_device eth0

        auto vlan14
        iface vlan14 inet static
            address 11.10.1.65
            netmask 255.255.255.248
            mtu 1500
            vlan_raw_device eth0

    When I restart the network using /etc/init.d/networking restart, I get this error (this is just part of it):

        Reconfiguring network interfaces...SIOCSIFADDR: No such device
        vlan14: ERROR while getting interface flags: No such device
        SIOCSIFNETMASK: No such device
        SIOCSIFBRDADDR: No such device
        vlan14: ERROR while getting interface flags: No such device
        SIOCSIFMTU: No such device
        vlan14: ERROR while getting interface flags: No such device
        Failed to bring up vlan14.
        done.

    So, my questions are: is this possible at all, i.e. what I'm trying to achieve using ESX virtual machines, VLANs, etc.? Is this a Debian problem, and can it be solved? I've read about a file named z25_persistent-net.rules in Debian but it doesn't exist in my installation. In the vSphere Networking for ESX5 guide you can read: "If you enter 0 or leave the option blank, the port group can see only untagged (non-VLAN) traffic. If you enter 4095, the port group can see traffic on any VLAN while leaving the VLAN tags intact." So, in theory, it should work, right? Hope you can help me with this one. Thanks
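
    The "No such device" errors usually mean the guest has no 802.1q support wired up yet: the Debian vlan package (and the 8021q kernel module) has to be present, and with the eth0.N naming convention the raw-device line isn't needed at all. A sketch of what the guest side might look like; the addresses are the ones from the question and should be checked against the subnetting (11.10.1.0 is the network address of that /27, so a host address such as .1 is assumed here):

        # inside the Debian guest (one-time):
        #   apt-get install vlan
        #   modprobe 8021q
        #   echo 8021q >> /etc/modules

        auto eth0.10
        iface eth0.10 inet static
            address 11.10.1.1
            netmask 255.255.255.224
            mtu 1500

        auto eth0.14
        iface eth0.14 inet static
            address 11.10.1.65
            netmask 255.255.255.248
            mtu 1500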


  • Nginx + PHP-FPM ignores no-cache headers

    - by Eric Winchell
    I'm using the following headers on a PHP page:

        // Prevent page caching.
        header('Expires: Tue, 20 Oct 1981 05:00:00 GMT');
        header('Cache-Control: no-store, no-cache, must-revalidate');
        header('Cache-Control: post-check=0, pre-check=0', FALSE);
        header('Pragma: no-cache');

    I'm also using a rand=999999999 parameter (with a real random number) in the URLs. But pages are still being cached: a reload works, but the first load is served from cache. Does anyone know where I can change this?
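
    If nginx's FastCGI cache is what is serving the stale copies, it can be told explicitly to skip caching for these requests regardless of what PHP sends. A sketch, assuming a fastcgi_cache is configured in the PHP location block; the variable logic deciding what to skip is a placeholder:

        location ~ \.php$ {
            # ... existing fastcgi_pass / fastcgi_param lines ...

            set $skip_cache 0;
            if ($arg_rand) {            # requests carrying the rand= parameter
                set $skip_cache 1;
            }

            fastcgi_cache_bypass $skip_cache;   # don't answer from the cache
            fastcgi_no_cache     $skip_cache;   # don't store the response either
        }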


  • RewriteRule in htaccess in subdirectory

    - by Jay
    Windows server, running Apache. In my Apache conf I have AllowOverride None for the root of a site and then AllowOverride All for a subdirectory:

        <Directory />
            AllowOverride None
        </Directory>
        <Directory "/safe/">
            AllowOverride All
        </Directory>

    However, when I try to set up a rewrite rule in the subdirectory's .htaccess file, nothing happens; I just get a 404 page not found error. Example:

        RewriteEngine On
        RewriteRule (.*) /blah?test=$1 [R=302,NC,NE,L]

    Rewriting URLs works fine from the root via the Apache conf. I don't understand why the rule is ignored. I don't want to do the URL rewriting in the conf, because in this case I may need to change the redirects constantly and don't want to reload the server every time a change is made. I also don't want to affect server performance by enabling .htaccess files site-wide, just in the subdirectory where I need them.
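
    One thing worth double-checking: <Directory> takes a filesystem path, not a URL path, so on a Windows host the blocks would normally name the full path of the docroot and of the subdirectory. A sketch with placeholder paths; FollowSymLinks is included because mod_rewrite in .htaccess context depends on it:

        <Directory "C:/sites/example/htdocs">
            AllowOverride None
        </Directory>

        <Directory "C:/sites/example/htdocs/safe">
            AllowOverride All
            Options +FollowSymLinks
        </Directory>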


  • Yum through http proxy

    - by eodchop
    Hello, I have several Fedora 13 servers that have to connect through an HTTP proxy for yum updates. All port 80 traffic has to be routed through this proxy. I have set up the proxy server in the network settings GUI and I can browse the internet just fine. I have also set up my proxy information in /etc/yum.conf as follows:

        proxy=http:proxy.largecorp.corp/accelerated_pac_base.pac
        proxy_user=user
        proxy_password=password

    I then added export HTTP_PROXY="http:proxy.largecorp.corp/accelerated_pac_base.pac" to /etc/bashrc and sourced the file. When I run yum update:

        Loaded plugins: presto, refresh-packagekit
        Error: Cannot retrieve repository metadata (repomd.xml) for repository: fedora.
        Please verify its path and try again.

    All of the repo URLs are the defaults, as this is a fresh install.
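
    One thing that stands out is that the proxy setting points at a .pac auto-configuration file; yum (and the HTTP_PROXY variable) expect a plain proxy host and port, and yum's option names are proxy_username/proxy_password. A sketch of /etc/yum.conf with the real host and port as placeholders, since the PAC file would have to be inspected to find them:

        [main]
        proxy=http://proxy.largecorp.corp:8080
        proxy_username=user
        proxy_password=password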


  • Apache .htaccess problem: No input file specified.

    - by Michal M
    Hello everyone, can someone help me with this? I feel like I've been hitting my head against a wall for over 2 hours now. I've got Apache 2.2.8 + PHP 5.2.6 installed on my machine, and the .htaccess with the code below works fine there, no errors:

        RewriteEngine on
        RewriteCond $1 !^(index\.php|css|gfx|js|swf|robots\.txt|favicon\.ico)
        RewriteRule ^(.*)$ /index.php/$1 [L]

    The same code on my hosting provider's server gives me a 404 error and outputs only: "No input file specified." index.php IS there. I know they have Apache installed (I cannot find version info anywhere) and they're running PHP 5.2.8. I'm on Windows XP 64-bit; they're running some Linux and PHP in CGI/FastCGI mode. Can anyone suggest what the problem could be? PS: if that's important, this is for CodeIgniter to work with friendly URLs.
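
    "No input file specified" with PHP running as CGI/FastCGI is commonly worked around by passing the path as a query string instead of PATH_INFO, i.e. adding a "?" after index.php (CodeIgniter copes with this, optionally with uri_protocol adjusted in its config). A sketch of the changed rule:

        RewriteEngine on
        RewriteCond $1 !^(index\.php|css|gfx|js|swf|robots\.txt|favicon\.ico)
        # note the "?": the request reaches PHP-as-CGI as a query string rather than PATH_INFO
        RewriteRule ^(.*)$ /index.php?/$1 [L]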


  • Using a nat rule to translate 80/443 traffic to web server, but internal users cannot access it using external ip/domain name

    - by Josh
    I am using Cisco ASDM for the ASA. I have my internal network, called soa. My outside interface is called outside. Let's say my outside IP, given to me by my ISP, is y.y.y.y. I have a web server inside my network with a static IP of x.x.x.110. I have configured 2 static NAT rules (one for HTTP, the other for HTTPS): source is x.x.x.110, interface is outside, service (http or https). Maybe I am doing this wrong, but when I run the packet tracer, I choose the outside interface, use 8.8.8.8 as the source IP and my outside IP y.y.y.y as the destination. When I run that, it shows the packet traversing successfully, in 9 steps. For my other test, I switch to the soa interface, input an IP on that network, and leave the destination the same. This test comes up with 2 steps and then fails on my access list. The rule that fails is my catch-all, which is source: any, destination: any, service: ip, action: deny. What rule do I need to make to allow my soa network to go out and come back in via my external IP address (using a domain name attached to that IP in my DNS, of course)?


  • Opening Word documents from IE LAG Windows 7 IE8

    - by Steve McCall
    Hi, I'm having a lot of trouble opening documents from a network share in word using IE. The documents are located in a network share which is mapped to a virtual directory. The documents are accessed by URLs that link to the virtual directory. There is now a huge lag (sometimes up to a minute or two!) from when clicking on the link to the document opening in word. The 'loading disc' in IE just keeps spinning and nothing happens. Sometimes a pop up box appears with 'opening file - (address)' but it still takes ages. I've tried setting in the registry to open the files directly in ie but to no avail. Anyone have any ideas? Steve

