Search Results

Search found 111890 results on 4476 pages for 'git update server info'.

Page 2037/4476

  • sharing a USB printer in a SOHO environment [migrated]

    - by Registered User
    Here is the situation I am facing: there is a USB printer which works only on a Windows XP machine, and there are other devices on the LAN. It is a small office / home office (SOHO) environment. How can this USB printer, attached to the Windows XP machine, be shared so that other laptops or users on the network running Windows 7 or Linux can use it? The printer model is a Canon Laser Shot LBP-1210 (http://www.canon-europe.com/For_Home/Product_Finder/Printers/Laser/LaserShot_LBP1210/index.asp). A print server is not available to me; I need to make it work in this situation only. What can I do? The clients are unable to connect to it, and it is not a network or TCP/IP printer. If someone on a Windows 7 machine wants to use this printer to take a print, he gets an error while adding the printer to his machine (whereas the printer is a USB printer on the Windows XP machine): Start -> Devices and Printers -> Add Printer -> Find a printer by name or IP address -> Select a shared printer by name -> \\PC-Name-printer3, then selecting Browse gives the message "Windows can not find a driver for Canon LASER SHOT LBP-1210 on the network". What does this mean? Do I need to install some kind of software on the client machine, or on the machine where the printer is attached?

    Read the article

  • Let varnish send old data from cache while it's fetching a new one?

    - by mark
    I'm caching dynamically generated pages (PHP-FPM, NGINX) and have Varnish in front of them; this works very well. However, once the cache timeout is reached, I see this:

        new client requests page
        varnish recognizes the cache timeout
        client waits
        varnish fetches new page from backend
        varnish delivers new page to the client (and has the page cached, too, for the next request, which gets it instantly)

    What I would like to do instead is:

        client requests page
        varnish recognizes the timeout
        varnish delivers the old page to the client
        varnish fetches the new page from the backend and puts it into the cache

    In my case it's not a site where outdated information is such a big problem, especially not when we're talking about a cache timeout of a few minutes. However, I don't want to punish users by making them wait in line; I'd rather deliver something immediately. Is that possible in some way? To illustrate, here's a sample output of running siege for 5 minutes against my server, which was configured to cache for one minute:

        HTTP/1.1,200, 1.97, 12710,/,1,2013-06-24 00:21:06
        ...
        HTTP/1.1,200, 1.88, 12710,/,1,2013-06-24 00:21:20
        ...
        HTTP/1.1,200, 1.93, 12710,/,1,2013-06-24 00:22:08
        ...
        HTTP/1.1,200, 1.89, 12710,/,1,2013-06-24 00:22:22
        ...
        HTTP/1.1,200, 1.94, 12710,/,1,2013-06-24 00:23:10
        ...
        HTTP/1.1,200, 1.91, 12709,/,1,2013-06-24 00:23:23
        ...
        HTTP/1.1,200, 1.93, 12710,/,1,2013-06-24 00:24:12
        ...

    I left out the hundreds of requests completing in 0.02 seconds or so, but it still concerns me that there are going to be users having to wait almost 2 seconds for their raw HTML. Can't we do any better here? (I came across "Varnish send while cache"; it sounded similar, but not exactly what I'm trying to do.)
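    For reference, grace mode is the direction I have been looking at; the following is only a minimal sketch of what I have been experimenting with, assuming Varnish 3 grace semantics, and not something I have confirmed actually solves this:

        sub vcl_recv {
            # Allow serving an object up to 5 minutes past its TTL
            # while a fresh copy is being fetched from the backend.
            set req.grace = 5m;
        }

        sub vcl_fetch {
            # Keep expired objects around for 5 minutes so they are
            # still available to be served stale.
            set beresp.grace = 5m;
        }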

    Read the article

  • Relative path incorrect in the view layer when hosting a rails3 app in a subdirectory using passenger and apache

    - by Saifis
    I want to host multiple Rails apps on a single server using sub-directories, and have encountered some relative-path problems. I have made a symbolic link to the app's public directory and placed it in /var/www/html:

        /var/www/html/
            /test_app (symbolic link to the public folder of test_app)

    and set up Apache like so:

        LoadModule passenger_module /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.12/ext/apache2/mod_passenger.so
        PassengerRoot /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.12
        PassengerRuby /usr/local/bin/ruby

        <VirtualHost *:80>
            ServerName test.com
            DocumentRoot /var/www/html
            Options Indexes FollowSymLinks -MultiViews
            RailsBaseURI /test_app
            </Location>
        </VirtualHost>

    The links in the app itself work just fine: they all acknowledge the test_app/ directory. However, when it comes to showing images from the public directory in the view, the relative path goes wrong. Say I have /system/files/1/aaa.png; the app goes looking for it in /var/www/html/system/files/1/aaa.png rather than /var/www/html/test_app/system/files/1/aaa.png. As far as I understand, this is an Apache configuration problem rather than something to be done in Rails; if possible I would prefer to have it contained in the Apache conf file rather than having to alter the code.
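    One workaround I have been considering, though it is only a sketch and I have not verified that it plays nicely with Passenger, is to alias the broken asset path explicitly in the vhost so Apache maps it back into the symlinked app:

        # Assumption: /system is the path that breaks; map it straight to the app's public dir
        Alias /system /var/www/html/test_app/system
        <Directory /var/www/html/test_app/system>
            # Access rules may need adjusting depending on the Apache version
            Options FollowSymLinks
            Order allow,deny
            Allow from all
        </Directory>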

    Read the article

  • apache access and error log written to the same file

    - by user196075
    I have an issue where the access and error logs are written to the same file! The configuration in virtualhosts.conf is the following:

        <VirtualHost *:80>
            ServerName ************
            ServerAdmin support@************8
            DocumentRoot /var/www/html/*********.com
            ErrorLog /var/log/httpd/********/********.com_error_log
            CustomLog /var/log/httpd/********/********.com_access_log combined
            <Directory /var/www/html/***********.com>
                Options -Indexes FollowSymLinks
                AllowOverride All
            </Directory>
        </VirtualHost>

    As you can see from the configuration, the access and error logs should be saved separately, but both are written to *.com_access_log. I have double-checked all permissions, group and owner, and can't find anything wrong. A previous error in the log file:

        [Thu Sep 19 14:15:02 2013] [error] [client 192.168.10.54] client denied by server configuration: /var/www/html/**********/show_has_offers.php

    I have tried to generate the same error; I can find the hit in the access log only, as follows:

        192.168.10.75 - - [24/Oct/2013:08:11:14 +0000] "GET /show_has_offers.php HTTP/1.1" 404 1586 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:22.0) Gecko/20100101 Firefox/22.0" 0 17332

    and nothing in the error log! Please advise...

    Read the article

  • Expanding raidz vdev

    - by Blubber
    I'm currently planning to install FreeBSD 9 on my home server. The machine has 4x 1.5 TB disks, and at some point, when HDD prices drop, I'll upgrade to something bigger, perhaps 3 TB. The disks are connected to an IBM ServeRAID M1015 in IT mode; this card has room for up to eight disks. Now here is the problem: currently the 4x 1.5 TB will be connected to the M1015. Then, when prices drop, I'll add something like 4x 3 TB, also connected to the M1015. No problem yet; I can just run two raidz2 vdevs and put them in the same pool. But at some point the 1.5 TB disks will start to fail, or I will have to upgrade them when the pool runs out of space. So I started researching whether it's possible to expand a raidz vdev, and I found several pages explaining the same procedure, like this one on Server Fault: "How to upgrade a ZFS RAID-Z array to larger disks on OpenSolaris?". So I went ahead and tried that in VMware: I installed FreeBSD 9 and created 6 virtual disks, 3 of 1 GB each and 3 of 10 GB each. After building a raidz vdev out of the 1 GB disks I replaced them one by one with the 10 GB disks, but the pool did not increase in size. Is this a limitation of the ZFS implementation in FreeBSD? Or am I just doing something wrong?
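    For what it's worth, the property I suspect I am missing, though I have not confirmed it is the whole story, is autoexpand, which is off by default. Something along these lines, assuming a pool named tank and a freshly replaced device da0:

        # Let the pool grow automatically once every device in the vdev is larger
        zpool set autoexpand=on tank

        # Or trigger expansion explicitly for an already-replaced device
        zpool online -e tank da0

        # Check whether the extra capacity shows up
        zpool list tank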

    Read the article

  • How do I rsync an entire folder based on the existence of a specific file type in that folder

    - by inquam
    I have a server set up that receives movies into a folder, and I then serve these movies using DLNA. But all kinds of files end up in that initial folder: pictures, music, documents, etc. I thought I'd fix this by running the following inside that folder:

        rsync -rvt --include='*/' --include='*.avi' --include='*.mkv' --exclude='*' . ../Movies/

    This works: it scans the given folder and moves all the found movies of the given extension types to the Movies folder. But I wonder if there is any way to tell rsync that, if a folder is found containing a movie of the given extension types, it should sync the entire folder, including other files such as .srt. This is to make it easier for me to get subtitles moved along with the movie. I have a solution figured out via a script written in PHP (yes, I actually do most of my scripting on Linux in PHP... just a habit that stuck a long time ago), but if rsync can handle it from the start, that would be super. Also, I have noticed that this rsync invocation actually copies all the top-level folders in the given folder: if no movie is in a folder, it will still create an empty folder. How do I prevent rsync from doing this, and save myself the trouble of deleting all the empty folders in Movies?
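    The closest I have come so far is wrapping rsync in a small shell loop, just a sketch with placeholder paths, that finds every directory containing a movie file and syncs each of those directories whole:

        #!/bin/sh
        # Sketch: sync every directory that contains at least one .avi or .mkv,
        # including its other files (.srt etc.); directories with no movies are skipped.
        # Movies sitting directly in the top level of $SRC would need separate handling.
        SRC=.
        DEST=../Movies
        find "$SRC" -type f \( -name '*.avi' -o -name '*.mkv' \) -printf '%h\n' |
            sort -u |
            while read -r dir; do
                rsync -rvt "$dir" "$DEST/"
            done

    For the empty folders, adding -m (--prune-empty-dirs) to my original command should stop rsync creating directories that end up containing no transferred files, if I read the man page right.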

    Read the article

  • file read performance degrades as number of files increases

    - by bfallik-bamboom
    We're observing poor file-read I/O results that we'd like to understand better. We can use fio to write 100 files with a sustained aggregate throughput of ~700 MB/s. When we switch the test to read instead of write, the aggregate throughput is only ~55 MB/s. The drop seems related to the number of files, since the throughput for read and write is comparable for a single file and then diverges proportionally as we increase the number of files. The test server has 24 CPU cores, 48 GB of memory, and is running CentOS 6.0. The disk hardware is a RAID 6 array with 12 disks and a Dell H800 controller. This device is partitioned with ext4 using the default settings. Increasing the readahead (using blockdev) improves the read throughput significantly, but it still doesn't match the write speed. For instance, increasing the readahead from 128 KB to 1 MB improved the read throughput to ~145 MB/s. Is this a known performance issue with our OS/disk/filesystem configuration? If so, how can we tell? If not, what tools or tests can we use to further isolate the issue? Thanks.
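    In case it helps frame an answer, this is roughly how I have been adjusting and checking the readahead, with a placeholder device name:

        # Current readahead, in 512-byte sectors (256 sectors = 128 KB)
        blockdev --getra /dev/sdb

        # Raise it to 1 MB (2048 sectors) and re-run the fio read test
        blockdev --setra 2048 /dev/sdb

        # Watch per-device utilisation and request sizes while the test runs
        iostat -x 1 /dev/sdb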

    Read the article

  • C#: Label contains no text

    - by Vinzcent
    Hey, I would like to get the text out of a label, but the label text is set with JavaScript. On the page I can see that there is text in the label, but when I debug it shows "". So how do I get the text out of a label whose text is set with JavaScript? At least, that is what I think the problem is. My code:

        <asp:TextBox ID="txtCount" runat="server" Width="50px" Font-Names="Georgia, Arial, sans-Serif" ForeColor="#444444"></asp:TextBox>
        <ajaxToolkit:NumericUpDownExtender ID="NumericUpDownExtender1" runat="server" Minimum="1"
            TargetButtonDownID="btnDown" TargetButtonUpID="btnUp" TargetControlID="txtCount" Width="20" />
        <asp:ImageButton ID="btnUp" runat="server" AlternateText="up" ImageUrl="Images/arrowUp.png"
            OnClientClick="setAmountUp()" ImageAlign="Top" CausesValidation="False" />
        <asp:ImageButton ID="btnDown" runat="server" AlternateText="down" ImageUrl="Images/arrowDown.png"
            OnClientClick="setAmountDown()" ImageAlign="Bottom" CausesValidation="False" />
        <asp:Label ID="lblKorting" runat="server" />
        <asp:Label ID="lblAmount" runat="server" />
        <asp:Button ID="btnBestel" runat="server" CssClass="btn" Text="Bestel" OnClick="btnBestel_Click1" />

    JS:

        function setAmountUp() {
            var aantal = document.getElementById('<%=txtCount.ClientID%>').value - 0;
            aantal += 1;
            calculateAmount(aantal);
        }

        function setAmountDown() {
            var aantal = document.getElementById('<%=txtCount.ClientID%>').value - 0;
            if (aantal > 1)
                aantal -= 1;
            calculateAmount(aantal);
        }

        function calculateAmount(aantal) {
            var prijs = document.getElementById('<%=lblPriceBestel.ClientID%>').innerHTML - 0;
            var totaal = 0;
            if (aantal < 2) {
                totaal = prijs * aantal;
                document.getElementById('<%=lblKorting.ClientID%>').innerHTML = "";
            } else if (aantal >= 2 && aantal < 5) {
                totaal = (prijs * aantal) * 0.95;
                document.getElementById('<%=lblKorting.ClientID%>').innerHTML = "-5%";
            } else if (aantal >= 5) {
                totaal = (prijs * aantal) * 0.90;
                document.getElementById('<%=lblKorting.ClientID%>').innerHTML = "-10%";
            }
            document.getElementById('<%=lblAmount.ClientID%>').innerHTML = totaal;
        }

    C#:

        private OrderBO bestelling;

        protected void btnBestel_Click1(object sender, EventArgs e)
        {
            bestelling = new OrderBO();
            bestelling.Amount = Convert.ToInt32(lblAmount.Text); // <--- THIS IS "" in the debugger, but on the page it is 10
        }

    Read the article

  • Linux 2.6.24-gentoo-r3-comtrance on x86_64: high usage for unknown reasons

    - by Dorjan
    Hello everyone. I'm a complete rookie when it comes to all things Linux-related, so please treat me as such and assume I know nothing. That being said, top says this:

        top - 12:08:03 up 11 days, 15:36, 0 users, load average: 5.47, 5.53, 5.46
        Tasks: 296 total, 2 running, 294 sleeping, 0 stopped, 0 zombie
        Cpu(s): 6.3%us, 1.4%sy, 0.0%ni, 71.3%id, 20.6%wa, 0.0%hi, 0.3%si, 0.0%st
        Mem: 8176880k total, 8118236k used, 58644k free, 89312k buffers
        Swap: 1004052k total, 0k used, 1004052k free, 7235652k cached

          PID USER     PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
         1229 root     15  -5     0    0    0 D    1  0.0 199:28.63 kjournald
         2946 root     20   0  1716  676  552 D    1  0.0 145:02.94 syslogd
        14553 root     20   0  2644 1268  876 R    1  0.0   0:00.34 top
        14609 postfix  20   0  7896 1884 1460 D    1  0.0   0:00.02 bounce
        14630 postfix  20   0  7896 1876 1452 R    0  0.0   0:00.00 bounce

    And my hard drives say:

        > df -k
        Filesystem           1K-blocks      Used Available Use% Mounted on
        /dev/sda3              4925556   4474836    200508  96% /
        /dev/sda5               489992     36090    428602   8% /tmp
        /dev/sda6            377951852 236171160 122581816  66% /var
        none                   4088440         0   4088440   0% /dev/shm

    It has been like this for a few days now. I don't know what is causing the high server load (normally around 1.3); can anyone give any tips on how to track down the culprit? Many thanks.
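    Since top shows 20.6%wa and kjournald sitting in the D state, my guess is that the load is I/O related. This is what I was planning to run next, if these are even the right commands (the iotop package may need installing first):

        # Per-disk utilisation once per second; high %util / await points at the busy disk
        iostat -x 1

        # Which processes are actually doing the I/O
        iotop -o

        # Processes stuck in uninterruptible sleep ("D" state), usually waiting on I/O
        ps -eo state,pid,user,comm | grep '^D'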

    Read the article

  • Exchange 2010 mail routing with Hub Transport in multiple sites

    - by jmreicha
    I have two separate physical sites, Site A and Site B. In site A, I have the following:

        2 CAS servers
        2 Hub Transport servers
        2 Mailbox servers
        2 Edge servers

    In site B, I have the following:

        1 CAS
        1 Hub Transport
        1 Mailbox
        1 Edge

    Currently everything is working out of site A. That is, all users are housed on mailboxes in site A and all inbound mail flow points to site A. I would eventually like to be able to move some of the mailboxes to site B without causing a disruption, for resiliency and redundancy purposes, but I am not quite sure how to go about setting this up or whether it is even possible. So far I have created an Edge subscription in site B and am able to send email out from test accounts set up with mailboxes on the site B Mailbox server. However, I am unable to receive incoming mail messages and am confused. So I'm thinking incoming messages are still being directed to site A and then getting stuck because there is no way to route the mail to the site B mailboxes. Is this assumption correct? I am unfamiliar with mail flow and routing, so I am not really sure what I need to be looking at. Would I add the site B Hub Transport to the Edge subscription in site A? Or, I guess more specifically, how would I go about enabling communication and mail flow between mailboxes split across sites A and B?

    Read the article

  • How do I connect to and factory reset a Catalyst 3560 Switch?

    - by Josh
    My company just bought another company. In their server room they had some older hardware which I would like to repurpose. One of these is a Cisco Catalyst switch, a C3560G-48TS-S. I found some instructions about this switch here, but they are not written for a beginner. I have no idea how to connect to this thing to begin running the commands. The documentation says: "Configure the PC terminal emulation software for 9600 baud, 8 data bits, no parity, 1 stop bit, and no flow control." But I can't find anything on how to do this (assuming with telnet?) or even what program to use. I also don't know how to find the IP address of the device to connect to it. My research also says that once I get in there, I need to run:

        clear config all

    Is this the right command? Also, what if I can't get the username and password for these devices? Is there some way to factory reset them? (My only experience is with devices that have a hardware reset button.)

    EDIT: I should note that when I push the button on the front, the three lights blink, which according to the documentation indicates the switch is configured and "not available for express setup".
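    From what I have pieced together so far (and I may well have this wrong), the 9600 / 8-N-1 settings are for a serial console session over the console cable rather than telnet, and on a Linux or Mac laptop with a USB-to-serial adapter the session would be opened with something like this, where the device name is a guess on my part:

        # Open a serial console at 9600 baud, 8 data bits, no parity, 1 stop bit
        screen /dev/ttyUSB0 9600

        # Or, with minicom:
        minicom -D /dev/ttyUSB0 -b 9600

    On Windows, PuTTY with the connection type set to "Serial" and the same settings appears to be the usual choice.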

    Read the article

  • Postfix not delivering from external senders and not logging anything

    - by simendsjo
    Some semi-recent upgrades must have broken my Postfix + Dovecot configuration, and I'm having problems finding the cause. My domain is simendsjo.me, with the MX record mail.simendsjo.me. I can send mail to both local and external recipients, and it delivers mail from internal mailboxes. The problem is that mail from external senders isn't delivered, and nothing is logged at all. The external sender also doesn't receive any errors. I have no idea where to even start looking, as nothing is logged at all when external mail is sent to my server. So the first issue would be: how can I turn on some debug messages for Postfix? I've tried:

        debug_peer_level = 2
        debug_peer_list = simendsjo.me

    ... and _level = 999 and _list = gmail.com, where I'm trying to send emails from, but nothing is logged. When sending mail from a local mailbox (but from an outside computer, not localhost), a lot is logged. I don't have any rules in iptables either. Any ideas how I can get some debug messages for Postfix?
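    Since nothing shows up in the logs at all, my current (unverified) suspicion is that the connections never reach Postfix, so the first things I plan to check are along these lines, with placeholder interface and host names:

        # Is anything actually listening on port 25, and on which addresses?
        netstat -tlnp | grep :25

        # From an external machine: does an SMTP session open at all?
        telnet mail.simendsjo.me 25

        # Watch for inbound SMTP traffic arriving at the server
        tcpdump -i eth0 -n port 25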

    Read the article

  • How can one restrict network activity to only the VPN on a Mac and prevent unsecured internet activity?

    - by John
    I'm using Mac OS and connect to a VPN to hide my location and IP (I have the 'Send all traffic over VPN connection' box checked in the Network system preferences). I wish to remain anonymous and do not wish to reveal my actual IP, hence the VPN. I have a preference pane called pearportVPN that automatically connects me to my VPN when I get online. The problem is that when I connect to the internet using AirPort (or other means), I have a few seconds of unsecured internet connection before my Mac logs onto the VPN. Therefore it's only a matter of time before I inadvertently expose my real IP address in the few seconds between connecting to the internet and logging onto the VPN. Is there any way I can block any traffic to and from my Mac that does not go through the VPN, so that nothing can connect unless I'm logged onto it? I suspect I would need a third-party app that blocks all traffic except through the VPN server address, perhaps Intego VirusBarrier X6 or Little Snitch, but I'm afraid I'm not sure which is right or how to configure them. Any help would be much appreciated. Thanks!
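    If the built-in pf firewall is available on my version of Mac OS (something I have not yet been able to test), I imagine rules along these general lines are the shape of what I am after; the VPN server address, port and tunnel interface are placeholders:

        # Sketch of /etc/pf.anchors/vpn-only (untested)
        vpn_server = "203.0.113.10"                    # placeholder VPN server IP
        block drop out all                             # default: no outbound traffic at all
        pass out on utun0 all                          # allow traffic inside the VPN tunnel
        pass out proto udp to $vpn_server port 1194    # allow the tunnel itself (OpenVPN example)
        pass out proto udp to any port 53              # DNS to resolve the VPN host (this does leak DNS)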

    Read the article

  • RAIDs with a lot of spindles - how to safely put the "wasted" space to use

    - by kubanczyk
    I have a fairly large number of RAID arrays (server controllers as well as midrange SAN storage) that all suffer from the same problem: barely enough spindles to keep up with peak I/O, and tons of unused disk space. I guess it's a universal issue, since vendors offer 300 GB as their smallest drives but random I/O performance hasn't really grown much since the days when the smallest drives were 36 GB. One example is a database that occupies 300 GB and needs random performance of 3200 IOPS, so it gets 16 disks (4800 GB minus 300 GB, and we have 4.5 TB of wasted space). Another common example is redo logs for an OLTP database that is sensitive in terms of response time: the redo logs get their own 300 GB mirror but take 30 GB, leaving 270 GB wasted. What I would like to see is a systematic approach for both Linux and Windows environments. How to set up the space so that the sysadmin team is reminded of the risk of hindering the performance of the main db/app? Or, even better, is protected from that risk? The typical situation that comes to my mind is "oh, I have this very large zip file, where do I uncompress it? Umm, let's see the df -h and we'll figure something out in no time..." I don't put the emphasis on strictness of security (sysadmins are trusted to act in good faith), but on overall simplicity of the approach. For Linux, it would be great to have a filesystem customized to cap the I/O rate to a very low level - is this possible?
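    On the last point, the closest thing I am aware of on Linux is not a filesystem feature but cgroup I/O throttling. This is a sketch of the idea, with placeholder device numbers, limits and paths (the cgroup mount point varies by distribution), in case someone can confirm whether it is a sane direction:

        # Create a cgroup for ad-hoc work on the spare space
        mkdir /sys/fs/cgroup/blkio/scratch

        # Cap reads and writes against /dev/sdb (major:minor 8:16) to 5 MB/s
        echo "8:16 5242880" > /sys/fs/cgroup/blkio/scratch/blkio.throttle.read_bps_device
        echo "8:16 5242880" > /sys/fs/cgroup/blkio/scratch/blkio.throttle.write_bps_device

        # Run the ad-hoc job inside that cgroup
        echo $$ > /sys/fs/cgroup/blkio/scratch/cgroup.procs
        unzip /spare/very_large_file.zip -d /spare/tmp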

    Read the article

  • Apache on CentOS 5.9 VM serves my optimized images corrupted (but my Mac doesn't)

    - by Robert K
    I'm using a Vagrant VM to mirror the client's environment as closely as I can. As part of our build process we do no optimization of assets early on; that comes as we're ready to take a site live. Needless to say, this issue is beginning to worry me, as we need to take the site live very soon.

    I use ImageOptim to automate optimization of image assets, which runs a whole series of tools (Zopfli, PNGOUT, OptiPNG, AdvPNG, PNGCrush), and I always set the optimizations to their maximum setting. After optimization, my PNGs come out corrupted. What's weird is that if I serve the same file through my Mac's copy of Apache, not through Vagrant, the image loads fine. In fact, the only time it's ever corrupt like this is when the image is served from the Vagrant VM and its install of Drupal. All optimized JPEGs display only the first ~20% of the image, and PNGs, depending on the image, may show either a portion of the image or "progressive"-style corruption. The browser itself makes no difference: the same browser will serve an uncorrupted image from my Mac's Apache instance and a corrupt image from the VM. Even when I disable all PNG optimizations except PNGCrush and the removal of the PNG metadata, the image is served corrupted. I'm optimizing JPEG images with JPEGmini.

    The server is running CentOS 5.9, Apache 2.2.3-85, PHP 5.3.3, and Drupal 7. As best as I can tell, the error lies somewhere within the VM, either with Apache or (perhaps) the network stack. It seems like the tools that optimize the compression of the PNGs and JPEGs are what trigger this error. I've already determined that the .htaccess file isn't interfering with how the images load. What should I try to troubleshoot this?
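    One thing I have seen mentioned for corrupted or truncated static files served out of VirtualBox shared folders, though I have not yet confirmed it applies to my setup, is Apache's use of sendfile; the suggested change is along these lines:

        # In httpd.conf or the relevant vhost: avoid stale/partial reads from
        # VirtualBox shared folders by not using the sendfile() syscall
        EnableSendfile Off

    If that turns out to be the cause, I assume the Vagrant synced-folder path is the key detail rather than Apache itself.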

    Read the article

  • SSH & SFTP: Should I assign one port to each user to facilitate bandwidth monitoring?

    - by BertS
    There is no easy way to track real-time per-user bandwidth usage for SSH and SFTP. I think assigning one port to each user may help.

    Idea of implementation

    Use case: Bob, with UID 1001, shall connect on port 31001. Alice, with UID 1002, shall connect on port 31002. John, with UID 1003, shall connect on port 31003. (I do not want to launch several sshd instances as proposed in question 247291.)

    1. Setup for SFTP. In /etc/ssh/sshd_config:

        Port 31001
        Port 31002
        Port 31003
        Subsystem sftp /usr/bin/sftp-wrapper.sh

    The file sftp-wrapper.sh starts the SFTP server only if the port is the correct one:

        #!/bin/sh
        mandatory_port=3`id -u`
        current_port=`echo $SSH_CONNECTION | awk '{print $4}'`
        if [ $mandatory_port -eq $current_port ]
        then
            exec /usr/lib/openssh/sftp-server
        fi

    2. Additional setup for SSH. A few lines in /etc/profile prevent the user from connecting on the wrong port:

        if [ -n "$SSH_CONNECTION" ]
        then
            mandatory_port=3`id -u`
            current_port=`echo $SSH_CONNECTION | awk '{print $4}'`
            if [ $mandatory_port -ne $current_port ]
            then
                echo "Please connect on port $mandatory_port."
                exit 1
            fi
        fi

    Benefits

    Now it should be easy to monitor per-user bandwidth usage; an RRDtool-based application could produce charts of it. I know this won't be a perfect calculation of bandwidth usage: for example, if somebody launches a brute-force attack on port 31001, there will be a lot of traffic on this port although not from Bob. But this is not a problem for me: I do not need an exact computation of per-user bandwidth usage, but an indicator that is approximately correct in standard situations.

    Questions

    Is the idea of assigning one port to each user a good one? Is the proposed setup a reliable one? If I have to open dozens of ports for many users, should I expect a performance drawback? Do you know of an RRDtool-based application which could produce such charts?
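    For the measurement side, what I had in mind is per-port iptables rules whose byte counters an RRDtool script could poll; only a sketch, and I would welcome corrections:

        # One accounting rule per user port (no jump target, so the rule only counts)
        iptables -A INPUT  -p tcp --dport 31001
        iptables -A OUTPUT -p tcp --sport 31001

        # Read the exact byte counters for the RRD update
        iptables -L INPUT  -v -n -x | grep 31001
        iptables -L OUTPUT -v -n -x | grep 31001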

    Read the article

  • Shell script to block ProFTPD failed attempts

    - by Saif
    Hello, I want to filter and block failed attempts to access my ProFTPD server. Here is an example line from the /var/log/secure file:

        Jan  2 18:38:25 server1 proftpd[17847]: spy1.XYZ.com (93.218.93.95[93.218.93.95]) - Maximum login attempts (3) exceeded

    There are several lines like this, and I would like to block any IP that shows up in attempts like this twice. Here's the script I'm trying to run to block those IPs:

        tail -1000 /var/log/secure | awk '/proftpd/ && /Maximum login/ {
            if (/attempts/) try[$7]++; else try[$11]++;
        }
        END { for (h in try) if (try[h] > 4) print h; }' |
        while read ip
        do
            /sbin/iptables -L -n | grep $ip > /dev/null
            if [ $? -eq 0 ] ; then
                # echo "already denied ip: [$ip]"
                true
            else
                logger -p authpriv.notice "*** Blocking ProFTPD attempt from: $ip"
                /sbin/iptables -I INPUT -s $ip -j DROP
            fi
        done

    How can I select the IP with awk? With the current script it selects the whole "(93.218.93.95[93.218.93.95])" field, but I only want the IP.
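    What I have been experimenting with, a sketch I have not fully tested, is stripping the brackets inside awk instead of relying on the field position:

        tail -1000 /var/log/secure | awk '/proftpd/ && /Maximum login attempts/ {
            # The host field looks like "spy1.XYZ.com (93.218.93.95[93.218.93.95])";
            # pull out just the address between "[" and "]".
            if (match($0, /\[[0-9.]+\]/)) {
                ip = substr($0, RSTART + 1, RLENGTH - 2);
                try[ip]++;
            }
        }
        END { for (h in try) if (try[h] >= 2) print h }'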

    Read the article

  • Nginx removes the index.php from the URL

    - by codeHead
    I have a CodeIgniter PHP application on nginx. It works as expected on Apache, but after moving to nginx I noticed that index.php is automatically removed from the URL in all my links. In fact, when I try using index.php it does not go to the desired URL but gets redirected to my default controller. Below is a copy of my nginx.conf file:

        server {
            listen 80;
            server_name mydomainname.com;
            root /var/www/domain/current;
            # index index.php;
            error_log /var/log/nginx/error.log;
            access_log /var/log/nginx/access.log main;

            location / {
                # Check if a file or directory index file exists, else route it to index.php.
                try_files $uri $uri/ /index.php;
            }

            location ~* \.php {
                fastcgi_pass backend;
                include fastcgi.conf;
                fastcgi_buffer_size 128k;
                fastcgi_buffers 4 256k;
                fastcgi_busy_buffers_size 256k;
                fastcgi_read_timeout 500;
                #fastcgi_param SCRIPT_FILENAME $document_root/index.php;
                add_header Expires "Thu, 01 Jan 1970 00:00:01 GMT";
                add_header Cache-Control "no-cache, no-store, private, proxy-revalidate, must-revalidate, post-check=0, pre-check=0";
                add_header Pragma no-cache;
                add_header X-Served-By $hostname;
            }

            location ~* ^.+\.(css|js)$ {
                expires 7d;
                add_header Pragma public;
                add_header Cache-Control "public";
            }

            # set expiration of assets to MAX for caching
            location ~* \.(ico|gif|jpe?g|png)(\?[0-9]+)?$ {
                expires max;
                log_not_found on;
            }
        }

    I need to use my URLs with index.php; please help.

    Read the article

  • stopping fastcgi_cache for php-enabled custom error page

    - by Ian
    Since enabling the fastcgi_cache on my nginx server, my PHP-enabled custom error page has suddenly stopped working and I'm getting the internal 404 message instead. In nginx.conf:

        fastcgi_cache_path /var/lib/nginx/fastcgicache levels=1:2 keys_zone=MYCACHE:5m inactive=2h
            max_size=1g loader_files=1000 loader_threshold=2000;
        map $http_cookie $no_cache {
            default 0;
            ~SESS 1;
        }
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        add_header X-My-Cache $upstream_cache_status;
        map $uri $no_cache_dirs {
            default 0;
            ~^/(?:phpMyAdmin|rather|poll|webmail|skewed|blogs|galleries|pixcache) 1;
        }

    The cache-relevant stuff in my fastcgi.conf:

        fastcgi_cache MYCACHE;
        fastcgi_keep_conn on;
        fastcgi_cache_bypass $no_cache $no_cache_dirs;
        fastcgi_no_cache $no_cache $no_cache_dirs;
        fastcgi_cache_valid 200 301 5m;
        fastcgi_cache_valid 302 5m;
        fastcgi_cache_valid 404 1m;
        fastcgi_cache_use_stale error timeout invalid_header updating http_500;
        fastcgi_ignore_headers Cache-Control Expires;
        expires epoch;
        fastcgi_cache_lock on;

    If I disable the fastcgi_cache, the PHP-enabled 404 page works as it has for years. How would I disable the cache for the custom error page?

    Read the article

  • Mixing both local and nonlocal addresses on three switches

    - by klew
    I have four computers that have non-local (public) addresses like 150.X.X.X. Now I am also getting another few computers that should only be accessible through a gateway (it will be a computing cluster), and their addresses are 10.0.0.X. I also wanted to include those four older computers in this new cluster, but I want them to remain accessible from the internet on their public addresses, so I would like to set them up with both 150.X.X.X and 10.0.0.X addresses (I've set this up as interface eth0:0 since I have only one NIC). The new computers have their own switch, and the old computers also have their own switch; both of those are connected to another (third) switch. The problem is that the old computers see each other (I can ping them), and the new computers also see each other, but I can't ping an old computer from a new computer and vice versa. However, pinging on the public addresses works as expected. I looked into the switch configuration and didn't find anything useful. I have no idea what I missed here; can somebody help? All computers run Ubuntu Server 10.04.
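    For completeness, this is roughly how I added the secondary addresses (a sketch with placeholder host numbers), in case the mistake is here rather than in the switches:

        # Secondary 10.0.0.x address on the same NIC (the eth0:0 alias)
        ip addr add 10.0.0.11/24 dev eth0 label eth0:0

        # Verify both addresses and the resulting routes
        ip addr show eth0
        ip route show

        # There should be a directly connected route like:
        #   10.0.0.0/24 dev eth0  proto kernel  scope link  src 10.0.0.11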

    Read the article

  • Varnish going sick

    - by junke1990
    I'm having trouble with Varnish: it works for a couple of views and then just goes sick... The weird thing is that it does work for about 20 or 30 requests. If I call Apache directly, it works fine. I'm running Varnish 3.0.3-1 on Debian Squeeze with, for now, Apache on port 80 and Varnish on port 8080 on the same server. I'm using https://github.com/mattiasgeniar/varnish-3.0-configuration-templates as the base for my VCL and modified it to support Concrete5. Anyone have a clue on how I should debug this?

        backend default {
            .host = "127.0.0.1";
            .port = "80";
            .connect_timeout = 1.5s;
            .first_byte_timeout = 45s;
            .between_bytes_timeout = 30s;
            .probe = {
                .url = "/";
                .timeout = 1s;
                .interval = 10s;
                .window = 10;
                .threshold = 8;
            }
        }

    Log output:

        0 CLI            - Rd ping
        0 CLI            - Wr 200 19 PONG 1353791312 1.0
        0 CLI            - Rd ping
        0 CLI            - Wr 200 19 PONG 1353791315 1.0
        0 Backend_health - default Still sick 4--X-R- 0 8 10 0.000689 0.000000 HTTP/1.1 301 Moved Permanently

    (The 301 is because I check for www.)
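    My current guess, and I would like someone to confirm or refute it, is that the health probe itself is failing: "/" answers 301 because of the www. check, so the probe never sees a 200 and the backend gets marked sick. A sketch of the two variations I plan to try, with a placeholder probe URL:

        backend default {
            .host = "127.0.0.1";
            .port = "80";
            .probe = {
                # Option 1: probe a URL that really returns 200
                .url = "/health.txt";

                # Option 2: keep probing "/" but accept the redirect
                # .expected_response = 301;

                .timeout = 1s;
                .interval = 10s;
                .window = 10;
                .threshold = 8;
            }
        }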

    Read the article

  • Nginx: Rewriting directory path to file

    - by Doug
    I'm a little new to nginx, so bear with me. I want to rewrite a URL like foo.bar.com/newfoo?limit=30 to foo.bar.com/newfoo.php?limit=30. It seems pretty simple to do with something like this:

        rewrite ^([a-z]+)(.*)$ $1.php$2 last;

    The part that I am confused about is where to put it. I've tried my hand at some location directives, but I'm doing it wrong. Here's my existing virtual host config; where should I implement my rewrite?

        server {
            listen 80;
            listen [::]:80;
            server_name foo.bar.com;
            root /home/foo;
            index index.php index.html index.htm;

            location / {
                try_files $uri $uri/ /index.html;
            }

            location /doc/ {
                alias /usr/share/doc/;
                autoindex on;
                allow 127.0.0.1;
                deny all;
            }

            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/tmp/php5-fpm.sock;
                fastcgi_index index.php;
            }
        }

    Thanks!

    Read the article

  • Good Hosting Providers With Zend Framework Support [closed]

    - by manyxcxi
    I currently use ixwebhosting for my hosting services. They're cheap and work (most of the time). The databases are horribly slow, the servers are horribly slow, and their support (though usually prompt) is tough to deal with. That being said, they're cheap, I've got around 20 domains hosted in my account, none of them are high volume, and they work just well enough - until today. This isn't meant to be a condemnation of ixwebhosting, though; their prices are very low for what they offer, and most things work just fine, most of the time. I need to be able to host web apps written with Zend Framework in a fairly easy fashion. The server performance can't be worse than what I've already had (a pretty low hurdle to clear), and I don't want to spend $30/mo. These are not money-making websites; they're projects. My requirements are PHP 5.3, ZF support, MySQL databases, and multiple domains - not much. Who should I look at, and who should I look out for?

    Read the article

  • Website always having DNS problems

    - by Root
    I moved my website from shared hosting to a VPS. When it was on shared hosting, all I did was update my name servers, whereas now I have my own VPS and I used one of my domains, sjdpublishing.com, as the primary domain for the VPS. I created the nameservers ns1.sjdpublishing.com and ns2.sjdpublishing.com, and my actual website, creativeproperty.com.au, points to ns1.sjdpublishing.com and ns2.sjdpublishing.com. I am having repeated problems with the domain creativeproperty.com.au. A few weeks back I had a problem which was resolved by flushing DNS, and later I got a similar problem which was not resolved by flushing DNS. I posted a question here, and someone told me to go to Network Settings in Mac OS X and remove an IP, since nslookup creativeproperty.com.au in my Mac terminal pointed to my router IP, and that fixed the problem. Now many of my clients are complaining that they are having the same trouble accessing my website. I don't know whether to flush DNS, change network settings, or whether it is some other issue. Can anyone please check whether my domains creativeproperty.com.au and sjdpublishing.com have correct records, and also tell me the best solution for this issue?
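    These are the checks I know how to run myself so far, in case their output is what an answer would need (I am not sure I am reading the results correctly):

        # Follow the delegation from the root down to see which nameservers answer for the domain
        dig NS creativeproperty.com.au +trace

        # Ask a public resolver directly, bypassing my local network settings
        dig A creativeproperty.com.au @8.8.8.8

        # Check that the nameserver hosts themselves resolve
        dig A ns1.sjdpublishing.com @8.8.8.8
        dig A ns2.sjdpublishing.com @8.8.8.8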

    Read the article

  • AWS instance went down for restart, came back up and has been "Waiting for network configuration... " for 12 hours

    - by kavisiegel
    What happened was: about a month ago I created a new instance, re-configured everything, and mounted the old instance's drive on the new one. I then detached the old drive from the instance, but it was still listed as mounted, so it errored out. Then came the outage last month, and when the instance booted back up the drive wasn't there, because during the downtime it had been dismounted; the boot hangs because fstab is looking for it. I got word that the server was not up today, checked the logs, re-attached the drive, and restarted. Now I'm getting "Waiting for network configuration..." and it doesn't seem to be doing anything. Some googling told me that I'm going to need to start from zero again, which I don't think is right. I created a new instance, which booted with no problem, then I stopped it and swapped the two old drives into the new instance: still the same error. Using the fresh AMI, it worked. I figure it's just a misconfigured Amazon file. I can attach the drive to a functioning instance on the same AMI, copy some files over, and then try again, but I don't know where to even start or which files to check.
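    Since I can attach the boot drive to the working instance, the file I was planning to inspect first is its /etc/fstab. A sketch of the kind of change I have in mind for the stale entry, with placeholder device and mount point names, assuming an Ubuntu-style system where "nobootwait" keeps a missing volume from blocking boot:

        # Mount the rescued volume on the healthy instance and edit its fstab
        mkdir -p /mnt/rescue
        mount /dev/xvdf1 /mnt/rescue
        vi /mnt/rescue/etc/fstab

        # Stale entry, roughly:
        #   /dev/xvdg  /data  ext4  defaults                     0 2
        # Changed so a missing device no longer hangs boot:
        #   /dev/xvdg  /data  ext4  defaults,nofail,nobootwait   0 2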

    Read the article
