Search Results

Search found 70727 results on 2830 pages for 'abhi iphone blackberrywww developerit com'.


  • OpenSSL: how to setup an OCSP server for checking third-party certificates?

    - by StackedCrooked
    I am testing the Certificate Revocation functionality of a CMTS device. This requires me to set up an OCSP responder. Since it will only be used for testing, I assume that the minimal implementation provided by OpenSSL should suffice. I have extracted a certificate from a cable modem, copied it to my PC and converted it to the PEM format. Now I want to register it in the OpenSSL OCSP database and start a server. I have completed all these steps, but when I do a client request my server invariably responds with "unknown". It seems to be completely unaware of my certificate's existence. I would greatly appreciate it if anyone would be willing to have a look at my code. For your convenience, I have created a single script consisting of a sequential list of all used commands, from setting up the CA until starting the server: http://code.google.com/p/stacked-crooked/source/browse/trunk/Misc/OpenSSL/AllCommands.sh You can also find the custom config file and the certificate that I am testing with: http://code.google.com/p/stacked-crooked/source/browse/trunk/Misc/OpenSSL/ Any help would be greatly appreciated.
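    For reference, a minimal OpenSSL OCSP responder sketch (file names are placeholders, not taken from the linked script). The responder answers from the CA's index.txt, so a certificate that was only converted to PEM but never issued or registered by that CA will always come back as "unknown":

        # Responder, serving revocation status for certificates listed in the CA index:
        openssl ocsp -index demoCA/index.txt -CA demoCA/cacert.pem \
            -rsigner ocsp_signer.pem -rkey ocsp_signer.key -port 8888 -text

        # Client-side check against it:
        openssl ocsp -CAfile demoCA/cacert.pem -issuer demoCA/cacert.pem \
            -cert modem_cert.pem -url http://127.0.0.1:8888 -resp_text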

    Read the article

  • Comparing Nginx+PHP-FPM to Apache-mod_php

    - by Rushi
    I'm running Drupal and trying to figure out the best stack to serve it: Apache + mod_php or Nginx + PHP-FPM. I used ApacheBench (ab) and Siege to test both setups and I'm seeing Apache performing better. This surprises me a little, since I've heard a lot of good things about Nginx + PHP-FPM. My current Nginx setup is pretty much out of the box, and the same goes for PHP-FPM. What optimizations can I make to speed up the Nginx + PHP-FPM combo over Apache and mod_php? In my tests using ab, Apache is outperforming Nginx significantly (higher requests/second and finishing the tests much faster). I've googled around a bit, but since I've never used Nginx, PHP-FPM or FastCGI, I don't know exactly where to start. PHP v5.2.13, Drupal v6, latest PHP-FPM and Nginx compiled from source, Apache v2.0.63.

    ApacheBench, Nginx + PHP-FPM:

        Server Software:        nginx/0.7.67
        Server Hostname:        test2.com
        Server Port:            80
        Concurrency Level:      25
        ---> Time taken for tests:   158.510008 seconds
        Complete requests:      1000
        Failed requests:        0
        Write errors:           0
        ---> Requests per second:    6.31 [#/sec] (mean)
        Time per request:       3962.750 [ms] (mean)
        Time per request:       158.510 [ms] (mean, across all concurrent requests)
        Transfer rate:          181.38 [Kbytes/sec] received

    ApacheBench, Apache using mod_php:

        Server Software:        Apache/2.0.63
        Server Hostname:        test1.com
        Server Port:            80
        Concurrency Level:      25
        --> Time taken for tests:    63.556663 seconds
        Complete requests:      1000
        Failed requests:        0
        Write errors:           0
        --> Requests per second:     15.73 [#/sec] (mean)
        Time per request:       1588.917 [ms] (mean)
        Time per request:       63.557 [ms] (mean, across all concurrent requests)
        Transfer rate:          103.94 [Kbytes/sec] received
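    A few hedged starting points for tuning (pool names, paths and values below are illustrative, not from the question): make sure PHP-FPM has enough workers to cover the test concurrency, and give nginx reasonable FastCGI buffers. With a current PHP-FPM the pool settings look like this (the old PHP 5.2 patch uses an XML config with equivalent options):

        ; php-fpm pool settings (values are illustrative)
        pm = dynamic
        pm.max_children = 25
        pm.start_servers = 8
        pm.min_spare_servers = 4
        pm.max_spare_servers = 12

        # nginx PHP location
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_buffer_size 32k;
            fastcgi_buffers 16 16k;
        }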

    Read the article

  • Set scheduled task last result to 0x0 manually

    - by Rogier
    Every night a task runs that checks whether any scheduled task has a Last Result not equal to 0x0. If a scheduled task has an error like 0x1, an e-mail is automatically sent to me. As some tasks run only weekly, and sometimes an error occurs that results in a non-zero code, an e-mail with the error message is sent every night, because the Last Result column still shows the last result of 0x1. I would like to set the Last Result column to 0x0 manually once I have solved a problem, so I don't get the error e-mail every night. So, is it possible to set a scheduled task's Last Result to 0x0 manually (or by a script)?

    @harrymc: see the script below that sends the e-mail. I can easily add a condition to ignore result 0x1 (or another code), but that is not the solution, since most of the time this result is a real error and has to be e-mailed.

        set YourEmailAddress=to@email.com
        set SMTPServer=SMTPserver
        set PathToScript=c:\scripts
        set FromAddress=from@email.com

        for /F "delims=" %%a in ('schtasks /query /v /fo:list ^| findstr /i "Taskname Result"') do call :Sub %%a
        goto :eof

        :Sub
        set Line=%*
        set BOL=%Line:~0,4%
        set MOL=%Line:~38%
        if /i %BOL%==Task (
            set name=%MOL%
            goto :eof
        )
        set result=%MOL%
        echo Task Name=%name%, Task Result=%result%
        if not %result%==0 (
            echo Task %name% failed with result %result% > %PathToScript%\taskcheckerlog.txt
            bmail %PathToScript%\taskcheckerlog.txt -t %YourEmailAddress% -a "Warning! Failed %name% Scheduled Task on %computername%" -s %SMTPServer% -f %FromAddress% -b "Task %name% failed with result %result% on CorVu scheduler %computername%"
        )
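    A hedged workaround rather than a direct edit: the stored Last Result is overwritten by the next run, so triggering the fixed task once on demand clears the stale 0x1 ("TaskName" below is a placeholder):

        schtasks /run /tn "TaskName"
        rem then confirm the stored result was refreshed:
        schtasks /query /v /fo:list | findstr /i "TaskName Result"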

    Read the article

  • litespeed issue with content-type

    - by sandeep.s85
    I am running Magento with LiteSpeed. The problem I am facing is that an Ajax call is made whose response should carry an x-json content type, but LiteSpeed sends a text/html content type instead. I've checked that page with Apache and everything works fine. I compared the response headers under Apache and LiteSpeed, and here they are.

    With Apache:

        HTTP/1.1 200 OK
        Date: Fri, 07 Sep 2012 05:58:47 GMT
        Server: Apache
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Set-Cookie: frontend=164b21c64808a05e806027bdbd4d745d; expires=Fri, 07-Sep-2012 06:58:48 GMT; path=/; domain=mydomain.com; httponly
        Connection: close
        Transfer-Encoding: chunked
        Content-Type: application/x-json

    With LiteSpeed:

        HTTP/1.1 200 OK
        Date: Fri, 07 Sep 2012 06:10:55 GMT
        Server: LiteSpeed
        Connection: close
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Set-Cookie: frontend=164b21c64808a05e806027bdbd4d745d; expires=Fri, 07-Sep-2012 07:10:55 GMT; path=/; domain=mydomain.com; httponly
        Content-Type: text/html; charset=UTF-8
        Content-Length: 474
        Vary: User-Agent

    I've also added application/json to LiteSpeed's mime.properties and restarted it, but that did not work. Here is the screenshot.
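    A hedged workaround sketch: have the controller that answers the Ajax call set the header explicitly on the response object, with the replace flag, so the front-end server has nothing of its own to substitute (the controller action and $payload variable are hypothetical, not from the question):

        // Hypothetical Magento controller action answering the Ajax call
        $this->getResponse()
             ->setHeader('Content-Type', 'application/json', true) // true = replace any existing header
             ->setBody(json_encode($payload));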

    Read the article

  • Drobo FS vs Lime Technology unRAID vs FreeNAS

    - by elluca
    I had already decided to buy a Drobo FS until I found these two reviews:

        http://www.digitalversus.com/data-robotics-drobo-fs-p889_9543_487.html
        http://www.digitalversus.com/lime-technology-unraid-p889_8992_473.html

    The two cons against the Drobo for me: loudness and price. What disadvantages does the unRAID setup have compared to the Drobo FS? Does it have the same ease of use, like swapping drives on the go, simply extending capacity by plugging in new drives, notifying me of drive errors, disk failure protection, dynamic space for "partitions", better/worse effective capacity, etc.? Which is more secure? Am I able to simply replace a bad drive with a new one on unRAID? What happens if my PC fails, let's say the CPU overheats? Since I have a complete PC which is going to be replaced anyway, I would only have to pay for the software to use unRAID.

    I am going to use my NAS for:

        - music library (how well does it integrate with iTunes?)
        - picture library
        - movie library
        - development (I need to be able to use Time Machine)

    I am going to use this NAS with a MacBook Pro. My current disks:

        - 2x 500 GB
        - 1x 1.5 TB
        - 1x 2 TB

    On a Drobo FS I would have 2.26 TB of space. What would it be on unRAID? Is FreeNAS also an alternative?

    Read the article

  • Why am I getting "Problem loading the page" after enabling HTTPS for Apache on Windows 7?

    - by Anish
    I enabled HTTPS on the Apache server (2.2.15) on Windows 7 Enterprise by uncommenting

        Include /private/etc/apache2/extra/httpd-ssl.conf

    in C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\httpd.conf, and modifying C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\httpd-ssl.conf to include:

        DocumentRoot "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/htdocs"
        ServerName myserver.com:443
        ServerAdmin admin@dot.com
        ...
        SSLCertificateFile "SSLCertificateFile "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/cert.pem
        SSLCertificateKeyFile "SSLCertificateFile "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/key.pem"

    Then I restart Apache (Start > All Programs > Apache Server 2.2 > Control > Restart) and go to localhost on port 443 in Firefox, where I get a directory listing:

        Index of /
        Links/
        .....
        ....

    But on display of the web page I see:

        Unable to connect
        Firefox can't establish a connection to the server at localhost.
        * The site could be temporarily unavailable or too busy. Try again in a few moments.
        * If you are unable to load any pages, check your computer's network connection.
        * If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the Web.

    I read "Why am I getting 403 Forbidden after enabling HTTPS for Apache on Mac OS X?" and added a default web server configuration block to match my DocumentRoot. The error log C:\Program Files (x86)\Apache Software Foundation\Apache2.2\logs\error.log gives the following error:

        The Apache2.2 service is running.
        (OS 5)Access is denied.  : Init: Can't open server certificate file C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/cert.pem

    I checked the permissions for cert.pem: all permissions (Full control, Read, Read and modify, Execute, Write) are granted to Admin, and I am currently logged in as Admin. I tried using oldcert.pem and oldkey.pem on the same server and it works fine. Is there anything that I missed?
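    Since the Apache service typically runs under a service account rather than the interactively logged-in Admin, a hedged check is to grant that account explicit read access to the new certificate and key (the account name below is an assumption; check the Log On tab of the Apache2.2 service and substitute it):

        icacls "C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\cert.pem" /grant "NT AUTHORITY\LocalService":R
        icacls "C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\key.pem" /grant "NT AUTHORITY\LocalService":R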

    Read the article

  • Using npm install as a MS-Windows system account

    - by Guss
    I have a node application running on Windows, which I want to be able to update automatically. When I run npm install -d as the Administrator account it works fine, but when I try to run it through my automation software (which runs as Local System), I get errors when I try to install a private module from a private git repository:

        npm ERR! git clone [email protected]:team/repository.git fatal: Could not change back to 'C:/Windows/system32/config/systemprofile/AppData/Roaming/npm-cache/_git-remotes/git-bitbucket-org-team-repository-git-06356f5b': No such file or directory
        npm ERR! Error: Command failed: fatal: Could not change back to 'C:/Windows/system32/config/systemprofile/AppData/Roaming/npm-cache/_git-remotes/git-bitbucket-org-team-repository-git-06356f5b': No such file or directory
        npm ERR!
        npm ERR!     at ChildProcess.exithandler (child_process.js:637:15)
        npm ERR!     at ChildProcess.EventEmitter.emit (events.js:98:17)
        npm ERR!     at maybeClose (child_process.js:735:16)
        npm ERR!     at Socket.<anonymous> (child_process.js:948:11)
        npm ERR!     at Socket.EventEmitter.emit (events.js:95:17)
        npm ERR!     at Pipe.close (net.js:451:12)
        npm ERR! If you need help, you may report this log at:
        npm ERR!     <http://github.com/isaacs/npm/issues>
        npm ERR! or email it to:
        npm ERR!     <[email protected]>
        npm ERR! System Windows_NT 6.1.7601
        npm ERR! command "C:\\Program Files\\nodejs\\\\node.exe" "C:\\Program Files\\nodejs\\node_modules\\npm\\bin\\npm-cli.js" "install" "-d"
        npm ERR! cwd D:\nodeapp
        npm ERR! node -v v0.10.8
        npm ERR! npm -v 1.2.23
        npm ERR! code 128

    Just running git clone using the same system account works fine. Any ideas?
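    A hedged workaround is to point npm at a cache directory that actually exists and is writable for the Local System profile, since the failure is in the npm-cache path under systemprofile (the directory below is an arbitrary choice):

        mkdir C:\npm-cache
        npm config set cache C:\npm-cache --global
        rem or just for one run:
        npm install -d --cache C:\npm-cache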

    Read the article

  • Unable to install mod_wsgi on CentOS 5.5 VPS...

    - by jasonaburton
    I am trying to install mod_wsgi on my VPS, but it won't work. This is what I am doing:

        wget http://modwsgi.googlecode.com/files/mod_wsgi-2.5.tar.gz
        tar xzvf mod_wsgi-2.5.tar.gz
        cd mod_wsgi-2.5
        ./configure --with-python=/opt/python2.5/bin/python

    After I run the configure command, I get this error:

        checking for apxs2... no
        checking for apxs... no
        checking Apache version... ./configure: line 1298: apxs: command not found
        ./configure: line 1298: apxs: command not found
        ./configure: line 1299: /: is a directory
        ./configure: line 1461: apxs: command not found
        configure: creating ./config.status
        config.status: creating Makefile
        config.status: error: cannot find input file: Makefile.in

    Through some research I've discovered that I need to modify my command:

        ./configure --with-apxs=/usr/local/apache/bin/apxs \
                    --with-python=/usr/local/bin/python

    But /usr/local/apache/ doesn't exist, or so it tells me. If it doesn't exist, how do I create it with all the files needed? Or, if Apache is located elsewhere on my VPS, where would it be? I'd also like to mention that before this entire deal I installed Apache with:

        yum install httpd

    so I assumed that was all I needed, but apparently not (I am very new to all this server administration stuff, so please be gentle).

    EDIT: This is the tutorial I have been following: http://binarysushi.com/blog/2009/aug/19/CentOS-5-3-python-2-5-virtualevn-mod-wsgi-and-mod-rpaf/ I got stuck at the heading "Installing mod_wsgi". Thanks for any help!
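    A hedged sketch of the usual fix on CentOS: apxs is shipped in the httpd-devel package (not in httpd itself), and it lands in /usr/sbin, so there is no /usr/local/apache to point at:

        yum install httpd-devel
        which apxs    # typically /usr/sbin/apxs on CentOS
        ./configure --with-apxs=/usr/sbin/apxs --with-python=/opt/python2.5/bin/python
        make && make install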

    Read the article

  • Postfix Whitelist before recipient restrictions

    - by GruffTech
    Alright, some background: we have an anti-spam cluster handling about 2-3 million emails per day, blocking somewhere in the range of 99% of spam from reaching our end users. The underlying SMTP server is Postfix 2.2.10. The "front-line defense", before mail gets carted off to SpamAssassin/ClamAV/etc., is attached below.

        ...basic config....
        smtpd_recipient_restrictions =
            reject_unauth_destination,
            reject_rbl_client b.barracudacentral.org,
            reject_rbl_client cbl.abuseat.org,
            reject_rbl_client bl.mailspike.net,
            check_policy_service unix:postgrey/socket
        ...more basic config....

    As you can see, standard RBL services from various companies, as well as a Postgrey service. The problem is, I have one client (out of thousands) who is very upset that we blocked an important email of theirs. It was sent through a Russian freemail provider that is currently listed by two of our three RBLs. I explained the situation to them, but they are insisting that we not block any of their email. So I need a way to whitelist ANY email that comes to domain.com, and I need it to take effect before any of the recipient restrictions; they want no RBL or Postgrey blocking at all. I've done a bit of research myself; http://www.howtoforge.com/how-to-whitelist-hosts-ip-addresses-in-postfix seemed like a good guide at first and almost fixed my problem, but I want to accept based on the To address, not the originating server.
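    A hedged sketch of one way to do this with a recipient-based access map evaluated before the RBL and Postgrey checks (keep reject_unauth_destination first so the whitelist cannot turn the box into an open relay; the map path and domain are placeholders):

        # main.cf
        smtpd_recipient_restrictions =
            reject_unauth_destination,
            check_recipient_access hash:/etc/postfix/rbl_whitelist,
            reject_rbl_client b.barracudacentral.org,
            reject_rbl_client cbl.abuseat.org,
            reject_rbl_client bl.mailspike.net,
            check_policy_service unix:postgrey/socket

        # /etc/postfix/rbl_whitelist
        domain.com    OK

        # rebuild the map and reload
        postmap /etc/postfix/rbl_whitelist && postfix reload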

    Read the article

  • Is there a way to determine which service (in svchost.exe) does an outgoing connection?

    - by fluxtendu
    I'm redoing my firewall configuration with more restrictive policies, and I would like to determine the provenance (and/or destination) of some outgoing connections. I have an issue because they come from svchost.exe and go to web content/application delivery providers, or similar:

        5 IPs in range 82.96.58.0 - 82.96.58.255     --> Akamai Technologies (akamaitechnologies.com)
        3 IPs in range 93.150.110.0 - 93.158.111.255 --> Akamai Technologies (akamaitechnologies.com)
        2 IPs in range 87.248.194.0 - 87.248.223.255 --> LLNW Europe 2 (llnw.net)
        205.234.175.175                              --> CacheNetworks, Inc. (cachefly.net)
        188.121.36.239                               --> Go Daddy Netherlands B.V. (secureserver.net)

    So, is it possible to know which service makes a particular connection? Or what do you recommend for the rules applied to these? (Comodo Firewall & Windows 7)

    Update: netstat -ano and tasklist /svc help me a little, but there are many services in each svchost.exe, so it's still an issue. Moreover, the service names returned by "tasklist /svc" are not easy to read. (All the connections are HTTP (port 80), but I don't think that's relevant.)
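    A hedged way to narrow it down is to temporarily move the suspect service into its own svchost process, so its PID maps one-to-one to the netstat output (wuauserv is only an example service name; revert with type= share afterwards):

        sc config wuauserv type= own
        net stop wuauserv && net start wuauserv
        netstat -ano | findstr :80
        rem use the PID reported by netstat in the filter below
        tasklist /svc /fi "PID eq 1234"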

    Read the article

  • Make Google Chrome's address bar prefer page titles to domain names when offering completions?

    - by Ryan Thompson
    I've recently switched from Firefox to Chrome, and the thing I miss most from Firefox is the "Awesome Bar" that suggests completions for what I type primarily based on page titles, and only secondarily based on domain names. Chrome offers both matching URLs and titles, just like Firefox, but Chrome seems to always prefer a matching domain name over a matching page title or a match to another part of the URL (besides the domain), no matter how many times I pass over the former for the latter. In fact, Chrome also prefers to suggest a search rather than matching anything other than a domain name. So is there any hidden preference I can change to tell Chrome that I care more about page titles than domain names?

    Example: I want to go to Google Reader, so I press Control+L and begin typing "reader". The URL for Google Reader is http://www.google.com/reader/view/#overview-page, so the domain name is www.google.com, which does not contain the word "reader". So the first option that Chrome suggests is either another site that has "reader" as part of the domain, or a search for "reader" with the default search engine. No matter how many times I scroll down and select Google Reader, Chrome never "learns" that that's what I want.

    Read the article

  • CentOS OpenVZ fail to boot after kernel update

    - by SkechBoy
    After upgrading to the latest OpenVZ kernel, the CentOS server won't boot. When I try to boot the latest kernel, the server is stuck at this point (note that the images were taken over virtual KVM): http://i.stack.imgur.com/4lusz.jpg Then I tried to boot some older kernels, and I get this error message:

        kernel panic - not syncing - attempted to kill init

    better shown in this image: http://i.stack.imgur.com/2SReF.jpg Here is some useful information.

    fdisk -l:

        WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.

        Disk /dev/sda: 2995.7 GB, 2995739688960 bytes
        255 heads, 63 sectors/track, 364211 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0004c4e4

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1               1         523     4199044+  82  Linux swap / Solaris
        /dev/sda2             524         785     2104515   83  Linux
        /dev/sda3             786      261869  2097157230   83  Linux
        /dev/sda4          261870      364211   822062115   83  Linux

    /etc/fstab:

        proc        /proc      proc    defaults        0 0
        none        /dev/pts   devpts  gid=5,mode=620  0 0
        /dev/sda1   none       swap    sw              0 0
        /dev/sda2   /boot      ext3    defaults        0 0
        /dev/sda3   /          ext3    defaults        0 0
        /dev/sda4   /home      ext3    defaults        0 0

    grub config file:

        title OpenVZ (2.6.18-274.18.1.el5.028stab098.1)
            root (hd0,1)
            kernel /vmlinuz-2.6.18-274.18.1.el5.028stab098.1 ro root=/dev/sda3 vga=0x317 selinux=0
            initrd /initrd-2.6.18-274.18.1.el5.028stab098.1.img
        title OpenVZ (2.6.18-274.7.1.el5.028stab095.1)
            root (hd0,1)
            kernel /vmlinuz-2.6.18-274.7.1.el5.028stab095.1 ro root=/dev/sda3 vga=0x317 selinux=0
            initrd /initrd-2.6.18-274.7.1.el5.028stab095.1.img
        title OpenVZ (2.6.18-194.8.1.el5.028stab070.4)
            root (hd0,1)
            kernel /vmlinuz-2.6.18-194.8.1.el5.028stab070.4 ro root=/dev/sda3 vga=0x317
            initrd /initrd-2.6.18-194.8.1.el5.028stab070.4.img

    Any help is greatly appreciated. Thanks.

    Read the article

  • How do I install the evaluation version of Windows Server 2012R2 VHD within a Windows Server 2008R2 Hyper-V system?

    - by Paul Hale
    I have a Windows Server 2008 R2 machine running Hyper-V. I have downloaded the Windows Server 2012 R2 Datacenter evaluation VHD from here: http://technet.microsoft.com/en-us/evalcenter/dn205286.aspx I am "forced" to install a download app that copies a .vhd file to my chosen directory. The instructions on this page, http://technet.microsoft.com/library/dn303418.aspx, say:

        To install the VHD
        1. Download the VHD file.
        2. Start Hyper-V Manager.
        3. On the Action menu, select Import Virtual Machine.
        4. Navigate to the directory that the virtual machine file was extracted to and select the directory (not the directory where the VHD file is located).
        5. Select the Copy the virtual machine option.
        6. Confirm that the import was successful by checking Hyper-V Manager.
        7. Configure the network adapter for the resulting virtual machine: right-click the virtual machine and select Settings. In the left pane, click Network Adapter. In the menu that appears, select one of the network adapters of the virtualization server, and then click OK.
        8. Start the virtual machine.

    Where it says "Navigate to the directory that the virtual machine file was extracted to and select the directory (not the directory where the VHD file is located). Select the Copy the virtual machine option." - well, nothing has been extracted as far as I can tell, and if it has, I have no idea where or what I'm looking for. I tried creating a new VM and using the downloaded .vhd file, but I got an error saying that the .vhd file is in an incompatible format. Can anybody help me out please? Thanks, Paul

    Read the article

  • Cannot 301 redirect with IIS URL Rewrite Module

    - by Justin
    I am trying to troubleshoot my issue with the URL Rewrite Module on IIS 7. I migrated a Wordpress blog over to BlogEngine.NET. There were only about 5 entries that I wanted to 301-redirect to the new blog, so I wanted to simply create 5 exact-match redirect rules using the rewrite module. For some reason the exact-match rule never seems to take effect; I always get a 404 error when the original URL is navigated to. I verified that my exact-match pattern matches the existing backlinks, and it does. I then tried a simple test and got the same behavior, no redirection. I created a page, test.html, on my site, and then a second page, test2.html. So my exact-match pattern is "http://www.mydomain.com/test.html", and the rule is supposed to do a 301 redirect to "http://www.mydomain.com/test2.html". The redirect never happens. I created the steps for the rule based on the instructions on this page: http://learn.iis.net/page.aspx/461/creating-rewrite-rules-for-the-url-rewrite-module/ I don't see that I left out a step. After I apply the rule I've even gone as far as doing an IISReset to make sure it would be in effect, but still no luck. Any thoughts on what I might have left out? (Note: my rewrite rules don't include the " " around them, but I had to add them since serverfault thinks I am trying to spam the system with multiple URLs.)
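    One hedged thing to check: the URL Rewrite module matches patterns against the path relative to the site root (here just "test.html"), not against the full "http://www.mydomain.com/..." URL, so an exact-match pattern containing the scheme and host will never fire. A minimal web.config rule for this test might look like the following (the rule name is arbitrary):

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="Old post redirect" stopProcessing="true">
                <!-- matched against the URL path, without scheme or host -->
                <match url="^test\.html$" />
                <action type="Redirect" url="/test2.html" redirectType="Permanent" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>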

    Read the article

  • best-practices to block social sites

    - by adopilot
    In our company we have around 100 workstations with internet access, and day by day the situation is getting worse from the perspective of that access being used for private jobs and for wasting time on social sites. Open-heartedly, I am not in favor of blocking sites like Facebook, YouTube and others like them, but day by day my colleagues are not finishing their tasks, and whenever I look at their monitors they are running IE or Mozilla, chatting, and things like that. I would also like to block YouTube at times when we have very poor internet access speed. Here are my questions:

        - Do other companies block social sites?
        - Do I need a dedicated device for that, like a hardware firewall or a super-expensive router, or can I do it with my existing FreeBSD 6.1 self-made router with two LAN cards configured with NAT to act as a router?

    I was trying to do it with ipfw on that router firewall, but without success. My rules look like:

        ipfw add 25 deny tcp from 192.168.0.0/20 to www.facebook.com
        ipfw add 25 deny udp from 192.168.0.0/20 to www.facebook.
        ipfw add 25 deny tcp from 192.168.0.0/20 to www.dernek.
        ipfw add 25 deny udp from 192.168.0.0/20 to www.dernek.
        ipfw add 25 deny tcp from 192.168.0.0/20 to www.youtube.
        ipfw add 25 deny udp from 192.168.0.0/20 to www.youtube.com
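    A hedged alternative sketch: ipfw resolves a hostname to whatever addresses it has at the moment the rule is added, so name-based rules break as soon as the site's IPs rotate. A proxy ACL filters on the requested domain instead; for example with Squid (available in FreeBSD ports; the ACL names are illustrative, and only domains spelled out fully in the question are used):

        # /usr/local/etc/squid/squid.conf
        acl workstations src 192.168.0.0/20
        acl social dstdomain .facebook.com .youtube.com
        http_access deny workstations social
        http_access allow workstations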

    Read the article

  • IIS Manager - Connect to Another Server (Win7 to Win2008 server)

    - by Matt
    I am running Windows 7 Ultimate. If I open IIS Manager, I see a list of "connections" on the left-hand side. In previous versions I would be able to select an option to "connect to another server" or "connect to another machine", but there is no such option visible anywhere here. The only thing in the list is my local machine. Even in the address bar, if I manually type in the server location (\\servername, and I even tried just servername), nothing happens; it just reverts back to my local computer. The documentation at http://technet.microsoft.com/en-us/library/cc732466%28WS.10%29.aspx seems to imply the very same steps, but there is just no button or menu option anywhere to do this. Am I missing something? I'm not even seeing a grayed-out menu option.

    EDIT: Under the "File" menu I see two options: Save Connections (grayed out) and Exit. Under the "Connections" pane I see one button, grayed out; when I hover the mouse it simply says "Up", and it appears to become usable once I browse into an element of my local computer's IIS settings. If I right-click inside the pane itself, I see: Refresh, Add Website (to the current host), Start, Stop, Rename, Switch to Content View.

    UPDATE: I downloaded and installed the Remote Server Administration Tools from http://www.microsoft.com/downloads/details.aspx?FamilyID=7D2F6AD7-656B-4313-A005-4E344E43997D&displaylang=en, and I enabled everything listed under "Remote Server Administration Tools" in "Turn Windows Features On or Off". Still nothing.

    Read the article

  • performance wise htaccess

    - by purpler
    Here's my .htaccess template; I wonder if anything could be added to increase website performance.

        # Defaults
        AddDefaultCharset UTF-8
        DefaultLanguage en-US
        ServerSignature Off
        FileETag None
        Header unset ETag
        Options -MultiViews
        #Options All -Indexes

        # Force the latest IE version or ChromeFrame
        <IfModule mod_setenvif.c>
          <IfModule mod_headers.c>
            BrowserMatch MSIE ie
            Header set X-UA-Compatible "IE=Edge,chrome=1" env=ie
          </IfModule>
        </IfModule>

        # Proxy X-UA Setup
        <IfModule mod_headers.c>
          Header append Vary User-Agent
        </IfModule>

        # Rewrites
        Options +FollowSymlinks
        RewriteEngine On
        RewriteBase /

        # Redirect to non-WWW
        RewriteCond %{HTTPS} !=on
        RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
        RewriteRule ^(.*)$ http://%1/$1 [R=301,L]

        # Redirect to WWW
        RewriteCond %{HTTP_HOST} ^domain.com
        RewriteRule (.*) http://www.domain.com/$1 [R=301,L]

        # Redirect index to root
        RewriteRule ^(.*)index\.(php|html)$ /$1 [R=301,L]

        # Caching
        ExpiresActive On
        ExpiresDefault A0
        Header set Cache-Control "public"

        # 1 Year Long Cache
        <FilesMatch "\.(flv|fla|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav|png|jpg|jpeg|gif|swf|js|css|ttf|eot|woff|svg|svgz)$">
          ExpiresDefault A31622400
        </FilesMatch>

        # Proxy Caching
        <FilesMatch "\.(css|js|png)$">
          ExpiresDefault A31622400
          Header set Cache-Control "private"
        </FilesMatch>

        # Protect against DOS attacks by limiting file upload size
        LimitRequestBody 10240000

        # Proper SVG serving
        AddType image/svg+xml svg svgz
        AddEncoding gzip svgz

        # GZip Compression
        <IfModule mod_deflate.c>
          <FilesMatch "\.(php|html|css|js|xml|txt|ttf|otf|eot|svg)$" >
            SetOutputFilter DEFLATE
          </FilesMatch>
        </IfModule>

        # Error page
        ErrorDocument 404 /404.html

        # Deny access to sensitive files
        <FilesMatch "\.(htaccess|ini|log|psd)$">
          Order Allow,Deny
          Deny from all
        </FilesMatch>

    Read the article

  • Swapping Function (Fn) and Control (Ctrl) Keys on Lenovo ThinkPad W500

    - by Howiecamp
    I'd like to swap the Fn and Ctrl keys on my ThinkPad W500 (like many others!). I wanted to comment on http://superuser.com/questions/35228/how-can-i-switch-the-function-and-control-keys-on-my-laptop and StackOverflow question 514781 (please Google it because I don't have enough rep to include 2 hyperlinks), but I don't have enough rep to comment either. What I have found so far:

    1. Numerous folks (in both the above questions and in other Google searches) indicate that Windows doesn't register the Fn key as a keypress. Using a tool that gives the ASCII value of a keypress (visit www mihov com / eng / am.html), I see the Fn key returning FF (perhaps FF in this case means "not registered"). I also see that keys like Ctrl register one ASCII code when pressed alone and another when pressed in combination with another key; Fn only registers when pressed alone, so Windows definitely isn't seeing the combination. This took a solution like AutoHotkey off the table.

    2. I ran KeyTweak, which shows the hardware scan codes of a keypress; the Fn key registered as 57443. Using this program I remapped Fn to the Ctrl key, and this worked perfectly. However, I suspect that because of the issue in #1, a combination such as Fn + C did not execute a copy.

    3. Short of retraining my pinky, I'm actually considering removing the keyboard and resoldering the connections to swap those keys.

    I'd love to get some input as to the root technical issue(s) and possible solutions here. Thanks

    Read the article

  • Nginx Not Passing URL Parameters

    - by jmccartie
    Messing around with Nginx... for some reason, it looks like none of my URL parameters are being passed. My homepage loads fine, but a URL like "http://mysite.com/more.php?id=101" throws errors saying that the ID is an undefined index. I'm assuming this is something basic I'm missing in a conf file. Some info:

    conf.d/virtual.conf:

        server {
            listen      80;
            server_name dev.mysite.com;
            index       index.php;
            root        /var/www/dev.mysite.com_html;

            location / {
                root /var/www/dev.mysite.com_html;
            }

            location ~ \.php(.*)$ {
                fastcgi_pass  127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME /var/www/wap/dev.mysite.com_html/$fastcgi_script_name;
                fastcgi_index index.php;
                include       /etc/nginx/fastcgi_params;
            }
        }

    Error log:

        2009/06/22 11:44:21 [notice] 16319#0: start worker process 16322
        2009/06/22 11:44:28 [error] 16320#0: *1 FastCGI sent in stderr: "PHP Notice: Undefined index: id in /var/www/dev.mysite.com_html/more.php on line 10

    Thanks in advance.
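    For comparison, a hedged version of the PHP location block: the stock fastcgi_params file already passes QUERY_STRING, so the usual culprits are an include that points at a trimmed file or a SCRIPT_FILENAME that doesn't match the real docroot (note the /var/www/wap/... path above versus the root directive). This sketch assumes the docroot from the server block:

        location ~ \.php$ {
            include       /etc/nginx/fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param QUERY_STRING    $query_string;   # explicit, in case fastcgi_params was trimmed
            fastcgi_index index.php;
            fastcgi_pass  127.0.0.1:9000;
        }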

    Read the article

  • grepping a substring from a grep result

    - by allentown
    Given a log file, I will usually do something like this:

        grep 'marker-1234' filter_log

    What is the difference between using '', "" or nothing around the pattern? The above grep command will yield many thousands of lines, which is what I want. Within those lines there is usually one chunk of data I am after. Sometimes I use awk to print out the fields I am after, but in this case the log format changes, so I can't rely on position exclusively, not to mention that the actual logged data can push positions around. To make this understandable, let's say the log line contained an IP address, and that was all I was after, so I can later pipe it to sort and uniq and get some tally counts. An example might be:

        2010-04-08 some logged data, indeterminate chars - [marker-1234] (123.123.123.123) from: [email protected].com to [email protected].com [stat-xyz9876]

    The first grep command will give me many thousands of lines like the above; from there, I want to pipe it to something, probably sed, which can pull out a pattern within and print only the pattern. For this example, using the IP address would suffice. I tried. Is sed not able to understand [0-9]{1,3}. as a pattern? I had to use [0-9][0-9][0-9]. which yielded strange results until the entire pattern was built out. This is not specific to an IP address; the pattern will change, but I can use it as a learning template. Thank you all.
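    A hedged sketch of both approaches. Plain sed uses basic regular expressions, where intervals must be written \{1,3\}; with sed -r (GNU) or sed -E (BSD/newer GNU) the {1,3} form works as-is, which is likely what tripped things up above:

        # grep can do the extraction itself: -o prints only the match, -E enables extended regexes
        grep 'marker-1234' filter_log | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' | sort | uniq -c | sort -rn

        # roughly equivalent sed (GNU syntax), pulling the IP out of the parentheses
        sed -rn 's/.*\(([0-9]{1,3}(\.[0-9]{1,3}){3})\).*/\1/p' filter_log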

    Read the article

  • Nginx multiple upstream servers on the same domain via different URLs

    - by Barry
    Hello. I am trying to route traffic to different upstream servers (they serve different applications; this is not for load balancing). The incoming traffic has the same domain name but different URLs. Here is an example of my configuration:

        http {
            upstream backend1 {
                server 127.0.0.1:8080 fail_timeout=0;
                server 127.0.0.1:8081 fail_timeout=0;
            }
            upstream backend2 {
                server 127.0.0.1:8090 fail_timeout=0;
                server 127.0.0.1:8091 fail_timeout=0;
            }
            server {
                listen      80;
                server_name my_server.com;
                root        /home/my_server;

                location /serve_me {
                    fastcgi_pass backend1;
                    include fastcgi_params;
                }
                location / {
                    fastcgi_pass backend2;
                    include fastcgi_params;
                }
            }
        }

    It seems that whatever traffic comes in (including "my_server.com/serve_me") goes to backend2. How do I make requests that start with /serve_me go to backend1? Thanks, Barry.

    Read the article

  • How have I locked myself out of my Ubuntu VPS?

    - by Sanoj
    I have an Ubuntu Server as a VPS (OpenVZ), and yesterday I installed php-fpm, but I guess something went wrong with the installation, because since then I cannot log in to my server over SSH with PuTTY or WinSCP. The message I get when connecting is "Network error: Connection timed out". Immediately after the installation I was not able to use emacs either; I had to re-install it with apt-get install emacs. I have tried clearing the firewall and rebooting the server from my web-based "control panel", but it doesn't help. The commands I used for the installation of PHP-FPM were from "Installing PHP 5.3, Nginx And PHP-fpm On Ubuntu/Debian", and I guess it has something to do with these commands:

        cd /tmp
        wget http://us.archive.ubuntu.com/ubuntu/pool/main/k/krb5/libkrb53_1.6.dfsg.4~beta1-5ubuntu2_i386.deb
        wget http://us.archive.ubuntu.com/ubuntu/pool/main/i/icu/libicu38_3.8-6ubuntu0.2_i386.deb
        sudo dpkg -i *.deb
        sudo echo "deb http://php53.dotdeb.org stable all" >> /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get install php5-cli php5-common php5-suhosin
        sudo apt-get install php5-fpm php5-cgi

    The web sites hosted on the server work fine. Has anyone had the same experience, or does anyone know how this could happen? I guess I'll have to re-install Ubuntu Server from my "control panel" now, but I would like to avoid this situation in the future. Finally, I have backups of everything, so nothing is lost if I have to re-install the machine.

    Read the article

  • postfix says mail sent ok, message does not arrive in ISP's inbox? no reject in log?

    - by Nick
    When I send a test message from my mail server to my @bellsouth.net address, the Postfix log shows it was sent OK, but the message never arrives in my BellSouth inbox. Shouldn't I get a failure notice or a bounce if AT&T is blocking the messages? I'm trying to troubleshoot why some customers aren't getting emails, but if there's nothing in mail.log saying a message was rejected, how do I know which messages were delivered successfully? The log shows:

        Feb 27 09:02:36 MyHOSTNAME postfix/pickup[26175]: D53A72713E5: uid=0 from=<root>
        Feb 27 09:02:36 MyHOSTNAME postfix/cleanup[26487]: D53A72713E5: message-id=<[email protected]>
        Feb 27 09:02:36 MyHOSTNAME postfix/qmgr[5595]: D53A72713E5: from=<[email protected]>, size=878, nrcpt=1 (queue active)
        Feb 27 09:02:37 MyHOSTNAME postfix/smtp[26490]: D53A72713E5: to=<[email protected]>, relay=gateway-f1.isp.att.net[204.127.217.16]:25, delay=0.57, delays=0.11/0.03/0.23/0.19, dsn=2.0.0, status=sent (250 ok ; id=20120227140036M0700qer4ne)
        Feb 27 09:02:37 MyHOSTNAME postfix/qmgr[5595]: D53A72713E5: removed

    The AT&T server accepted the message, right? I happen to have an AT&T/BellSouth email account, but I don't have an account with every ISP we send to. I need some way of knowing whether a message is getting to its destination or not. Is there any setting in my main.cf file that would affect whether or not we get reject/bounce notices?

    Read the article

  • What Wireless Router/ADSL Modem to get? N-band a must!!

    - by JJarava
    I'm looking for a dual-band N router or ADSL gateway and I'd like some recommendations.

    Situation: I have an 802.11b/g ADSL gateway provided by my telco, but the Wi-Fi signal won't cover the whole house (especially the living room, so my TV-connected Mac mini has poor to no internet access). So I'm looking either to replace the DSL modem with an N-enabled one, or to add a router to the mix. I've had a modem+router setup for many years, and I know the advantages (double NAT, double firewall = more security) and issues (more complex to troubleshoot, two possible points of failure), so I'd rather live with a single device (an ADSL gateway), if possible.

    Requirements:

        - Dual-band N (300 Mbps Wi-Fi)
        - Gigabit Ethernet ports
        - ADSL2+ support (if it's an ADSL gateway, which would be desirable)
        - "Best" range and speed possible

    Nice to have:

        - USB port to share disks/printers on the network
        - Media streaming

    I've been a long-time user of Linksys, so googling around I found the WRT610N (http://www.linksysbycisco.com/US/en/products/WRT610N) from a pure-router perspective; it's one of those that Linksys styles "N++" (http://www.linksysbycisco.com/US/en/promo/Promotion-Go-Wireless?stepname=Promotion-Step-Go-Wireless-High-Performance). But I haven't been able to find similar ADSL gateways. I've found the WAG320N, but there is little to no info on the Linksys site (i.e. I don't know if it's dual band or if it has Gigabit Ethernet). Any opinions/recommendations of other products/suggestions are more than welcome.

    Read the article

  • Liferay - Verify each node in a cluster

    - by Schrute
    In this example, I have two clustered instances of Liferay using bundled Tomcat, running with Cluster Link and shared documents. Let's say the name of the public community is fubar and the friendly URL used is fubar.lipsum.com. Let's say the port each server listens on is 8080. If I go to either server1:8080 or server2:8080, I get the default page for Liferay. How can I test fubar.lipsum.com on each node by going through the backend server, so I can verify each server? If I test it, it just goes through the load balancer; I wish there were a way to aim the connection at a specific backend to bring it up. I can add the friendly URL to my local machine's hosts file and this seems to kind of work, but then once something is called in the application, it goes out again from the backend server and then uses SSL, and then we have problems. I think I may be able to do port forwarding, but this seems like a basic thing we should be able to do, and what I've found so far in the admin docs has not helped. Using the option to print the server name in the page details isn't an option either.
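    One hedged way to exercise a single node without touching the load balancer or hosts files is to send the virtual-host name in the Host header while connecting straight to the backend port (server names are the examples from the question; /web/fubar is an assumption about the community's friendly URL path, adjust as needed):

        # hit node 1 and node 2 directly while presenting the friendly URL's hostname
        curl -v -H "Host: fubar.lipsum.com" http://server1:8080/web/fubar/
        curl -v -H "Host: fubar.lipsum.com" http://server2:8080/web/fubar/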

    Read the article
