Search Results

Search found 76098 results on 3044 pages for 'http gdata youtube com'.


  • How can I install VLC on RHEL 6.3?

    - by holddame
    I'm having a problem installing VLC on Red Hat 6.3. When I run yum install vlc, everything goes well until it ends with:

      Error: Package: vlc-2.0.3-6.el6.x86_64 (linuxtech-release)
             Requires: libminizip.so.1()(64bit)
      Error: Package: liblrdf-0.5.0-2.el6.x86_64 (linuxtech-release)
             Requires: ladspa
      Error: Package: libffado-2.1.0-0.8.20120325.svn2088.el6.x86_64 (linuxtech-release)
             Requires: libconfig++.so.8()(64bit)

    I also can't use yum update. I'm running on a 32-bit processor and I don't know what's wrong. Update: I've installed live555 and tried again, and nothing really changed. Here is my yum whatprovides *BasicUsageEnvironment output:

      live555-devel-0-0.34.2012.01.25.el6.x86_64 : Development files for live555.com streaming libraries
        Repo         : linuxtech-release
        Matched from : Filename /usr/include/BasicUsageEnvironment
      live555-devel-0-0.34.2012.01.25.el6.i686 : Development files for live555.com streaming libraries
        Repo         : linuxtech-release
        Matched from : Filename /usr/include/BasicUsageEnvironment
      live555-devel-0-0.27.2010.04.09.el6.rf.x86_64 : Development files for live555.com streaming libraries
        Repo         : rpmforge
        Matched from : Filename /usr/include/BasicUsageEnvironment
      live555-devel-0-0.27.2012.02.04.el6.rf.x86_64 : Development files for live555.com streaming libraries
        Repo         : rpmforge
        Matched from : Filename /usr/include/BasicUsageEnvironment
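    One way to chase the missing dependencies is to ask yum which packages, if any, provide them; this is only a diagnostic sketch, and whether anything in the enabled repositories actually provides these libraries is not guaranteed:

      yum provides 'libminizip.so.1()(64bit)'
      yum provides 'libconfig++.so.8()(64bit)'
      yum provides ladspa

    If nothing turns up, enabling an additional repository such as EPEL (which the linuxtech packages commonly assume is present) is the usual next step.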


  • Remove trailing slash using redirect directive in vhost

    - by Choy
    I have an issue where URLs that end in a "/" after a file name cause CSS/JS to break:

      http://www.mysite.com/index.php/   <-- breaks
      http://www.mysite.com/             <-- OK (it only breaks after file names)

    To fix it, I tried adding a Redirect 301 directive in the vhost file, checking for an extension followed by a slash:

      <VirtualHost *:80>
          ServerName mysite.com
          Redirect 301 ^(.*?\..+)/$ http://mysite.com/$1
      </VirtualHost>

    The redirect appears to do nothing. Is this an issue with my implementation, or is what I'm trying to accomplish not possible with a Redirect 301 in the vhost file?
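    A minimal sketch of one likely fix: plain Redirect only matches literal URL-path prefixes, so a regular expression needs RedirectMatch (or mod_rewrite) instead. Assuming the goal is simply to strip a trailing slash that follows a file name with an extension:

      <VirtualHost *:80>
          ServerName mysite.com
          # Redirect does prefix matching only; RedirectMatch accepts a regex
          RedirectMatch 301 ^(/.+\.[^/.]+)/$ http://mysite.com$1
      </VirtualHost>

    With this, a request for /index.php/ is redirected to /index.php, while / and directory URLs are left alone.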


  • Slow website load with CNAME, fast when using IP

    - by Nate Strandberg
    I set up two DNS servers on my network: ns1.byte-werx.com and ns2.byte-werx.com. I can ping them with fairly good response times, and when I dig them I also get reasonable responses, but any website resolved through them is painfully slow (upwards of 20 seconds), which I can verify with a tracert or by opening the URL in a browser. The DNS servers run CentOS 6.3 and BIND9 with 500 MB of memory (I figure that should be more than enough?). I have a reverse-lookup zone (1.168.192) along with two website zones (www.byte-werx.com and www.stayhomedental.com). If I access the websites by IP the pages load nearly instantly, so I do not believe the issue is with the hosting server; that box runs Ubuntu Server 12.04 and Apache2 with 12 GB of memory. Any thoughts? I do not have the named.conf file in front of me, but I can edit this post to include it if you feel it would be useful. Thanks for any advice!
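    To separate slow resolution from slow page delivery, it can help to time a lookup against each server directly; this is only a diagnostic sketch using the names from the question:

      dig @ns1.byte-werx.com www.byte-werx.com A +stats
      dig @ns2.byte-werx.com www.stayhomedental.com A +stats

    The "Query time" line shows how long the server itself took. A fast answer here combined with a slow browser load usually points at the resolver chain (clients timing out against an unreachable first nameserver, broken recursion, or missing forwarders) rather than at the BIND machines' hardware.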


  • Website is accessible through dns1. but not through www

    - by Pushpendra
    I have a domain and I am using Freehostia as the web host. In the domain's name server settings I have registered both of Freehostia's name servers. In my control panel the hosted domains section shows "1 Hosted Domains / 1 Domains Listed"; however, clicking on it shows the error "The selected domain name has not been registered yet. Please register it from the Domain Manager section first." Whenever I try to access my website through dns1. it is accessible, but through www. it is not. For example, if my domain name is example.com and I type dns1.example.com, my webpage opens, but when I type www.example.com Chrome shows "Oops! Google Chrome could not find www.example.com". For what it's worth, 24 hours have passed since I registered the name servers.


  • Emails Generated From Our Linux Server are Blocked By Our Exchange Server (That Has Barracuda)

    - by Scott
    We have our company website hosted on a Linux machine, which sends mail via Postfix. The emails work and are delivered to external providers such as Gmail; however, we are not receiving them on our Exchange server. In the logs we see that the connection is being refused, presumably by the Exchange server:

      postfix/qmgr[11865]: DA6D42FF13: from=<[email protected]>, size=3166, nrcpt=1 (queue active)
      postfix/smtp[12474]: connect to mail.sanitizeddomain.com[XXX.XXX.XXX.XXX]:25: Connection refused
      postfix/smtp[12474]: DA6D42FF13: to=<[email protected]>, relay=none, delay=172915, delays=172914/0.03/0.07/0, dsn=4.4.1, status=deferred (connect to mail.sanitizeddomain.com[XXX.XXX.XXX.XXX]:25: Connection refused)

    We do run Barracuda. We cannot telnet from the Linux machine to our mail server either; we get the same message.


  • Using Mozilla Firefox with UTF-8 addresses (in Greek) on Mac

    - by Panagiotis
    Very often when I use Firefox (any version from 10 up) and type a UTF-8 SEO URL, it behaves strangely. For example, it randomly cuts the URL and then appends the whole URL again, so http://www.mysite.com/????G????S/???? becomes http://www.mysite.com/????G???http://www.mysite.com/????G????S/????, which ends up URL-encoding the letters and producing 404 errors. I am using Lion with the latest Firefox (yes, I have uninstalled and reinstalled it once).


  • iptables blocking access to most hosts but some accesses being logged

    - by epo
    What am I getting wrong? A while back I locked down my web hosting service while hardening it, or at least trying to. Apache listens on port 80 only, and I set up iptables using the following:

      IPS="list of IPs"
      iptables --new-chain webtest
      # Accept all established connections
      iptables -A INPUT --protocol tcp --dport 80 --jump webtest
      iptables -A INPUT --match state --state ESTABLISHED,RELATED --jump ACCEPT
      iptables -A webtest --match state --state ESTABLISHED,RELATED --jump ACCEPT
      for ip in $IPS; do
          iptables -A webtest --match state --state NEW --source $ip --jump ACCEPT
      done
      iptables -A webtest --jump DROP

    However, looking at my Apache logs I notice various entries in access_log, e.g.:

      221.192.199.35 - - [16/May/2010:13:04:31 +0100] "GET http://www.wantsfly.com/prx2.php?hash=926DE27C156B40E55E4CFC8F005053E2D81E6D688AF0 HTTP/1.0" 404 206 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"
      201.228.144.124 - - [16/May/2010:11:54:16 +0100] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 226 "-" "-"
      207.46.195.224 - - [16/May/2010:04:06:48 +0100] "GET /robots.txt HTTP/1.1" 200 311 "-" "msnbot/2.0b (+http://search.msn.com/msnbot.htm)"

    How are these slipping through? I don't mind the indexing bots (though I am a little surprised to see them get through). I suppose they must be getting in via the ESTABLISHED,RELATED rules, and no, I can't for the life of me remember why the first match state rule is there. So, two questions: is there a better way to set up iptables to restrict access to specified hosts, and how exactly are these three examples slipping through?
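    On the first question, a minimal sketch of a tighter layout (same $IPS list assumed): accept loopback and return traffic once, accept NEW port-80 connections only from the listed sources, and let an explicit default-deny policy drop the rest. The per-rule packet counters then make it easy to see which rule a given request actually matched.

      # keep SSH reachable while rebuilding the chain (adjust or remove as appropriate)
      iptables -P INPUT ACCEPT
      iptables -F INPUT
      iptables -A INPUT -i lo -j ACCEPT
      iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
      iptables -A INPUT -p tcp --dport 22 -j ACCEPT
      for ip in $IPS; do
          iptables -A INPUT -p tcp --dport 80 -s "$ip" -m state --state NEW -j ACCEPT
      done
      # everything not matched above is now dropped by policy
      iptables -P INPUT DROP
      # iptables -L INPUT -v -n    # per-rule packet/byte counters for verification

    Whether the logged requests predate the rules, arrive on another interface, or ride an already-established connection is something those counters, together with the log timestamps, should reveal.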


  • How to create persistent static route on Mac OS X 10.6?

    - by kopobamypa
    I need to add a static route on Mac OS. I found a good description here: Permanent Static Route Mac OS X 10.4.0, and followed Roark Holz's (roarkh) solution. Now my problem: sometimes this solution works, sometimes it does not. When it doesn't work, I see these messages in the console log after boot:

      06.05.10 9:34:13 com.apple.launchd[1] *** launchd[1] has started up. ***
      06.05.10 9:34:46 com.apple.SystemStarter[30] Adding Static Route to 10.152
      06.05.10 9:34:46 com.apple.SystemStarter[30] route: writing to routing socket: Network is unreachable
      06.05.10 9:34:46 com.apple.SystemStarter[30] add net 10.152.0.0: gateway 192.168.1.234: Network is unreachable

    I want to know what is going on. How can this kind of problem be troubleshooted?


  • Can I make a computer connecting via VPN visible to computers within the network it is connecting to

    - by SCdF
    OK, here's the deal: I have a computer (specifically, a MacBook Pro) that is connected to a standard network that is then connected to the big nasty internet. Let's call it foo. It runs a web server on 8084, and so if you were on its local network you could get to this with http://foo:8084/, or http://192.168.1.2:8084/, or whatever. From foo I can VPN into my company's intranet and see a computer on the local company network called bar (another MacBook Pro, incidentally). Is there any way to set this up so that while foo is on the VPN, bar can access http://foo:8084/ (or http://x.x.x.x:8084/, or whatever)? (From my limited understanding of how VPNs work I have a sneaking suspicion the answer is no, but it doesn't hurt to ask...)


  • Puzzling TCP performance over 3G / UMTS

    - by lemonsqueeze
    I'm using 3G as my primary internet connection, and TCP over this thing is getting more puzzling every day. For example:

    1. Downloading from kernel.org is crazy fast: wget http://www.kernel.org/pub/linux/kernel/v3.0/linux-3.6.8.tar.bz2 climbs to ~500 kB/s after a few seconds!
    2. Some servers are incredibly slow, for instance www.graphic-pc.com: downloading a big file with wget starts at ~30 kB/s for a split second, then collapses to 5-10 kB/s or even worse.
    3. Web browsing is decent but somewhat unreliable. Randomly, a page will take really long to load or even fail to load, but a reload can succeed almost immediately.

    Now, by chance I started playing with OpenVPN over UDP on top of the 3G connection, and suddenly everything is extremely fast! The same www.graphic-pc.com now shoots along at 100-200 kB/s! What's going on here? How come it is so much better with the VPN than without? And why does graphic-pc.com crawl when kernel.org flies? Something to do with my TCP stack (or the server), or some buggy router in between?

    Notes: the setup is a laptop running Ubuntu Lucid and a Huawei 3G dongle (so a direct pppd connection). I can reproduce this pretty much any time during the day and I'm not moving, so it's clearly not the cell environment or internet congestion (although kernel.org without the VPN sometimes does worse in the evening, 60 kB/s or so -- but still 500 kB/s with the VPN!). For case 2, wireshark shows retransmitted packets, duplicate ACKs, and sometimes out-of-order segments. I've tried playing with different /proc/sys/net/ipv4 parameters (tcp_rmem, window_scaling, tcp_congestion...) and it doesn't seem to make a difference.

    Update: tried under Windows 7 (no VPN) with some interesting results:

      tcp settings      default    tcp_optimizer
      kernel.org        10 kB/s    20 kB/s
      graphic-pc.com     8 kB/s    70 kB/s !

    tcp_optimizer turned on CTCP among other things. I still have to check what OS graphic-pc.com is running; my bet is Linux's tcp_westwood and Microsoft's CTCP don't mix well here...


  • CentOS - Yum doesn't update anymore?

    - by Xanathos
    I've been trying to use yum, but for some reason not even search works anymore. I even tried searching for packages I had already downloaded, and it's the same:

      [root@AMDFX03 Downloads]# yum search glibc
      Loaded plugins: fastestmirror, refresh-packagekit, security
      Loading mirror speeds from cached hostfile
      epel/metalink                                   |  22 kB     00:00
       * base: centos.secrel.com.br
       * epel: archive.linux.duke.edu
       * extras: centos.secrel.com.br
       * rpmforge: apt.sw.be
       * updates: centos.secrel.com.br
      adobe-linux-x86_64/primary                      | 1.2 kB     00:00
      http://linuxdownload.adobe.com/linux/x86_64/repodata/primary.xml.gz: [Errno -1] Metadata file does not match checksum
      Trying other mirror.
      Error: failure: repodata/primary.xml.gz from adobe-linux-x86_64: [Errno 256] No more mirrors to try.

    This error appears no matter what I do. Please, can you tell me how to fix this, or at least how to reset yum's configuration?
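    The failing repository is the Adobe one, not CentOS itself, so two quick things to try are clearing yum's cached metadata and, if the Adobe mirror is still broken, skipping that repository; a sketch, with the repository id taken from the output above:

      yum clean metadata
      yum search glibc                                      # retry with fresh metadata
      yum --disablerepo=adobe-linux-x86_64 search glibc     # or bypass the broken repo

    If that works, the repository can be disabled persistently with enabled=0 in its file under /etc/yum.repos.d/ until its metadata is fixed upstream.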


  • htaccess for subdomain help

    - by Patrick
    Usually I just use the online tools for mod_rewrite rules, but this one just wouldn't work.

      Dynamic URL:   http://sub.domain.com/index.php?page=index&name=test
      Rewritten URL: http://sub.domain.com/test  or  http://sub.domain.com/test/

    My .htaccess:

      RewriteRule ^([^/]+)/?$ index.php?page=index&name=$1 [L]

    Instead of passing "test" for the variable name, I always get the value "index.php". Do any gurus have an idea?
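    A likely explanation: after the first pass rewrites /test to index.php, mod_rewrite processes the .htaccess again and the pattern now matches "index.php" itself, so $1 becomes "index.php". A minimal sketch of the usual guard, which skips the rule for paths that already map to a real file or directory:

      RewriteEngine On
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^([^/]+)/?$ index.php?page=index&name=$1 [L]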


  • Static file download from browser breaking in varnish but works fine in Apache

    - by Ron
    I would first like to thank everyone at Server Fault for this great website; I often land here from Google while searching for various server issues, and today I have an issue of my own, so I am posting in the hope that the seniors can help me out.

    I set up a website on a dedicated server a few days ago, with Varnish 3 as the frontend to Apache2 on a Debian Lenny server because the traffic is fairly high. The site serves several static file downloads of around 10-20 MB each. Everything looked fine for the first few days: I was testing from a 5 Mbps+ broadband connection and the downloads completed in seconds. Today, however, I realized that on a slow internet connection the downloads break off. When I download a file through the browser it stops after a minute or so, and it happens again and again, so it has nothing to do with the internet connection. That connection is around 512 kbps, so not dial-up speed either; files should download fine, just not quickly.

    Then I tried going through the Apache backend port directly. With the Apache port in the download URL, the files download easily and never break. I repeated this several times to make sure it was not a coincidence: every time, the direct-to-Apache URL downloads fine while the normal link routed through Varnish breaks. So Varnish seems to be behind the broken downloads. Could anyone give any idea as to why this is happening and how to fix it?

    For clarification, take this example: the Apache backend is on port 8008 and the Varnish frontend on port 80. When I download http://mywebsite.com/directory/filename.extension the download breaks off after a minute or so (I cannot be sure whether it is the time or the size that matters; maybe it is some other reason entirely). But when I download http://mywebsite.com:8008/directory/filename.extension the file downloads fine without breaking. So it seems that Varnish, not Apache, is causing the broken downloads. Does anybody have any idea why, and how it can be fixed? Any help would be highly appreciated. My Varnish default.vcl is:

      backend apache {
          set backend.host = "127.0.0.1";
          set backend.port = "8008";
      }
      sub vcl_deliver {
          remove resp.http.X-Varnish;
          remove resp.http.Via;
          remove resp.http.Age;
          remove resp.http.Server;
          remove resp.http.X-Powered-By;
      }
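    Two things worth checking, offered as assumptions rather than a confirmed diagnosis: Varnish applies its own timeout between itself and slow clients (the send_timeout runtime parameter), and very large static files can simply be handed straight to Apache instead of being served through the cache. A sketch of the latter, with the extension list obviously made up:

      sub vcl_recv {
          # let Apache serve big static downloads directly instead of
          # pushing them through the cache
          if (req.url ~ "\.(zip|tar\.gz|iso|exe|dmg)$") {
              return (pipe);
          }
      }

    send_timeout can be inspected with varnishadm param.show send_timeout and raised at startup with varnishd -p send_timeout=<seconds> if slow clients are indeed hitting it.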


  • Sending mails via Mutt and Gmail: Duplicates

    - by Chris
    I'm trying to set up mutt with Gmail for the first time. It seems to work pretty well; however, when I send a mail from mutt it appears twice in Gmail's Sent folder. (I assume it's also sent twice -- I'm trying to validate that.) My configuration (stripped of coloring):

      # A basic .muttrc for use with Gmail
      # Change the following six lines to match your Gmail account details
      set imap_user = "XX"
      set smtp_url = "[email protected]@smtp.gmail.com:587/"
      set from = "XX"
      set realname = "XX"

      # Change the following line to a different editor you prefer.
      set editor = "vim"

      # Basic config, you can leave this as is
      set folder = "imaps://imap.gmail.com:993"
      set spoolfile = "+INBOX"
      set imap_check_subscribed
      set hostname = gmail.com
      set mail_check = 120
      set timeout = 300
      set imap_keepalive = 300
      set postponed = "+[Gmail]/Drafts"
      set record = "+[Gmail]/Sent Mail"
      set header_cache=~/.mutt/cache/headers
      set message_cachedir=~/.mutt/cache/bodies
      set certificate_file=~/.mutt/certificates
      set move = no
      set include
      set sort = 'threads'
      set sort_aux = 'reverse-last-date-received'
      set auto_tag = yes
      hdr_order Date From To Cc
      auto_view text/html
      bind editor <Tab> complete-query
      bind editor ^T complete
      bind editor <space> noop

      # Gmail-style keyboard shortcuts
      macro index,pager y "<enter-command>unset trash\n <delete-message>" "Gmail archive message"
      macro index,pager d "<enter-command>set trash=\"imaps://imap.googlemail.com/[Gmail]/Bin\"\n <delete-message>" "Gmail delete message"
      macro index,pager gl "<change-folder>"
      macro index,pager gi "<change-folder>=INBOX<enter>" "Go to inbox"
      macro index,pager ga "<change-folder>=[Gmail]/All Mail<enter>" "Go to all mail"
      macro index,pager gs "<change-folder>=[Gmail]/Starred<enter>" "Go to starred messages"
      macro index,pager gd "<change-folder>=[Gmail]/Drafts<enter>" "Go to drafts"
      macro index,pager gt "<change-folder>=[Gmail]/Sent Mail<enter>" "Go to sent mail"

      # Don't prompt on exit
      set quit=yes

      ## =================
      # Color definitions
      ## =================
      set pgp_autosign
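    One likely cause of the duplicates, offered as an assumption rather than a confirmed diagnosis: Gmail already files a copy of every message sent through smtp.gmail.com into [Gmail]/Sent Mail, and the set record = "+[Gmail]/Sent Mail" line makes mutt upload a second copy over IMAP. The usual fix is to stop mutt from saving its own copy:

      # Gmail's SMTP server already stores sent mail; don't upload another copy
      unset record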


  • wget-ing protected content with exported cookies

    - by XXL
    I have exported a pair of cookies from Firefox that are valid for the URL in question and tried accessing/downloading the protected content at that address, but the end result is a return to the login page. I have tried doing roughly the same thing for three other websites with a similar outcome. Any clues as to what I might be doing wrong? The syntax I'm using is wget --load-cookies=FILE URL, and the debug output is:

      DEBUG output created by Wget 1.12 on linux-gnu.
      Stored cookie www.x.org -1 (ANY) / [expiry 1901-12-13 22:25:44] c_secure_login lz8xZQ%3D%3D
      Stored cookie www.x.org -1 (ANY) / [expiry 1901-12-13 22:25:44] c_secure_pass 2fd4e1c67a2d28fced849ee1bb76e74a
      Stored cookie www.x.org -1 (ANY) / [expiry 1901-12-13 22:25:44] c_secure_uid GZX4TDA%3D
      --2011-01-14 13:57:02-- www.x.org/download.php?id=397003
      Resolving www.x.org... 1.1.1.1
      Caching www.x.org = 1.1.1.1
      Connecting to www.x.org|1.1.1.1|:80... connected.
      Created socket 5.
      Releasing 0x0943ef20 (new refcount 1).
      ---request begin---
      GET /download.php?id=397003 HTTP/1.0
      User-Agent: Wget/1.12 (linux-gnu)
      Accept: */*
      Host: www.x.org
      Connection: Keep-Alive
      ---request end---
      HTTP request sent, awaiting response...
      ---response begin---
      HTTP/1.1 302 Found
      Date: Fri, 14 Jan 2011 11:26:19 GMT
      Server: Apache
      X-Powered-By: PHP/5.2.6-1+lenny8
      Set-Cookie: PHPSESSID=5f2fd97103f8988554394f23c5897765; path=/
      Expires: Thu, 19 Nov 1981 08:52:00 GMT
      Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
      Pragma: no-cache
      Location: www.x.org/login.php?returnto=download.php%3Fid%3D397003
      Vary: Accept-Encoding
      Content-Length: 0
      Keep-Alive: timeout=15, max=100
      Connection: Keep-Alive
      Content-Type: text/html
      ---response end---
      302 Found
      Stored cookie www.x.org -1 (ANY) / [expiry none] PHPSESSID 5f2fd97103f8988554394f23c5897765
      Registered socket 5 for persistent reuse.
      Location: www.x.org/login.php?returnto=download.php%3Fid%3D397003 [following]
      Skipping 0 bytes of body: [] done.
      --2011-01-14 13:57:02-- www.x.org/login.php?returnto=download.php%3Fid%3D397003
      Reusing existing connection to www.x.org:80.
      Reusing fd 5.
      ---request begin---
      GET /login.php?returnto=download.php%3Fid%3D397003 HTTP/1.0
      User-Agent: Wget/1.12 (linux-gnu)
      Accept: */*
      Host: www.x.org
      Connection: Keep-Alive
      Cookie: PHPSESSID=5f2fd97103f8988554394f23c5897765
      ---request end---
      HTTP request sent, awaiting response...
      ---response begin---
      HTTP/1.1 200 OK
      Date: Fri, 14 Jan 2011 11:26:20 GMT
      Server: Apache
      X-Powered-By: PHP/5.2.6-1+lenny8
      Expires: Thu, 19 Nov 1981 08:52:00 GMT
      Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
      Pragma: no-cache
      Vary: Accept-Encoding
      Content-Length: 2171
      Keep-Alive: timeout=15, max=99
      Connection: Keep-Alive
      Content-Type: text/html
      ---response end---
      200 OK
      Length: 2171 (2.1K) [text/html]
      Saving to: `x.out'
      0K ..    100% 18.7M=0s
      2011-01-14 13:57:02 (18.7 MB/s) - `x.out' saved [2171/2171]
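    One detail visible in the transcript: the three exported cookies are recorded with an expiry date in 1901 (an already-expired, wrapped-around timestamp), and the first request goes out without any Cookie header at all, which would explain why the server answers with the login redirect. A common remedy, offered as an assumption about how the cookies were exported rather than a certainty, is to re-export them with session cookies preserved, or to let wget obtain them itself:

      # keep session cookies when saving, then load them back for the download
      wget --save-cookies=cookies.txt --keep-session-cookies \
           --post-data='user=USER&pass=PASS' "http://www.x.org/login.php"
      wget --load-cookies=cookies.txt "http://www.x.org/download.php?id=397003"

    The --post-data field names are placeholders; the real ones come from the site's login form.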


  • Google analytics and multiple independent subdomains

    - by MTilsted
    I need some help setting up Google Analytics correctly. Here is my setup: we host sites for multiple customers, and each customer has their own subdomain on our site, so we have customerA.oursite.com and customerB.oursite.com, and as we add more customers we get more subdomains. We want to track the data for each customer independently, but I don't want to create a new tracking code for every new customer. My plan is to track all visits under "oursite.com" and then create a filter in Google Analytics to get the data for each specific customer (all visits to a specific subdomain). Is this (one tracking code plus a subdomain filter) the right way to do it? To create a subdomain filter I add a new profile for each customer and then add a custom include filter on "Request URI" filled in with "CustomerDomain.oursite.com". Is this the correct way to do it? And a general question about filters: is it really impossible to create a new filter and apply it to the data already in an existing profile? I would really like to collect all the data in one "main" profile and then create subdomain filters as we need them, but it seems Google only applies filters to new incoming data, not existing data. Is this really true? The following is my tracking code. Is '_setDomainName', 'none' the right thing to do?

      <script type="text/javascript">
      /* Tracking code for qrtown.com */
      var _gaq = _gaq || [];
      _gaq.push(['_setAccount', 'UA-11584298-10']);
      _gaq.push(['_setDomainName', 'none']);
      _gaq.push(['_trackPageview']);

      (function() {
        var ga = document.createElement('script');
        ga.type = 'text/javascript';
        ga.async = true;
        ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
        var s = document.getElementsByTagName('script')[0];
        s.parentNode.insertBefore(ga, s);
      })();
      </script>
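    A hedged note on the _setDomainName question, since that line controls the cookie scope: 'none' ties the tracking cookie to the exact host that served the page, while setting it to the parent domain lets one property cover every customer subdomain with the same visitor cookies. A sketch of that variant, with the property ID unchanged from the question:

      var _gaq = _gaq || [];
      _gaq.push(['_setAccount', 'UA-11584298-10']);
      // cookie scoped to the parent domain so all *.oursite.com subdomains
      // report into the same property
      _gaq.push(['_setDomainName', 'oursite.com']);
      _gaq.push(['_trackPageview']);

    For the per-customer profiles, note that the subdomain lives in the Hostname field rather than the Request URI (which is only the path), so an include filter on Hostname matching customerA.oursite.com is usually what isolates one customer's traffic.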


  • xampp - can access control panel, cannot access projects/sites on local network

    - by Peter O.
    I've configured XAMPP and the firewall so I can reach the desktop PC's localhost over my local network via the desktop PC's IP, but I'm not able to access the actual projects. I can access http://192.168.x.x/xampp and http://192.168.x.x/phpMyAdmin, but when I open http://192.168.x.x/myWebsite/ I get an error:

      Server error
      We're sorry! The server encountered an internal error and was unable to complete your request. Please try again later.
      error 500


  • I can't get my Macbook Pro to print to an IP addressable printer

    - by Pieter
    Running a MacBook Pro with OS X 10.6.3, accessing an HP OfficeJet 5610 plugged in the USB port of a US Robotics router. I tried several combinations of:

      Protocol: Internet Printing Protocol (IPP), Line Printer Daemon (LPD), HP JetDirect (socket)
      Address:  http://192.168.1.10:1631/printers/HP5610, 192.168.1.10:1631/printers/HP5610,
                http://192.168.1.10:1631, 192.168.1.10:1631, 192.168.1.10, ...
      Driver:   HP OfficeJet 5600 Series

    Whenever I try to print, it fails while saying "connected to printer" or "Printer is busy... will try again in X seconds". Both Windows 7 and Windows XP computers on the network can successfully access this printer, identifying it as "HP5610 on http://192.168.1.10:1631/". I tried clearing all tasks and printers (ctrl-click in the menu), and resetting it to (socket, http://192.168.1.10:1631/printers/HP5610, HP 5600 series), but to no success.


  • .htaccess, mod_rewrite Issue

    - by Shoaibi
    What I want:

      - Force www [works]
      - Restrict access to .inc.php [works]
      - Force redirection of abc.php to /abc/
      - Removal of the extension from the URL
      - Add a trailing slash if needed

    Old .htaccess:

      Options +FollowSymLinks
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /
      ### Force www
      RewriteCond %{HTTP_HOST} ^example\.net$
      RewriteRule ^(.*)$ http://www\.example\.net/$1 [L,R=301]
      ### Restrict access
      RewriteCond %{REQUEST_URI} ^/(.*)\.inc\.php$ [NC]
      RewriteRule .* - [F,L]
      #### Remove extension:
      RewriteRule ^(.*)/$ /$1.php [L,R=301]
      ######### Trailing slash:
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_URI} !(.*)/$
      RewriteRule ^(.*)$ http://www.example.net/$1/ [R=301,L]
      </IfModule>

    New .htaccess:

      Options +FollowSymLinks
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /
      ### Force www
      RewriteCond %{HTTP_HOST} ^example\.net$
      RewriteRule ^(.*)$ http://www\.example\.net/$1 [L,R=301]
      ### Restrict access
      RewriteCond %{REQUEST_URI} ^/(.*)\.inc\.php$ [NC]
      RewriteRule .* - [F,L]
      #### Remove extension:
      RewriteCond %{REQUEST_FILENAME} \.php$
      RewriteCond %{REQUEST_FILENAME} -f
      RewriteRule (.*)\.php$ /$1/ [L,R=301]
      #### Map pseudo-directory to PHP file
      RewriteCond %{REQUEST_FILENAME}\.php -f
      RewriteRule (.*) /$1.php [L]
      ######### Trailing slash:
      RewriteCond %{REQUEST_FILENAME} -d
      RewriteCond %{REQUEST_FILENAME} !/$
      RewriteRule (.*) $1/ [L,R=301]
      </IfModule>

    Error log:

      Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: http://www.example.net/

    Rewrite log: http://pastebin.com/x5PKeJHB
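    A hedged sketch of why the loop happens and one way around it: in the new file, /abc/ is internally rewritten to /abc.php, the extension-removal rule then sees a .php target and redirects back to /abc/, and so on until the recursion limit. Keying the external redirect off %{THE_REQUEST} (the literal request line the client sent) instead of the rewritten filename means it ignores internal rewrites. Assuming the .htaccess sits in the document root, the two middle sections could look like:

      #### Remove extension (only when the client actually asked for .php):
      RewriteCond %{THE_REQUEST} \s/+([^\s?]+)\.php[\s?]
      RewriteRule ^ /%1/ [R=301,L]

      #### Map pseudo-directory back to the PHP file:
      RewriteCond %{DOCUMENT_ROOT}/$1.php -f
      RewriteRule ^(.+?)/?$ $1.php [L]

    With this, a request for /abc.php is redirected once to /abc/, /abc/ is served from abc.php internally, and the second pass no longer triggers the redirect because the original request line never contained ".php".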


  • Unable to install vlc and mplayer after update on fedora 18

    - by mahesh
    I just updated Fedora 18 using yum update. Then if I try:

      # rpm -ivh http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-stable.noarch.rpm

    I get:

      Retrieving http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-stable.noarch.rpm
      warning: /var/tmp/rpm-tmp.0K5pWw: Header V3 RSA/SHA256 Signature, key ID 172ff33d: NOKEY
      error: Failed dependencies:
              system-release >= 19 is needed by rpmfusion-free-release-19-1.noarch

    So I tried installing from the development version:

      # rpm -ivh http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-rawhide.noarch.rpm

    and I get:

      Retrieving http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-rawhide.noarch.rpm
      warning: /var/tmp/rpm-tmp.WZC0gw: Header V3 RSA/SHA256 Signature, key ID 6446d859: NOKEY
      error: Failed dependencies:
              system-release >= 21 is needed by rpmfusion-free-release-21-0.1.noarch

    There's no system release after 20. What does this mean?
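    The error itself hints at what is happening: the "stable" package currently points at the RPM Fusion release built for Fedora 19 and "rawhide" at 21, and neither installs on a system-release of 18. A sketch of the usual fix, letting rpm fill in the installed Fedora version so the matching release package is fetched (the URL pattern is assumed from the ones in the question):

      # %fedora expands to 18 on this machine
      ver=$(rpm -E %fedora)
      rpm -ivh "http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-${ver}.noarch.rpm"

    After that, yum install vlc mplayer should resolve against the version-matched repository instead of demanding a newer system-release.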


  • Multiple SSH private keys for the same host

    - by Sencha
    How can I store two different private SSH keys for the same host? I have tried two entries in /etc/ssh/ssh_config for the same host with the different keys, and I've also tried putting both keys in the same file and referencing it from one Host setting; neither works. More detail: I'm running Ubuntu Server 12.04 and I want to connect to GitHub via SSH to download the latest source for my projects. There are multiple projects running on the same server, and each project has a GitHub repo with its own unique deployment key pair. So the host is always the same (github.com), but the key needs to differ depending on which repo I'm using. Different /etc/ssh/ssh_config versions I have tried:

      Host github.com
          IdentityFile /etc/ssh/my_project_1_github_deploy_key
          StrictHostKeyChecking no

      Host github.com
          IdentityFile /etc/ssh/my_project_2_github_deploy_key
          StrictHostKeyChecking no

    and this, with both keys in the same file:

      Host github.com
          IdentityFile /etc/ssh/my_project_github_deploy_keys
          StrictHostKeyChecking no

    I've had no luck with either. Any help would be greatly appreciated!
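    Since GitHub accepts a given deploy key on only one repository, the common pattern is one ssh_config alias per project, each pointing at github.com with its own key, and the repository's remote URL using the alias instead of github.com. A sketch (the alias names are made up):

      Host github-project1
          HostName github.com
          User git
          IdentityFile /etc/ssh/my_project_1_github_deploy_key
          IdentitiesOnly yes

      Host github-project2
          HostName github.com
          User git
          IdentityFile /etc/ssh/my_project_2_github_deploy_key
          IdentitiesOnly yes

    Project 1 then clones or pulls from git@github-project1:owner/repo.git and project 2 from git@github-project2:owner/repo.git, and ssh picks the right key based on the alias.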


  • GIT Website Deployment

    - by Brian
    I am attempting to set up Git to deploy my project to different locations based on the branch (I think that is what I want to do, anyway). My current setup is this: a local dev machine running NetBeans to make changes, and a remote server hosting the Git projects (the same server runs Apache) with two subsites, test.FQDN.com and live.FQDN.com. What I would like is one Git project (MyProject) with a new feature branch: any commits pushed to the feature branch would deploy to test.FQDN.com, and once the features have been tested and merged into master, that would deploy to live.FQDN.com. I have looked at Git's post-receive hooks and was able to use "git checkout -f" to deploy to the test.FQDN.com site, but that only checks out the master branch, not the new feature branch. I do not have any funding to use a third party to make this work and would prefer to stay within Git, but I have full root access to the web server if there is a package to install which would help control this. Any suggestions would be great!
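    A post-receive hook can branch on the ref being pushed, so one hook can deploy each branch to its own work tree. A minimal sketch, in which the deployment paths and the feature-branch name are assumptions:

      #!/bin/sh
      # hooks/post-receive in the bare repository on the server
      while read oldrev newrev ref; do
          case "$ref" in
              refs/heads/master)
                  GIT_WORK_TREE=/var/www/live.FQDN.com git checkout -f master
                  ;;
              refs/heads/feature)
                  GIT_WORK_TREE=/var/www/test.FQDN.com git checkout -f feature
                  ;;
          esac
      done

    The hook reads one "oldrev newrev refname" line per pushed ref on stdin, so pushing the feature branch updates only the test site, and pushing (or merging into and pushing) master updates only the live site.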


  • Connection timed out on Node.js app running under CentOS

    - by ss1271
    I followed this tutorial to create a simple node.js app on my CentOS. The node.js version is:

      $ node -v
      v0.10.28

    Here's my app.js:

      // Include http module,
      var http = require("http"),
          // And url module, which is very helpful in parsing request parameters.
          url = require("url");

      // show message at console
      console.log('Node.js app is running.');

      // Create the server.
      http.createServer(function (request, response) {
          request.resume();
          // Attach listener on end event.
          request.on("end", function () {
              // Parse the request for arguments and store them in _get variable.
              // This function parses the url from request and returns object representation.
              var _get = url.parse(request.url, true).query;
              // Write headers to the response.
              response.writeHead(200, {
                  'Content-Type': 'text/plain'
              });
              // Send data and end response.
              response.end('Here is your data: ' + _get['data']);
          });
      // Listen on the 8080 port.
      }).listen(8080);

    However, when I uploaded this app onto my remote server (assume the address is 123.456.78.9), I couldn't get access to it on my browser at http://123.456.78.9:8080/?data=123. The browser returned Error code: ERR_CONNECTION_TIMED_OUT. I tried the same app.js code which runs fine on my local machine, is there anything I am missing? I tried to ping the server and its address was reachable. Thanks.
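    Since the same app works locally and the server answers ping, the most common culprit on a stock CentOS install is the host firewall rather than Node itself: the default iptables rules allow ICMP and SSH but drop other inbound TCP ports. A sketch of opening port 8080, assuming CentOS 6 style tooling:

      # insert an ACCEPT rule for the app's port and persist it across reboots
      iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
      service iptables save

    It is also worth confirming the app is listening on all interfaces: netstat -tlnp | grep 8080 should show 0.0.0.0:8080, which is what .listen(8080) without a host argument binds to.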


  • Is it possible to add wildcard serveralias to virtualhost without modifying httpd.conf manually?

    - by Favourite Chigozie Onwuemene
    Is it possible to add a wildcard ServerAlias (for example *.somesite.com) to an Apache server without modifying httpd.conf manually? I use a DNS provider separate from my hosting server, and I have added a wildcard A record so that requests like test.somesite.com and test2.somesite.com point to my hosting server's IP, but I don't see any way of adding wildcard ServerAlias entries to the Apache httpd.conf file from my cPanel. Is there a solution?

