Search Results

Search found 55168 results on 2207 pages for 'http header'.

Page 295 of 2207

  • How do I get basic ProxyPass to work on Apache 2.2.17?

    - by Ansis Malins
    I'm trying to get around the ERR_UNSAFE_PORT restriction in Chrome by making Apache reverse-proxy the other HTTP servers on the machine. I load mod_proxy with sudo a2enmod proxy, add ProxyPass /znc/ http://localhost:6667/ to my httpd.conf, and restart Apache with sudo /etc/init.d/apache2 restart. When I open up /znc/, I get 500 Internal Server Error. I added LogLevel debug, restarted Apache, tried again, and got nothing suspicious:

        [Fri Oct 19 18:55:17 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 0 in child 21528 for worker http://localhost:6667/
        [Fri Oct 19 18:55:17 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 0 in child 21528 for (localhost)
        [Fri Oct 19 18:55:17 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 1 in child 21528 for worker proxy:reverse
        [Fri Oct 19 18:55:17 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 1 in child 21528 for (*)
        [Fri Oct 19 18:55:17 2012] [notice] Apache/2.2.17 (Ubuntu) PHP/5.3.8 configured -- resuming normal operations
        [Fri Oct 19 18:55:17 2012] [info] Server built: Feb 14 2012 17:59:20
        [Fri Oct 19 18:55:17 2012] [debug] prefork.c(1018): AcceptMutex: sysvsem (default: sysvsem)
        [Fri Oct 19 18:55:22 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 0 in child 21532 for worker http://localhost:6667/
        [Fri Oct 19 18:55:22 2012] [debug] proxy_util.c(1837): proxy: worker http://localhost:6667/ already initialized
        [Fri Oct 19 18:55:22 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 0 in child 21532 for (localhost)
        [Fri Oct 19 18:55:22 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 1 in child 21532 for worker proxy:reverse
        [Fri Oct 19 18:55:22 2012] [debug] proxy_util.c(1837): proxy: worker proxy:reverse already initialized
        [Fri Oct 19 18:55:22 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 1 in child 21532 for (*)

    So I'm stumped at this point. What to do? I'm running Ubuntu Server 11.10. ZNC responds with a correct 200 OK and HTML when queried directly, both from the local machine and from the Internet.
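
    A hedged first thing to check, beyond what the question shows: on Apache 2.2 the handler for http:// proxy targets lives in mod_proxy_http, which is not pulled in by enabling proxy alone, and a missing scheme handler produces exactly this kind of 500 with a clean-looking debug log.

        # assumption: proxy_http was never enabled; verify with apache2ctl -M
        sudo a2enmod proxy proxy_http
        sudo /etc/init.d/apache2 restart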

    Read the article

  • Email arrives in SPAM no matter what I do: SPF, DKIM, and other stuff

    - by Xjet
    I spent a full day trying to get my email out of SPAM (in Google). So I started from scratch by installing Postfix on Debian and setting up SPF and DKIM. The email stays in spam, but the headers are there. So I continued and set up DMARC. So far so good. Here is my last header:

        Delivered-To: h********[email protected]
        Received: by 10.224.84.20 with SMTP id h20csp148174qal; Tue, 3 Jun 2014 01:16:22 -0700 (PDT)
        X-Received: by 10.112.148.165 with SMTP id tt5mr6432900lbb.61.1401783381908; Tue, 03 Jun 2014 01:16:21 -0700 (PDT)
        Return-Path: <[email protected]>
        Received: from bcp.monconcours.com ([188.226.227.141]) by mx.google.com with ESMTP id ue3si38630125lbb.3.2014.06.03.01.16.21 for <h********[email protected]>; Tue, 03 Jun 2014 01:16:21 -0700 (PDT)
        Received-SPF: pass (google.com: domain of [email protected] designates 188.226.227.141 as permitted sender) client-ip=188.226.227.141;
        Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 188.226.227.141 as permitted sender) [email protected]; dkim=pass header[email protected]; dmarc=pass (p=NONE dis=NONE) header.from=bcp.monconcours.com
        Received: by bcp.monconcours.com (Postfix, from userid 33) id 9EA90614F2; Tue, 3 Jun 2014 08:16:20 +0000 (UTC)
        DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=monconcours.com; s=mail; t=1401783380; bh=IHAmfgk+Ge5iunMmbPMRKPHJrHsCmMebmJkS/G3zk7w=; h=To:Subject:From:To:Reply-To:Date;
          b=w/cIlRwSFhNS0TIKJj6yd2R3PeKDkkSf/ht2x4FV4l1jOlgsEwsXN8m4aJQMO0uCAhG4AOUgIGAlCoP5qrgLGtRYgjVbKXmHY0cjMxUvbVDKI0xymzSxzuPqoIXWD3COe+v+W57zmEFcq93pJvDUivJzgIWbYFy6SRWe495ups0=
        To: h*****[email protected]
        Subject: Creads.fr vous remercie de votre visite, Buissness Angel pour 3 million
        X-PHP-Originating-Script: 0:testmail.php
        From: "Banque BCP - Concours photo #teamportugal" <[email protected]>
        To: hu*****[email protected]
        Reply-To: "Banque BCP - Concours photo #teamportugal" <[email protected]>
        MIME-Version: 1.0
        Content-Type: multipart/alternative;boundary=np538d84549a709
        Content-Transfer-Encoding: 8bit
        Organization: Creads Digital
        X-Priority: 3
        X-Mailer: PHP5.4.4-14+deb7u9
        Message-Id: <[email protected]>
        Date: Tue, 3 Jun 2014 08:16:20 +0000 (UTC)

        This is a MIME encoded message.
        --np538d84549a709
        Content-type: text/plain;charset=utf-

    I've also noticed a warning in the opendmarc log:

        warning: connect to Milter service inet:127.0.0.1:8893: Connection refused

    But it seems that DMARC passes anyway... I've set up the correct DNS records for DKIM and SPF, and neither the domain name nor the IP is blacklisted. I've tested on http://www.mail-tester.com/web-rMZjFj&reloaded=12 and most things seem OK, but I can't fix the reverse-DNS issue (I don't have access to the main server). I'm getting pretty annoyed by this problem, which is why I need expert advice/help.
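
    A hedged aside on that warning, beyond what the question establishes: "Connection refused" on inet:127.0.0.1:8893 usually means Postfix is configured to pass mail through the opendmarc milter but the daemon isn't listening there, so outgoing mail is simply not being run through it.

        # hypothetical checks on the Debian box
        sudo service opendmarc status
        postconf smtpd_milters non_smtpd_milters   # where Postfix expects milters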

    Read the article

  • HAProxy roundrobin balancing does not appear to be distributing evenly

    - by andrew
    Hello, I know that with loaded servers, roundrobin in HAProxy (1.4.4) does not distribute evenly, but my servers are currently getting NO traffic (test setup), and roundrobin balancing gives www1,www1,www1,www1,www1,...www2,www2,www2,...,www1... I'm verifying this with a script on each server that runs cat /etc/HOSTNAME (Slackware). I need it to switch back and forth on each request to test some session stuff (stored in shared memcached), but I'm having trouble getting it to alternate between my two web servers.

        global
            log 127.0.0.1 local0 warning
            maxconn 4096
            chroot /usr/share/haproxy
            pidfile /var/run/haproxy.pid
            uid 99
            gid 99
            daemon

        defaults
            balance roundrobin
            fullconn 100
            maxconn 4096
            mode http
            option dontlognull
            option http-server-close
            option forwardfor
            option redispatch
            retries 3
            timeout connect 5000
            timeout client 20000
            timeout server 60000
            timeout queue 60000
            stats enable
            stats uri /haproxy
            stats auth ***:***

        frontend www *:80
            log global
            acl is_upload hdr_dom(host) -i uploads.site.com
            acl is_api hdr_dom(host) -i api.site.com
            acl is_dev hdr_dom(host) -i dev.site.com
            acl is_apidev hdr_dom(host) -i apidev.site.com
            use_backend uploads.site.com if is_upload
            use_backend api.site.com if is_api
            use_backend dev.site.com if is_dev !is_apidev
            default_backend site.com

        backend site.com
            option httpchk HEAD /alive.php HTTP/1.1\r\nHost:site.com
            server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2
            server www2 1.1.1.2:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2

        backend api.site.com
            option httpchk HEAD /alive.php HTTP/1.1\r\nHost:api.site.com
            server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2
            server www2 1.1.1.2:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2

        backend dev.site.com
            option httpchk HEAD /alive.php HTTP/1.1\r\nHost:dev.site.com
            server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2
            server www2 1.1.1.2:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2

        backend uploads.site.com
            option httpchk HEAD /alive.php HTTP/1.1\r\nHost:uploads.site.com
            server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2
            server www2 1.1.1.2:8080 backup weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2

    So basically, I have several back-ends (I've verified the ACLs are working), with the default "roundrobin" option selected. I've tried removing weights, removing the minconn/maxconn/fullconn attributes for all servers (not just the backend I'm testing), removing the ACLs, etc. I've been testing on dev.site.com, BTW. Anyone see a reason why I can't get something like www1,www2,www1,www2,...? Also, this is one of my first questions on here, so please let me know if I left anything needed out of my post. Thanks!
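
    One hedged variable to remove before blaming the config, not from the thread: browsers hold keep-alive connections open, and a single client reusing one connection is not guaranteed to alternate, so a test client that opens a fresh connection per request makes the pattern unambiguous.

        # /alive.php is the health-check page from the config above; any
        # page that reveals the hostname works (assumption)
        for i in $(seq 1 10); do
            curl -s http://dev.site.com/alive.php
        done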

    Read the article

  • both ssl and non-ssl on single port

    - by Zulakis
    I would like to make my Apache 2 web server serve both HTTP and HTTPS on the same port. With the different methods I tried, it was either not working over HTTP or not over HTTPS. How can I do this?

    Update: If I enable SSL and then visit the site over HTTP, I get a page like this:

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>400 Bad Request</title>
        </head><body>
        <h1>Bad Request</h1>
        <p>Your browser sent a request that this server could not understand.<br />
        Reason: You're speaking plain HTTP to an SSL-enabled server port.<br />
        Instead use the HTTPS scheme to access this URL, please.<br />
        <blockquote>Hint: <a href="https://server/"><b>https://server/</b></a></blockquote></p>
        <hr>
        <address>Apache/2.2.9 (Debian) PHP/5.2.6-1+lenny16 with Suhosin-Patch mod_ssl/2.2.9 OpenSSL/0.9.8g Server at server Port 443</address>
        </body></html>

    Because of this, it seems very much possible to have both HTTP and HTTPS on the same port. A first step would be to change this default page so it presents a 301 Moved header instead.

    Update 2: According to this, it is possible. Now the question is just how to configure Apache to do it.
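
    A hedged sketch of that "first step", not a full answer: mod_ssl already detects the plain-HTTP request (that is where the 400 page comes from), and an ErrorDocument pointing at an external URL makes Apache answer with a redirect instead of the error page.

        # inside the SSL vhost; assumption: redirecting every 400 is
        # acceptable, since on port 443 it is almost always plain HTTP
        <VirtualHost *:443>
            SSLEngine on
            ErrorDocument 400 https://server/
        </VirtualHost>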

    Read the article

  • 502 Bad Gateway with nginx + apache + subversion + ssl (SVN COPY)

    - by theplatz
    I've asked this on Stack Overflow, but it may be better suited for Server Fault... I'm having a problem running Apache + Subversion with SSL behind an Nginx proxy, and I'm hoping someone might have the answer. I've scoured Google for hours looking for the answer to my problem and can't seem to figure it out. What I'm seeing are "502 (Bad Gateway)" errors when trying to MOVE or COPY using Subversion; however, checkouts and commits work fine. Here are the relevant parts (I think) of the nginx and Apache config files in question:

    Nginx:

        upstream subversion_hosts {
            server 127.0.0.1:80;
        }

        server {
            listen x.x.x.x:80;
            server_name hostname;
            access_log /srv/log/nginx/http.access_log main;
            error_log /srv/log/nginx/http.error_log info;
            # redirect all requests to https
            rewrite ^/(.*)$ https://hostname/$1 redirect;
        }

        # HTTPS server
        server {
            listen x.x.x.x:443;
            server_name hostname;

            passenger_enabled on;
            root /path/to/rails/root;

            access_log /srv/log/nginx/ssl.access_log main;
            error_log /srv/log/nginx/ssl.error_log info;

            ssl on;
            ssl_certificate server.crt;
            ssl_certificate_key server.key;

            add_header Front-End-Https on;

            location /svn {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                set $fixed_destination $http_destination;
                if ( $http_destination ~* ^https(.*)$ ) {
                    set $fixed_destination http$1;
                }
                proxy_set_header Destination $fixed_destination;
                proxy_pass http://subversion_hosts;
            }
        }

    Apache:

        Listen 127.0.0.1:80

        <VirtualHost *:80>
            # in order to support COPY and MOVE, etc - over https (443),
            # ServerName _must_ be the same as the nginx servername
            # http://trac.edgewall.org/wiki/TracNginxRecipe
            ServerName hostname
            UseCanonicalName on

            <Location /svn>
                DAV svn
                SVNParentPath "/srv/svn"
                Order deny,allow
                Deny from all
                Satisfy any
                # Some config omitted ...
            </Location>

            ErrorLog /var/log/apache2/subversion_error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/subversion_access.log combined
        </VirtualHost>

    From what I could tell while researching this problem, the server name has to match on both the Apache server and the nginx server, which I've done. Additionally, the problem sticks around even if I change the configuration to use HTTP only.
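
    A hedged way to narrow this down, not from the thread: issue the COPY by hand and watch both error logs, so you can see whether Apache ever receives the rewritten Destination header or whether nginx fails before proxying.

        # hypothetical repository paths; -k because of the self-managed cert
        curl -k -v -X COPY \
             -H "Destination: https://hostname/svn/repo/tags/test" \
             https://hostname/svn/repo/trunk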

    Read the article

  • 9000+ different subdomains 301 to main domain, .htaccess, Apache

    - by Karim
    I bought a domain that had various subdomains such as

        Kim.domain.com/whatever
        john.domain.com/whatever1
        Lizo.domain.com/whatever2
        Simon.domain.com/whatever1

    These ran into the thousands, and there are also links to these pages. I'd like to 301-redirect all these URLs to http://domain.com. Any idea how this could be done? This is for an Apache web server and needs to be done via .htaccess. I have implemented the solution from reading the answer below:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^www\.domain\.com$
        RewriteCond %{HTTP_HOST} !^$
        RewriteRule ^/(.*)$ http://www.domain.com/$1 [L,R=301]

    However, I have a slight problem. I would like to redirect all subdomains + subfolders to http://www.domain.com/, with the exception of http://domain.com/subfolder/, which should redirect to http://www.domain.com/subfolder/ (i.e. an exception for the bare domain, no subdomain). I'm guessing I need to add an exception; what can I do to implement this?
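
    A hedged sketch of that exception, assuming the rules live in an .htaccess at the docroot (where RewriteRule patterns carry no leading slash, unlike the httpd.conf form quoted above):

        RewriteEngine On

        # bare domain: keep the path, just add www
        RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
        RewriteRule ^(.*)$ http://www.domain.com/$1 [L,R=301]

        # every other subdomain: collapse to the www root
        RewriteCond %{HTTP_HOST} !^www\.domain\.com$ [NC]
        RewriteCond %{HTTP_HOST} !^$
        RewriteRule ^ http://www.domain.com/ [L,R=301]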

    Read the article

  • How can I fix a broken AVI file?

    - by KeyboardMonkey
    I have a broken AVI; it won't play in VLC, Xine or MPlayer. I tried HandBrake (it reads the file, then resets the source to None), while OggConvert, Avidemux and MEncoder fail to read the file, so I cannot re-encode it. I suspect the header info is corrupt. Is there a way to get the A/V streams out with a missing header?
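
    Two hedged things to try before giving up on the container, neither confirmed by the thread: MEncoder can be told to rebuild the index instead of trusting the broken one (which may change its refusal to read the file), and FFmpeg will often remux whatever streams it can still parse.

        # force a new index and copy the streams untouched
        mencoder -forceidx -oac copy -ovc copy broken.avi -o fixed.avi
        # or remux without re-encoding
        ffmpeg -i broken.avi -acodec copy -vcodec copy remuxed.avi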

    Read the article

  • approx via inetd is not open to connections from other machines

    - by Cédric Girard
    I have an approx server to speed up Debian apt updates on my Ubuntu 11.04 desktop PC. It ran fine in the past, but today port 9999 is open from localhost and not from other PCs. I have not modified the inetd configuration at all. What can I check and try?

    inetd.conf:

        9999 stream tcp nowait approx /usr/sbin/approx /usr/sbin/approx

    approx.conf:

        # Here are some examples of remote repository mappings.
        # See http://www.debian.org/mirror/list for mirror sites.

        debian          http://ftp2.fr.debian.org/debian
        security        http://security.debian.org/debian-security
        volatile        http://volatile.debian.org/debian-volatile

        # The following are the default parameter values, so there is
        # no need to uncomment them unless you want a different value.
        # See approx.conf(5) for details.

        $cache /espace/Dossiers/approx
        $max_rate unlimited
        $max_redirects 5
        $user approx
        $group approx
        $syslog daemon
        $pdiffs true
        $offline false
        $max_wait 10
        $verbose false
        $debug false

    I tried allowing other PCs to connect with "ALL: ALL" in hosts.allow. ufw is disabled, and iptables-save output is empty.
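
    A hedged check, beyond what the question covers: confirm which address inetd actually bound and probe the port from another machine, since a 127.0.0.1 bind and a dropped packet look identical from the client side.

        netstat -tlnp | grep 9999    # which address is :9999 bound to?
        nmap -p 9999 <desktop-ip>    # run from another PC (hypothetical IP)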

    Read the article

  • Set up simple reverse proxy using IIS

    - by Ropstah
    I would like to reverse-proxy my Jira installation on a Windows Server 2008 machine. Jira is running under http://jira.domain.com:8080/ and is accessible as such. The machine also runs IIS to host several ASP.NET websites. I followed the instructions here: http://blogs.iis.net/carlosag/archive/2010/04/01/setting-up-a-reverse-proxy-using-iis-url-rewrite-and-arr.aspx and installed URL Rewrite and ARR. I now have a "Web Farm" node in my IIS instance, but I've got no idea how to proceed. I tried adding some rules, but this made the rest of my IIS websites stop responding. Is there a simple way to say:

    1. Forward http://jira.domain.com to http://localhost:8080
    2. Ignore other domains and route them as usual

    Any help is greatly appreciated!
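
    A hedged sketch, not from the thread: instead of a server-wide ARR rule, create an IIS site bound only to the host name jira.domain.com and give it this web.config, so the other ASP.NET sites never see the rule (assumes URL Rewrite and ARR are installed and ARR's proxy feature is enabled).

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="JiraReverseProxy" stopProcessing="true">
                  <match url="(.*)" />
                  <action type="Rewrite" url="http://localhost:8080/{R:1}" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>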

    Read the article

  • I run Webmin and I want it to be accessed with two URLs, both using ProxyPass in Apache

    - by user36644
    This is what I am trying to do:

        NameVirtualHost *

        <VirtualHost *>
            ServerName testsite.org
            ServerAdmin [email protected]
            DocumentRoot /var/www/
        </VirtualHost>

        <VirtualHost *>
            ServerName panel.testsite.org
            ProxyPass / http://panel.testsite.org:10000/
            ProxyPassReverse / http://panel.testsite.org:10000/
        </VirtualHost>

        <VirtualHost 12.34.56.78>
            ServerName newsite.com
            ServerAdmin [email protected]
            DocumentRoot /var/newsite/
        </VirtualHost>

        <VirtualHost 12.34.56.78>
            ServerName panel.newsite.com
            ProxyPass / http://panel.newsite.com:10000/
            ProxyPassReverse / http://panel.newsite.com:10000/
        </VirtualHost>

    The problem is that it won't accept the second vhost with the IP 12.34.56.78, because it says one already exists. panel.newsite.com and newsite.com have the same IP, so I am not sure how I can make it so that only the URL panel.newsite.com gets proxy-passed to port 10000, but no other URL on newsite.com.
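
    A hedged reading of that error, not confirmed in the thread: in Apache 2.2, name-based virtual hosting has to be switched on per address, so the IP-based pair needs its own NameVirtualHost line; otherwise the second <VirtualHost 12.34.56.78> is treated as a duplicate.

        # alongside the existing "NameVirtualHost *"
        NameVirtualHost 12.34.56.78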

    Read the article

  • Is my webserver being abused for banking fraud?

    - by koffie
    For a few weeks I've been getting a lot of 403 errors from Apache in my log files that seem to be related to a bank-fraud scheme. The relevant log entries look like this (the IP 1.2.3.4 is one I made up; I did not modify the rest of each line):

        www.bradesco.com.br:80 / 1.2.3.4 - - [01/Dec/2012:07:20:32 +0100] "GET / HTTP/1.1" 403 427 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"
        www.bb.com.br:80 / 1.2.3.4 - - [01/Dec/2012:07:20:32 +0100] "GET / HTTP/1.1" 403 370 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"
        www.santander.com.br:80 / 1.2.3.4 - - [01/Dec/2012:07:20:33 +0100] "GET / HTTP/1.1" 403 370 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"
        www.banese.com.br:80 / 1.2.3.4 - - [01/Dec/2012:07:20:33 +0100] "GET / HTTP/1.1" 403 370 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"

    The log format I use is:

        LogFormat "%V:%p %U %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\""

    The strange thing is that all these domains are domains of banks, and 3 out of the 4 are also in the list of the bank-fraud scheme described at http://www.abuse.ch/?p=2925. I would really like to know whether my server is being abused for bank fraud or not. I suspect not, because it's giving 403 to all requests, but any extra checks I can do to make sure are welcome. I'm also curious how the "bad guys" expected my server to behave. I.e., are they just expecting my server to act as a proxy to hide the IP of the fake site, or are they expecting my server to actually serve the fake banking website? Is the IP 1.2.3.4 more likely to be the IP of a victim or of a bad guy? I suspect a bad guy, because it's quite unlikely that a real person would visit 4 bank sites in one second. If it's a bad guy, I'm very curious what he is trying to do.
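
    One hedged extra check, not from the thread: verify from outside that the server neither serves content for those Host headers nor forwards the request onward, which covers both abuse scenarios the question raises.

        # expect a 403 (or your own default site), never bank content
        curl -v -H "Host: www.bradesco.com.br" http://your-server-ip/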

    Read the article

  • Apache mod_proxy or mod_rewrite to hide the root of a webserver behind a path

    - by Giovanni Nervi
    I have two Apache 2.2.21 servers, one external and one internal, and I need to map the internal Apache behind a path on the external one, but I have problems with absolute URLs. I tried these configurations:

        RewriteEngine on
        RewriteRule ^/externalpath/(.*)$ http://internal-apache.test.com/$1 [L,P,QSA]
        ProxyPassReverse /externalpath/ http://internal-apache.test.com/

    or:

        <Location /externalpath/>
            ProxyPass http://internal-apache.test.com/
            ProxyPassReverse http://internal-apache.test.com/
        </Location>

    My internal Apache uses absolute paths to locate resources such as images, CSS and HTML, and I can't change that now. Any suggestions? Thank you.
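
    A hedged direction the question doesn't reach: ProxyPassReverse only fixes redirect headers, not absolute links inside the HTML body; mod_proxy_html exists for exactly that rewriting (a separate module on most distros, so availability here is an assumption).

        <Location /externalpath/>
            ProxyPass http://internal-apache.test.com/
            ProxyPassReverse http://internal-apache.test.com/
            ProxyHTMLEnable On
            # rewrite root-relative links into the external path
            ProxyHTMLURLMap / /externalpath/
        </Location>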

    Read the article

  • Connectors for Sharepoint Federated Search

    - by TobyEvans
    Hi there, we're setting up Federated Search on our intranet, and this blog post: http://blogs.blackmarble.co.uk/blogs/adawson/archive/2008/08/01/sharepoint-federated-search.aspx indicates that there is an online gallery for searching other external sources, e.g. Yahoo. The link for the gallery is http://go.microsoft.com/fwlink/?LinkID=95798, which initially led to http://www.microsoft.com/enterprisesearch/en/us/search-connectors.aspx but now gets redirected to http://sharepoint.microsoft.com/en-us/buy/Pages/Editions-Comparison.aspx?Capability=Search, which isn't what I was looking for at all... Does anybody know what's happened here, or can anyone point us to a nice Yahoo connector? Thanks, Toby

    Read the article

  • Nginx + Wordpress Multisite 3.4.2 + subdirectories + static pages and permalinks

    - by UrkoM
    I am trying to set up WordPress Multisite, using subdirectories, with Nginx, php5-fpm, APC, and Batcache. Like many other people, I am getting stuck on the rewrite rules for permalinks. I have followed these two guides, which seem to be as official as you can get:

        http://evansolomon.me/notes/faster-wordpress-multisite-nginx-batcache/
        http://codex.wordpress.org/Nginx#WordPress_Multisite_Subdirectory_rules

    It is partially working: http://blog.ssis.edu.vn works, and so does http://blog.ssis.edu.vn/umasse/. But other permalinks, like these two to a post and to a static page, don't work:

        http://blog.ssis.edu.vn/umasse/2008/12/12/hello-world-2/
        http://blog.ssis.edu.vn/umasse/sample-page/

    They either take you to a 404 error, or to some other blog! Here is my configuration:

        server {
            listen 80 default_server;
            server_name blog.ssis.edu.vn;
            root /var/www;

            access_log /var/log/nginx/blog-access.log;
            error_log /var/log/nginx/blog-error.log;

            location / {
                index index.php;
                try_files $uri $uri/ /index.php?$args;
            }

            # Add trailing slash to */wp-admin requests.
            rewrite /wp-admin$ $scheme://$host$uri/ permanent;

            # Add trailing slash to */username requests
            rewrite ^/[_0-9a-zA-Z-]+$ $scheme://$host$uri/ permanent;

            # Directives to send expires headers and turn off 404 error logging.
            location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
                expires 24h;
                log_not_found off;
            }

            # this prevents hidden files (beginning with a period) from being served
            location ~ /\. {
                access_log off;
                log_not_found off;
                deny all;
            }

            # Pass uploaded files to wp-includes/ms-files.php.
            rewrite /files/$ /index.php last;
            if ($uri !~ wp-content/plugins) {
                rewrite /files/(.+)$ /wp-includes/ms-files.php?file=$1 last;
            }

            # Rewrite multisite '.../wp-.*' and '.../*.php'.
            if (!-e $request_filename) {
                rewrite ^/[_0-9a-zA-Z-]+(/wp-.*) $1 last;
                rewrite ^/[_0-9a-zA-Z-]+.*(/wp-admin/.*\.php)$ $1 last;
                rewrite ^/[_0-9a-zA-Z-]+(/.*\.php)$ $1 last;
            }

            location ~ \.php$ {
                # Forbid PHP on upload dirs
                if ($uri ~ "uploads") {
                    return 403;
                }

                client_max_body_size 25M;
                try_files $uri =404;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include /etc/nginx/fastcgi_params;
            }
        }

    Any ideas are welcome! Have I done something wrong? I have disabled Batcache to see if it makes any difference, but still no go.
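
    A hedged way to split the problem, not from the guides above: check whether a failing permalink even reaches PHP, since a bare nginx 404 points at the rewrite rules, while a WordPress-styled 404 (or the wrong blog) points at the multisite blog lookup inside WordPress rather than at nginx.

        curl -sI http://blog.ssis.edu.vn/umasse/sample-page/
        # headers that only PHP adds (e.g. X-Powered-By, X-Pingback) mean
        # the request made it through nginx to php5-fpm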

    Read the article

  • Live Screencast under Linux

    - by OmnipotentEntity
    I was having some difficulty running a live screencast under Linux. I've found jtvlc and tried using it, but whenever I do, the stream comes out either blank or lagged with extremely high latency. I have a fast Internet connection and a fast computer, but am I perhaps taxing it too much? Any ideas on what I could possibly be doing wrong?

        #!/bin/bash
        # 1. Get an account on http://www.justin.tv/
        # 2. Copy streaming key from: http://www.justin.tv/broadcast/adv_other
        # 3. Install VLC: http://www.videolan.org/vlc/
        # 4. Get Win/Mac/Lin Stream Client:
        #    http://apiwiki.justin.tv/mediawiki/index.php/Linux_Broadcasting_API
        # 5. Adjust the vlc parameters to your liking and run VLC like this
        cvlc screen:// --input-slave=pulse:// \
            --screen-width 1920 \
            --screen-height 1080 \
            --screen-fps 5 \
            -v input_stream \
            --sout='#duplicate{ dst="transcode{ scale=1, venc=x264{ keyint=60 }, vcodec=h264, vb=600, acodec=mp4a, ab=32, channels=2, samplerate=22050 } :rtp{dst=127.0.0.1,port=1234,sdp=file:///tmp/vlc.sdp} "}' \
            --sout-transcode-threads=4 &
        sleep 2
        # 6. Run JTVLC to stream like this:
        ./jtvlc/jtvlc omnipotententity censored /tmp/vlc.sdp
        # Notes:
        # - If you want to see what you're about to stream add 'dst=display, '
        #   before 'dst="transcode{'
        # More about the VLC parameters: http://wiki.videolan.org/Documentation:Modules/screen

    Read the article

  • Can't Install php5-dev on Ubuntu 12.04 running OpenVZ

    - by MEsch
    I'm trying to fetch the PHP APC package using pecl and running into a problem that I believe may be caused by OpenVZ. To do so I need php5-dev. When I try to install it via apt-get, I get this:

        php5-dev : Depends: libssl-dev but it is not going to be installed
                   Depends: libtool (>= 2.2) but it is not going to be installed

    As I tried to manually install the dependencies (without success), I came to believe libc6-dev is the culprit:

        libc6-dev : Depends: libc6 (= 2.15-0ubuntu10.2) but 2.15-0ubuntu10+openvz0 is to be installed

    I do have libc6 installed on the system. If it's any help, here is my sources.list:

        deb http://archive.ubuntu.com/ubuntu precise main restricted universe
        deb http://archive.ubuntu.com/ubuntu precise-updates main restricted universe
        deb http://security.ubuntu.com/ubuntu precise-security main restricted universe multiverse
        deb http://archive.canonical.com/ubuntu precise partner

    This is a very frustrating problem, as I have other instances of Ubuntu 12.04 running just fine elsewhere (though not on OpenVZ).
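
    A hedged reading of those errors: the OpenVZ template ships a patched libc6 (2.15-0ubuntu10+openvz0), while libc6-dev from the stock archive insists on the exact stock version, so the build chain is stuck unless a matching -dev package or a pin exists.

        # hypothetical diagnostics: see which versions apt believes exist
        apt-cache policy libc6 libc6-dev
        # aptitude often proposes a downgrade/hold resolution apt-get won't
        sudo aptitude install php5-dev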

    Read the article

  • Getting ajaxterm to work on Debian Lenny

    - by Kevin Duke
    I've been knocking my head against this one for a while and have read many tutorials, but I just can't get it to work. Ajaxterm is a web-based SSH client. Once installed with

        apt-get install ajaxterm

    and enabled with

        /etc/init.d/ajaxterm start

    I should be able to access the SSH terminal at http://mywebsite:8022/. But doing so only gives me a "Page not found". Any suggestions? My actual VPS is http://173.244.205.160. My sources:

        https://secure.kitserve.org.uk/content/setting-ajaxterm-Ubuntu-and-Debian-powerpc
        http://wiki.kartbuilding.net/index.php/ajaxterm
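
    A hedged thing to check first, not from the tutorials: ajaxterm is designed to sit behind a web server and, at least in the Debian packaging of that era, listens only on 127.0.0.1, so a remote browser pointed at :8022 sees nothing.

        netstat -tlnp | grep 8022    # 127.0.0.1:8022 means localhost-only
        # hypothetical workaround while testing: tunnel the port in
        ssh -L 8022:localhost:8022 [email protected]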

    Read the article

  • don't know how this virtual directory structure was set up on IIS 6

    - by deostroll
    Our development server has a setup as follows:

        \\DEVSRVR\WEBSITES\COMMON
        +---include

    Here is where all CSS and script files reside; they are required by various web applications.

        \\DEVSRVR\WEBSITES\TESTING\SAM
        +---Backup
        ¦   +---bin
        +---bin
        +---help

    Here is where an application resides. Suppose there is an .aspx page under the folder called SAM; we'd normally issue an HTTP request as follows:

        http://testing.apps/sam/default.aspx

    We believe that the testing.apps virtual name points to the \\devsrvr\websites\testing folder. Suppose there is a CSS file called menu.css inside common/include. We'd simply have to make the following HTTP call to get it:

        http://testing.apps/common/include/menu3.css

    This works!!! I don't understand how? There is no folder called "common" inside of "testing"...
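
    A hedged way to see how it was wired, assuming IIS 6 on Windows Server 2003: virtual directories live in the metabase, not on disk, so listing them usually explains a URL path that has no matching folder.

        rem site name below is hypothetical; iisvdir.vbs ships with IIS 6
        cscript %SystemRoot%\system32\iisvdir.vbs /query "testing.apps"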

    Read the article

  • Authenticate Teamcity against LDAP using StartTLS

    - by aseq
    I am running TeamCity 6.5 on a Debian Squeeze server and use OpenLDAP to authenticate users. I know I can use LDAPS to get encrypted password authentication; however, it has been deprecated by the OpenLDAP developers, see: http://www.openldap.org/faq/data/cache/605.html. I would like to know if there is a way to configure LDAP authentication in TeamCity to use StartTLS on port 389. I can't find anything about it here: http://confluence.jetbrains.net/display/TCD65/LDAP+Integration or here: http://therightstuff.de/2009/02/02/How-To-Set-Up-Secure-LDAP-Authentication-With-TeamCity.aspx

    Read the article

  • Unable to get ejabberd prebind to work

    - by cdecker
    I'm trying to get prebinding of BOSH sessions to work. I want to be able to authenticate a user in my CMS and then log him in when he accesses the chat. For this I found https://github.com/smokeclouds/http_prebind; it all works fine and I was able to compile it with the following steps:

        rake configure
        sed -i 's/AUTH_USER/a_user/g' src/http_prebind.erl
        sed -i 's/AUTH_PASSWORD/a_password/g' src/http_prebind.erl
        sed -i 's/EJABBERD_DOMAIN/jabber.my.tld/g' src/http_prebind.erl
        rake build
        rake install

    And then adding the HTTP request bindings to the configuration:

        {5280, ejabberd_http, [
            {request_handlers, [
                {["http-prebind"], http_prebind}
            ]},
            %%captcha, http_bind, http_poll, http_prebind, web_admin
            ]}
        ]}.

    As far as I understand it, I should now be able to simply request a new session like this:

        curl -u a_user:a_password http://jabber.my.tld:5280/http-prebind/some_user

    But no matter what, I always get "Unauthorized" as the response. Any idea about this one?

    PS: I also tried Mod-Http-Pre-Bind, but as it does not require a password I would prefer to use http_prebind.

    PPS: Does the user with username AUTH_USER and password AUTH_PASSWORD actually have to exist? I'm currently using an admin account.
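
    A hedged answer to the PPS, plus a check, neither verified against the module: the substituted credentials are used for HTTP Basic auth, so they generally must belong to a real ejabberd account; registering it and retrying with verbose output shows whether the 401 comes from ejabberd itself or from the handler.

        # hypothetical: create the account the module was compiled with
        ejabberdctl register a_user jabber.my.tld a_password
        curl -v -u a_user:a_password http://jabber.my.tld:5280/http-prebind/some_user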

    Read the article

  • Redirect some URL requests to CloudFront and the rest directly to the normal server?

    - by indiehacker
    Say I have two types of URL requests that must be handled by my REST API:

        http://query.restapi.com/image.png?apikey=abc123
        http://query.restapi.com/2.0/<apiKey>/resource.json?from=umi.us_census00.state_geometry

    Is it possible to redirect only the URL requests for static images (i.e., regex: *.png?.*) to take advantage of CloudFront's caching, and have the rest of the requests go directly to the normal EC2 server (or at least take a speedier indirect route to it)? Perhaps the added request time for CloudFront misses isn't worth worrying about? Or perhaps my situation is not a good fit for CloudFront? I understand I will need to make a DNS change so that URL requests like http://query.restapi.com/some.png?apikey=0123 get redirected to http://d1234.cloudfront.net/some.png, but I am hoping there is some way to redirect just the static .png requests to take advantage of CloudFront.
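
    A hedged note on the mechanics, not from the question: DNS can only branch on host names, never on paths, so the usual pattern is a dedicated image host name pointed at CloudFront while the API host keeps resolving to EC2 (host names reuse the question's, records are hypothetical).

        ; static assets via CloudFront
        images.restapi.com.  CNAME  d1234.cloudfront.net.
        ; API straight to the origin
        query.restapi.com.   A      203.0.113.10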

    Read the article

  • Automatically organizing Windows files and folders

    - by Kiquenet
    For Windows only: organizing the eleventy-billion files you've got stuffed into folders on your hard drive is very hard. For example, I have one folder on my computer that I save all web downloads to, regardless of file type, size or purpose. Many of the files are only temporary downloads, for instance setup files of applications that I test, demonstration videos that I watch once, or documents that I want to read. Some files, on the other hand, are there to stay, and I used to move them out of the download folder manually. Other files and folders on my computer hold source code, tests, programs, tools, ... I need technology to organize a billion files. Which are the best tools for organizing, sorting, etc. your files and folders automatically?

        Digital Janitor: http://davidevitelaru.com/software/digital-janitor/
        Belvedere: http://lifehacker.com/5510961/how-to-automatically-clean-and-organize-your-desktop-downloads-and-other-folders
        Download Mover: http://www.neoteo.com/download-mover-reorganiza-tus-descargas-14188
        File/Folder Date Organizer: http://seedling.dcmembers.com/other/ffdorg.zip
        DropIt: http://www.lupopensuite.com/db/dropit.htm

    Other questions about organizing files, desktops, etc.:

        How to automate the process of organizing audio files on Windows
        Organizing My Windows Desktop
        What's a good way for organizing PDF documents on Windows?
        Folksonomy tagging for files
        What is your method of "folksonomy" tagging for files on your local machine?

    Read the article

  • Serving compressed files: Amazon vs. Lighttpd

    - by tike
    We are currently using Amazon CloudFront to serve CSS, and according to Amazon itself, CloudFront can serve both compressed and uncompressed files from an origin server. But when I check compression, everything is fine on the origin server while the file shows as not compressed through CloudFront. E.g.:

        http://www.port80software.com/tools/compresscheck.asp?url=http%3A%2F%2Fimgsrv.mydomain.com%2Fen-UK%2Fsomething.css

    results in "Compression status: (gzip)", while with CloudFront:

        http://www.port80software.com/tools/compresscheck.asp?url=http%3A%2F%2Fhereisit.cloudfront.net%2Fsomething.css

    results in "Compression status: Uncompressed". The origin server is running lighttpd with mod_deflate; the allowed encodings are:

        deflate.allowed_encodings = ("bzip2", "gzip", "deflate")

    (I would think extra allowed encodings won't affect this as such.) Here I am clueless: what is the real issue?
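
    A hedged comparison to run, not from the thread: request the file from both hosts with an explicit Accept-Encoding header; if the origin answers gzipped but CloudFront never does, the edge is either caching an uncompressed copy or not passing Accept-Encoding through to the origin.

        curl -sI -H "Accept-Encoding: gzip" http://imgsrv.mydomain.com/en-UK/something.css | grep -i '^content-encoding'
        curl -sI -H "Accept-Encoding: gzip" http://hereisit.cloudfront.net/something.css | grep -i '^content-encoding'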

    Read the article
