
  • Extract segments from FLV file?

    - by overtherainbow
    Hello, I'm no expert at editing videos, and I need to rip a few segments from an FLV file and then upload some of them to YouTube. I don't know if I need to convert FLV to AVI for YouTube to accept them. I've taken a look at VirtualDub + the FLV plug-in, but the video isn't displayed correctly. I also tried AviDemux, but I couldn't find how to extract a segment after setting the A/Start and B/End points. Does someone know of a good solution to do this on Windows? Thank you.
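
    A common command-line alternative for this kind of cut is ffmpeg's stream copy, which extracts a segment without re-encoding. A minimal sketch (assuming ffmpeg is installed and the timestamps are known; -to needs a newer build, older ones take a duration via -t):

        # Copy the segment between 1:30 and 2:45 without re-encoding:
        ffmpeg -i input.flv -ss 00:01:30 -to 00:02:45 -c copy segment.flv

        # YouTube accepts FLV directly, but if the streams are H.264/AAC,
        # a lossless remux to MP4 is a safe fallback:
        ffmpeg -i segment.flv -c copy segment.mp4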

  • Unusual Apache->Tomcat caching issue.

    - by iftrue
    Right now, I have an Apache setup sitting in front of Tomcat to handle caching. This setup has been given to an external service to manage, and since the transition, I've noticed odd behavior. Specifically, when I request a swf file from the web server, I hit the Apache cache (good), but occasionally I'll receive a truncated file. Once I receive this truncated file, the cache will NOT refresh until I manually delete the cache and let the swf pull down from Tomcat again. The external service claims that the configuration is fine, but I don't see any way this could be happening aside from improper configuration. Now, there are two Apache and two Tomcat servers under a load balancer, and occasionally one Apache cache will break while another does not (leading to 50% of all requests getting bad, truncated data). Where should I start looking to debug this issue? What could POSSIBLY be causing this odd behavior?

    Edit: Inspecting the logs, Tomcat throws this:

        java.io.IOException: Bad file number
            at java.io.FileInputStream.readBytes(Native Method)
            at java.io.FileInputStream.read(FileInputStream.java:199)
            at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
            at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
            at java.io.FilterInputStream.read(FilterInputStream.java:90)
            at org.apache.catalina.servlets.DefaultServlet.copyRange(DefaultServlet.java:1968)
            at org.apache.catalina.servlets.DefaultServlet.copy(DefaultServlet.java:1714)
            at org.apache.catalina.servlets.DefaultServlet.serveResource(DefaultServlet.java:809)
            at org.apache.catalina.servlets.DefaultServlet.doGet(DefaultServlet.java:325)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:690)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:568)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.ha.session.JvmRouteBinderValve.invoke(JvmRouteBinderValve.java:209)
            at org.apache.catalina.ha.tcp.ReplicationValve.invoke(ReplicationValve.java:347)
            at org.terracotta.modules.tomcat.tomcat_5_5.SessionValve55.invoke(SessionValve55.java:57)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286)
            at org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:190)
            at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:283)
            at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:767)
            at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:697)
            at org.apache.jk.common.ChannelSocket$SocketConnection.runIt(ChannelSocket.java:889)
            at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:690)
            at java.lang.Thread.run(Thread.java:619)

    followed by:

        access_log.2009-12-14.txt:1.2.3.4 - - [14/Dec/2009:00:27:32 -0500] "GET /myApp/mySwf.swf HTTP/1.1" 304 -
        access_log.2009-12-14.txt:1.2.3.4 - - [14/Dec/2009:01:27:33 -0500] "GET /myApp/mySwf.swf HTTP/1.1" 304 -
        access_log.2009-12-14.txt:1.2.3.4 - - [14/Dec/2009:01:39:53 -0500] "GET /myApp/mySwf.swf HTTP/1.1" 304 -
        access_log.2009-12-14.txt:1.2.3.4 - - [14/Dec/2009:02:27:38 -0500] "GET /myApp/mySwf.swf HTTP/1.1" 304 -

    So Apache is caching the bad file size. What could possibly be causing this, and (possibly separate) how do I ensure that this exception does not get written to cache?
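
    Until the root cause is found, the bad entry can at least be evicted without hand-deleting cache directories. A sketch using Apache's htcacheclean (assuming mod_disk_cache with a CacheRoot of /var/cache/apache2/mod_disk_cache; adjust the path and size limit):

        # One-off purge of the disk cache down to 50 MB:
        htcacheclean -p /var/cache/apache2/mod_disk_cache -l 50M

        # Or run as a daemon, sweeping every 30 minutes:
        htcacheclean -d 30 -p /var/cache/apache2/mod_disk_cache -l 50M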

  • How do I determine whether this email bounce is my fault?

    - by David Zaslavsky
    I use Google Apps to handle email for my personal website, so I have an email address [email protected] through that, and I also have a Gmail account username@gmail.com. Now, I've been trying to send emails to a particular recipient who shall be known as mail@example.com. When I send the email from my Gmail account with the @gmail.com address, it works fine. However, when I send it from my Google Apps account with the @ellipsix.net address, I get a bounce message which includes the following text: Delivery to the following recipient failed permanently: mail@example.com Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 554 554 mail server permanently rejected message (#5.3.0) (state 17). The bounce message suggests that it is up to the mail administrator of the recipient domain example.com to fix the problem, whatever it is. But I would like to be as sure as possible that nothing needs to be fixed on my end. I already have DKIM signatures enabled for my domain, and I have published an SPF DNS record. Is there something else I should check or do, or can I be confident that it's up to the recipient to fix this issue? Does the "state 17" in the bounce message mean something relevant? I've included my domain name in the question so people who know more than me about this stuff can independently check the relevant DNS records or other information. This other question seems similar, but I've already investigated everything suggested in the answers there (except for contacting Google, which I don't want to do unless I suspect it's their issue to fix).
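
    For what it's worth, the sender-side records can be re-verified from any machine with dig; the DKIM selector below is an assumption (Google Apps historically signs with the selector "google"):

        # SPF record published at the domain apex:
        dig +short TXT ellipsix.net

        # DKIM public key for the assumed selector:
        dig +short TXT google._domainkey.ellipsix.net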

  • Apache reverse proxy access control

    - by Steven
    I have an Apache reverse proxy that is currently reverse proxying for a few sites. However, I am now going to be adding a new site (let's call it newsite.com) that should only be accessible by certain IPs. Is this doable using Apache as a reverse proxy? I use VirtualHosts for the sites being proxied. I have tried using the Allow/Deny directives in combination with Location blocks. For example:

        <VirtualHost *:80>
            ServerName newsite.com
            <Location http://newsite.com>
                Order Deny,Allow
                Deny from all
                Allow from x.x.x.x
            </Location>
            <IfModule rewrite_module>
                RewriteRule ^/$ http://newsite.internal.com [proxy]
            </IfModule>
        </VirtualHost>

    I have also tried configuring allow/deny specifically for the site in the Proxy directives, for example:

        <Proxy http://newsite.com/>
            Order deny,allow
            Deny from all
            Allow from x.x.x.x
        </Proxy>

    I still have this definition for the rest of the proxied sites, however:

        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>

    No matter what I do, it seems to be accessible from anywhere. Is this because of the definition for all the other proxied sites? Is there an order in which Proxy directives are applied? I have had the newsite one both before and after the * one, and also within the VirtualHost statement.
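
    One detail worth noting: <Location> takes a URL path within the virtual host, not a full URL, so <Location http://newsite.com> never matches anything. A sketch of the intended restriction under that assumption (shown with ProxyPass in place of the RewriteRule; either works, and x.x.x.x stands in for the allowed address):

        <VirtualHost *:80>
            ServerName newsite.com

            # Path-based match; applies to everything this vhost serves or proxies:
            <Location />
                Order Deny,Allow
                Deny from all
                Allow from x.x.x.x
            </Location>

            ProxyPass        / http://newsite.internal.com/
            ProxyPassReverse / http://newsite.internal.com/
        </VirtualHost>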

  • How to notify a program of another program? dll? directory? path?

    - by Brady Trainor
    I am trying to experiment with GNUS email in Emacs, in Windows (EDIT: x64 bit). I've got it to work in Ubuntu, but I'm struggling with it in Windows. From http://www.gnu.org/software/emacs/manual/html_mono/emacs-gnutls.html#Help-For-Users I read in the second paragraph:

        This is a little bit trickier on the W32 (Windows) platform, but if you have the GnuTLS DLLs (available from http://sourceforge.net/projects/ezwinports/files/ thanks to Eli Zaretskii) in the same directory as Emacs, you should be OK.

    I have downloaded and unzipped the gnutls-3.0.9-w32-bin package, but am not sure what to do with it. I have tried putting it in Program Files (x86), which is "the same directory as Emacs". I have tried putting it in the emacs-24.3 folder. I considered merging all the folders in between the two, but am hesitant, as that seems a difficult troubleshoot attempt compared to my knowledge of these matters. I think Emacs needs to somehow see the gnutls binaries and/or DLLs. My knowledge is limited on this. I've also struggled to understand PATHs for some time now, and am not sure if that approach is relevant here. FYI, the emacs directory contains folders labeled bin, etc, info, leim, lisp and site-lisp. The gnutls directory contains folders labeled bin, include, lib and share. Hmm, now I'm finding lots of links on adding paths. Still, I'm skeptical that I would only add a gnutls.exe path, as it seems the DLLs are needed.

    Some additional data for Ramhound's first comment: I have been attempting the (require 'gnutls) route. This seems to be the most relevant part of the log:

        Opening connection to imap.gmail.com via tls...
        gnutls.c: [1] (Emacs) GnuTLS library not found
        Opening TLS connection to `imap.gmail.com'...
        Opening TLS connection with `gnutls-cli --insecure -p 993 imap.gmail.com'...failed
        Opening TLS connection with `gnutls-cli --insecure -p 993 imap.gmail.com --protocols ssl3'...failed
        Opening TLS connection with `openssl s_client -connect imap.gmail.com:993 -no_ssl2 -ign_eof'...failed
        Opening TLS connection to `imap.gmail.com'...failed

    I am not sure what "in stallion" means. Emacs seems to have installed itself in Program Files (x86), so I assume it is 32-bit. I can try to figure out how to double-check, but did not realize I would get such a fast response time, and am headed out right now. I will try merging the files later tonight.
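
    For reference, "the same directory as Emacs" in that manual passage means the directory holding emacs.exe, i.e. the bin folder, not the top-level install folder. A sketch of the two usual options (all paths are examples, not the asker's actual layout):

        ;; Option 1: copy the DLLs from the gnutls bin/ folder into Emacs's own
        ;; bin/ folder (the one containing emacs.exe), then restart Emacs.

        ;; Option 2: leave gnutls where it is and extend the search path from
        ;; the init file before gnutls is first used (example path):
        (add-to-list 'exec-path "C:/gnutls/bin")
        (setenv "PATH" (concat "C:/gnutls/bin;" (getenv "PATH")))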

  • nginx won't respond to monit

    - by Miko
    Although EngineX is running, monit can't seem to figure it out. Here's my monit log:

        [PDT Apr 13 02:19:19] error : HTTP error: Server returned status 400
        [PDT Apr 13 02:19:19] error : 'nginx' failed protocol test [HTTP] at INET[localhost:80] via TCP
        [PDT Apr 13 02:19:19] info  : 'nginx' trying to restart
        [PDT Apr 13 02:19:19] info  : 'nginx' stop: /etc/init.d/nginx
        [PDT Apr 13 02:19:20] info  : 'nginx' start: /etc/init.d/nginx

    The monitrc file contains the following configuration:

        if failed port 80 protocol http
           and request '/ping.txt' # check for response
           with timeout 20 seconds
        then restart

    I can access the file through lynx http://localhost:80/ping.txt without any problems. Why would monit have trouble requesting the file when nginx is running just fine?
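
    A 400 here means nginx itself rejected whatever request line or Host header monit sent, so the nginx access and error logs for that probe are worth a look. A sketch of a more explicit test, per monit 5.x syntax (service name and paths assumed from the log above):

        check process nginx with pidfile /var/run/nginx.pid
            start program = "/etc/init.d/nginx start"
            stop program  = "/etc/init.d/nginx stop"
            # Name the target host explicitly and quote the request path:
            if failed host localhost port 80 protocol http
                request "/ping.txt" with timeout 20 seconds
            then restart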

  • Apache Mod_rewrite rule working on one server, but not another

    - by Mason
    I am using mod_jk and mod_rewrite on httpd 2.2.15. I have a rule:

        RewriteCond %{REQUEST_URI} !^/video/play\.xhtml.*
        RewriteRule ^/video/(.*) /video/play.xhtml?vid=$1 [PT]

    I just want to rewrite something like /video/videoidhere to /video/play.xhtml?vid=videoidhere. This works perfectly on my developer machine, but on production I get a 404 (generated by JBoss, not Apache). Here is the tail of rewrite.log on prod (broken); the rewrite.log is exactly the same on dev (working):

        applying pattern '^/video/(.*)' to uri '/video/46279d4daf5440b2844ec831413dcc3b'
        RewriteCond: input='/video/46279d4daf5440b2844ec831413dcc3b' pattern='!^/video/play\.xhtml.*' => matched
        rewrite '/video/46279d4daf5440b2844ec831413dcc3b' -> '/video/play.xhtml?vid=46279d4daf5440b2844ec831413dcc3b'
        split uri=/video/play.xhtml?vid=46279d4daf5440b2844ec831413dcc3b -> uri=/video/play.xhtml, args=vid=46279d4daf5440b2844ec831413dcc3b
        forcing '/video/play.xhtml' to get passed through to next API URI-to-filename handler

    and the corresponding access.log entry:

        "GET /video/46279d4daf5440b2844ec831413dcc3b HTTP/1.1" 404 420 "-" "Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.6) Gecko/20100628 Ubuntu/10.04 (lucid) Firefox/3.6.6"

    I can access http://www.fivi.com/video/play.xhtml?vid=46279d4daf5440b2844ec831413dcc3b but not /video/46279d4daf5440b2844ec831413dcc3b. Both servers are even using the EXACT same httpd.conf and modules. I built Apache with:

        ./configure --prefix /usr/local/apache2.2.15 --enable-alias --enable-rewrite --enable-cache --enable-disk_cache --enable-mem_cache --enable-ssl --enable-deflate

    Thanks, Mason

    ----UPDATE----

    mod-jk.conf:

        JkWorkersFile /usr/local/apache2.2.15/conf/workers.properties
        JkLogFile /var/log/mod_jk.log
        JkLogLevel info
        JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"
        JkOptions +ForwardKeySize +ForwardURICompatUnparsed -ForwardDirectories
        JkRequestLogFormat "%w %V %T"
        JkShmFile run/jk.shm
        <Location /jkstatus>
            JkMount status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

    workers.properties:

        worker.node1.port=8009
        worker.node1.host=75.102.10.74
        worker.node1.type=ajp13
        worker.node1.lbfactor=20
        worker.node1.ping_mode=A #As of mod_jk 1.2.27
        worker.node2.port=8009
        worker.node2.host=75.102.10.75
        worker.node2.type=ajp13
        worker.node2.lbfactor=10
        worker.node2.ping_mode=A #As of mod_jk 1.2.27
        worker.loadbalancer.type=lb
        worker.loadbalancer.balance_workers=node2,node1
        worker.loadbalancer.sticky_session=True
        worker.status.type=status

    httpd.conf:

        ServerName www.fivi.com:80
        Include /usr/local/apache2.2.15/conf/mod-jk.conf
        NameVirtualHost *
        <VirtualHost *>
            ServerName *
            DocumentRoot /usr/local/apache2/htdocs
            JkUnMount /* loadbalancer
            RedirectMatch 301 /(.*) http://www.fivi.com/$1
        </VirtualHost>
        <VirtualHost *>
            ServerName www.fivi.com
            ServerAlias www.fivi.com images.fivi.com
            JkMount /* loadbalancer
            JkMount / loadbalancer

        [root@fivi conf]# /usr/local/apache2.2.15/bin/httpd -M
        Loaded Modules:
         core_module (static)
         authn_file_module (static)
         authn_default_module (static)
         authz_host_module (static)
         authz_groupfile_module (static)
         authz_user_module (static)
         authz_default_module (static)
         auth_basic_module (static)
         cache_module (static)
         disk_cache_module (static)
         mem_cache_module (static)
         include_module (static)
         filter_module (static)
         deflate_module (static)
         log_config_module (static)
         env_module (static)
         headers_module (static)
         setenvif_module (static)
         version_module (static)
         ssl_module (static)
         mpm_prefork_module (static)
         http_module (static)
         mime_module (static)
         status_module (static)
         autoindex_module (static)
         asis_module (static)
         cgi_module (static)
         negotiation_module (static)
         dir_module (static)
         actions_module (static)
         userdir_module (static)
         alias_module (static)
         rewrite_module (static)
         so_module (static)
         jk_module (shared)
        Syntax OK
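
    One line in mod-jk.conf worth ruling out: the mod_jk documentation warns that ForwardURICompatUnparsed forwards the original, unparsed request URI to the worker, which defeats mod_rewrite; JBoss would then see /video/<id> rather than the rewritten /video/play.xhtml, which matches the 404 above. A sketch of the change (a hypothesis to test, not a confirmed diagnosis):

        # Was: JkOptions +ForwardKeySize +ForwardURICompatUnparsed -ForwardDirectories
        # ForwardURICompatUnparsed bypasses mod_rewrite's substitution, so drop it:
        JkOptions +ForwardKeySize -ForwardDirectories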

  • Firefox addon (for Firefox 3.5) to monitor web usage

    - by user8120
    I am looking for a Firefox addon that would tell me where I have spent how much time browsing. I came across quite a few addons, but they are either not supported in 3.5, no longer supported, or cannot be installed. I work on Ubuntu Linux (9.04) and Shiretoko (Firefox 3.5), and I need a solution for this environment. I need stats like:

        Website                  Time spent (hh:mm)   % (day)   % (week)   % (month)
        www.stackoverflow.com    20:00                90        xx         yy
        www.google.com           1:35                 x
        www.theserverside.com    80:23                x
        www.facebook.com         200:30               x

  • Apache - setting up a subdomain

    - by Adam
    I'm having trouble getting a subdomain working for an Apache Linux install. Following is what I've configured:

    DNS:

        connect.goneglobal.com. CNAME 54.251.35.112

    Apache httpd.conf:

        <VirtualHost *:80>
            DocumentRoot /var/www/html/connect.goneglobal.com
            ServerName connect.goneglobal.com
        </VirtualHost>

    Then I restart httpd. This IP is registered to this server and works for other sites on this Apache (this is the first time I've tried a subdomain). It appears the issue is with DNS, potentially, since requests don't seem to reach the site. Note: I have an index.php in the DocumentRoot. Note: there is an A record for goneglobal.com. which goes to a different hosting provider. thx
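
    One thing to verify first: a CNAME must point at another hostname, and 54.251.35.112 is an address, so many DNS providers will reject or silently drop a record written that way; an A record is the usual form. A quick check from any outside machine:

        # Should return 54.251.35.112 once the record is published as an A record:
        dig +short connect.goneglobal.com A

        # +trace walks the delegation from the root, useful when caches disagree:
        dig +trace connect.goneglobal.com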

  • Cheap and Secure Proxy

    - by jack
    Hi, I'm looking for cheap, secure proxy providers that support VPN, HTTP and SOCKS, like this one: http://www.your-freedom.net/. I wish to compare their efficiency. YF (http://www.your-freedom.net/) doesn't satisfy me on the speed they provide after purchasing the account; their try-before-buy account is much faster than the purchased one. Thanks.

  • Squid Proxy: url_regex acl is not working?

    - by bharathi
    I am using Squid proxy 3.1 on an Ubuntu machine. I want to allow only URLs matching our pattern through our proxy server. I configured the ACLs like below. The dstdomain acl is working fine: if I access any URL besides .zmedia.com, I get "proxy connection refused". But the url_regex is not working. What I am trying to do here is allow only requests from the ".zmedia.com" domain whose URL is in the "/blog" context.

        # Recommended minimum configuration:
        acl manager proto cache_object
        acl localhost src 127.0.0.1/32 ::1
        acl to_localhost dst 127.0.0.0/8 ::1
        acl urlwhitelist url_regex -i ^http(s)://([a-zA-Z]+).zmedia.com/blog/.*$
        acl allowdomain dstdomain .zmedia.com
        acl Safe_ports port 80 8080 8500 7272

        # Example rule allowing access from your local networks.
        # Adapt to list your (internal) IP networks from where browsing
        # should be allowed
        acl SSL_ports port 443
        acl Safe_ports port 80          # http
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 443         # https
        acl Safe_ports port 70          # gopher
        acl Safe_ports port 210         # wais
        acl Safe_ports port 1025-65535  # unregistered ports
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 488         # gss-http
        acl Safe_ports port 591         # filemaker
        acl Safe_ports port 777         # multiling http
        acl SSL_ports port 7272         # multiling http
        acl CONNECT method CONNECT

        # Recommended minimum Access Permission configuration:
        # Only allow cachemgr access from localhost
        http_access allow manager localhost
        http_access deny manager
        http_access deny !allowdomain
        http_access allow urlwhitelist
        http_access allow CONNECT SSL_ports
        http_access deny CONNECT !SSL_ports
        # Deny requests to certain unsafe ports
        http_access deny !Safe_ports
        # Deny CONNECT to other than secure SSL ports
        http_access deny CONNECT !SSL_ports
        # We strongly recommend the following be uncommented to protect innocent
        # web applications running on the proxy server who think the only
        # one who can access services on "localhost" is a local user
        #http_access deny to_localhost

        # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
        # Example rule allowing access from your local networks.
        # Adapt localnet in the ACL section to list your (internal) IP networks
        # from where browsing should be allowed
        http_access allow localhost
        # And finally deny all other access to this proxy
        http_access deny all

        # Squid normally listens to port 3128
        http_port 3128
        # We recommend you to use at least the following line.
        hierarchy_stoplist cgi-bin ?
        # Uncomment and adjust the following to add a disk cache directory.
        #cache_dir ufs /var/spool/squid 100 16 256
        # Leave coredumps in the first cache dir
        coredump_dir /var/spool/squid
        append_domain .zmedia.com
        # Add any of your own refresh_pattern entries above these.
        refresh_pattern ^ftp:             1440  20%  10080
        refresh_pattern ^gopher:          1440   0%   1440
        refresh_pattern -i (/cgi-bin/|\?)    0   0%      0
        refresh_pattern .                    0  20%   4320

    Please correct me if I did anything wrong.
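
    For comparison, a sketch of those two ACLs with the usual url_regex pitfalls addressed: the literal dots escaped, https? instead of http(s) (a bare group still requires the "s", so plain http:// URLs never match), and the allow rule placed where it is reachable. Note also that for HTTPS through a non-intercepting proxy, Squid only ever sees "CONNECT host:port", so a path like /blog can never match those requests. The host pattern is an assumption; adjust as needed:

        acl urlwhitelist url_regex -i ^https?://([a-zA-Z0-9-]+\.)?zmedia\.com/blog(/|$)
        acl allowdomain dstdomain .zmedia.com

        # First matching http_access line wins, so allow before the broad deny:
        http_access allow urlwhitelist
        http_access deny all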

  • Internal and External DNS from Different Servers, Same Zone

    - by Shane
    Hello All, I am either having trouble understanding how DNS works, or I am having trouble configuring my DNS correctly (either way, it isn't good). I am currently working with a domain, I'll call it webdomain.com, and I need to allow all of our internal users to get our public DNS entries from dotster just like the rest of the world. Then, on top of that, I want to be able to supply just a few override DNS entries for testing servers and equipment that is not available publicly. As an example:

        public.webdomain.com  - should get this from dotster
        outside.webdomain.com - should get this from dotster as well
        testing.webdomain.com - should get this from my internal DNS controller

    The problem that I seem to be running into at every turn is that if I have an internal DNS controller that contains a zone for webdomain.com, then I can get my specified internal entries but never get anything from the public DNS server. This holds true regardless of the type of DNS server I use; I have tried both Linux Bind9 and a Windows 2008 Domain Controller. I guess my big question is: am I being unreasonable to think that a system should be able to check my specified internal DNS and, in the case where a requested entry doesn't exist, fail over to the specified public DNS server? Or is this just not the way DNS works, and I am lost in the sauce? It seems like it should be as simple as telling my internal DNS server to forward any requests that it can't fulfill to dotster, but that doesn't seem to work. Could this be a firewall issue? Thanks in advance
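
    The behaviour described is by design: once a server is authoritative for webdomain.com, it answers from its own zone data and never falls through to dotster for names it does not hold. The usual workaround is to be authoritative only for the narrow test names. A BIND sketch (zone and file names are examples):

        // named.conf -- define a zone only for the override name, so every
        // other *.webdomain.com query is resolved recursively as usual:
        zone "testing.webdomain.com" {
            type master;
            file "/etc/bind/db.testing.webdomain.com";
        };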

  • Problem adding public key for apt

    - by highBandWidth
    I was trying to get the official mongodb for Ubuntu, following the instructions at http://www.mongodb.org/display/DOCS/Ubuntu+and+Debian+packages

    After adding the line

        deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen

    to my sources, I need to add the PGP key, since Synaptic says:

        W: GPG error: http://downloads-distro.mongodb.org dist Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 9ECBEC467F0CEB10

    Again following the instructions, I did

        sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10

    which says

        Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /etc/apt/secring.gpg --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver keyserver.ubuntu.com --recv 7F0CEB10
        gpg: requesting key 7F0CEB10 from hkp server keyserver.ubuntu.com
        ?: keyserver.ubuntu.com: Connection refused
        gpgkeys: HTTP fetch error 7: couldn't connect: Connection refused
        gpg: no valid OpenPGP data found.
        gpg: Total number processed: 0

    Interestingly, I also get:

        $ apt-key list
        gpg: fatal: /home/myname/.gnupg: directory does not exist!
        secmem usage: 0/0 bytes in 0/0 blocks of pool 0/32768

    How can I get apt to use this source?
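
    The "Connection refused" suggests outbound traffic to 11371, the default HKP port, is blocked; keyserver.ubuntu.com also answers HKP on port 80, which firewalls rarely block. The same command pinned to port 80:

        sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10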

  • Tomcat IIS 7 Integration gives 503 errors on all requests.

    - by Yvan JANSSENS
    Hi, after many attempts to install Tomcat on IIS 7, I finally managed to get it working. At least I think so :-S. I finally got the 500 errors away by setting the correct permissions. The only thing that doesn't work is ... serving stuff: neither regular stuff (like ASP, HTML files, or directory browsing) nor Tomcat things work. Here are my configs:

    Worker.properties:

        # The workers that your plugins should create and work with
        worker.list=worker1

        #------ DEFAULT ajp13 WORKER DEFINITION ------------------------------
        #---------------------------------------------------------------------
        # Defining a worker named ajp13 and of type ajp13
        # Note that the name and the type do not have to match.
        worker.worker1.port=8009
        worker.worker1.host=127.0.0.1
        worker.worker1.type=ajp13

    URIWorkerMap.properties:

        /|/*=worker1
        # Exclude the subdirectory static:
        !/static|/*=worker1
        # Exclude some suffixes:
        !*.html=worker1
        !*.asp=worker1

    Server.xml:

        <?xml version='1.0' encoding='utf-8'?>
        <!-- Licensed to the Apache Software Foundation (ASF) under one or more
             contributor license agreements. See the NOTICE file distributed with
             this work for additional information regarding copyright ownership.
             The ASF licenses this file to You under the Apache License, Version 2.0
             (the "License"); you may not use this file except in compliance with
             the License. You may obtain a copy of the License at
                 http://www.apache.org/licenses/LICENSE-2.0
             Unless required by applicable law or agreed to in writing, software
             distributed under the License is distributed on an "AS IS" BASIS,
             WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
             See the License for the specific language governing permissions and
             limitations under the License. -->
        <!-- Note: A "Server" is not itself a "Container", so you may not define
             subcomponents such as "Valves" at this level.
             Documentation at /docs/config/server.html -->
        <Server port="8005" shutdown="SHUTDOWN">

          <!--APR library loader. Documentation at /docs/apr.html -->
          <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
          <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html -->
          <Listener className="org.apache.catalina.core.JasperListener" />
          <!-- Prevent memory leaks due to use of particular java/javax APIs-->
          <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
          <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html -->
          <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
          <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />

          <!-- Global JNDI resources
               Documentation at /docs/jndi-resources-howto.html -->
          <GlobalNamingResources>
            <!-- Editable user database that can also be used by
                 UserDatabaseRealm to authenticate users -->
            <Resource name="UserDatabase" auth="Container"
                      type="org.apache.catalina.UserDatabase"
                      description="User database that can be updated and saved"
                      factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
                      pathname="conf/tomcat-users.xml" />
          </GlobalNamingResources>

          <!-- A "Service" is a collection of one or more "Connectors" that share
               a single "Container" Note: A "Service" is not itself a "Container",
               so you may not define subcomponents such as "Valves" at this level.
               Documentation at /docs/config/service.html -->
          <Service name="Catalina">

            <!--The connectors can use a shared executor, you can define one or more named thread pools-->
            <!--
            <Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
                      maxThreads="150" minSpareThreads="4"/>
            -->

            <!-- A "Connector" represents an endpoint by which requests are received
                 and responses are returned. Documentation at :
                 Java HTTP Connector: /docs/config/http.html (blocking & non-blocking)
                 Java AJP Connector: /docs/config/ajp.html
                 APR (HTTP/AJP) Connector: /docs/apr.html
                 Define a non-SSL HTTP/1.1 Connector on port 8080 -->
            <Connector port="8080" protocol="HTTP/1.1"
                       connectionTimeout="20000"
                       redirectPort="8443" />
            <!-- A "Connector" using the shared thread pool-->
            <!--
            <Connector executor="tomcatThreadPool"
                       port="8080" protocol="HTTP/1.1"
                       connectionTimeout="20000"
                       redirectPort="8443" />
            -->
            <!-- Define a SSL HTTP/1.1 Connector on port 8443
                 This connector uses the JSSE configuration, when using APR, the
                 connector should be using the OpenSSL style configuration
                 described in the APR documentation -->
            <!--
            <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
                       maxThreads="150" scheme="https" secure="true"
                       clientAuth="false" sslProtocol="TLS" />
            -->

            <!-- Define an AJP 1.3 Connector on port 8009 -->
            <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

            <!-- An Engine represents the entry point (within Catalina) that processes
                 every request. The Engine implementation for Tomcat stand alone
                 analyzes the HTTP headers included with the request, and passes them
                 on to the appropriate Host (virtual host).
                 Documentation at /docs/config/engine.html -->

            <!-- You should set jvmRoute to support load-balancing via AJP ie :
            <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1">
            -->
            <Engine name="Catalina" defaultHost="localhost">

              <!--For clustering, please take a look at documentation at:
                  /docs/cluster-howto.html  (simple how to)
                  /docs/config/cluster.html (reference documentation) -->
              <!--
              <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
              -->

              <!-- The request dumper valve dumps useful debugging information about
                   the request and response data received and sent by Tomcat.
                   Documentation at: /docs/config/valve.html -->
              <!--
              <Valve className="org.apache.catalina.valves.RequestDumperValve"/>
              -->

              <!-- This Realm uses the UserDatabase configured in the global JNDI
                   resources under the key "UserDatabase". Any edits
                   that are performed against this UserDatabase are immediately
                   available for use by the Realm. -->
              <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
                     resourceName="UserDatabase"/>

              <!-- Define the default virtual host
                   Note: XML Schema validation will not work with Xerces 2.2. -->
              <Host name="localhost" appBase="webapps"
                    unpackWARs="true" autoDeploy="true"
                    xmlValidation="false" xmlNamespaceAware="false">

                <!-- SingleSignOn valve, share authentication between web applications
                     Documentation at: /docs/config/valve.html -->
                <!--
                <Valve className="org.apache.catalina.authenticator.SingleSignOn" />
                -->

                <!-- Access log processes all example.
                     Documentation at: /docs/config/valve.html -->
                <!--
                <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
                       prefix="localhost_access_log." suffix=".txt" pattern="common" resolveHosts="false"/>
                -->

              </Host>
            </Engine>
          </Service>
        </Server>

    http://localhost:8080 is working; I can view the apps and configure them there... I'm quite new to IIS 7; I used to work with IIS 6. Thanks in advance, Yvan
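
    One observation on the worker map above: /|/*=worker1 hands every request, ASP and HTML included, to Tomcat, so IIS never serves anything itself and the exclusion rules have to carry the whole load. A narrower map is usually the first thing to try; a sketch (the application name is an example):

        # uriworkermap.properties -- forward only the servlet app and let IIS
        # serve everything else; "/myapp|/*" is shorthand for /myapp and /myapp/*.
        /myapp|/*=worker1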

  • How do I restart MySQL in monit when page contains specific text?

    - by Tyler
    How do I check if a web page contains the text "Error connecting to database" and, if the text exists in the page, restart the database? Here's what I have so far, but it isn't working:

        check host website.com with address website.com
            group database
            start program = "/usr/bin/service mysql start"
            stop program = "/usr/bin/service mysql stop"
            if url http://website.com
               content == "Error connecting to database"
            then restart
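
    Two things worth checking against the monit manual: the content test belongs inside a protocol http test, and it triggers the action when the test fails, so matching on a marker that a healthy page always contains reads more naturally than matching on the error text. A sketch under those assumptions ("Welcome" is a hypothetical healthy-page marker):

        check host website.com with address website.com
            group database
            start program = "/usr/bin/service mysql start"
            stop program  = "/usr/bin/service mysql stop"
            # Fails, and so restarts MySQL, when the marker is absent:
            if failed port 80 protocol http request "/"
                content = "Welcome"
            then restart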

  • fail2ban with Cloudflare

    - by tatersalad58
    I'm using fail2ban to block web vulnerability scanners. It works correctly when visiting the site if CloudFlare is bypassed, but a user can still access it when going through CloudFlare. I have mod_cloudflare installed. Is it possible to block users with iptables when using CloudFlare? Ubuntu Server 12.04 32-bit.

    access.log:

        112.64.89.231 - - [29/Aug/2012:19:16:01 -0500] "GET /muieblackcat HTTP/1.1" 404 469 "-" "-"

    jail.conf:

        [apache-probe]
        enabled = true
        port = http,https
        filter = apache-probe
        logpath = /var/log/apache2/access.log
        action = iptables-multiport[name=apache-probe, port="http,https", protocol=tcp]
        maxretry = 1
        bantime = 30 # Test

    apache-probe.conf:

        [Definition]
        failregex = ^<HOST>.*"GET \/muieblackcat HTTP\/1\.1".*
        ignoreregex =
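
    The underlying problem: with CloudFlare in front, the TCP connections reaching iptables come from CloudFlare's edge servers, not from the attacker address that mod_cloudflare writes into the log, so a local iptables ban never matches. Newer fail2ban releases ship a cloudflare action (check action.d/cloudflare.conf) that bans through the CloudFlare API instead; a sketch with placeholder credentials:

        [apache-probe]
        enabled  = true
        port     = http,https
        filter   = apache-probe
        logpath  = /var/log/apache2/access.log
        # Ban at CloudFlare's side rather than in local iptables:
        action   = cloudflare[cfuser="you@example.com", cftoken="YOUR_API_TOKEN"]
        maxretry = 1
        bantime  = 3600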

  • How do I get nginx to issue 301 redirects to an HTTPS location when SSL is handled by a load-balancer?

    - by growse
    I've noticed that there's functionality enabled in nginx by default, whereby a URL request without a trailing slash, for a directory which exists in the filesystem, automatically has a slash added through a 301 redirect. E.g. if the directory css exists within my root, then requesting http://example.com/css will result in a 301 to http://example.com/css/. However, I have another site where the SSL is offloaded by a load-balancer. In this case, when I request https://example.com/css, nginx issues a 301 redirect to http://example.com/css/, despite the fact that the HTTP_X_FORWARDED_PROTO header is set to https by the load balancer. Is this an nginx bug? Or a config setting I've missed somewhere?
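
    Two approaches, depending on the nginx version. Since 1.11.8 there is a directive that makes nginx emit a relative Location header ("Location: /css/"), which sidesteps the scheme question entirely; on older builds the usual route is reconstructing the redirect yourself from the X-Forwarded-Proto header. A sketch of the newer directive:

        server {
            listen 80;
            server_name example.com;

            # Emit "Location: /css/" instead of "Location: http://example.com/css/",
            # so the browser keeps whatever scheme it originally used (nginx >= 1.11.8):
            absolute_redirect off;
        }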

  • Name resolution not working with IPv6 on CentOS

    - by jolivier
    I just installed CentOS 6.3 on a server to be installed in a data center, but cannot get name resolution / curl to work. I know this is because it is trying to use IPv6, since ping google.com works, curl -4 google.com works, but not curl google.com. I removed the IPv6 address from the interface, and it does not change anything. This is very problematic, since most system tools like yum currently fail at name resolution. Browsers like Firefox work, because they might be using another tool for name resolution than the one used by curl. I managed to fix this on workstations by completely disabling IPv6, following tutorials like this one, or by hardcoding name resolution in /etc/hosts. But since I am here configuring a server which will later be installed in a remote data center, I would like not to mess up, understand what is going on, and fix it properly. Besides, I will face the same issue with more servers to come, so I would really appreciate your help in understanding this problem and how to solve it. I would be happy to provide more information if needed to help understand what is going on. The current network configuration is a small enterprise network, with a DNS server (let's call it A) configured once, a long time ago. dig google.com and dig -4 google.com are both refused by the A DNS server. But this is also true for my workstation, on which curl is working (and yes, they both use the same A DNS server). Indeed, this faulty server and my workstation have multiple nameservers in /etc/resolv.conf, and the second one is working fine for both of them, so if I remove A from my resolv.conf everything works fine! Regards, Olivier
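
    One classic cause of exactly this symptom (ping and curl -4 fine, plain curl failing) is a DNS server or firewall that mishandles glibc's parallel A and AAAA lookups over a single socket, which would also fit a nameserver "configured once a long time ago". The glibc resolver has a workaround switch; a sketch (the nameserver address is a placeholder):

        # /etc/resolv.conf
        options single-request-reopen
        nameserver 192.168.1.1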

  • Nameservers Won't Work

    - by user39110
    Hi, I have a dedicated server and I use vendabilisim.com for nameservers. I have two nameservers, ns1.vendabilisim.com (213.128.64.91) and ns2.vendabilisim.com (213.128.64.92). The IP addresses are working, but the nameservers don't. Please help.
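
    With this little to go on, the first diagnostic is whether the delegation and glue records resolve at all; a couple of dig probes, run from outside the network:

        # What the parent zone delegates for the domain:
        dig +short NS vendabilisim.com

        # Ask each nameserver directly, bypassing caches:
        dig @213.128.64.91 vendabilisim.com A +short
        dig @213.128.64.92 vendabilisim.com A +short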

  • Copying email with qmail and Plesk

    - by Greg
    I need to keep a copy of all outgoing and incoming email (for a single domain, if possible) using qmail or Plesk. I can't recompile qmail, so qmailtap is out of the question, as is setting QUEUE_EXTRA in extra.h. I'm pretty sure it should be possible with Plesk's mailmng utility, aka Mail Handlers, but I'm having trouble getting them to work. I've registered 2 hooks:

    Incoming hook:

        ./mailmng --add-handler --handler-name=incoming --recipient-domain=example.com --executable=/xxx/incoming.sh --context=/xxx/incoming/ --hook=before-local

    incoming.sh:

        #!/bin/bash
        # The email is passed on stdin - grab it to a variable
        e=`cat -`
        # $1 = context (/xxx/incoming)
        # $3 = recipient (test@example.com)
        # Create /xxx/incoming/test@example.com
        mkdir -p $1$3
        # Save the email to /xxx/incoming/test@example.com/0123456789.txt
        echo "$e" > $1$3/`date +%s%N`.txt
        # Echo PASS to stderr
        echo 'PASS' >&2
        # Echo the email to stdout
        echo "$e"

    Outgoing hook:

        ./mailmng --add-handler --handler-name=outgoing --sender-domain=holidaysplease.com --executable=/xxx/outgoing.sh --context=/xxx/outgoing/ --hook=before-remote

    The outgoing.sh file is the same as incoming.sh, except that $3 (recipient) is replaced with $2 (sender). The incoming hook does work, but saves 2 copies of each email: one before and one after SpamAssassin has run. The outgoing hook doesn't seem to get called at all. So finally, my questions are: How can I make the incoming hook save only a single copy (preferably after SpamAssassin has run)? How can I get the outgoing hook to work?

  • IIS6 host multiple websites under same sub-domain (or something similar)

    - by user28502
    I'm trying to figure out a structure for a hosted application that I'm working on. I've got a domain, let's call it app.company.com (a sub-domain of company.com, of course), that is set up to redirect to my IIS 6 web server. I would like to set up one website in IIS for each client that will use this application, and have the URL schema be like this:

        app.company.com/clientA -- would point to the ClientA website in IIS
        app.company.com/clientB -- would point to the ClientB website in IIS

    Do you guys have any pointers or best practices for my scenario?

  • OSX Menu bar doesn't appear until opening an application

    - by gms8994
    When I boot my MBP, the menu bar doesn't appear. When I open Mail.app or Safari, the menu bar appears. I've searched for a bit, and nothing seems to talk about this. Is there a way to fix this?

    UPDATE: From the Console logs:

        3/29/12 7:05:10.037 AM com.apple.launchctl.Aqua: load: option requires an argument -- D
        3/29/12 7:05:10.037 AM com.apple.launchctl.Aqua: usage: launchctl load [-wF] [-D <user|local|network|system|all>] paths...
        3/29/12 7:05:10.100 AM com.apple.launchd.peruser.501: (com.apple.launchctl.Aqua[153]) Exited with code: 1

  • Nginx access log shows authenticated user "admin"

    - by bearcat
    I came across a line in my nginx access log:

        218.201.121.99 - admin [12/Dec/2012:18:33:18 +0800] "GET /manager/html HTTP/1.1" 444 0 "-" "-"

    Let me stress that there is only 1 record with this IP. Notice the authenticated user admin. After some googling, I was only able to find out that this is the authenticated user (http://wiki.nginx.org/HttpCoreModule#.24remote_user), which is authenticated by the Auth Basic Module (http://wiki.nginx.org/HttpAuthBasicModule). However, nowhere in my site (configuration) do I use HTTP basic authentication. What is going on? How did it get there? Was the user authenticated?

  • What resources are best for staying current about information security?

    - by dr.pooter
    What types of sites do you visit, on a regular basis, to stay current on information security issues? Some examples from my list include:

        http://isc.sans.org/
        http://www.kaspersky.com/viruswatch3
        http://www.schneier.com/blog/
        http://blog.fireeye.com/research/

    As well as following the security heavyweights on Twitter. I'm curious to hear what resources you recommend for daily monitoring; anything specific to particular operating systems or other software. Are mailing lists still considered valuable? My goal is to trim the cruft of all the things I'm currently subscribed to and focus on the essentials.
