Search Results

Search found 25422 results on 1017 pages for 'url stream handler'.

Page 222/1017 | < Previous Page | 218 219 220 221 222 223 224 225 226 227 228 229  | Next Page >

  • Need help making this code more efficient

    - by Rendicahya
    I always use this method to easily read the content of a file. Is it efficient enough? Is 1024 good for the buffer size? public static String read(File file) { FileInputStream stream = null; StringBuilder str = new StringBuilder(); try { stream = new FileInputStream(file); } catch (FileNotFoundException e) { } FileChannel channel = stream.getChannel(); ByteBuffer buffer = ByteBuffer.allocate(1024); try { while (channel.read(buffer) != -1) { buffer.flip(); while (buffer.hasRemaining()) { str.append((char) buffer.get()); } buffer.rewind(); } } catch (IOException e) { } finally { try { channel.close(); stream.close(); } catch (IOException e) { } } return str.toString(); }
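
    For comparison, a minimal sketch that lets a Reader handle the decoding instead of casting each byte to a char (the cast only round-trips single-byte encodings). It assumes Java 7+ and a UTF-8 file; the 8 KB buffer is a common default rather than a recommendation, and exceptions are propagated instead of swallowed.

        import java.io.BufferedReader;
        import java.io.File;
        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;

        public final class FileText {
            // Read the whole file as text, letting the Reader decode bytes to chars.
            public static String read(File file) throws IOException {
                StringBuilder str = new StringBuilder();
                char[] buffer = new char[8192];
                try (BufferedReader reader = Files.newBufferedReader(file.toPath(), StandardCharsets.UTF_8)) {
                    int n;
                    while ((n = reader.read(buffer)) != -1) {
                        str.append(buffer, 0, n);
                    }
                }
                return str.toString();
            }
        }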

    Read the article

  • How to port a real-time video service using FMS from PC to mobile phones (Symbian, Android, iPhone)?

    - by wamp
    Now I've set up the Flash application to work in two stages. The upload stage: uploading the stream from PC A's camera to FMS. The play stage: watching the real-time stream from PC B's browser. I want to make stage 2 work on mobile phones too, but currently it uses Flash (ActionScript) to connect and play the stream, which is not supported out of the box on those platforms. How can I port this kind of application to mobile phones?

    Read the article

  • Remove the hash after Ajax loading (I'm ajaxing wordpress 8-) )

    - by Alberto
    Hi everybody, I followed this great tutorial to "ajax" my blog: http://www.deluxeblogtips.com/2010/05/how-to-ajaxify-wordpress-theme.html. But it creates some problems, and I think the problem is the hash that the Ajax navigation creates. So, after the content is loaded, how can I remove the hash from the URL? I copy my code here: jQuery(document).ready(function($) { var $mainContent = $("#content"), siteUrl = "http://" + top.location.host.toString(), url = ''; $(document).delegate("a[href^='"+siteUrl+"']:not([href*=/wp-admin/]):not([href*=/wp-login.php]):not([href$=/feed/]):not([href*=/go.php]):not(.comment-reply-link)", "click", function() { location.hash = this.pathname; $('html, body').animate({scrollTop:0}, 'fast'); return false; }); $("#searchform").submit(function(e) { location.hash = '?s=' + $("#search").val(); e.preventDefault(); }); $(window).bind('hashchange', function(){ url = window.location.hash.substring(1); if (!url) { return; } url = url + " #inside"; $mainContent.html('<div id="loader">Caricamento in corso...</div>').load(url, function() { //$mainContent.animate({opacity: "1"}); scriptss(); }); }); $(window).trigger('hashchange'); }); Thanks very much, everyone!
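
    One possible approach, assuming the target browsers support the HTML5 History API: keep the hashchange-driven loading from the tutorial, but replace the #/path fragment with the real path once the content has loaded. This is only a sketch built around the code above ($mainContent, scriptss() and the #inside selector come from it), and note that dropping the hash also drops the hash-based back/forward navigation.

        $(window).bind('hashchange', function () {
            var path = window.location.hash.substring(1);
            if (!path) { return; }
            $mainContent.html('<div id="loader">Caricamento in corso...</div>').load(path + " #inside", function () {
                scriptss();
                // Assumption: the browser supports history.replaceState (HTML5 History API).
                // Swap the #/path entry for the plain path so no hash remains in the URL.
                if (window.history && window.history.replaceState) {
                    window.history.replaceState(null, document.title, path);
                }
            });
        });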

    Read the article

  • Unzip a MemoryStream (containing the zip file) and get the files

    - by user1621791
    I have a MemoryStream that contains a zip file as a byte[]. Is there any way I can unzip this memory stream without writing the file to disk? In general I use ICSharpCode.SharpZipLib.Zip.FastZip to unzip a file, but is there any way to unzip a memory stream and store the files in another MemoryStream, or as byte[] arrays matching the files/folders present in the zip? Could I use the memory-mapped files feature in this scenario?
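
    FastZip works against the file system, but the same library also provides ZipInputStream, which reads from any Stream. A minimal sketch (no disk access, and no need for memory-mapped files since everything stays in memory) that expands the archive into a dictionary of byte arrays keyed by entry name:

        using System.Collections.Generic;
        using System.IO;
        using ICSharpCode.SharpZipLib.Zip;

        static Dictionary<string, byte[]> Unzip(byte[] zipBytes)
        {
            var files = new Dictionary<string, byte[]>();
            using (var zip = new ZipInputStream(new MemoryStream(zipBytes)))
            {
                ZipEntry entry;
                var buffer = new byte[4096];
                while ((entry = zip.GetNextEntry()) != null)
                {
                    if (!entry.IsFile) continue; // directory entries carry no data
                    using (var output = new MemoryStream())
                    {
                        int read;
                        while ((read = zip.Read(buffer, 0, buffer.Length)) > 0)
                            output.Write(buffer, 0, read);
                        files[entry.Name] = output.ToArray();
                    }
                }
            }
            return files;
        }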

    Read the article

  • WebView in TabHost

    - by user1905845
    I am new to Android. I am using a WebView inside a TabHost. In my app, the first activity has a Facebook button; when it is clicked, the second activity opens inside the tabs and connects to Facebook, which works fine. In the second activity I am using a WebView. This is the code: public void onCreate(Bundle savedInstanceState){ super.onCreate(savedInstanceState); setContentView(R.layout.facebook); wvFacebook=(WebView)findViewById(R.id.webview); btnBack=(Button)findViewById(R.id.back); txtBarname=(TextView)findViewById(R.id.barnameheader); if(check==1){ strFacebook=MyBarsActivity.strFacebook; Log.e("Facebook url",strFacebook); strBarName=MyBarsActivity.strBarName; } wvFacebook.setWebViewClient(new MyWebViewClient()); wvFacebook.loadUrl(strFacebook); } private class MyWebViewClient extends WebViewClient { public boolean shouldOverrideUrlLoading(WebView view, String url) { view.getSettings().setJavaScriptEnabled(true); view.loadUrl(url); return true; } @Override public void onPageStarted(WebView view, String url, Bitmap favicon) { super.onPageStarted(view, url, favicon); // progressDialog.show(getParent(), "In progress", "Loading, please wait..."); } public void onPageFinished(WebView view, String url) { super.onPageFinished(view, url); } My problem is that when I click the connect button, the Facebook page is displayed fine inside the tabs, but when I tap the email or username and password fields nothing responds. Why is it not responding? Please help me with this. Thanks in advance.
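
    One thing worth checking (a guess, not a confirmed diagnosis): in the code above, JavaScript is only turned on inside shouldOverrideUrlLoading, so the very first page that loadUrl(strFacebook) fetches runs with JavaScript disabled, and Facebook's login form generally needs it. A sketch of enabling it on the WebView's settings in onCreate, before the first loadUrl call:

        wvFacebook = (WebView) findViewById(R.id.webview);
        WebSettings settings = wvFacebook.getSettings();
        settings.setJavaScriptEnabled(true);   // enable before the first loadUrl()
        settings.setDomStorageEnabled(true);   // assumption: the login page uses DOM storage
        wvFacebook.setWebViewClient(new MyWebViewClient());
        wvFacebook.loadUrl(strFacebook);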

    Read the article

  • Count of Distinct Int32 Values in .NET

    - by Eric J.
    I am receiving a stream of unordered Int32 values and need to track the count of distinct values that I receive. My thought is to add the Int32 values into a HashSet<Int32>; duplicate entries will simply not be added, per the behavior of HashSet. Do I understand correctly that set membership is based on GetHashCode() and that the hash code of an Int32 is the number itself? Is there an approach that is either more CPU- or more memory-efficient? UPDATE: The data stream is rather large. Simply using LINQ to iterate the stream to get the distinct count is not what I'm after, since that would involve iterating the stream a second time.
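
    A single-pass sketch of the HashSet approach (incomingValues stands in for whatever yields the stream): HashSet<int>.Add returns false when the value is already present, so the distinct count can be maintained as the data arrives, with no LINQ and no second pass over the stream.

        var seen = new HashSet<int>();
        long distinctCount = 0;

        foreach (int value in incomingValues)
        {
            if (seen.Add(value))   // false when the value was already in the set
            {
                distinctCount++;
            }
        }
        // seen.Count holds the same total; the counter just makes the intent explicit.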

    Read the article

  • Creating a JSONP Formatter for ASP.NET Web API

    - by Rick Strahl
    Out of the box, ASP.NET Web API does not include a JSONP formatter, but it's actually very easy to create a custom formatter that implements this functionality. JSONP is one way to allow browser-based JavaScript client applications to bypass cross-site scripting limitations and retrieve data from a Web server other than the one that served the page. AJAX in Web applications uses the XmlHttp object, which by default doesn't allow access to remote domains. There are a number of ways around this limitation; <script> tag loading with JSONP is one of the easiest and semi-official ways to do it. JSONP works by taking JSON data and wrapping it in a function call that is executed when the JSONP data is returned. If you use a tool like jQuery it's extremely easy to access JSONP content. Imagine that you have a URL like this: http://RemoteDomain/aspnetWebApi/albums which on an HTTP GET serves some data - in this case an array of record albums. This URL is always directly accessible from an AJAX request if the URL is on the same domain as the parent request. However, if that URL lives on a separate server it won't be easily accessible to an AJAX request. Now, if the server can serve up JSONP, this data can be accessed cross-domain from a browser client. Using jQuery it's really easy to retrieve the same data with JSONP: function getAlbums() { $.getJSON("http://remotedomain/aspnetWebApi/albums?callback=?",null, function (albums) { alert(albums.length); }); } The resulting callback works the same as if the call had been made to a local server: when the data is returned, jQuery deserializes it and feeds it into the callback method. Here the array is received and I simply echo back the number of items returned. From here your app is ready to use the data as needed. This all works fine - as long as the server can serve the data with JSONP. What does JSONP look like? JSONP is a pretty simple 'protocol'. All it does is wrap a JSON response with a JavaScript function call. The above result from the JSONP call looks like this: jQuery17103401925975181569_1333408916499( [{"Id":"34043957","AlbumName":"Dirty Deeds Done Dirt Cheap",…},{…}] ) The way JSONP works is that the client (jQuery in this case) sends off the request, receives the response and evals it. The eval basically executes the function and deserializes the JSON inside of the function. It's actually a little more complex for the framework that does this, but that's the gist of what happens: JSONP works by executing the code that gets returned from the JSONP call. JSONP and ASP.NET Web API: As mentioned previously, JSONP support is not natively in the box with ASP.NET Web API, but it's pretty easy to create and plug in a custom formatter that provides this functionality. The following code is based on Christian Weyer's example but has been updated to the latest Web API CodePlex bits, which changes the implementation a bit due to the way dependent objects are exposed differently in the latest builds. 
Here's the code:  using System; using System.IO; using System.Net; using System.Net.Http.Formatting; using System.Net.Http.Headers; using System.Threading.Tasks; using System.Web; using System.Net.Http; namespace Westwind.Web.WebApi { /// <summary> /// Handles JsonP requests when requests are fired with /// text/javascript or application/json and contain /// a callback= (configurable) query string parameter /// /// Based on Christian Weyers implementation /// https://github.com/thinktecture/Thinktecture.Web.Http/blob/master/Thinktecture.Web.Http/Formatters/JsonpFormatter.cs /// </summary> public class JsonpFormatter : JsonMediaTypeFormatter { public JsonpFormatter() { SupportedMediaTypes.Add(new MediaTypeHeaderValue("application/json")); SupportedMediaTypes.Add(new MediaTypeHeaderValue("text/javascript")); //MediaTypeMappings.Add(new UriPathExtensionMapping("jsonp", "application/json")); JsonpParameterName = "callback"; } /// <summary> /// Name of the query string parameter to look for /// the jsonp function name /// </summary> public string JsonpParameterName {get; set; } /// <summary> /// Captured name of the Jsonp function that the JSON call /// is wrapped in. Set in GetPerRequestFormatter Instance /// </summary> private string JsonpCallbackFunction; public override bool CanWriteType(Type type) { return true; } /// <summary> /// Override this method to capture the Request object /// and look for the query string parameter and /// create a new instance of this formatter. /// /// This is the only place in a formatter where the /// Request object is available. /// </summary> /// <param name="type"></param> /// <param name="request"></param> /// <param name="mediaType"></param> /// <returns></returns> public override MediaTypeFormatter GetPerRequestFormatterInstance(Type type, HttpRequestMessage request, MediaTypeHeaderValue mediaType) { var formatter = new JsonpFormatter() { JsonpCallbackFunction = GetJsonCallbackFunction(request) }; return formatter; } /// <summary> /// Override to wrap existing JSON result with the /// JSONP function call /// </summary> /// <param name="type"></param> /// <param name="value"></param> /// <param name="stream"></param> /// <param name="contentHeaders"></param> /// <param name="transportContext"></param> /// <returns></returns> public override Task WriteToStreamAsync(Type type, object value, Stream stream, HttpContentHeaders contentHeaders, TransportContext transportContext) { if (!string.IsNullOrEmpty(JsonpCallbackFunction)) { return Task.Factory.StartNew(() => { var writer = new StreamWriter(stream); writer.Write( JsonpCallbackFunction + "("); writer.Flush(); base.WriteToStreamAsync(type, value, stream, contentHeaders, transportContext).Wait(); writer.Write(")"); writer.Flush(); }); } else { return base.WriteToStreamAsync(type, value, stream, contentHeaders, transportContext); } } /// <summary> /// Retrieves the Jsonp Callback function /// from the query string /// </summary> /// <returns></returns> private string GetJsonCallbackFunction(HttpRequestMessage request) { if (request.Method != HttpMethod.Get) return null; var query = HttpUtility.ParseQueryString(request.RequestUri.Query); var queryVal = query[this.JsonpParameterName]; if (string.IsNullOrEmpty(queryVal)) return null; return queryVal; } } } Note again that this code will not work with the Beta bits of Web API - it works only with post beta bits from CodePlex and hopefully this will continue to work until RTM :-) This code is a bit different from Christians original code as the API has changed. 
The biggest change is that the Read/Write functions no longer receive a global context object that gives access to the Request and Response objects as the older bits did. Instead you now have to override the GetPerRequestFormatterInstance() method, which receives the Request as a parameter. You can capture the Request there, or use the request to pick up the values you need and store them on the formatter. Note that I also have to create and return a new instance of the formatter, since I'm storing request-specific state on the instance (whether the callback= query string is present). Other than that the code should be straightforward: it basically writes out the function pre- and post-amble and then defers to the base formatter to produce the JSON that the function call wraps. The code uses the async APIs to write this data out (seeing those all over the place will take some getting used to for me). Hooking up the JsonpFormatter: Once you've created a formatter, it has to be added to the request processing sequence by adding it to the formatter collection. Web API is configured via the static GlobalConfiguration object.  protected void Application_Start(object sender, EventArgs e) { // Verb Routing RouteTable.Routes.MapHttpRoute( name: "AlbumsVerbs", routeTemplate: "albums/{title}", defaults: new { title = RouteParameter.Optional, controller = "AlbumApi" } ); GlobalConfiguration .Configuration .Formatters .Insert(0, new Westwind.Web.WebApi.JsonpFormatter()); }   That's all it takes. Note that I added the formatter at the top of the list of formatters rather than at the end; this ordering is required. The JSONP formatter needs to fire before any other JSON formatter, since it relies on the JSON formatter to encode the actual JSON data. If you reverse the order the JSONP output never shows up. So, in general, when adding new formatters be aware of the order in which they are added. Resources: JsonpFormatter Code on GitHub. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api.

    Read the article

  • Optimize php-fpm and varnish for a powerful server

    - by Jim
    My setup is: Intel® Core™ i7-2600 and RAM 16 GB DDR3 RAM varnish+nginx+php-fpm+apc for a not very heavy WordPress blog with W3 Total Cache and CDN My problem is that after 55 hits per second according to blitz.io varnish starts giving out timeouts. CPU usage at this time is hardly 1%. Free memory at all time remains 10GB+. I tried benchmarking php-fpm directly with result of 150hits/s without any timeouts. But after that the CPU usage goes 100% and it stops responding. Can you help me optimize it to handle more? As i understand nginx has nothing to do over here so i dont put its config. php-fpm config listen = /tmp/php5-fpm.sock listen.allowed_clients = 127.0.0.1 user = nginx group = nginx pm = dynamic pm.max_children = 150 pm.start_servers = 7 pm.min_spare_servers = 2 pm.max_spare_servers = 15 pm.max_requests = 500 slowlog = /var/log/php-fpm/www-slow.log php_admin_value[error_log] = /var/log/php-fpm/www-error.log php_admin_flag[log_errors] = on apc extension = apc.so apc.enabled=1 apc.shm_size=512MB apc.num_files_hint=0 apc.user_entries_hint=0 apc.ttl=7200 apc.use_request_time=1 apc.user_ttl=7200 apc.gc_ttl=3600 apc.cache_by_default=1 apc.filters apc.mmap_file_mask=/tmp/apc.XXXXXX apc.file_update_protection=2 apc.enable_cli=0 apc.max_file_size=1M apc.stat=1 apc.stat_ctime=0 apc.canonicalize=0 apc.write_lock=1 apc.report_autofilter=0 apc.rfc1867=0 apc.rfc1867_prefix =upload_ apc.rfc1867_name=APC_UPLOAD_PROGRESS apc.rfc1867_freq=0 apc.rfc1867_ttl=3600 apc.include_once_override=0 apc.lazy_classes=0 apc.lazy_functions=0 apc.coredump_unmap=0 apc.file_md5=0 apc.preload_path Varnish VCL backend default { .host = "127.0.0.1"; .port = "8080"; .connect_timeout = 6s; .first_byte_timeout = 6s; .between_bytes_timeout = 60s; } acl purgehosts { "localhost"; "127.0.0.1"; } # Called after a document has been successfully retrieved from the backend. sub vcl_fetch { # Uncomment to make the default cache "time to live" is 5 minutes, handy # but it may cache stale pages unless purged. (TODO) # By default Varnish will use the headers sent to it by Apache (the backend server) # to figure out the correct TTL. # WP Super Cache sends a TTL of 3 seconds, set in wp-content/cache/.htaccess set beresp.ttl = 24h; # Strip cookies for static files and set a long cache expiry time. 
if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") { unset beresp.http.set-cookie; set beresp.ttl = 24h; } # If WordPress cookies found then page is not cacheable if (req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)") { # set beresp.cacheable = false;#versions less than 3 #beresp.ttl>0 is cacheable so 0 will not be cached set beresp.ttl = 0s; } else { #set beresp.cacheable = true; set beresp.ttl=24h;#cache for 24hrs } # Varnish determined the object was not cacheable #if ttl is not > 0 seconds then it is cachebale if (!beresp.ttl > 0s) { # set beresp.http.X-Cacheable = "NO:Not Cacheable"; } else if ( req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)" ) { # You don't wish to cache content for logged in users set beresp.http.X-Cacheable = "NO:Got Session"; return(hit_for_pass); #previously just pass but changed in v3+ } else if ( beresp.http.Cache-Control ~ "private") { # You are respecting the Cache-Control=private header from the backend set beresp.http.X-Cacheable = "NO:Cache-Control=private"; return(hit_for_pass); } else if ( beresp.ttl < 1s ) { # You are extending the lifetime of the object artificially set beresp.ttl = 300s; set beresp.grace = 300s; set beresp.http.X-Cacheable = "YES:Forced"; } else { # Varnish determined the object was cacheable set beresp.http.X-Cacheable = "YES"; if (beresp.status == 404 || beresp.status >= 500) { set beresp.ttl = 0s; } # Deliver the content return(deliver); } sub vcl_hash { # Each cached page has to be identified by a key that unlocks it. # Add the browser cookie only if a WordPress cookie found. if ( req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)" ) { #set req.hash += req.http.Cookie; hash_data(req.http.Cookie); } } # vcl_recv is called whenever a request is received sub vcl_recv { # remove ?ver=xxxxx strings from urls so css and js files are cached. # Watch out when upgrading WordPress, need to restart Varnish or flush cache. set req.url = regsub(req.url, "\?ver=.*$", ""); # Remove "replytocom" from requests to make caching better. set req.url = regsub(req.url, "\?replytocom=.*$", ""); remove req.http.X-Forwarded-For; set req.http.X-Forwarded-For = client.ip; # Exclude this site because it breaks if cached if ( req.http.host == "sr.ituts.gr" ) { return( pass ); } # Serve objects up to 2 minutes past their expiry if the backend is slow to respond. set req.grace = 120s; # Strip cookies for static files: if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") { unset req.http.Cookie; return(lookup); } # Remove has_js and Google Analytics __* cookies. set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|has_js)=[^;]*", ""); # Remove a ";" prefix, if present. set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", ""); # Remove empty cookies. if (req.http.Cookie ~ "^\s*$") { unset req.http.Cookie; } if (req.request == "PURGE") { if (!client.ip ~ purgehosts) { error 405 "Not allowed."; } #previous version ban() was purge() ban("req.url ~ " + req.url + " && req.http.host == " + req.http.host); error 200 "Purged."; } # Pass anything other than GET and HEAD directly. if (req.request != "GET" && req.request != "HEAD") { return( pass ); } /* We only deal with GET and HEAD by default */ # remove cookies for comments cookie to make caching better. 
set req.http.cookie = regsub(req.http.cookie, "1231111111111111122222222333333=[^;]+(; )?", ""); # never cache the admin pages, or the server-status page, or your feed? you may want to..i don't if (req.request == "GET" && (req.url ~ "(wp-admin|bb-admin|server-status|feed)")) { return(pipe); } # don't cache authenticated sessions if (req.http.Cookie && req.http.Cookie ~ "(wordpress_|PHPSESSID)") { return(lookup); } # don't cache ajax requests if(req.http.X-Requested-With == "XMLHttpRequest" || req.url ~ "nocache" || req.url ~ "(control.php|wp-comments-post.php|wp-login.php|bb-login.php|bb-reset-password.php|register.php)") { return (pass); } return( lookup ); } Varnish Daemon options DAEMON_OPTS="-a :80 \ -T 127.0.0.1:6082 \ -f /etc/varnish/ituts.vcl \ -u varnish -g varnish \ -S /etc/varnish/secret \ -p thread_pool_add_delay=2 \ -p thread_pools=8 \ -p thread_pool_min=100 \ -p thread_pool_max=1000 \ -p session_linger=50 \ -p session_max=150000 \ -p sess_workspace=262144 \ -s malloc,5G" Im not sure where to start, should i for start optimize php-fpm and then go to varnish or php-fpm is at its max right now so i should start looking for the problem in varnish?

    Read the article

  • Spring-security not processing pre/post annotations

    - by wuntee
    Trying to get pre/post annotations working with a web application, but for some reason nothing is happening with spring-security. Can anyone see what im missing? web.xml contextConfigLocation /WEB-INF/rvaContext-business.xml /WEB-INF/rvaContext-security.xml <context-param> <param-name>log4jConfigLocation</param-name> <param-value>/WEB-INF/log4j.properties</param-value> </context-param> <!-- Spring security filter --> <filter> <filter-name>springSecurityFilterChain</filter-name> <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class> </filter> <filter-mapping> <filter-name>springSecurityFilterChain</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> <listener> <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class> </listener> <!-- - Publishes events for session creation and destruction through the application - context. Optional unless concurrent session control is being used. --> <listener> <listener-class>org.springframework.security.web.session.HttpSessionEventPublisher</listener-class> </listener> <listener> <listener-class>org.springframework.web.util.Log4jConfigListener</listener-class> </listener> <servlet> <servlet-name>rva</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>rva</servlet-name> <url-pattern>/rva/*</url-pattern> </servlet-mapping> rvaContext-secuity.xml: <?xml version="1.0" encoding="UTF-8"?> <beans:beans xmlns="http://www.springframework.org/schema/security" xmlns:beans="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/security http://www.springframework.org/schema/security/spring-security-3.0.xsd"> <global-method-security pre-post-annotations="enabled"/> <http use-expressions="true"> <form-login /> <logout /> <remember-me /> <!-- Uncomment to limit the number of sessions a user can have --> <session-management invalid-session-url="/timeout.jsp"> <concurrency-control max-sessions="1" error-if-maximum-exceeded="true" /> </session-management> <form-login login-page="rva/login" /> </http> ... 
LoginController class: @Controller @RequestMapping("/login") public class LoginController { @RequestMapping(method = RequestMethod.GET) public String login(ModelMap map){ map.addAttribute("title", "Login: AD Credentials"); return("login"); } @RequestMapping("/secure") @PreAuthorize("hasRole('ROLE_USER')") public String secure(ModelMap map){ return("secure"); } } In the logs, there is nothing even related to spring-security: logs: INFO: Initializing Spring FrameworkServlet 'rva' INFO [org.springframework.web.servlet.DispatcherServlet] - FrameworkServlet 'rva': initialization started INFO [org.springframework.web.context.support.XmlWebApplicationContext] - Refreshing WebApplicationContext for namespace 'rva-servlet': startup date [Fri Mar 26 10:28:51 MDT 2010]; parent: Root WebApplicationContext INFO [org.springframework.beans.factory.xml.XmlBeanDefinitionReader] - Loading XML bean definitions from ServletContext resource [/WEB-INF/rva-servlet.xml] INFO [org.springframework.beans.factory.support.DefaultListableBeanFactory] - Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@a2fc31: defining beans [loginController,org.springframework.context.annotation.internalConfigurationAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor,org.springframework.context.annotation.internalCommonAnnotationProcessor,freemarkerConfig,viewResolver]; parent: org.springframework.beans.factory.support.DefaultListableBeanFactory@cc74e7 INFO [org.springframework.web.servlet.view.freemarker.FreeMarkerConfigurer] - ClassTemplateLoader for Spring macros added to FreeMarker configuration INFO [org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping] - Mapped URL path [/login/secure] onto handler [com.cable.comcast.neto.nse.rva.controller.LoginController@79b32a] INFO [org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping] - Mapped URL path [/login/secure.*] onto handler [com.cable.comcast.neto.nse.rva.controller.LoginController@79b32a] INFO [org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping] - Mapped URL path [/login/secure/] onto handler [com.cable.comcast.neto.nse.rva.controller.LoginController@79b32a] INFO [org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping] - Mapped URL path [/login] onto handler [com.cable.comcast.neto.nse.rva.controller.LoginController@79b32a] INFO [org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping] - Mapped URL path [/login.*] onto handler [com.cable.comcast.neto.nse.rva.controller.LoginController@79b32a] INFO [org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping] - Mapped URL path [/login/] onto handler [com.cable.comcast.neto.nse.rva.controller.LoginController@79b32a] INFO [org.springframework.web.servlet.DispatcherServlet] - FrameworkServlet 'rva': initialization completed in 417 ms Mar 26, 2010 10:28:52 AM org.apache.coyote.http11.Http11Protocol start INFO: Starting Coyote HTTP/1.1 on http-8080 Mar 26, 2010 10:28:52 AM org.apache.jk.common.ChannelSocket init INFO: JK: ajp13 listening on /0.0.0.0:8009 Mar 26, 2010 10:28:52 AM org.apache.jk.server.JkMain start INFO: Jk running ID=0 time=0/31 config=null Mar 26, 2010 10:28:52 AM org.apache.catalina.startup.Catalina start INFO: Server startup in 1873 ms WARN [org.springframework.web.servlet.PageNotFound] - No mapping found for 
HTTP request with URI [/rva-web/] in DispatcherServlet with name 'rva'
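
    One common cause worth ruling out (an assumption, not a diagnosis of this exact setup): <global-method-security> only applies to beans defined in the same application context, and the startup log shows loginController being created in the 'rva-servlet' (DispatcherServlet) context, while rvaContext-security.xml is loaded into the root context. A minimal sketch of that check is to declare method security in the servlet context as well, e.g. in rva-servlet.xml (namespace prefix assumed):

        <!-- In rva-servlet.xml, alongside the controller definitions -->
        <security:global-method-security pre-post-annotations="enabled"/>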

    Read the article

  • Ivy and Snapshots (Nexus)

    - by Uberpuppy
    Hey folks, I'm using ant, ivy and nexus repo manager to build and store my artifacts. I managed to get everything working: dependency resolution and publishing. Until I hit a problem... (of course!). I was publishing to a 'release' repo in nexus, which is locked to 'disable redeploy' (even if you change the setting to 'allow redeploy' (really lame UI there imo). You can imagine how pissed off I was getting when my changes weren't updating through the repo before I realised that this was happening. Anyway, I now have to switch everything to use a 'Snapshot' repo in nexus. Problem is that this messes up my publish. I've tried a variety of things, including extensive googling, and haven't got anywhere whatsoever. The error I get is a bad PUT request, error code 400. Can someone who has got this working please give me a pointer on what I'm missing. Many thanks, Alastair fyi, here's my config: Note that I have removed any attempts at getting snapshots to work as I didn't know what was actually (potentially) useful and what was complete guff. This is therefore the working release-only setup. Also, please note that I've added the XXX-API ivy.xml for info only. I can't even get the xxx-common to publish (and that doesn't even have dependencies). Ant task: <target name="publish" depends="init-publish"> <property name="project.generated.ivy.file" value="${project.artifact.dir}/ivy.xml"/> <property name="project.pom.file" value="${project.artifact.dir}/${project.handle}.pom"/> <echo message="Artifact dir: ${project.artifact.dir}"/> <ivy:deliver deliverpattern="${project.generated.ivy.file}" organisation="${project.organisation}" module="${project.artifact}" status="integration" revision="${project.revision}" pubrevision="${project.revision}" /> <ivy:resolve /> <ivy:makepom ivyfile="${project.generated.ivy.file}" pomfile="${project.pom.file}"/> <ivy:publish resolver="${ivy.omnicache.publisher}" module="${project.artifact}" organisation="${project.organisation}" revision="${project.revision}" pubrevision="${project.revision}" pubdate="now" overwrite="true" publishivy="true" status="integration" artifactspattern="${project.artifact.dir}/[artifact]-[revision](-[classifier]).[ext]" /> </target> Couple of ivy files to give an idea of internal dependencies: XXX-Common project: <ivy-module version="2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://ant.apache.org/ivy/schemas/ivy.xsd"> <info organisation="com.myorg.xxx" module="xxx_common" status="integration" revision="1.0"> </info> <publications> <artifact name="xxx_common" type="jar" ext="jar"/> <artifact name="xxx_common" type="pom" ext="pom"/> </publications> <dependencies> </dependencies> </ivy-module> XXX-API project: <ivy-module version="2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://ant.apache.org/ivy/schemas/ivy.xsd"> <info organisation="com.myorg.xxx" module="xxx_api" status="integration" revision="1.0"> </info> <publications> <artifact name="xxx_api" type="jar" ext="jar"/> <artifact name="xxx_api" type="pom" ext="pom"/> </publications> <dependencies> <dependency org="com.myorg.xxx" name="xxx_common" rev="1.0" transitive="true" /> </dependencies> </ivy-module> IVY Settings.xml: <ivysettings> <properties file="${ivy.project.dir}/project.properties" /> <settings defaultResolver="chain" defaultConflictManager="all" /> <credentials host="${ivy.credentials.host}" realm="Sonatype Nexus Repository Manager" username="${ivy.credentials.username}" 
passwd="${ivy.credentials.passwd}" /> <caches> <cache name="ivy.cache" basedir="${ivy.cache.dir}" /> </caches> <resolvers> <ibiblio name="xxx_publisher" m2compatible="true" root="${ivy.xxx.publish.url}" /> <chain name="chain"> <url name="xxx"> <ivy pattern="${ivy.xxx.repo.url}/com/myorg/xxx/[module]/[revision]/ivy-[revision].xml" /> <artifact pattern="${ivy.xxx.repo.url}/com/myorg/xxx/[module]/[revision]/[artifact]-[revision].[ext]" /> </url> <ibiblio name="xxx" m2compatible="true" root="${ivy.xxx.repo.url}"/> <ibiblio name="public" m2compatible="true" root="${ivy.master.repo.url}" /> <url name="com.springsource.repository.bundles.release"> <ivy pattern="http://repository.springsource.com/ivy/bundles/release/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]" /> <artifact pattern="http://repository.springsource.com/ivy/bundles/release/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]" /> </url> <url name="com.springsource.repository.bundles.external"> <ivy pattern="http://repository.springsource.com/ivy/bundles/external/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]" /> <artifact pattern="http://repository.springsource.com/ivy/bundles/external/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]" /> </url> </chain> </resolvers> </ivysettings>

    Read the article

  • Problem loading images from the web

    - by Lynnooi
    hi, I am new in android and had developed an app which get images from the website and display it. I got it working in emulator but not in real phones. In some device, it will crash or take very long loading period. Can anyone please help me or guide me in improving it as i'm not sure whether the way i loads the images is correct or not. Here are the code i use to get the images from the web and display accordingly. if (xmlURL.length() != 0) { try { URL url = new URL(xmlURL); SAXParserFactory spf = SAXParserFactory.newInstance(); SAXParser sp = spf.newSAXParser(); /* Get the XMLReader of the SAXParser we created. */ XMLReader xr = sp.getXMLReader(); /* * Create a new ContentHandler and apply it to the * XML-Reader */ xr.setContentHandler(myExampleHandler); /* Parse the xml-data from our URL. */ xr.parse(new InputSource(url.openStream())); /* Parsing has finished. */ /* * Our ExampleHandler now provides the parsed data to * us. */ ParsedExampleDataSet parsedExampleDataSet = myExampleHandler.getParsedData(); } catch (Exception e) { } } if (s.equalsIgnoreCase("wallpapers")) { Context context = helloAndroid.this.getBaseContext(); for (int j = 0; j <= myExampleHandler.filenames.size() - 1; j++) { if (myExampleHandler.filenames.elementAt(j).toString() != null) { helloAndroid.this.ed = myExampleHandler.thumbs.elementAt(j) .toString(); if (helloAndroid.this.ed.length() != 0) { Drawable image = ImageOperations(context, helloAndroid.this.ed, "image.jpg"); file_info = myExampleHandler.filenames .elementAt(j).toString(); author = "\nby " + myExampleHandler.authors.elementAt(j) .toString(); switch (j + 1) { case 1: ImageView imgView1 = new ImageView(context); imgView1 = (ImageView) findViewById(R.id.image1); if (image.getIntrinsicHeight() > 0) { imgView1.setImageDrawable(image); } else imgView1 .setImageResource(R.drawable.empty_wallpaper); tv = (TextView) findViewById(R.id.filename1); tv.setText(file_info); tv = (TextView) findViewById(R.id.author1); tv.setText(author); imgView1 .setOnClickListener(new View.OnClickListener() { public void onClick(View view) { // Perform action on click Intent myIntent1 = new Intent( helloAndroid.this, galleryFile.class); Bundle b = new Bundle(); b.putString("fileID",myExampleHandler.fileid.elementAt(0).toString()); b.putString("page", "1"); b.putString("family", s); b.putString("fi",myExampleHandler.folder_id.elementAt(folder).toString()); b.putString("kw", keyword); myIntent1.putExtras(b); startActivityForResult( myIntent1, 0); } }); break; case 2: ImageView imgView2 = new ImageView(context); imgView2 = (ImageView) findViewById(R.id.image2); imgView2.setImageDrawable(image); tv = (TextView) findViewById(R.id.filename2); tv.setText(file_info); tv = (TextView) findViewById(R.id.author2); tv.setText(author); imgView2 .setOnClickListener(new View.OnClickListener() { public void onClick(View view) { // Perform action on click Intent myIntent1 = new Intent( helloAndroid.this, galleryFile.class); Bundle b = new Bundle(); b.putString("fileID",myExampleHandler.fileid.elementAt(1).toString()); b.putString("page", "1"); b.putString("family", s); b.putString("fi",myExampleHandler.folder_id.elementAt(folder).toString()); b.putString("kw", keyword); myIntent1.putExtras(b); startActivityForResult( myIntent1, 0); } }); break; case 3: //same code break; } } } } } private Drawable ImageOperations(Context ctx, String url, String saveFilename) { try { InputStream is = (InputStream) this.fetch(url); Drawable d = Drawable.createFromStream(is, "src"); return d; } catch 
(MalformedURLException e) { e.printStackTrace(); return null; } catch (IOException e) { e.printStackTrace(); return null; } } public Object fetch(String address) throws MalformedURLException, IOException { URL url = new URL(address); Object content = url.getContent(); return content; }
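
    A likely contributor to the crashes and long load times is that fetch() performs network I/O on the UI thread while the activity is being laid out. A hedged sketch of moving each download onto a background thread and posting the result back with a Handler (imageUrl and imgView stand in for the URLs and ImageViews used above; R.drawable.empty_wallpaper comes from the existing code):

        private final Handler uiHandler = new Handler();

        private void loadImageAsync(final String imageUrl, final ImageView imgView) {
            new Thread(new Runnable() {
                public void run() {
                    try {
                        // Network I/O happens off the UI thread.
                        InputStream is = new URL(imageUrl).openStream();
                        final Drawable image = Drawable.createFromStream(is, "src");
                        uiHandler.post(new Runnable() {
                            public void run() {
                                // Only the UI thread may touch the views.
                                if (image != null) {
                                    imgView.setImageDrawable(image);
                                } else {
                                    imgView.setImageResource(R.drawable.empty_wallpaper);
                                }
                            }
                        });
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            }).start();
        }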

    Read the article

  • My application is crashing. Please help me out.

    - by kiran kumar
    My Application get crashing ... its loading data of all the cities... and when i click its displaying my detailed view controller.... when iam getting back from my controller... and selecting another city my application get crashed.. Please help me out. To get idea i am pasting my code. #import "CityNameViewController.h" #import "Cities.h" #import "XMLParser.h" #import "PartyTemperature_AppDelegate.h" #import "CityEventViewController.h" @implementation CityNameViewController //@synthesize aCities; @synthesize appDelegate; @synthesize currentIndex; @synthesize aCities; /* // The designated initializer. Override if you create the controller programmatically and want to perform customization that is not appropriate for viewDidLoad. - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil { if ((self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil])) { // Custom initialization } return self; } */ // Implement viewDidLoad to do additional setup after loading the view, typically from a nib. - (void)viewDidLoad { [super viewDidLoad]; self.title=@"Cities"; appDelegate=(PartyTemperature_AppDelegate *)[[UIApplication sharedApplication]delegate]; } /* // Override to allow orientations other than the default portrait orientation. - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { // Return YES for supported orientations return (interfaceOrientation == UIInterfaceOrientationPortrait); } */ - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView { // Return the number of sections. return 1; } - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { // Return the number of rows in the section. return [appDelegate.cityListArray count]; } - (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath { return 95.0f; } - (void)tableView:(UITableView *)tableView willDisplayCell:(UITableViewCell *)cell forRowAtIndexPath:(NSIndexPath *)indexPath { cell.accessoryType=UITableViewCellAccessoryDisclosureIndicator; cell.textLabel.textColor = [[[UIColor alloc] initWithRed:0.2 green:0.2 blue:0.6 alpha:1] autorelease]; cell.detailTextLabel.textColor = [UIColor blackColor]; cell.detailTextLabel.font=[UIFont systemFontOfSize:10]; if (indexPath.row %2 == 1) { cell.backgroundColor = [[[UIColor alloc] initWithRed:0.87f green:0.87f blue:0.87f alpha:1.0f] autorelease]; } else { cell.backgroundColor = [[[UIColor alloc] initWithRed:0.97f green:0.97f blue:0.97f alpha:1.0f] autorelease]; } } // Customize the appearance of table view cells. 
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleSubtitle reuseIdentifier:CellIdentifier] autorelease]; cell.selectionStyle= UITableViewCellSelectionStyleBlue; // cell.accessoryType=UITableViewCellAccessoryDisclosureIndicator; cell.backgroundColor=[UIColor blueColor]; } // aCities=[appDelegate.cityListArray objectAtIndex:indexPath.row]; // cell.textLabel.text=aCities.city_Name; cell.textLabel.text=[[appDelegate.cityListArray objectAtIndex:indexPath.row]city_Name]; return cell; } - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath{ //http://compliantbox.com/party_temperature/citysearch.php?city=Amsterdam&latitude=52.366125&longitude=4.899171 NSString *url; aCities=[appDelegate.cityListArray objectAtIndex:indexPath.row]; if ([appDelegate.cityListArray count]>0){ url=@"http://compliantbox.com/party_temperature/citysearch.php?city="; url=[url stringByAppendingString:aCities.city_Name]; url=[url stringByAppendingString:@"&latitude=52.366125&longitude=4.899171"]; NSLog(@"url value is %@",url); [self parseCityName:[[NSURL alloc]initWithString:url]]; } } -(void)parseCityName:(NSURL *)url{ NSXMLParser *xmlParser=[[NSXMLParser alloc]initWithContentsOfURL:url]; XMLParser *parser=[[XMLParser alloc] initXMLParser]; [xmlParser setDelegate:parser]; BOOL success; success=[xmlParser parse]; if (success) { NSLog(@"Sucessfully parsed"); CityEventViewController *cityEventViewController=[[CityEventViewController alloc]initWithNibName:@"CityEventViewController" bundle:nil]; cityEventViewController.index=currentIndex; [self.navigationController pushViewController:cityEventViewController animated:YES]; [cityEventViewController release]; cityEventViewController=nil; } else { NSLog(@"Try it Idoit"); UIAlertView *alert=[[UIAlertView alloc] initWithTitle:@"Alert!" message:@"Event Not In Radius" delegate:self cancelButtonTitle:@"OK" otherButtonTitles:nil]; [alert show]; [alert release]; } } - (void)didReceiveMemoryWarning { // Releases the view if it doesn't have a superview. [super didReceiveMemoryWarning]; // Release any cached data, images, etc that aren't in use. } - (void)viewDidUnload { [super viewDidUnload]; // Release any retained subviews of the main view. // e.g. self.myOutlet = nil; } - (void)dealloc { [aCities release]; [super dealloc]; } @end And the error is * Terminating app due to uncaught exception 'NSRangeException', reason: ' -[NSMutableArray objectAtIndex:]: index 1 beyond bounds for empty array' ** Call stack at first throw:

    Read the article

  • yum not working on EC2 Red Hat instance: Cannot retrieve repository metadata

    - by adev3
    For some reason yum has stopped working in my Amazon EC2 instance, located in the EU West sector. There seems to be something wrong with the path of the repo metadata, is this correct? I would be very grateful for any help, as my experience in this field is somewhat limited. Thank you very much. cat /etc/redhat-release: Red Hat Enterprise Linux Server release 6.2 (Santiago) yum repolist: Loaded plugins: amazon-id, rhui-lb, security https://rhui2-cds01.eu-west-1.aws.ce.redhat.com/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401" Trying other mirror. https://rhui2-cds02.eu-west-1.aws.ce.redhat.com/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401" Trying other mirror. repo id repo name status rhui-eu-west-1-client-config-server-6 Red Hat Update Infrastructure 2.0 Client Configuration Server 6 0 rhui-eu-west-1-rhel-server-releases Red Hat Enterprise Linux Server 6 (RPMs) 0 rhui-eu-west-1-rhel-server-releases-optional Red Hat Enterprise Linux Server 6 Optional (RPMs) 0 repolist: 0 yum update: (I needed to remove the base URLs below because of ServerFault's restrictions for new users) Loaded plugins: amazon-id, rhui-lb, security [same as base url 1 above]/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401" Trying other mirror. [same as base url 2 above]/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401" Trying other mirror. Error: Cannot retrieve repository metadata (repomd.xml) for repository: rhui-eu-west-1-client-config-server-6. Please verify its path and try again

    Read the article

  • LAN->LAN IP translation (for TortoiseSVN + Artifacts + Buffalo router)

    - by Armchair Bronco
    Here's my scenario: I've got a VisualSVN server on my main dev box @ home. I'm also using Visual Studio 2010, TortoiseSVN, VisualSVN client (for source control), and Versioned 'Artifacts' (for bug tracking). (I had to modify the fake URL's below to use only one slash because as a new user, I can't post more than one real URL.) I've got my Buffalo AirStation WHR-HP-G300N router properly configured so my business partner can connect to the SVN server. I have port forwarding enabled for the internet-side IP address (like http:/99.888.77.66:443) which gets forwarded to an internal IP (like 192.168.11.6). This part is working great. The problem I'm having is with the integration piece between TortoiseSVN and my bug tracking system. I need to provide a bugtraq:url property, but I haven't been able to get relative paths to work. So I'm forced to use an absolute URL. On my end, I need to use the name of my server (for example: bugtraq:url = https:/my-server/svn/bla..), but this doesn't work for my partner. He needs to specify the IP address (for example: bugtraq:url = https:/999.888.77.66:443/svn/bla...) Is there a way to configure my router such that the IP address for this parameter gets re-routed/re-mapped to "https://my-server" if the request originates from the LAN itself? My router's software supports LAN-Internet and Internet-LAN, but I don't see LAN-LAN.

    Read the article

  • mod_mono 'Service Temporarily Unavailable' issue

    - by Charlie Somerville
    I've deployed an ASP.NET web application on a Linux (Debian) server running Apache 2.2 and mod_mono 1.9 It's working well, however Mono occasionally segfaults and uses the entire CPU which causes the website to stop working and display 'Service Temporarily Unavailable' Killing mono fixes it, but obviously this isn't a good solution. I tailed the system log after this happened and I saw the following error messages from the kernel: Apr 20 01:49:37 charliesomerville kernel: [1596436.204158] mono[17909]: segfault at b645f671 ip b645f671 sp b4ffb604 error 4<6>mono[19047]: segfault at b645f66e ip b645f66e sp b4bf7604 error 4<6>mono[18017]: segfault at b645f66e ip b645f66e sp b52fe604 error 4<6>mono[19668]: segfault at b645f5e6 ip b645f5e6 sp b48f4604 error 4<6>mono[22565]: segfault at b645f674 ip b645f674 sp b45f1604 error 4<6>mono[17700]: segfault at b645f661 ip b645f661 sp b51fd604 error 4<6>mono[19596]: segfault at b645f5e6 ip b645f5e6 sp b49f5604 error 4 Apr 20 01:49:37 charliesomerville kernel: [1596436.208172] mono[23219]: segfault at b645f66e ip b645f66e sp b44f0604 error 4 At the end of Apache's error.log are the following errors: [Tue Apr 20 03:10:23 2010] [error] (70014)End of file found: read_data failed [Tue Apr 20 03:10:23 2010] [error] Command stream corrupted, last command was 1 [Tue Apr 20 03:10:23 2010] [error] Command stream corrupted, last command was 1 [Tue Apr 20 03:10:23 2010] [error] Command stream corrupted, last command was 1 System.ArgumentNullException: null key Parameter name: key at System.Collections.Hashtable.get_Item (System.Object key) [0x00000] at System.Runtime.Serialization.SerializationCallbacks.GetSerializationCallbacks (System.Type t) [0x00000] at System.Runtime.Serialization.ObjectManager.RaiseOnDeserializingEvent (System.Object obj) [0x00000] at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadObjectContent (System.IO.BinaryReader reader, System.Runtime.Serialization.Formatters.Binary.TypeMetadata metadata, Int64 objectId, System.Object& objectInstance, System.Runtime.Serialization.SerializationInfo& info) [0x00000] at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadObjectInstance (System.IO.BinaryReader reader, Boolean isRuntimeObject, Boolean hasTypeInfo, System.Int64& objectId, System.Object& value, System.Runtime.Serialization.SerializationInfo& info) [0x00000] at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadObject (BinaryElement element, System.IO.BinaryReader reader, System.Int64& objectId, System.Object& value, System.Runtime.Serialization.SerializationInfo& info) [0x00000] at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadNextObject (System.IO.BinaryReader reader) [0x00000] at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadObjectGraph (System.IO.BinaryReader reader, Boolean readHeaders, System.Object& result, System.Runtime.Remoting.Messaging.Header[]& headers) [0x00000] at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.NoCheckDeserialize (System.IO.Stream serializationStream, System.Runtime.Remoting.Messaging.HeaderHandler handler) [0x00000] at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize (System.IO.Stream serializationStream) [0x00000] at System.Runtime.Remoting.Channels.CADSerializer.DeserializeObject (System.IO.MemoryStream mem) [0x00000] at System.Runtime.Remoting.RemotingServices.GetDomainProxy (System.AppDomain domain) [0x00000] at System.AppDomain.CreateDomain (System.String friendlyName, 
System.Security.Policy.Evidence securityInfo, System.AppDomainSetup info) [0x00000] at System.Web.Hosting.ApplicationHost.CreateApplicationHost (System.Type hostType, System.String virtualDir, System.String physicalDir) [0x00000] at Mono.WebServer.VPathToHost.CreateHost (Mono.WebServer.ApplicationServer server, Mono.WebServer.WebSource webSource) [0x00000] at Mono.WebServer.ApplicationServer.GetApplicationForPath (System.String vhost, Int32 port, System.String path, Boolean defaultToRoot) [0x00000] at (wrapper remoting-invoke-with-check) Mono.WebServer.ApplicationServer:GetApplicationForPath (string,int,string,bool) at Mono.WebServer.ModMonoWorker.GetOrCreateApplication (System.String vhost, Int32 port, System.String filepath, System.String virt) [0x00000] at Mono.WebServer.ModMonoWorker.InnerRun (System.Object state) [0x00000] at Mono.WebServer.ModMonoWorker.Run (System.Object state) [0x00000] [Tue Apr 20 03:10:26 2010] [error] (70014)End of file found: read_data failed [Tue Apr 20 03:10:26 2010] [error] Command stream corrupted, last command was -1 Along with the above errors, Apache's error.log is packed with hundreds (if not thousands) of the following error: Maximum number (20) of concurrent mod_mono requests to /tmp/mod_mono_dashboard_default_2.lock reached. Droping request. At the moment, I'm thinking there might be something wrong with configuration here (it's basically running on out-of-the-box config)

    Read the article

  • Motion - can't get streaming working from a webcam

    - by Emmanuel Brunet
    I'm trying to record a video stream from my Tenvis IP camera with motion 3.2.12 on Debian 7.5. I used the standard debian package with sudo apt-get install motion Assume my DNS IP cam is webcam, user : admin, password : password /etc/motion/motion.conf Bellow are my configuration file settings : netcam_url http://webcam/videostream.cgi netcam_userpass admin:password target_dir /media/videos/log/motion # The mini-http server listens to this port for requests (default: 0 = disabled) webcam_port 1234 ffmpeg_cap_new on ffmpeg_video_codec mpeg4 output_motion off snapshot_interval 0 # Quality of the jpeg (in percent) images produced (default: 50) webcam_quality 50 # Output frames at 1 fps when no motion is detected and increase to the # rate given by webcam_maxrate when motion is detected (default: off) webcam_motion on # Maximum framerate for webcam streams (default: 1) webcam_maxrate 15 # Restrict webcam connections to localhost only (default: on) webcam_localhost on # Limits the number of images per connection (default: 0 = unlimited) # Number can be defined by multiplying actual webcam rate by desired number of seconds # Actual webcam rate is the smallest of the numbers framerate and webcam_maxrate webcam_limit 0 control_port 8080 control_authentication admin:password Issue #1 when I try display http:/localhost:1234 the browser looks frozen, no HTTP 404 received but it stills waiting for data it seems .. Issue #2 in the output directory motion writes a lot of jpeg snapshots althought I just want to have several video sequenced files. Log I run motion in interactive mode in a terminal, here is the ouput root@mercure:/etc/motion# motion -c motion-1.0.conf [0] Processing thread 0 - config file motion-1.0.conf [0] Motion 3.2.12 Started [0] ffmpeg LIBAVCODEC_BUILD 3482368 LIBAVFORMAT_BUILD 3478785 [0] Thread 1 is from motion-1.0.conf [0] motion-httpd/3.2.12 running, accepting connections [0] motion-httpd: waiting for data on port TCP 8080 [1] Thread 1 started [1] Resizing pre_capture buffer to 1 items [1] Started stream webcam server in port 1234 [1] avcodec_open - could not open codec: Operation now in progress [1] ffopen_open error creating (new) file [~/tmp/motion/01-20140603165303.avi]: Operation now in progress [1] File of type 1 saved to: ~/tmp/motion/01-20140603165303-01.jpg [1] Thread exiting [1] Calling vid_close() from motion_cleanup [1] vid_close: calling netcam_cleanup [1] netcam camera handler: finish set, exiting [0] Motion thread 1 restart [1] Thread 1 started [1] Resizing pre_capture buffer to 1 items [1] Started stream webcam server in port 1234 [1] avcodec_open - could not open codec: Resource temporarily unavailable [1] ffopen_open error creating (new) file [~/tmp/motion/01-20140603165329.avi]: Resource temporarily unavailable [1] File of type 1 saved to: ~/tmp/motion/01-20140603165329-00.jpg [1] Thread exiting [1] Calling vid_close() from motion_cleanup [1] vid_close: calling netcam_cleanup [1] netcam camera handler: finish set, exiting [0] Motion thread 1 restart [1] Thread 1 started [1] Resizing pre_capture buffer to 1 items [1] Started stream webcam server in port 1234 [1] avcodec_open - could not open codec: Connection reset by peer [1] ffopen_open error creating (new) file [~/tmp/motion/01-20140603165355.avi]: Connection reset by peer [1] File of type 1 saved to: ~/tmp/motion/01-20140603165355-06.jpg [1] Thread exiting [1] Calling vid_close() from motion_cleanup [1] vid_close: calling netcam_cleanup [0] httpd - Finishing [0] httpd Closing [0] httpd thread exit [1] 
netcam camera handler: finish set, exiting [0] Motion thread 1 restart [1] Thread 1 started [1] Resizing pre_capture buffer to 1 items [1] Started stream webcam server in port 1234 It doesn't find the codec ... avcodec_open - could not open codec: Operation now in progress I've changed fmpeg_video_codec from mpeg4 to swf the result is the same. When using flv format motion writes a lot of .jpg image but I can't see anything at http://localhost:1234 [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-00.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-01.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-02.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-03.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-04.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-05.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-06.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171036-00.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171036-01.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171036-02.jpg Any idea just to get the video stream recoded on my local disk ?

    Read the article

  • FFMPEG dropping frames while encoding JPEG sequence at color change

    - by Matt
    I'm trying to put together a slide show using imagemagick and FFMPEG. I use imagemagick to expand a single photo into 30fps video (imagemagick also handles things like putting some text captions on the frames along the way). When I go to let ffmpeg digest it into a video it clips along nicely on the color parts of the video, but when it gets to a black and white section it reports "frame= 2030 fps=102 q=32766.0 Lsize= 5203kB time=00:01:07.60 bitrate= 630.5kbits/s dup=0 drop=703" and drops every frame of video until it hits something with color. As you can imagine this results in entire photos being removed from the slideshow. Here is my latest dump... ffmpeg -y -r 30 -i "teststream/%06d.jpg" -c:v libx264 -r 30 newffmpeg.mp4 ffmpeg version git-2012-12-10-c3bb333 Copyright (c) 2000-2012 the FFmpeg developers built on Dec 10 2012 22:02:04 with gcc 4.6.1 (Ubuntu/Linaro 4.6.1-9ubuntu3) configuration: --enable-gpl --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-librtmp --enable-libtheora --enable-libvorbis --enable-libx264 --enable-nonfree --enable-version3 libavutil 52. 12.100 / 52. 12.100 libavcodec 54. 79.101 / 54. 79.101 libavformat 54. 49.100 / 54. 49.100 libavdevice 54. 3.102 / 54. 3.102 libavfilter 3. 26.101 / 3. 26.101 libswscale 2. 1.103 / 2. 1.103 libswresample 0. 17.102 / 0. 17.102 libpostproc 52. 2.100 / 52. 2.100 Input #0, image2, from 'teststream/%06d.jpg': Duration: 00:12:02.80, start: 0.000000, bitrate: N/A Stream #0:0: Video: mjpeg, yuvj444p, 720x480 [SAR 72:72 DAR 3:2], 25 fps, 25 tbr, 25 tbn, 25 tbc [libx264 @ 0x3450140] using SAR=1/1 [libx264 @ 0x3450140] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2 [libx264 @ 0x3450140] profile High, level 3.0 [libx264 @ 0x3450140] 264 - core 129 r2 1cffe9f - H.264/MPEG-4 AVC codec - Copyleft 2003-2012 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00 Output #0, mp4, to 'newffmpeg.mp4': Metadata: encoder : Lavf54.49.100 Stream #0:0: Video: h264 ([33][0][0][0] / 0x0021), yuvj420p, 720x480 [SAR 1:1 DAR 3:2], q=-1--1, 15360 tbn, 30 tbc Stream mapping: Stream #0:0 - #0:0 (mjpeg - libx264) Press [q] to stop, [?] 
for help Input stream #0:0 frame changed from size:720x480 fmt:yuvj444p to size:720x480 fmt:yuvj422p Input stream #0:0 frame changed from size:720x480 fmt:yuvj422p to size:720x480 fmt:yuvj444pp=584 frame= 2030 fps=102 q=32766.0 Lsize= 5203kB time=00:01:07.60 bitrate= 630.5kbits/s dup=0 drop=703 video:5179kB audio:0kB subtitle:0 global headers:0kB muxing overhead 0.472425% [libx264 @ 0x3450140] frame I:9 Avg QP:20.10 size: 33933 [libx264 @ 0x3450140] frame P:636 Avg QP:24.12 size: 6737 [libx264 @ 0x3450140] frame B:1385 Avg QP:27.04 size: 514 [libx264 @ 0x3450140] consecutive B-frames: 2.5% 15.2% 13.2% 69.2% [libx264 @ 0x3450140] mb I I16..4: 8.3% 80.3% 11.5% [libx264 @ 0x3450140] mb P I16..4: 1.5% 2.5% 0.2% P16..4: 41.7% 18.0% 10.3% 0.0% 0.0% skip:25.9% [libx264 @ 0x3450140] mb B I16..4: 0.0% 0.0% 0.0% B16..8: 26.6% 0.6% 0.1% direct: 0.2% skip:72.3% L0:35.0% L1:60.3% BI: 4.7% [libx264 @ 0x3450140] 8x8 transform intra:64.1% inter:75.1% [libx264 @ 0x3450140] coded y,uvDC,uvAC intra: 51.6% 78.0% 43.7% inter: 10.6% 14.9% 2.1% [libx264 @ 0x3450140] i16 v,h,dc,p: 29% 19% 6% 46% [libx264 @ 0x3450140] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 23% 15% 17% 5% 9% 10% 7% 8% 6% [libx264 @ 0x3450140] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 31% 18% 11% 5% 9% 10% 6% 6% 4% [libx264 @ 0x3450140] i8c dc,h,v,p: 46% 18% 24% 12% [libx264 @ 0x3450140] Weighted P-Frames: Y:20.1% UV:18.7% [libx264 @ 0x3450140] ref P L0: 59.2% 23.2% 13.1% 4.3% 0.2% [libx264 @ 0x3450140] ref B L0: 88.7% 8.3% 3.0% [libx264 @ 0x3450140] ref B L1: 95.0% 5.0% [libx264 @ 0x3450140] kb/s:626.88 Received signal 2: terminating. One last note: If I remove the -r 30 from the input and output it works flawlessly. I have no idea why the -r 30 is causing it to freak out.
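
    One variant worth trying (a sketch only, not verified against this input): the image2 demuxer takes its input rate from -framerate, and the input is still detected as 25 fps above, which suggests the leading -r 30 is not taking effect; setting the input rate explicitly and forcing constant-frame-rate output avoids the 25-to-30 conversion that appears to be dropping frames.

        # Sketch: set the image-sequence input rate with -framerate and keep CFR output
        ffmpeg -y -framerate 30 -i "teststream/%06d.jpg" -c:v libx264 -vsync cfr -r 30 newffmpeg.mp4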

    Read the article

  • Removing trailing slashes in WordPress blog hosted on IIS

    - by Zishan
    I have a WordPress blog hosted in my IIS virtual directory that has all URLs ending with a forward slash. For example: http://www.example.com/blog/ I have the following rules defined in my web.config: <rule name="wordpress" patternSyntax="Wildcard"> <match url="*" /> <conditions> <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" /> <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" /> </conditions> <action type="Rewrite" url="index.php" /> </rule> <rule name="Redirect-domain-to-www" patternSyntax="Wildcard" stopProcessing="true"> <match url="*" /> <conditions> <add input="{HTTP_HOST}" pattern="example.com" /> </conditions> <action type="Redirect" url="http://www.example.com/blog/{R:0}" /> </rule> In addition, I tried adding the following rule for removing trailing slashes: <rule name="Remove trailing slash" stopProcessing="true"> <match url="(.*)/$" /> <conditions> <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" /> <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" /> </conditions> <action type="Redirect" redirectType="Permanent" url="{R:1}" /> </rule> It seems that the last rule doesn't work at all. Anyone around here who has attempted to remove trailing slashes from WordPress blogs hosted on IIS?
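
    A sketch of the usual explanation (untested here): URL Rewrite evaluates rules in document order, so the slash-stripping redirect has to sit above the catch-all wordpress rule, otherwise the request is rewritten to index.php before the slash rule is ever reached; WordPress's own canonical redirect will also re-add the slash unless the permalink structure ends without one. Roughly:

        <!-- Sketch only: rule order is the point; names and patterns are taken from the question -->
        <rewrite>
          <rules>
            <rule name="Remove trailing slash" stopProcessing="true">
              <match url="(.*)/$" />
              <conditions>
                <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
              </conditions>
              <action type="Redirect" redirectType="Permanent" url="{R:1}" />
            </rule>
            <!-- the existing "Redirect-domain-to-www" and "wordpress" rules follow here -->
          </rules>
        </rewrite>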

    Read the article

  • Connection timed out on Node.js app running under CentOS

    - by ss1271
    I followed this tutorial to create a simple node.js app on my CentOS box; the node.js version is: $ node -v v0.10.28 Here's my app.js: // Include http module, var http = require("http"), // And url module, which is very helpful in parsing request parameters. url = require("url"); // show message at console console.log('Node.js app is running.'); // Create the server. http.createServer(function (request, response) { request.resume(); // Attach listener on end event. request.on("end", function () { // Parse the request for arguments and store them in _get variable. // This function parses the url from request and returns object representation. var _get = url.parse(request.url, true).query; // Write headers to the response. response.writeHead(200, { 'Content-Type': 'text/plain' }); // Send data and end response. response.end('Here is your data: ' + _get['data']); }); // Listen on the 8080 port. }).listen(8080); However, when I uploaded this app onto my remote server (assume the address is 123.456.78.9), I couldn't access it in my browser at http://123.456.78.9:8080/?data=123 The browser returned Error code: ERR_CONNECTION_TIMED_OUT. The same app.js code runs fine on my local machine; is there anything I am missing? I tried to ping the server and its address was reachable. Thanks.
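
    A guess rather than a confirmed diagnosis: a stock CentOS install usually ships with a firewall that blocks port 8080, which produces exactly this timeout even though ping works. Opening the port is a quick test (pick whichever matches the CentOS release):

        # CentOS 6 (iptables)
        sudo iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
        sudo service iptables save
        # CentOS 7 (firewalld)
        sudo firewall-cmd --permanent --add-port=8080/tcp
        sudo firewall-cmd --reload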

    Read the article

  • IIS8 Application request routing

    - by JustDanyul
    Sorry for what is most likely NOT a very intelligent question, but my non-sysadmin brain is struggling to understand what is causing my problem. Basically, I want to enable reverse proxying on an IIS8 box. I read through this article: http://www.iis.net/learn/extensions/url-rewrite-module/reverse-proxy-with-url-rewrite-v2-and-application-request-routing And I've installed the ARR extension from here: http://www.iis.net/downloads/microsoft/application-request-routing Now, I enabled the proxying (as explained in the MS tutorial) and left the other settings as they were (again, as instructed in the tutorial). My rule looks like the following: <rule name="Reverse Proxy to payroll" stopProcessing="true"> <match url="^mytest/(.*)" /> <action type="Rewrite" url="http://localhost:8282/{R:1}" /> </rule> But alas, it doesn't work. If I change it to a "normal" rewrite rule, as in <rule name="Reverse Proxy to payroll" stopProcessing="true"> <match url="^mytest/(.*)" /> <action type="Rewrite" url="/{R:1}" /> </rule> Then it works. So, it must definitely be something with the reverse proxy. Any idea what gives?
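
    Two checks that often explain this, offered as assumptions rather than a diagnosis: the ARR proxy switch is a server-level setting stored in applicationHost.config, and a rewrite to another host:port only works once it is on; it is also worth confirming that whatever listens on port 8282 actually answers from the IIS box itself. A sketch of the first check, from an elevated prompt:

        :: Sketch: this should correspond to the "Enable proxy" checkbox in the ARR feature
        %windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/proxy /enabled:"True" /commit:apphost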

    Read the article

  • URGENT: IE 6/7/8 problem!- Right Column is not aligned and is pushed down.

    - by Kalpesh Vasta
    Hi Guys, I'm new to this but here goes. I have been developing this website http://www.panelmaster.co.uk and i have managed to solve the majority of design problems but one! If you take a look at the site in IE the right column seems to drop down and is not aligned with the right and centre column. This problem only occurs in IE as upon testing i found it was fine in firefox and safari. I have provided below the CSS for the website. I would appreciate if you guys can help me with the problem asap. Thanks in advance. :) ========================== body { margin: 0; padding: 0; line-height: 1.5em; font-family: Tahoma, Geneva, sans-serif; font-size: 12px; color: #666; background-image: url(images/templatemo_body_top.jpg); background-color: #90857c; background-repeat: repeat-x; background-position: top; text-align: left; } a:link, a:visited { color: #073475; text-decoration: none; font-weight: normal; } a:active, a:hover { color: #073475; text-decoration: underline; } h3 { color: #1e7da9; font-size: 16px; font-weight: bold; } h2 { color: #1e7da9; font-size: 16px; font-weight: bold; } h1 { color: #696969; font-size: 20px; font-weight: bold; } p { margin: 0px; padding: 0px; } img { margin: 0px; padding: 0px; border: none; } .cleaner { clear: both; width: 100%; height: 0px; font-size: 0px; } .cleaner_h30 { clear: both; width:100%; height: 30px; } .cleaner_h40 { clear: both; width:100%; height: 40px; } .float_l { float: left; } .float_r { float: right; } .margin_r20 { margin-right: 20px; } #templatemo_body_wrapper { width: 100%; background: url(images/templatemo_body_bottom.png) repeat-x bottom center; } #templatemo_wrapper { width: 970px; padding: 0 10px; margin: 0 auto; background: url(images/templatemo_wrapper_top.jpg) no-repeat top center; } /* header */ #templatemo_header { clear: both; width: 890px; height: 60px; padding: 20px 40px } #templatemo_header #site_title { float: left; padding-top: 15px; } #site_title a { font-size: 24px; color: #FFFFFF; font-weight: bold; text-decoration: none; } #site_title a:hover { font-weight: bold; text-decoration: none; } #site_title a span { display: block; margin-top: 5px; font-size: 14px; color: #fff; font-weight: bold; letter-spacing: 2px; } /* end of header */ /* menu */ #templatemo_menu { clear: both; width: 970px; height: 80px; background: url(images/templatemo_menubar.png) no-repeat; } #search_box { width: 990px; height: 35px; text-align: right; } #search_box form { margin: 0; padding: 5px 40px; } #search_box #input_field { height: 20px; width: 300px; color: #000000; font-size: 12px; font-variant: normal; line-height: normal; border: 1px solid #CCCCCC; background: #FFFFFF; } #search_box #submit_btn { height: 24px; width: 100px; cursor: pointer; font-size: 12px; text-align: center; vertical-align: bottom; white-space: pre; outline: none; color:#666666; border: 1px solid #CCCCCC; background: #FFFFFF; } #templatemo_menu ul { width: 890px; height: 35px; margin: 0; padding: 7px 40px; list-style: none; } #templatemo_menu ul li { padding: 0px; margin: 0px; display: inline; } #templatemo_menu ul li a { float: left; display: block; margin-right: 40px; font-size: 13px; text-decoration: none; color: #fff; font-weight: normal; outline: none; } #templatemo_menu ul li a:hover, #templatemo_menu ul .current { color: #162127; } /* end of menu */ /* contetnt */ #templatemo_content_wrapper { clear: both; padding: 0px 0; } #templatemo_content { float: left; margin-left: 10px; width: 550px; } #banner { margin: 0 0 10px 0; } #templatemo_content #content_top { width: 550px; height: 20px;
background: url(images/templatemo_content_top.png) no-repeat; } #templatemo_content #content_bottom { width: 550px; height: 20px; background: url(images/templatemo_content_bottom.png) no-repeat; } #templatemo_content #content_middle { width: 510px; padding: 5px 20px 0px 20px; background: url(images/templatemo_content_middle.png) repeat-y; } #content_middle p { text-align: justify; } .templatemo_sidebar_wrapper { width: 200px; } .templatemo_sidebar { width: 197px; padding-right: 3px; background: url(images/templatemo_sidebar_middle.png) repeat-y; } .templatemo_sidebar_top { width: 200px; height: 20px; background: url(images/templatemo_sidebar_top.png) no-repeat; } .templatemo_sidebar_bottom { width: 200px; height: 20px; background: url(images/templatemo_sidebar_bottom.png) no-repeat; } .templatemo_sidebar .sidebar_box { clear: both; padding-bottom: 20px; } .sidebar_box1 { padding: 15px; } .sidebar_box h2 { color: #2d84ad; font-size: 16px; padding-left: 25px; font-weight: bold; margin: 0 0 10px 10px; background: url(images/templatemo_sidebar_h1.jpg) left center no-repeat; } .sidebar_box .sidebar_box_content { padding: 15px; background: url(images/templatemo_sidebar_box_top.png) top repeat-x; } .sidebar_box img { border: 1px solid #999; margin-bottom: 5px; } .sidebar_box .discount { margin: 5px 0 0 0; font-weight: bold; } .sidebar_box .discount span { color: #C00; } .left_sidebar_box .discount a { font-weight: bold; color: #000; } .sidebar_box .categories_list { margin: 0; padding: 0; list-style: none; } .categories_list li { padding: 0; margin: 0; } .categories_list li a { display: block; color: #201f1c; padding: 5px 0 5px 20px; background: url(images/list.png) center left no-repeat; } .categories_list li a:hover { color: #439ac3; text-decoration: none; } .news_box { clear: both; margin-bottom: 5px; padding-bottom: 5px; border-bottom: 1px solid #999; } .news_box h4 { padding: 2px 0; margin: 0; } .news_box h4 a { font-size: 12px; font-weight: normal; color: #1893f2; } #newsletter_box label { display: block; margin-bottom: 10px; } #newsletter_box .input_field { height: 20px; width: 155px; padding: 0 5px; margin-bottom: 10px; color: #000000; font-size: 12px; font-variant: normal; line-height: normal; } #newsletter_box .submit_btn { float: right; height: 30px; width: 80px; margin: 0px; padding: 3px 0 15px 0; cursor: pointer; font-size: 12px; text-align: center; vertical-align: bottom; white-space: pre; outline: none; } .product_box { float: left; width: 223px; padding: 10px; margin-bottom: 20px; border: 1px solid #CCC; text-align: center; } .product_box img { margin-bottom: 10px; } .product_box h3 { color: #2a2522; font-size: 12px; margin: 0 0 10px; } .product_box p { margin-bottom: 10px; } .product_box p span { color: #cf5902; font-size: 14px; font-weight: bold; } .product_box .detail { float: right; } .product_box .addtocard { float: left; font-weight: bold; padding-right: 20px; background: url(images/templatemo_shopping_cart.png) bottom right no-repeat; } /* end of content */ /* footer */ #templatemo_footer_wrapper { background: url(images/templatemo_footer.png) repeat-x; } #templatemo_footer { width: 910px; height: 85px; padding: 50px 40px 30px 40px; margin: 0 auto; text-align: center; color: #a9a098; } #templatemo_footer a { color: #d7d1cc; font-weight: normal; } #templatemo_footer a:hover { text-decoration: none; color: #FFFF33; } #templatemo_footer .footer_menu { margin: 0 0 30px 0; padding: 0px; list-style: none; } .footer_menu li { margin: 0px; padding: 0 20px; display: inline; border-right: 1px solid #d7d1cc; } .footer_menu li a { color: #d7d1cc; } .footer_menu .last_menu { border: none; } /* end of footer */ /*twitter*/ #twitter_div {border-top: 0px;} #twitter_div a {color: #0000ff !important;} #twitter_update_list {margin-left: -1em !important; margin-bottom: 0px !important;} #twitter_update_list li {list-style-type: none; padding-right: 5px; } #twitter_update_list li a {color: #0000ff; padding-right: 5px;} #twitter_div {border-bottom: 0px; padding-bottom: 10px; padding-top:6px; padding-right: 5px;} #twitter_div a, #twitter_update_list li a {text-decoration: none !important;} #twitter_div a:hover, #twitter_update_list li a:hover {text-decoration:underline !important;}
    Read the article

  • Right Hand Column Does Not Align Properly in IE6/7/8

    - by Kalpesh Vasta
    Hi Guys, I'm new to this but here goes. I have been developing this website http://www.panelmaster.co.uk and i have managed to solve the majority of design problems but one! If you take a look at the site in IE the right column seems to drop down and is not aligned with the right and centre column. This problem only occurs in IE as upon testing i found it was fine in firefox and safari. I have provided below the CSS for the website. I would appreciate if you guys can help me with the problem. Thanks in advance. :) ========================== body { margin: 0; padding: 0; line-height: 1.5em; font-family: Tahoma, Geneva, sans-serif; font-size: 12px; color: #666; background-image: url(images/templatemo_body_top.jpg); background-color: #90857c; background-repeat: repeat-x; background-position: top; text-align: left; } a:link, a:visited { color: #073475; text-decoration: none; font-weight: normal; } a:active, a:hover { color: #073475; text-decoration: underline; } h3 { color: #1e7da9; font-size: 16px; font-weight: bold; } h2 { color: #1e7da9; font-size: 16px; font-weight: bold; } h1 { color: #696969; font-size: 20px; font-weight: bold; } p { margin: 0px; padding: 0px; } img { margin: 0px; padding: 0px; border: none; } .cleaner { clear: both; width: 100%; height: 0px; font-size: 0px; } .cleaner_h30 { clear: both; width:100%; height: 30px; } .cleaner_h40 { clear: both; width:100%; height: 40px; } .float_l { float: left; } .float_r { float: right; } .margin_r20 { margin-right: 20px; } #templatemo_body_wrapper { width: 100%; background: url(images/templatemo_body_bottom.png) repeat-x bottom center; } #templatemo_wrapper { width: 970px; padding: 0 10px; margin: 0 auto; background: url(images/templatemo_wrapper_top.jpg) no-repeat top center; } /* header */ #templatemo_header { clear: both; width: 890px; height: 60px; padding: 20px 40px } #templatemo_header #site_title { float: left; padding-top: 15px; } #site_title a { font-size: 24px; color: #FFFFFF; font-weight: bold; text-decoration: none; } #site_title a:hover { font-weight: bold; text-decoration: none; } #site_title a span { display: block; margin-top: 5px; font-size: 14px; color: #fff; font-weight: bold; letter-spacing: 2px; } /* end of header */ /* menu */ #templatemo_menu { clear: both; width: 970px; height: 80px; background: url(images/templatemo_menubar.png) no-repeat; } #search_box { width: 990px; height: 35px; text-align: right; } #search_box form { margin: 0; padding: 5px 40px; } #search_box #input_field { height: 20px; width: 300px; color: #000000; font-size: 12px; font-variant: normal; line-height: normal; border: 1px solid #CCCCCC; background: #FFFFFF; } #search_box #submit_btn { height: 24px; width: 100px; cursor: pointer; font-size: 12px; text-align: center; vertical-align: bottom; white-space: pre; outline: none; color:#666666; border: 1px solid #CCCCCC; background: #FFFFFF; } #templatemo_menu ul { width: 890px; height: 35px; margin: 0; padding: 7px 40px; list-style: none; } #templatemo_menu ul li { padding: 0px; margin: 0px; display: inline; } #templatemo_menu ul li a { float: left; display: block; margin-right: 40px; font-size: 13px; text-decoration: none; color: #fff; font-weight: normal; outline: none; } #templatemo_menu ul li a:hover, #templatemo_menu ul .current { color: #162127; } /* end of menu */ /* contetnt */ #templatemo_content_wrapper { clear: both; padding: 0px 0; } #templatemo_content { float: left; margin-left: 10px; width: 550px; } #banner { margin: 0 0 10px 0; } #templatemo_content #content_top { width: 550px; 
height: 20px; background: url(images/templatemo_content_top.png) no-repeat; } #templatemo_content #content_bottom { width: 550px; height: 20px; background: url(images/templatemo_content_bottom.png) no-repeat; } #templatemo_content #content_middle { width: 510px; padding: 5px 20px 0px 20px; background: url(images/templatemo_content_middle.png) repeat-y; } #content_middle p { text-align: justify; } .templatemo_sidebar_wrapper { width: 200px; } .templatemo_sidebar { width: 197px; padding-right: 3px; background: url(images/templatemo_sidebar_middle.png) repeat-y; } .templatemo_sidebar_top { width: 200px; height: 20px; background: url(images/templatemo_sidebar_top.png) no-repeat; } .templatemo_sidebar_bottom { width: 200px; height: 20px; background: url(images/templatemo_sidebar_bottom.png) no-repeat; } .templatemo_sidebar .sidebar_box { clear: both; padding-bottom: 20px; } .sidebar_box1 { padding: 15px; } .sidebar_box h2 { color: #2d84ad; font-size: 16px; padding-left: 25px; font-weight: bold; margin: 0 0 10px 10px; background: url(images/templatemo_sidebar_h1.jpg) left center no-repeat; } .sidebar_box .sidebar_box_content { padding: 15px; background: url(images/templatemo_sidebar_box_top.png) top repeat-x; } .sidebar_box img { border: 1px solid #999; margin-bottom: 5px; } .sidebar_box .discount { margin: 5px 0 0 0; font-weight: bold; } .sidebar_box .discount span { color: #C00; } .left_sidebar_box .discount a { font-weight: bold; color: #000; } .sidebar_box .categories_list { margin: 0; padding: 0; list-style: none; } .categories_list li { padding: 0; margin: 0; } .categories_list li a { display: block; color: #201f1c; padding: 5px 0 5px 20px; background: url(images/list.png) center left no-repeat; } .categories_list li a:hover { color: #439ac3; text-decoration: none; } .news_box { clear: both; margin-bottom: 5px; padding-bottom: 5px; border-bottom: 1px solid #999; } .news_box h4 { padding: 2px 0; margin: 0; } .news_box h4 a { font-size: 12px; font-weight: normal; color: #1893f2; } #newsletter_box label { display: block; margin-bottom: 10px; } #newsletter_box .input_field { height: 20px; width: 155px; padding: 0 5px; margin-bottom: 10px; color: #000000; font-size: 12px; font-variant: normal; line-height: normal; } #newsletter_box .submit_btn { float: right; height: 30px; width: 80px; margin: 0px; padding: 3px 0 15px 0; cursor: pointer; font-size: 12px; text-align: center; vertical-align: bottom; white-space: pre; outline: none; } .product_box { float: left; width: 223px; padding: 10px; margin-bottom: 20px; border: 1px solid #CCC; text-align: center; } .product_box img { margin-bottom: 10px; } .product_box h3 { color: #2a2522; font-size: 12px; margin: 0 0 10px; } .product_box p { margin-bottom: 10px; } .product_box p span { color: #cf5902; font-size: 14px; font-weight: bold; } .product_box .detail { float: right; } .product_box .addtocard { float: left; font-weight: bold; padding-right: 20px; background: url(images/templatemo_shopping_cart.png) bottom right no-repeat; } /* end of content */ /* footer */ #templatemo_footer_wrapper { background: url(images/templatemo_footer.png) repeat-x; } #templatemo_footer { width: 910px; height: 85px; padding: 50px 40px 30px 40px; margin: 0 auto; text-align: center; color: #a9a098; } #templatemo_footer a { color: #d7d1cc; font-weight: normal; } #templatemo_footer a:hover { text-decoration: none; color: #FFFF33; } #templatemo_footer .footer_menu { margin: 0 0 30px 0; padding: 0px; list-style: none; } .footer_menu li { margin: 0px; padding: 0 20px; display: 
inline; border-right: 1px solid #d7d1cc; } .footer_menu li a { color: #d7d1cc; } .footer_menu .last_menu { border: none; } /* end of footer */ /*twitter*/ #twitter_div {border-top: 0px;} #twitter_div a {color: #0000ff !important;} #twitter_update_list {margin-left: -1em !important; margin-bottom: 0px !important;} #twitter_update_list li {list-style-type: none; padding-right: 5px; } #twitter_update_list li a {color: #0000ff; padding-right: 5px;} #twitter_div {border-bottom: 0px; padding-bottom: 10px; padding-top:6px; padding-right: 5px;} #twitter_div a, #twitter_update_list li a {text-decoration: none !important;} #twitter_div a:hover, #twitter_update_list li a:hover {text-decoration:underline !important;}

    Read the article

  • WSUS appears to be functioning, however errors are reported in Application Log (DSS Authentication W

    - by Richard Slater
    I re-installed WSUS a few months ago on a new server as part of a server hardware refresh. It is functioning normally downloading, authorizing and supplying patches to workstations. System Specification: HP DL360 G5 Quad 2.5 Zeon 6GB RAM Not Virtualized WSUS 3.2.7600.3226 SQL 2005 Express Windows 2003 R2 SP2 WSS SSL Enabled Every six hours five events are logged to the Application Event Log: Source | Category | ID | Description -------------------------------------------------------------------------------------- Windows Server Update Services | Web Services | 12052 | The DSS Authentication Web Service is not working. Windows Server Update Services | Web Services | 12042 | The SimpleAuth Web Service is not working Windows Server Update Services | Web Services | 12022 | The Client Web Service is not working Windows Server Update Services | Web Services | 12032 | The Server Synchronization Web Service is not working Windows Server Update Services | Web Services | 12012 | The API Remoting Web Service is not working In addition the following .NET Stack Trace is logged in C:\Program Files\Update Services\LogFiles\SoftwareDistribution.log each stack trace is identical except for the names of the services: 2009-11-27 11:56:52.757 UTC Error WsusService.10 HmtWebServices.CheckApiRemotingWebService ApiRemoting WebService WebException:System.Net.WebException: The request failed with HTTP status 403: Forbidden. at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall) at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) at Microsoft.UpdateServices.Internal.ApiRemoting.Ping(Int32 pingLevel) at Microsoft.UpdateServices.Internal.HealthMonitoring.HmtWebServices.CheckApiRemotingWebService(EventLoggingType type, HealthEventLogger logger) at Microsoft.UpdateServices.Internal.HealthMonitoring.HmtWebServices.CheckApiRemotingWebService(EventLoggingType type, HealthEventLogger logger) at Microsoft.UpdateServices.Internal.HealthMonitoring.HealthMonitoringTasks.ExecuteSubtask(HealthMonitoringSubtask subtask, EventLoggingType type, HealthEventLogger logger) at Microsoft.UpdateServices.Internal.HealthMonitoring.HmtWebServices.Execute(EventLoggingType type) at Microsoft.UpdateServices.Internal.HealthMonitoring.HealthMonitoringTasks.Execute(EventLoggingType type) at Microsoft.UpdateServices.Internal.HealthMonitoring.HealthMonitoringThreadManager.Execute(Boolean waitIfNecessary, EventLoggingType loggingType) at Microsoft.UpdateServices.Internal.HealthMonitoring.RemotingChannel.PrivateLogEvents() at System.Runtime.Remoting.Messaging.StackBuilderSink._PrivateProcessMessage(IntPtr md, Object[] args, Object server, Int32 methodPtr, Boolean fExecuteInContext, Object[]& outArgs) at System.Runtime.Remoting.Messaging.StackBuilderSink.SyncProcessMessage(IMessage msg, Int32 methodPtr, Boolean fExecuteInContext) at System.Runtime.Remoting.Messaging.ServerObjectTerminatorSink.SyncProcessMessage(IMessage reqMsg) at System.Runtime.Remoting.Lifetime.LeaseSink.SyncProcessMessage(IMessage msg) at System.Runtime.Remoting.Messaging.ServerContextTerminatorSink.SyncProcessMessage(IMessage reqMsg) at System.Runtime.Remoting.Channels.CrossContextChannel.SyncProcessMessageCallback(Object[] args) at System.Runtime.Remoting.Channels.ChannelServices.DispatchMessage(IServerChannelSinkStack sinkStack, IMessage msg, IMessage& replyMsg) at 
System.Runtime.Remoting.Channels.SoapServerFormatterSink.ProcessMessage(IServerChannelSinkStack sinkStack, IMessage requestMsg, ITransportHeaders requestHeaders, Stream requestStream, IMessage& responseMsg, ITransportHeaders& responseHeaders, Stream& responseStream) at System.Runtime.Remoting.Channels.BinaryServerFormatterSink.ProcessMessage(IServerChannelSinkStack sinkStack, IMessage requestMsg, ITransportHeaders requestHeaders, Stream requestStream, IMessage& responseMsg, ITransportHeaders& responseHeaders, Stream& responseStream) at System.Runtime.Remoting.Channels.Ipc.IpcServerTransportSink.ServiceRequest(Object state) at System.Runtime.Remoting.Channels.SocketHandler.ProcessRequestNow() at System.Runtime.Remoting.Channels.SocketHandler.BeginReadMessageCallback(IAsyncResult ar) at System.Runtime.Remoting.Channels.Ipc.IpcPort.AsyncFSCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* pOverlapped) at System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* pOVERLAP) So far I have tried the following: ensuring the settings were accurate as per TechNet; checking that there was a suitable 127.0.0.1 binding in IIS; and going through the settings in IIS as per TechNet. I have discovered that you can run the command wsusutil checkhealth to force the health check to run; wsusutil can be found in C:\Program Files\Update Services\Tools. When this executes it will tell you to check the application log.
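
    One more avenue that may be worth checking, given that SSL is enabled on the site: the self-test reaches the WSUS web services over the loopback address, and an HTTP 403 there is often an SSL-requirement mismatch on the WSUS virtual directories. Re-running the SSL configuration and then the health check is a comparatively low-risk test (the FQDN below is a placeholder):

        :: Placeholder hostname; run from an elevated prompt
        cd /d "C:\Program Files\Update Services\Tools"
        wsusutil.exe configuressl wsus.example.internal
        wsusutil.exe checkhealth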

    Read the article

  • Varnish configuration to only cache for non-logged in users

    - by davidsmalley
    I have a Ruby on Rails application fronted by varnish+nginx. As most of the site's content is static unless you are a logged-in user, I want to cache the site heavily with varnish when a user is logged out, but only cache static assets when they are logged in. When a user is logged in they will have the cookie 'user_credentials' present in their Cookie: header; in addition, I need to skip caching on /login and /sessions so that a user can get their 'user_credentials' cookie in the first place. Rails by default does not set a cache-friendly Cache-Control header, but my application sets a "public,s-max-age=60" header when a user is not logged in. Nginx is set to return 'far future' expires headers for all static assets. The configuration I have at the moment bypasses the cache entirely when logged in, including static assets, and returns a cache MISS for everything when logged out. I've spent hours going around in circles, and here is my current default.vcl: director rails_director round-robin { { .backend = { .host = "xxx.xxx.xxx.xxx"; .port = "http"; .probe = { .url = "/lbcheck/lbuptest"; .timeout = 0.3 s; .window = 8; .threshold = 3; } } } } sub vcl_recv { if (req.url ~ "^/login") { pipe; } if (req.url ~ "^/sessions") { pipe; } # The regex used here matches the standard rails cache buster urls # e.g. /images/an-image.png?1234567 if (req.url ~ "\.(css|js|jpg|jpeg|gif|ico|png)\??\d*$") { unset req.http.cookie; lookup; } else { if (req.http.cookie ~ "user_credentials") { pipe; } } # Only cache GET and HEAD requests if (req.request != "GET" && req.request != "HEAD") { pipe; } } sub vcl_fetch { if (req.url ~ "^/login") { pass; } if (req.url ~ "^/sessions") { pass; } if (req.http.cookie ~ "user_credentials") { pass; } else { unset req.http.Set-Cookie; } # cache CSS and JS files if (req.url ~ "\.(css|js|jpg|jpeg|gif|ico|png)\??\d*$") { unset req.http.Set-Cookie; } if (obj.status >=400 && obj.status <500) { error 404 "File not found"; } if (obj.status >=500 && obj.status <600) { error 503 "File is Temporarily Unavailable"; } } sub vcl_deliver { if (obj.hits > 0) { set resp.http.X-Cache = "HIT"; } else { set resp.http.X-Cache = "MISS"; } }
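
    One observation that may explain the logged-out MISSes (a sketch, not a tested configuration): vcl_recv above never ends with lookup for ordinary page requests, so Varnish appends its built-in default vcl_recv, and that default passes any request that still carries a Cookie header; cookies that exist even for anonymous visitors (analytics, blank session cookies) therefore bypass the cache. Stripping cookies for anonymous page requests, in the same style as the VCL above, would look roughly like:

        # Sketch: near the end of vcl_recv, after the login/sessions and static-asset checks
        if (req.http.Cookie ~ "user_credentials") {
            pipe;
        } else {
            # anonymous visitor: drop stray cookies so the appended default vcl_recv
            # does not turn the request into a pass
            unset req.http.Cookie;
            lookup;
        }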

    Read the article

< Previous Page | 218 219 220 221 222 223 224 225 226 227 228 229  | Next Page >