Search Results

Search found 19411 results on 777 pages for 'browser cache'.


  • How do I handle the browser's "share page" intent in Android?

    - by gfxmonk
    I read here ( http://androidlittle.blogspot.com/2009/08/intent-filter-for-share-link.html ) what intent-filter is required to handle the "share link" intent that the Android web browser sends. I have placed this inside an <application> block in my AndroidManifest.xml like so:

        <activity android:name=".ShareLink">
            <intent-filter>
                <action android:name="android.intent.action.SEND" />
                <category android:name="android.intent.category.DEFAULT" />
                <data android:mimeType="text/plain" />
            </intent-filter>
            <meta-data/>
        </activity>

    I cannot for the life of me get this to be triggered, though. When I share a link in the Android browser, the emulator log shows it's creating a chooser intent, but doesn't give the details of the intent the chooser is acting on. No chooser window pops up, and the intent gets handled by the SMS application. I have also tried kicking off the intent manually:

        adb shell am start -D -a android.intent.action.SEND -c android.intent.category.DEFAULT -t text/plain -d http://google.com/

    but the response I get is:

        Starting: Intent { act=android.intent.action.SEND cat=[android.intent.category.DEFAULT] dat=http://google.com/ typ=text/plain }
        Error: Activity not started, unable to resolve Intent { act=android.intent.action.SEND cat=[android.intent.category.DEFAULT] dat=http://google.com/ typ=text/plain flg=0x10000000 }

    Can anyone tell me what I'm doing wrong? My main (launcher) activity works fine, so I assume there is no issue with installation on the emulator.
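
    A hedged note on the manual test: ACTION_SEND carries the shared URL in the EXTRA_TEXT string extra, not in the intent's data field, so passing -d adds a data URI that the filter above (which declares no data scheme) can never match; that alone explains the "unable to resolve" error. Sending the URL as a string extra instead (am's --es flag) should resolve. For reference, a minimal sketch of the receiving activity, assuming the manifest entry above:

        import android.app.Activity;
        import android.content.Intent;
        import android.os.Bundle;

        public class ShareLink extends Activity {
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                Intent intent = getIntent();
                if (Intent.ACTION_SEND.equals(intent.getAction())
                        && "text/plain".equals(intent.getType())) {
                    // The browser puts the shared URL here, not in intent.getData()
                    String sharedUrl = intent.getStringExtra(Intent.EXTRA_TEXT);
                    // ... handle sharedUrl ...
                }
            }
        }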


  • When is onBind or onCreate called in an Android browser-plugin service?

    - by anselm
    I have adapted the example plugin from the Android source, and the browser recognises the plugin without any problem. Here is an extract of AndroidManifest.xml:

        <application android:icon="@drawable/icon" android:label="@string/app_name" android:debuggable="true">
            <service android:name="com.domain.plugin.PluginService">
                <intent-filter>
                    <action android:name="android.webkit.PLUGIN" />
                </intent-filter>
            </service>
        </application>
        <uses-sdk android:minSdkVersion="7" />
        <uses-permission android:name="android.webkit.permission.PLUGIN"></uses-permission>

    The actual Service class looks like so:

        public class PluginService extends Service {
            @Override
            public IBinder onBind(Intent arg0) {
                Log.d("PluginService", "onBind");
                return null;
            }

            @Override
            public void onCreate() {
                Log.d("PluginService", "onCreate");
                // TODO Auto-generated method stub
                super.onCreate();
                AssetInstaller.getInstance(this).installAssets("/data/data/com.domain.plugin");
            }
        }

    The AssetInstaller code is supposed to extract some files required by the actual plugin into the /data/data/com.domain.plugin directory; however, neither onBind nor onCreate is ever called. I do get plenty of debug output from the actual libnpplugin.so file I'm using. So the puzzle is: when, and under what circumstances, is the service bound or created in the case of a browser plugin? As things look, the service seems to be a dummy service. That said, is there perhaps another intent that can be handled at installation time? The only solution I see right now is installing the needed files from the native plugin code instead. Any ideas? I know this is quite a tricky question ;)
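
    One hedged workaround, assuming the asset extraction does not actually depend on the browser ever binding the service: run it from a custom Application subclass, registered via android:name on the <application> element, so it fires whenever a component of the plugin package is instantiated. This is a sketch, not a confirmed fix; PluginApplication is a hypothetical name and AssetInstaller is the helper from the question.

        import android.app.Application;

        public class PluginApplication extends Application {
            @Override
            public void onCreate() {
                super.onCreate();
                // Runs when this package's process starts, independent of onBind
                AssetInstaller.getInstance(this).installAssets("/data/data/com.domain.plugin");
            }
        }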


  • Why are illegal cookies sent by browsers and accepted by web servers (RFC 2109, RFC 2965)?

    - by Artyom
    Hello. According to RFC 2109 and RFC 2965, a cookie's value can be either an HTTP token or a quoted string, and a token can't include non-ASCII characters. HTTP's RFC 2068 token definition: http://tools.ietf.org/html/rfc2068#page-16 However, I have found that the Firefox browser (3.0.6) sends cookies with a UTF-8 string as-is, and all three web servers I tested (apache2, lighttpd, nginx) pass this string as-is to the application. For example, the raw request from the browser:

        $ nc -l -p 8080
        GET /hello HTTP/1.1
        Host: localhost:8080
        User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.0.9) Gecko/2009050519 Firefox/2.0.0.13 (Debian-3.0.6-1)
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: windows-1255,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 300
        Connection: keep-alive
        Cookie: wikipp=1234; wikipp_username=??????
        Cache-Control: max-age=0

    And the raw HTTP_COOKIE CGI variable seen by apache, nginx and lighttpd: wikipp=1234; wikipp_username=?????? What am I missing? Can somebody explain?
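
    In practice, applications usually sidestep this by percent-encoding cookie values themselves, so that only token-safe ASCII crosses the wire regardless of what browsers tolerate. A small sketch of that convention (Python here purely for illustration):

        from urllib.parse import quote, unquote

        raw = "имя"                     # a non-ASCII value, illustrative only
        encoded = quote(raw)            # '%D0%B8%D0%BC%D1%8F': safe inside Set-Cookie
        assert unquote(encoded) == raw  # the application decodes it on the next request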


  • How to flush the DNS cache in Windows Mobile programmatically?

    - by Bounded
    Hello. My Windows Mobile application (written in C# with the Compact Framework) needs to know whether a particular machine is active. To achieve this, I thought I would use a ping mechanism. I tried to use the Ping class implemented in the OpenNETCF framework (the System.Net.NetworkInformation.Ping class from the full .NET Framework is not part of the Compact Framework). Because I give the Ping.Send function a host name, it first tries to resolve this host name to an IP address. But I observe the following problem: if the first DNS resolution fails (because the network is down at that moment) and the application immediately tries to send the ping again, it fails too, even if the network is not down anymore. I checked with a well-known network protocol analyzer and saw that only the requests for the first DNS resolution are sent; the DNS requests for the second ping are never sent. Why is the second DNS request not sent? Is there a DNS cache mechanism on such Windows Mobile devices? If so, can this mechanism be flushed programmatically? EDIT: I gave up on finding a solution to this DNS flush and chose to ping an IP address instead of a machine name. The problem with pinging a hard-coded IP address is that we have to be 100% sure that this IP will not change. The gateway IP can be used because it's always reachable (if it is not, it means the network is down).
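
    A sketch of the IP-based workaround from the edit, written against the desktop System.Net.NetworkInformation API for illustration (on the Compact Framework the OpenNETCF equivalent would be substituted); the gateway address below is hypothetical:

        using System.Net.NetworkInformation;

        class Reachability
        {
            static bool IsMachineActive()
            {
                // Pinging a literal IP skips DNS entirely, so a stale negative
                // DNS lookup cannot poison later attempts.
                using (var ping = new Ping())
                {
                    PingReply reply = ping.Send("192.168.0.1", 2000); // gateway, 2 s timeout
                    return reply.Status == IPStatus.Success;
                }
            }
        }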


  • Is NSManagedObjectContext autosaved or am I looking at NSFetchedResultsController's cache?

    - by Andreas
    I'm developing an iPhone app where I use an NSFetchedResultsController in the main table view controller. I create it like this in viewDidLoad of the main table view controller:

        NSSortDescriptor *sortDescriptorDate = [[NSSortDescriptor alloc] initWithKey:@"date" ascending:YES];
        NSSortDescriptor *sortDescriptorTime = [[NSSortDescriptor alloc] initWithKey:@"start" ascending:YES];
        NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptorDate, sortDescriptorTime, nil];
        [fetchRequest setSortDescriptors:sortDescriptors];
        [sortDescriptorDate release];
        [sortDescriptorTime release];
        [sortDescriptors release];
        controller = [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest
                                                         managedObjectContext:context
                                                           sectionNameKeyPath:@"date"
                                                                    cacheName:nil];
        [fetchRequest release];
        NSError *error;
        BOOL success = [controller performFetch:&error];

    Then, in a subsequent view, I create a new object on the context:

        TestObject *testObject = [NSEntityDescription insertNewObjectForEntityForName:@"TestObject" inManagedObjectContext:context];

    The TestObject has several related objects which I create in the same way and add to the testObject using the provided add...Objects methods. Then, if I press cancel before saving the context and go back to the main table view, nothing is shown, as expected. However, if I restart the app, the object I created on the context shows up in the main table view. How come? At first, I thought it was because the NSFetchedResultsController was reading from the cache, but as you can see I set this to nil just to test. Also, [context hasChanges] returns true after I restart. What am I missing here?
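
    Two hedged things to check, since neither is visible in the code above: the stock Xcode Core Data template saves the managed object context from the app delegate when the application terminates, which would explain the object surviving a restart even though you never saved it yourself; and a cancel action can explicitly discard the inserted objects rather than leaving them pending in the context. A sketch of the latter:

        // Discards everything inserted or changed since the last save;
        // `context` is the same NSManagedObjectContext used above.
        - (IBAction)cancel:(id)sender {
            [context rollback];
            [self.navigationController popViewControllerAnimated:YES];
        }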


  • Why are illegal cookies sent by browsers and accepted by web servers (RFC 2109)?

    - by Artyom
    Hello. According to RFC 2109, a cookie's value can be either an HTTP token or a quoted string, and a token can't include non-ASCII characters. Cookie RFC 2109: http://tools.ietf.org/html/rfc2109#page-3 HTTP's RFC 2068 token definition: http://tools.ietf.org/html/rfc2068#page-16 However, I have found that the Firefox browser (3.0.6) sends cookies with a UTF-8 string as-is, and all three web servers I tested (apache2, lighttpd, nginx) pass this string as-is to the application. For example, the raw request from the browser:

        $ nc -l -p 8080
        GET /hello HTTP/1.1
        Host: localhost:8080
        User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.0.9) Gecko/2009050519 Firefox/2.0.0.13 (Debian-3.0.6-1)
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: windows-1255,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 300
        Connection: keep-alive
        Cookie: wikipp=1234; wikipp_username=??????
        Cache-Control: max-age=0

    And the raw HTTP_COOKIE CGI variable seen by apache, nginx and lighttpd: wikipp=1234; wikipp_username=?????? What am I missing? Can somebody explain?


  • Browser freezes when trying to call a JS function during form submission

    - by Waseem
    I have a form in my view like the following:

        <div>
          <% form_tag facebook_user_path do %>
            <label>Use my photo and name from facebook?</label><br />
            <%= check_box_tag 'use_name_and_photo', 'yes', true %>
            <img src="<%= @user.pic %>" /><% @user.name %>

            <%= submit_tag "Finish", :id => "use_name_and_photo_submit" %>
          <% end %>
        </div>

    I have attached some JS handlers to this form using jQuery:

        var fb = {
          extendedPermissions: function () {
            $("#use_name_and_photo_submit").click(function (event) {
              FB.Connect.showPermissionDialog("email,read_stream,publish_stream", function (perms) {
                if (!perms) {
                  alert("You have to grant facebook extended permissions to further browse the application.");
                } else {
                  $("form").submit(function () {
                    $.post($(this).attr("action"), $(this).serialize(), null, "script");
                  });
                }
              });
              event.preventDefault();
              return false;
            });
          }
        };

        $(document).ready(function () {
          fb.extendedPermissions();
        });

    What I want is that when the user clicks the "Finish" button, he is prompted with the Facebook permissions dialog, and once he grants the permissions, the form is submitted to FacebookUsersController. Right now when I click the "Finish" button, the Facebook permissions dialog is initiated, but the browser freezes before I am prompted with the actual permission window, just as if I had pressed Esc during the process. In fact the browser's status bar says "Stopped". Any help is highly appreciated.
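
    One hedged observation: the success branch only binds a new submit handler ($("form").submit(fn) with a callback attaches a handler; it does not submit), so nothing is ever posted from the permission callback. Posting directly from the callback is worth trying, sketched here against the same markup:

        $("#use_name_and_photo_submit").click(function (event) {
          event.preventDefault();
          FB.Connect.showPermissionDialog("email,read_stream,publish_stream", function (perms) {
            if (!perms) {
              alert("You have to grant facebook extended permissions to further browse the application.");
              return;
            }
            // Post the form now instead of merely binding a handler for later
            var $form = $("form");
            $.post($form.attr("action"), $form.serialize(), null, "script");
          });
        });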


  • What is the process involved in viewing a web service in a browser from within Visual Studio?

    - by Sam Holder
    I have created a new VS2008 ASP.NET Web service project with the default name WebService1. If I right click on the Service1.asmx file and select 'View in Browser', what are the processes that go on to make this happen? I am asking because I have a situation where, when I run this from a Visual Studio project started in our development shell (which sets up a common build environment), I cannot get the web service to show up in the browser. It starts the ASP.NET development server and creates a single file:

        C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\c43ddc22\268ae91b\hash\hash.web

    but when I start it from a stand-alone project I get a whole slew of files in here:

        C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\edad4eee\d198cf0e\App_Web_defaultwsdlhelpgenerator.aspx.cdcab7d2.vicgkf94.dll
        C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\edad4eee\d198cf0e\service1.asmx.cdcab7d2.compiled
        etc etc

    I am trying to debug this but not really getting anywhere. I have inspected the output from VS, but the only option I get is the build output, which is basic and doesn't really contain any useful information. I have tried running both versions with DebugView running, but no output there either. I would like to know if there are any log files I could look at, or if anyone has suggestions on how I might debug what is going wrong here. For completeness, the output I get when it doesn't work is:

        Parser Error
        Description: An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately.
        Parser Error Message: Could not create type 'WebService1.Service1'.
        Source Error: Line 1:
        Source File: /Service1.asmx    Line: 1
        Version Information: Microsoft .NET Framework Version:2.0.50727.3603; ASP.NET Version:2.0.50727.3082
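
    A hedged pointer: "Could not create type 'WebService1.Service1'" from the parser usually means the class named in the .asmx directive cannot be found in the compiled output, so it is worth checking that the directive matches the actual namespace and class, and that the shell's build environment is not redirecting the output assembly. With the default project the directive looks like this (shown for comparison, not taken from your project):

        <%@ WebService Language="C#" CodeBehind="Service1.asmx.cs" Class="WebService1.Service1" %>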


  • How do I programmatically access the face cache in Windows Live Photo Gallery?

    - by acorderob
    I'm not talking about the "people tags" embedded in the XMP packets of JPEGs; I'm talking about the face database used to recognize new faces. I want to add to my program the option to recognize faces using the already trained database of WLPG. I managed to use the API (a type library DLL) to detect faces, but to recognize them it needs an Exemplar Cache object that is not available in the same API. I could create my own object, but I want to use the already existing one to avoid duplicate training for the user. I know the database is in C:\Users\\AppData\Local\Microsoft\Windows Live Photo Gallery and that it is in an SQL Server Compact format. I tried to open the database with Visual Studio 2010, but it says that it is an older version (pre-3.5) and needs to be upgraded. I don't want to change the database, just read it. I don't know how WLPG reads it, since apparently I don't have the correct OLE DB provider version. I would also prefer to read it without accessing the database directly, but I don't see any DLL that exports that functionality. BTW, I'm using Delphi 2010. Any ideas?


  • Ruby on Rails with HTML5 offline apps - Firefox does not cache the application.manifest but Safari does

    - by hoitomt
    I'm working off of this Railscast tutorial: episode 247. I'm up to this point in the tutorial: added the rack-offline gem, added the application.manifest route, and added a reference to the manifest in the html tag, right before it starts talking about problems with caching. Safari works as intended – when the server is running, the page is served. From the server logs I can see that Safari is making a single request to the server every time for the items page. When I turn off the server the page displays as well, even after shutting down the browser and restarting. It appears to be pulling from the application.manifest (cache manifest). Firefox does not work as intended – when accessing the page for the first time, Firefox lets me know that the web page wants to store something locally, and I allow it. After clicking on allow, Firefox makes 5 requests to the server for the page (from the server log). The hash is different in every request. Is it possible that the changing hash is triggering Firefox to keep trying to get the new manifest until it reaches some maximum (5 attempts)? Then, after the server is stopped, Firefox will not show the page at all. It looks like it isn't caching the application.manifest. Firefox also gives you a way to see which sites are storing stuff locally by going to Tools/Options/Advanced/Network (Firefox/Preferences/Advanced/Network on Apple). I see localhost there, but the size is 0 bytes. So for some reason, Firefox is not downloading my application.manifest along with the files.
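
    For reference, the hookup on the page side is the standard HTML5 attribute (a minimal sketch, not the app's actual layout file):

        <html manifest="/application.manifest">

    If rack-offline really serves a different hash on every request in development, a browser re-validating the manifest would see a changed file each time and discard the cache, which is consistent with the five requests and the 0-byte localhost entry you observed; Safari may simply be less aggressive about re-validating.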


  • Why does a Silverlight application show a blank browser screen when created from exported template?

    - by Edward Tanguay
    I created a Silverlight app (without a website) named TestApp, with one TextBlock:

        <UserControl x:Class="TestApp.MainPage"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
            xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
            mc:Ignorable="d" d:DesignWidth="640" d:DesignHeight="480">
            <Grid x:Name="LayoutRoot">
                <TextBlock Text="this is a test"/>
            </Grid>
        </UserControl>

    I press F5 and see "this is a test" in my browser (Firefox). I select File | Export Template, name it TestAppTemplate and save it. I create a new Silverlight app based on the above template. The MainPage.xaml has the exact same XAML as above. I press F5 and see a blank screen in my browser. I look at the HTML source of both of these and they are identical. Everything I have compared in both projects is identical. What do I have to do so that a Silverlight application created from my exported template does not show a blank screen? (Creating a WPF application from an exported template like this works fine.)


  • How do Google and Yahoo replace the URL in the browser status bar?

    - by Mike W
    On the Google and Yahoo search pages, the URLs of the 10 search result links actually point to google.com or yahoo.com. The URLs have extra arguments that allow google.com or yahoo.com to redirect to the actual search result when the link is clicked. When the user mouses over the link, the search result URL (and not the google.com or yahoo.com URL) is displayed in the browser's status bar. I'm wondering how they do that. Many years ago, this would have been accomplished by having some javascript that sets window.status, but that doesn't seem to work anymore, as is explained by http://stackoverflow.com/questions/876390/reliable-cross-browser-way-of-setting-status-bar-text I have a link that looks like this:

        <a href="http://somedomain.com/ReallyLongURLThatShouldNotBeSeenInTheStatusBar"
           onmouseover="window.status='http://niceShourtUrl.com/'"
           onmouseout="window.status=''">Click Me</a>

    This link tries to use the window.status strategy, but it doesn't work. How do I fix this link so that it acts like the links on Google's and Yahoo's search result pages? In this example, I want "http://niceShourtUrl.com/" to be displayed in the status bar when the user mouses over the link.
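
    Google's result links are widely reported to do the reverse of the window.status trick: the href holds the URL you want shown in the status bar, and a mousedown handler swaps in the redirect URL at the instant the click begins, after the status bar has already been read. A sketch using the URLs from your example:

        <a href="http://niceShourtUrl.com/"
           onmousedown="this.href='http://somedomain.com/ReallyLongURLThatShouldNotBeSeenInTheStatusBar';">
          Click Me</a>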


  • How to automatically show the title of entries/articles in the browser title bar in ExpressionEngine 2?

    - by Ibn Saeed
    How would I output the title of an entry in ExpressionEngine and display it in the browser's title bar? Here is the content of my page's header:

        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title>Test Site</title>
        <link rel="stylesheet" href="{stylesheet=site/site_css}" type="text/css" media="screen" />
        </head>

    What I need is for each page to display the title of the entry in my browser's title bar — how can I achieve that? Part of the UPDATED code: here is how I have done it:

        {exp:channel:entries channel="news_articles" status="open|Featured Top Story|Top Story" limit="1" disable="member_data|trackbacks|pagination"}
        {embed="includes/document_header" page_title=" | {title}"}
        <body class="home">
        <div id="layoutWrapper">
        {embed="includes/masthead_navigation"}
        <div id="content">
        <div id="article">
        <img src="{article_image}" alt="News Article Image" />
        <h4>{title}</h4>
        <h5><span class="by">By</span> {article_author}</h5>
        <p>{entry_date format="%M %d, %Y"} -- Updated {gmt_edit_date format="%M %d, %Y"}</p>
        {article_body}
        {/exp:channel:entries}
        </div>

    What do you think?
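
    With that structure, includes/document_header receives the entry's title through the embed variable, so its title tag would consume it along these lines (a sketch; your actual header markup will differ):

        <title>Test Site{embed:page_title}</title>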


  • Anyone doing php-fpm and APC? Cache files missing in /tmp

    - by Vangel
    I have been using XCache for a long time. Recently I put together php-fpm and nginx, and I see APC is installed and enabled in the configuration. I was assuming that APC would automatically compile the files to opcodes and store them somewhere. According to the config it should be in /tmp/apc.xxxx, but there is no such thing there. What am I missing? Any clues to investigate would be of much help. Please note I am using PHP 5.3 FPM. Thanks, mates. UPDATE: I looked at apc.php and it says things are fine. I will just have to take its word for it.
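
    A hedged hunch: files matching /tmp/apc.XXXXXX only appear when APC is configured for file-backed mmap via apc.mmap_file_mask; with the default anonymous mmap the opcode cache lives purely in shared memory and nothing is written to /tmp, which would fit apc.php reporting a healthy cache. Illustrative php.ini lines:

        apc.enabled = 1
        apc.shm_size = 64M
        ; uncomment for file-backed mmap; only then would /tmp entries appear:
        ; apc.mmap_file_mask = /tmp/apc.XXXXXX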


  • Is a larger hard drive with the same cache, RPM, and bus type faster?

    - by Joel Coehoorn
    I recently heard that, all else being equal, larger hard drives are faster than smaller ones. It has to do with more bits passing under the read head as the drive spins: since a large drive packs the bits more tightly, the same amount of spin/time presents more data to the read head. I had not heard this before, and was inclined to believe that the read heads expect bits at a specific rate and the drive would instead stagger the data, so that the two drives would be the same speed. I now find myself looking at purchasing one of two computer models for the school where I work. One model has an 80GB drive, the other a 400GB (for ~$13 more). The size of the drive is immaterial, since users will keep their files on a file server where they can be backed up. But if the 400GB drive really delivers better hard drive performance, the extra money is probably worth it. Thoughts?
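
    The density argument is easy to sanity-check with toy numbers (all figures below are made up for illustration, not specs of these two drives):

        # At a fixed RPM, sequential throughput scales with bits packed per track.
        rpm = 7200
        revs_per_second = rpm / 60.0      # 120 revolutions each second

        def throughput_mb_s(bits_per_track):
            return bits_per_track * revs_per_second / 8 / 1e6

        print(throughput_mb_s(4.0e6))     # sparser platter: ~60 MB/s
        print(throughput_mb_s(8.0e6))     # double the density, same spin: ~120 MB/s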


  • How much HDD space would I need to cache the web while respecting robots.txt?

    - by Koning Baard XIV
    I want to experiment with creating a web crawler. I'll start with indexing a few medium-sized websites like Stack Overflow or Smashing Magazine. If it works, I'd like to start crawling the entire web. I'll respect robots.txt. I'll save all HTML, PDF, Word, Excel, PowerPoint, Keynote, etc. documents (not exes, dmgs, etc., just documents) in a MySQL DB. Next to that, I'll have a second table containing all results and descriptions, and a table with words and the pages on which to find those words (i.e. an index). How much HDD space do you think I need to save all the pages? Is it as low as 1 TB, or is it about 10 TB, 20? Maybe 30? 1000? Thanks
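
    A back-of-the-envelope way to frame the question (both inputs are assumptions, not measurements):

        documents = 50e9     # assumed count of crawlable documents on the web
        avg_size_kb = 100    # assumed average document size before compression
        total_tb = documents * avg_size_kb * 1000 / 1e12
        print(total_tb)      # 5000.0 TB, i.e. about 5 PB, far past the 20-30 TB range

    A few medium-sized sites will fit comfortably on one disk; the whole web will not.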


  • Squid not caching files (randomly)

    - by Heinrich
    I want to use an intercepting squid server to cache specific large zip files that users in my network download frequently. I have configured squid on a gateway machine and caching is working for "static" zip files that are served from an Apache web server outside our network. The files that I want to have cached by squid are zip files of around 100 MB served from a Heroku-hosted Rails application. I set an ETag header (a SHA hash of the zip file on the server) and a Cache-Control: public header. However, these files are not cached by squid. This, for example, is a request that is not cached:

        $ curl --no-keepalive -v -o test.zip --header "X-Access-Key: 20767ed397afdea90601fda4513ceb042fe6ab4e51578da63d3bc9b024ed538a" --header "X-Customer: 5" "http://MY_APP.herokuapp.com/api/device/v1/media/download?version=latest"
        * Adding handle: conn: 0x7ffd4a804400
        * Adding handle: send: 0
        * Adding handle: recv: 0
        ...
        > GET /api/device/v1/media/download?version=latest HTTP/1.1
        > User-Agent: curl/7.30.0
        > Host: MY_APP.herokuapp.com
        > Accept: */*
        > X-Access-Key: 20767ed397afdea90601fda4513ceb042fe6ab4e51578da63d3bc9b024ed538a
        > X-Customer: 5
        >
        0 0 0 0 0 0 0 0 --:--:-- 0:00:09 --:--:-- 0
        < HTTP/1.1 200 OK
        * Server Cowboy is not blacklisted
        < Server: Cowboy
        < Date: Mon, 18 Aug 2014 14:13:27 GMT
        < Status: 200 OK
        < X-Frame-Options: SAMEORIGIN
        < X-Xss-Protection: 1; mode=block
        < X-Content-Type-Options: nosniff
        < ETag: "95e888938c0d539b8dd74139beace67f"
        < Content-Disposition: attachment; filename="e7cce850ae728b81fe3f315d21a560af.zip"
        < Content-Transfer-Encoding: binary
        < Content-Length: 125727431
        < Content-Type: application/zip
        < Cache-Control: public
        < X-Request-Id: 7ce6edb0-013a-4003-a331-94d2b8fae8ad
        < X-Runtime: 1.244251
        < X-Cache: MISS from AAA.fritz.box
        < Via: 1.1 vegur, 1.1 AAA.fritz.box (squid/3.3.11)
        < Connection: keep-alive

    In the logs squid is reporting a TCP_MISS. This is the relevant excerpt from my squid file:

        # Squid normally listens to port 3128
        http_port 3128
        http_port 3129 intercept

        # Uncomment and adjust the following to add a disk cache directory.
        maximum_object_size 1000 MB
        maximum_object_size_in_memory 1000 MB
        cache_dir ufs /usr/local/var/cache/squid 10000 16 256
        cache_mem 2000 MB

        # Leave coredumps in the first cache dir
        coredump_dir /usr/local/var/cache/squid
        cache_store_log daemon:/usr/local/var/logs/cache_store.log

        #refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
        refresh_pattern -i .(zip) 525600 100% 525600 override-expire ignore-no-cache ignore-no-store
        refresh_pattern . 0 20% 4320

        ## DNS Configuration
        dns_nameservers 8.8.8.8 8.8.4.4

    After experimenting for some time I realized that squid sometimes decides that my file is cacheable and sometimes not, depending on whether and when I enable/disable the dns_nameservers directive. What could be wrong here?
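
    One hedged observation about the excerpt: refresh_pattern matches against the request URL, and /api/device/v1/media/download?version=latest contains neither ".zip" nor even the substring "zip", so the zip rule above never fires and the request falls through to the default pattern (whose minimum freshness is 0). Matching on the API path instead is cheap to try:

        refresh_pattern -i /api/device/v1/media/download 525600 100% 525600 override-expire ignore-no-store

    Whether this also explains the sensitivity to dns_nameservers is unclear, but it rules out one variable.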


  • What's wrong with this VCL config for Varnish Cache as a load balancer?

    - by dabito
    I have the following configuration active in the default.vcl Varnish file on the machine that balances the load for two other machines (the other two machines also run Varnish). My intention is to have this server do only the load balancing, with the other machines doing the processing and their own caching. My problem is that even with light config testing (not even a stress test or anything, just a few requests a minute) I get the guru meditation error and have to restart Varnish. This is the default.vcl for the load-balancing server:

        backend vader {
          .host = "app1.server.com";
          .probe = {
            .url = "/";
            .interval = 10s;
            .timeout = 4s;
            .window = 5;
            .threshold = 3;
          }
        }

        backend malgus {
          .host = "app2.server.com";
          .probe = {
            .url = "/";
            .interval = 10s;
            .timeout = 4s;
            .window = 5;
            .threshold = 3;
          }
        }

        director dooku round-robin {
          { .backend = vader; }
          { .backend = malgus; }
        }

        sub vcl_recv {
          if (req.http.host ~ "^balancer.server.com$") {
            set req.backend = dooku;
          }
        }

    Am I doing something wrong or missing something? EDIT: This is varnishlog's output: 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1345839995 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1345839998 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1345840001 1.0 0 Backend_health - malgus Still sick 4--X--- 0 3 5 0.000000 3.846876 0 Backend_health - vader Still sick 4--X--- 0 3 5 0.000000 3.839194 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1345840004 1.0 14 SessionOpen c 10.150.7.151 38272 :80 14 ReqStart c 10.150.7.151 38272 458200540 14 RxRequest c GET 14 RxURL c / 14 RxProtocol c HTTP/1.1 14 RxHeader c Host: dooku-dev.excelsior.com 14 RxHeader c Connection: keep-alive 14 RxHeader c User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11 14 RxHeader c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 14 RxHeader c Accept-Encoding: gzip,deflate,sdch 14 RxHeader c Accept-Language: en-US,en;q=0.8,es-419;q=0.6,es;q=0.4 14 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 14 RxHeader c Cookie: SESSa87d6c6da0c61037a9169122dc5e4a19=HR_0Srhgc-uDArT3aJFzOBy31FtzneTXg38byr1eGMU; __atuvc=4%7C33 14 VCL_call c recv pass 14 VCL_call c hash 14 Hash c / 14 Hash c dooku-dev.excelsior.com 14 VCL_return c hash 14 VCL_call c pass pass 14 FetchError c no backend connection 14 VCL_call c error deliver 14 VCL_call c deliver deliver 14 TxProtocol c HTTP/1.1 14 TxStatus c 503 14 TxResponse c Service Unavailable 14 TxHeader c Server: Varnish 14 TxHeader c Content-Type: text/html; charset=utf-8 14 TxHeader c Retry-After: 5 14 TxHeader c Content-Length: 418 14 TxHeader c Accept-Ranges: bytes 14 TxHeader c Date: Fri, 24 Aug 2012 20:26:44 GMT 14 TxHeader c X-Varnish: 458200540 14 TxHeader c Age: 0 14 TxHeader c Via: 1.1 varnish 14 TxHeader c Connection: close 14 Length c 418 14 ReqEnd c 458200540 1345840004.916415691 1345840004.965190172 0.020933390 0.048741817 0.000032663 14 SessionClose c error 14 StatSess c 10.150.7.151 38272 0 1 1 0 1 0 256 418 14 SessionOpen c 10.150.7.151 38273 :80 14 ReqStart c 10.150.7.151 38273 458200541 14 RxRequest c GET 14 RxURL c /favicon.ico 14 RxProtocol c HTTP/1.1 14 RxHeader c Host: dooku-dev.excelsior.com 14 RxHeader c Connection: keep-alive 14 RxHeader c Accept: */* 14 RxHeader c User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11 14 RxHeader c Accept-Encoding: gzip,deflate,sdch 14 RxHeader c Accept-Language: en-US,en;q=0.8,es-419;q=0.6,es;q=0.4 14 RxHeader c Accept-Charset: 
ISO-8859-1,utf-8;q=0.7,*;q=0.3 14 RxHeader c Cookie: SESSa87d6c6da0c61037a9169122dc5e4a19=HR_0Srhgc-uDArT3aJFzOBy31FtzneTXg38byr1eGMU; __atuvc=4%7C33 14 VCL_call c recv pass 14 VCL_call c hash 14 Hash c /favicon.ico 14 Hash c dooku-dev.excelsior.com 14 VCL_return c hash 14 VCL_call c pass pass 14 FetchError c no backend connection 14 VCL_call c error deliver 14 VCL_call c deliver deliver 14 TxProtocol c HTTP/1.1 14 TxStatus c 503 14 TxResponse c Service Unavailable 14 TxHeader c Server: Varnish 14 TxHeader c Content-Type: text/html; charset=utf-8 14 TxHeader c Retry-After: 5 14 TxHeader c Content-Length: 418 14 TxHeader c Accept-Ranges: bytes 14 TxHeader c Date: Fri, 24 Aug 2012 20:26:45 GMT 14 TxHeader c X-Varnish: 458200541 14 TxHeader c Age: 0 14 TxHeader c Via: 1.1 varnish 14 TxHeader c Connection: close 14 Length c 418 14 ReqEnd c 458200541 1345840005.226389885 1345840005.226457834 0.000026941 0.000043154 0.000024796 14 SessionClose c error 14 StatSess c 10.150.7.151 38273 0 1 1 0 1 0 256 418
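
    Reading the log, the decisive lines are "Backend_health - ... Still sick" followed by "FetchError c no backend connection": both backends are failing their health probes, so the round-robin director has no healthy member to hand the request to, and every fetch ends in a 503 guru meditation. A hedged first step is to confirm each app really answers 200 on "/" (a redirect or auth wall there keeps the backend sick), or to state the expectation explicitly in the probe:

        backend vader {
          .host = "app1.server.com";
          .probe = {
            .url = "/";
            .expected_response = 200;  # the default; anything else marks the backend sick
            .interval = 10s;
            .timeout = 4s;
            .window = 5;
            .threshold = 3;
          }
        }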


  • I have an nginx server configured to work with node.js, but a 1.03 MB JS file often fails to load in various browsers on various PCs

    - by Totty
    I'm using this in a local LAN so it should be quite fast. The nginx server uses the node.js server to serve static files, so requests must pass through node.js to download the files, but that is not a problem when I'm not using nginx. In Chrome with the debugger on I can see that the status is 206 Partial Content and it has only downloaded 31KB of 1.03MB. After 1.1 min it turns red and the status is "failed", just as if the request had been cancelled. Waiting time: 6ms. Receiving: 1.1 min. The headers in Google Chrome:

        Request URL: http://192.168.1.16/production/assembly/script/production.js
        Request Method: GET
        Status Code: 206 Partial Content

        Request Headers:
        Accept:*/*
        Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3
        Accept-Encoding:gzip,deflate,sdch
        Accept-Language:pt-PT,pt;q=0.8,en-US;q=0.6,en;q=0.4
        Connection:keep-alive
        Cookie:connect.sid=s%3Abls2qobcCaJ%2FyBNZwedtDR9N.0vD4Fi03H1bEdCszGsxIjjK0lZIjJhLnToWKFVxZOiE
        Host:192.168.1.16
        If-Range:"1081715-1350053827000"
        Range:bytes=16090-16090
        Referer:http://192.168.1.16/production/assembly/
        User-Agent:Mozilla/5.0 (Windows NT 6.0) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4

        Response Headers:
        Accept-Ranges:bytes
        Cache-Control:public, max-age=0
        Connection:keep-alive
        Content-Length:1
        Content-Range:bytes 16090-16090/1081715
        Content-Type:application/javascript
        Date:Mon, 15 Oct 2012 09:18:50 GMT
        ETag:"1081715-1350053827000"
        Last-Modified:Fri, 12 Oct 2012 14:57:07 GMT
        Server:nginx/1.1.19
        X-Powered-By:Express

    My nginx configuration, file 1:

        user totty;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            # server_tokens off;
            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##
            access_log /home/totty/web/production01_server/node_modules/production/_logs/_NGINX_access.txt;
            error_log /home/totty/web/production01_server/node_modules/production/_logs/_NGINX_error.txt;

            ##
            # Gzip Settings
            ##
            gzip on;
            gzip_disable "msie6";
            # gzip_vary on;
            # gzip_proxied any;
            # gzip_comp_level 6;
            # gzip_buffers 16 8k;
            # gzip_http_version 1.1;
            # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            ##
            # nginx-naxsi config
            ##
            # Uncomment it if you installed nginx-naxsi
            ##
            #include /etc/nginx/naxsi_core.rules;

            ##
            # nginx-passenger config
            ##
            # Uncomment it if you installed nginx-passenger
            ##
            #passenger_root /usr;
            #passenger_ruby /usr/bin/ruby;

            ##
            # Virtual Host Configs
            ##
            autoindex on;
            include /home/totty/web/production01_server/_deployment/nginxConfigs/server/*;
        }

    File that is included by the previous file:

        server {
            # custom location for entry
            # using only "/" instead of "/production/assembly" it
            # would allow you to go to "thatip/". In this way
            # we are limiting to "thatip/production/assembly/"
            location /production/assembly/ {
                # ip and port used in node.js
                proxy_pass http://127.0.0.1:3000/;
            }

            location /production/assembly.mongo/ {
                proxy_pass http://127.0.0.1:9000/;
                proxy_redirect off;
            }

            location /production/assembly.logs/ {
                autoindex on;
                alias /home/totty/web/production01_server/node_modules/production/_logs/;
            }
        }
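
    Since the stall happens while nginx is relaying a static asset through node.js, one hedged experiment is to let nginx serve those files from disk itself and keep node.js for the dynamic routes only; the alias path below is a guess pieced together from the paths in your config:

        location /production/assembly/script/ {
            alias /home/totty/web/production01_server/node_modules/production/script/;
            expires 1h;   # static JS that nginx can range-serve natively, no Express hop
        }

    If the direct-from-disk version downloads cleanly, the problem is in the node.js/Express static path (the Content-Length: 1 reply to the one-byte If-Range request above would be worth a closer look there).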


  • Does Exchange Cached Mode affect email marker refresh time in other Outlook clients?

    - by David
    We have users who share a single email account by using the Additional Email option under their accounts. Now they want to assign emails to one another using the markers alongside the emails. We noticed that when changing the color of a marker, one Outlook client updated immediately but another Outlook client did not. It looked like they were both set to "Cached Mode". Is it likely that caching affected the refresh of the client? Would it be better to turn off cached mode if we are using Outlook this way?


  • How can I set up an nginx cache strategy that tries Amazon S3 first, then memcached, and falls back on a miss?

    - by Tim
    I have a large site with lots of pages that almost never change. Right now I am using two memcached servers (Amazon ElastiCache), but this is really expensive. That's why, for these files that almost never change, I want to upload them to Amazon S3 and shut down one memcached server. Here is my conf:

        location ~ /longterm/(.*){
            proxy_pass http://amazonS3bucket;
            proxy_intercept_errors on;
            proxy_next_upstream http_404;
            error_page 404 503 = @fallback_memcached
        }

        location @fallback_memcache {
            set $memcached_key $uri;
            memcached_pass name:11211;
            error_page 404 @fallback;
        }

        location @fallback {
            try_files $uri $uri/index.html
        }

    I don't know why, but the config doesn't work on the final fallback: if I get an Amazon S3 hit it works, and if I get an Amazon S3 miss followed by a memcached hit it works, but on an Amazon S3 miss followed by a memcached miss it fails when it tries to resolve the last fallback. I am also thinking of using the Amazon S3 FUSE filesystem (http://code.google.com/p/s3fs/) instead of the proxy_pass; I think it would be easier to implement, but would it also be less performant?
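
    Two hedged details stand out in the excerpt: the first block's error_page names @fallback_memcached while the location is declared as @fallback_memcache (as pasted, nginx should refuse to load that, so it may be a transcription slip), and a memcached_pass that cannot reach its server surfaces as 502/504 rather than 404, so the last hop needs those codes routed too. A sketch with both reconciled (the memcached host is the placeholder from the question):

        location ~ /longterm/(.*) {
            proxy_pass http://amazonS3bucket;
            proxy_intercept_errors on;
            error_page 404 503 = @fallback_memcache;   # must match the location name below
        }

        location @fallback_memcache {
            set $memcached_key $uri;
            memcached_pass name:11211;
            error_page 404 502 504 = @fallback;        # miss (404) and connection failures
        }

        location @fallback {
            try_files $uri $uri/index.html =404;
        }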


  • How To Restore Firefox Options To Default Without Uninstalling

    - by Gopinath
    Firefox plugins are awesome, and they are the pillars of the Firefox browser's huge success. Plugins vary from simple ones that change the browser's color scheme to powerful ones that change the behavior of the browser itself. Recently I installed one of the powerful Firefox plugins and played around with it to tweak the browser's behavior. At the end of my half hour of play, Firefox had become completely useless and stopped rendering web pages properly. To continue using Firefox I had to restore it to its default settings. But I didn't want to uninstall and then reinstall it, as that is a time-consuming process and I would also lose all the plugins I'm using. How did I restore the default settings in a single click? Default settings restore through safe mode options: it's very easy to restore Firefox's default settings with the safe mode options. All we need to do is:
    1. Close all the Firefox browser windows that are open.
    2. Launch Firefox in safe mode.
    3. Choose the option Reset all user preferences to Firefox defaults.
    4. Click on the Make Changes and Restart button.
    Note: when Firefox restores the default settings, it erases all stored passwords, browser history and other settings you have made. That's all. This excellent feature of Firefox saved me from great pain, and I hope it's going to help you too.


  • Windows 8: Is Internet Explorer 10 Metro mode disabled? Here is the fix.

    - by Gopinath
    Are you having issues with Internet Explorer 10 on Windows 8? Is the Internet Explorer 10 application not launching in Metro mode, but always launching in Desktop mode instead? Continue reading to understand and fix the problem. When does Internet Explorer 10 not launch in Metro mode? On Windows 8, when Internet Explorer is launched from the Start screen it launches in Metro UI mode, and when launched from the Desktop it launches in traditional desktop UI mode. But when Internet Explorer is not set as the default browser, Windows 8 always launches it in Desktop mode, irrespective of whether it's launched from the Start screen or the Desktop. This generally happens when we install the browsers Firefox or Chrome and set them as the default browser. How to restore Internet Explorer 10 Metro mode? The problem can be resolved by setting Internet Explorer 10 as the default web browser in Windows 8. But setting the default web browser in Windows 8 is a bit tricky, as there is no such option available in Internet Explorer's own Options screen. To set Internet Explorer 10 as the default web browser, follow these steps:
    1. Go to the Windows 8 Start screen and search for the Default Programs app by typing "Default" on the keyboard.
    2. Click on the link Set your default programs.
    3. Choose Internet Explorer in the left section of the screen (pointer 1) and then click on the option Set this program as default at the bottom right (pointer 2).
    4. That's it. Internet Explorer is now the default web browser, and from now on you will be able to launch the Metro UI based Internet Explorer from the Start screen.


  • Tip: Recording Non-Maximized Applications in UPK

    - by Marc Santosusso
    Have you ever wanted to record an application that would not maximize, or an application that would look strange maximized? Or perhaps your Windows Desktop has become cluttered with icons and you don't want to capture the clutter in your recordings. Here's a tip that will help: create a background for your recording.
    1. Create a blank HTML file with a black background in your favorite HTML editor, or download this sample file: UPK_Recording_Background.html (right click to save). If you would prefer a different color background in the sample file, open it in Notepad and change "#000" to a different HTML color.
    2. Open UPK_Recording_Background.html in its own web browser window.
    3. Press F11 to make the web browser window full screen. This should give you a completely black screen. (This works great in modern versions of the most popular browsers; I successfully used Firefox 15, Chrome 22, and IE 9.)
    4. Open or switch to the desired application so that it sits on top of the full screen browser window. If the application you are recording is also in a browser, it is important that it be in a separate browser window from UPK_Recording_Background.html.
    5. Record your topic normally.
    The above steps create a recording background using an HTML file and a web browser. This is just one method; for instance, you could do the same thing with an image editor and an image viewer with a full screen view. Now you can record a non-maximized application without a distracting background. I hope you find this to be a helpful tip. Let us know what you think in the comments.
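
    If you would rather build the file than download it, a minimal version of the background page described above looks like this (any HTML editor will do; change #000 for a different color):

        <!DOCTYPE html>
        <html>
        <head>
          <title>UPK Recording Background</title>
          <style>html, body { background: #000; margin: 0; height: 100%; }</style>
        </head>
        <body></body>
        </html>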

