Search Results

Search found 14773 results on 591 pages for 'smart flash cache'.


  • Audio glitches in Chrome browser on Linux Mint

    - by aravind.udayashankara
    I have been using Linux Mint for a year, and I consistently download and install updates. Chrome is my default browser. Whenever I open YouTube and watch a video, I hear glitches in the sound (say, repeated lyrics of the song) while it plays; in Firefox it works fine. What is the problem? Am I missing some plugin? AFAIK Chrome doesn't need a Flash Player plugin, since it has a built-in Flash player. Is that the problem? Also, I was not facing this previously; all these kinds of problems started after I recently switched to the Cinnamon UI. My hardware is a 64-bit Intel Core i3, and I have installed 64-bit Linux Mint. Please let me know what the problem is and how to fix it. Thanks in advance for responding to this post.

    Read the article

  • Unable to reach files in subfolder with domain name in path in IIS 5.

    - by Chuck Conway
    In IIS 5, files at the URL http://acme.com/_cache/cache-www.acme.com/v3.css are not accessible; all files below "cache-www.acme.com" are unreachable. I've verified that the files exist. Permissions are not a problem: I've assigned "Everyone" to the files and given "Everyone" full rights. What I have determined is that in IIS 5, if there is a domain name in the folder path, IIS 5 gets confused. Other JavaScript files outside the directory come down fine. Any thoughts?

    Read the article

  • Internet Explorer defaults to 64-bit version

    - by Tim Long
    My IE8 has suddenly started defaulting to the 64-bit version. I have no idea how or why this happened, but I suspect it might be linked to the Browser Choice Screen that Microsoft was recently forced to display by EU law. However, many web sites will not display correctly in IE8 x64 (e.g. sites that use Adobe Flash or Microsoft Silverlight). I have the 32-bit version of IE pinned to my taskbar, and if I launch it manually, everything is fine. But when I click on a URL from another program and IE is not already running, the 64-bit version gets launched. This really messes with programs like BBC iPlayer, which rely heavily on Adobe AIR and Flash. So, how do I get the 32-bit version of IE8 to be the default again? I've tried using the "Default Programs" control panel and that doesn't make any difference; in fact, it doesn't offer a choice between the x86 and x64 versions, it just lists "Internet Explorer".

    Read the article

  • Website visitors are still being redirected after "fixing" the damage from a conditional redirect website attack

    - by Shannon
    BACKGROUND
    A website of mine was recently the target of a conditional redirect attack: PHP code was added to my pages to redirect visitors, and the .htaccess file was edited to redirect visitors as well. I've re-uploaded my website, so the compromised PHP and .htaccess code have both been removed. My site is mostly handwritten PHP and static HTML content; I don't use page comments or any third-party libraries.

    THE PROBLEM
    After removing the compromised PHP and .htaccess files, visitors are still being redirected. What could be the reason? Are there any tools to check where/how the redirects are taking place, so I can debug the problem?

    UPDATE - PROBLEM FIXED
    As suggested in the comments, I cleared my Firefox cache and that fixed the problem (for me, anyway). Visitors with old cache data will obviously still be redirected.
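
    One quick way to inspect a redirect chain like this from outside any browser cache is curl; a minimal sketch (the URL is a placeholder):

        # -s silent, -I headers only, -L follow redirects: curl prints
        # the headers of every hop, including each Location: header.
        curl -sIL http://www.example.com/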

    Read the article

  • Python BOM error in ASCII file

    - by Intosia
    I have a weird, annoying problem with Python 2.6. I'm trying to run this file (and the others) on my embedded Linux ARM board: http://svn.tuxisalive.com/software_suite_v3/smart-core/smart-server/trunk/TDSService.py I get this error:

        File "tuxhttpserver.py", line 1
        SyntaxError: encoding problem: with BOM

    I know that error is about the BOM bytes, etc. BUT there are NO BOM bytes; it's plain ASCII. I checked with a hex editor, and the Linux file command says it's ASCII. I'm freaking out here... The code worked fine on my SheevaPlug (also an ARM-based system).
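
    To double-check what Python actually sees at the start of the file, dumping the first few bytes makes any hidden BOM visible; a minimal sketch (file name taken from the error above):

        # A UTF-8 BOM would show up as '\xef\xbb\xbf'; plain ASCII
        # should begin with ordinary printable text.
        f = open('tuxhttpserver.py', 'rb')
        print repr(f.read(4))
        f.close()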

    Read the article

  • APC intermittently serving old code with lighttpd and PHP FastCGI

    - by APZ
    I recently started facing a problem where APC serves old code when we upload an HTML template file to fix or change something on our websites. We run APC with stat=0 and want to keep it that way, because we seldom change templates. Every time we upload a template, we make sure to flush APC, and we execute this script (only part of it is shown here) to clear the cache:

        apc_clear_cache();
        apc_clear_cache('user');
        apc_clear_cache('system');
        apc_clear_cache('opcode');

    We use lighttpd and PHP FastCGI, with "max-procs" = 2 and "PHP_FCGI_CHILDREN" = "5". Even after flushing APC once the upload is complete, it serves the old template intermittently. Any help would be appreciated.
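
    One detail worth noting when reading the setup above: with "max-procs" = 2, lighttpd spawns two independent PHP FastCGI master processes, and each master holds its own APC shared-memory segment, so a single HTTP request to a clearing script flushes only the cache of whichever process served it. A sketch of a clearing script that reports which process handled it (assuming the standard APC extension; the file name is illustrative):

        <?php
        // clear_apc.php - must be requested once per FastCGI master
        // process, since each master has a separate APC cache.
        apc_clear_cache();        // opcode (system) cache
        apc_clear_cache('user');  // user-data cache
        echo "cleared by pid " . getmypid() . "\n";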

    Read the article

  • Am I using too much memory? (Rails on EC2 with Resque)

    - by Stpn
    I am looking at the memory usage of a Rails application (it uses background processes via Resque), and since the common answer to the question "how many workers is too many?" was "test and see", I ran some memory commands and wonder if someone can help me figure out whether memory usage is already high enough, or whether I can still add some extra workers. All of this is under maximum load:

        $ free -t -m
                     total       used       free     shared    buffers     cached
        Mem:          1756       1532        223          0         12        229
        -/+ buffers/cache:       1291        464
        Swap:          895         10        885
        Total:        2652       1543       1108

        $ vmstat
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
         0  0  10588 156172  13400 326476    1    6     4     0    5    4  1  0 99  0

    If there is any extra info I can provide to help answer this, I would be happy to do so. If the question is strange in some way, please let me know and I'd be glad to fix it.

    Read the article

  • eTrayz: Replace base system with a bootstrapped Debian

    - by knoopx
    I bought an eTrayz NAS some time ago. The device is more or less good, but it ships with a closed-source custom Linux and a bunch of broken web apps. I want to replace the whole system with a plain Debian installation. I have successfully bootstrapped a Lenny Debian into a chroot and am able to use it; however, I would like it to be the default system, booted into automatically. The device ships with a bundled 2.6.24.4 kernel. I think the kernel is on a dedicated flash memory, so I would prefer not to re-flash it. What do you think is the best way to accomplish this?
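
    Since the bundled kernel stays in place, one low-risk pattern is to let the stock firmware finish booting and then start the Debian userland from a late init script; a minimal sketch, assuming the bootstrapped tree lives at /mnt/debian (all paths illustrative):

        #!/bin/sh
        # Late init script on the stock system: bind the virtual
        # filesystems into the chroot, then run Debian's runlevel
        # scripts so its services come up.
        mount -o bind /dev  /mnt/debian/dev
        mount -o bind /proc /mnt/debian/proc
        mount -o bind /sys  /mnt/debian/sys
        chroot /mnt/debian /etc/init.d/rc 2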

    Read the article

  • Wrong Chrome full-screen resolution with multiple displays

    - by Martin
    I am using Google Chrome on Windows 7 64-bit with two displays. The displays have different resolutions and aspect ratios. My problem is most visible when playing back full-screen videos from a Flash player inside Chrome (but it is not Flash-related): the maximum size is calculated from the primary display's resolution, even when the relevant window is maximised on the second screen. The problem is also visible on websites that set the width to 100%; the 100% then applies to the primary display, even when the window is opened on the secondary display. Are there any known solutions to this problem? I have been observing it for many Chrome versions now, and I do not know if it has ever been correct.

    Read the article

  • error while installing binutils in LFS

    - by user53347
    lfs:/mnt/lfs/sources/binutils-build$ ../binutils-2.15.94.0.2.2/configure \
        --target=$LFS_TGT --prefix=/tools \
        --disable-nls --disable-werror
    loading cache ./config.cache
    checking host system type... i686-pc-linux-gnuoldld
    checking target system type... i686-lfs-linux-gnu
    checking build system type... i686-pc-linux-gnuoldld
    checking for a BSD compatible install... /usr/bin/install -c
    checking whether ln works... yes
    checking whether ln -s works... yes
    checking for gcc... no
    checking for cc... no
    configure: error: no acceptable cc found in $PATH

    Read the article

  • Lots of files being used by blank web page. What are they?

    - by byronyasgur
    I am trying to optimise a website, and I was using the network waterfall facility in Google Chrome. When I looked at the results, there were lots of files which I didn't recognise. I first thought they might have something to do with Google Chrome itself, so I put a blank HTML file on my desktop and checked, but there was nothing in the waterfall except the file itself. So I put a blank file on my server, and I got the output below. What are all these files? Are they all necessary? Is this normal, and do I need to be concerned in any way? My hosting provider has always been excellent in every regard that I'm aware of. It is shared hosting, using cPanel, based on a LAMP server. I also note that a couple of those files have problems, but I have no idea how to fault-find that or whether it's a concern. EDIT: I have cleared the cache, so I don't think it's a browser cache issue.

    Read the article

  • Explain DLL Dependencies to a lay person

    - by wheaties
    This follows from a previous posting I made about the lack of a clean test machine for software installations. I'm doing a bad job of explaining how DLL dependencies work, and how some machines might not have the right libraries at installation time. The problem is that this is being viewed as a defect in the build process. I'm trying to educate the higher-ups that it's not the build process per se, but rather the installation process, that is to blame. Here's a quote from my boss, relating subcontractor work to our work, to put it into perspective: "I'm not a software person. All I see is that when they hand something to us it just works, but when we hand something to the client there's all sorts of problems. There must be something wrong with how you're building the code." It's very easy to see how someone who is smart (scarily smart) could come to the wrong conclusion. So how would you explain the whole DLL dependency issue to a lay person?

    Read the article

  • What's the performance penalty of weak_ptr?

    - by Kornel Kisielewicz
    I'm currently designing an object structure for a game, and the most natural organization in my case turned out to be a tree. Being a great fan of smart pointers, I use shared_ptrs exclusively. In this case, however, the children in the tree need access to their parent: for example, beings on a map need access to the map data, i.e. the data of their parent. The direction of ownership is of course that a map owns its beings, so it holds shared pointers to them. To access the map data from within a being, we need a pointer back to the parent; the smart-pointer way is to use a non-owning reference, i.e. a weak_ptr. However, I once read that locking a weak_ptr is an expensive operation. Maybe that's no longer true, but considering that the weak_ptr will be locked very often, I'm concerned that this design is doomed to poor performance. Hence the question: what is the performance penalty of locking a weak_ptr, and how significant is it?
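
    For concreteness, a minimal sketch of the ownership pattern described above (all names illustrative):

        #include <memory>
        #include <vector>

        struct Map;

        struct Being {
            std::weak_ptr<Map> parent;  // non-owning back-reference

            void act() {
                // lock() is the operation whose cost is in question: it
                // atomically promotes the weak_ptr to a shared_ptr.
                if (std::shared_ptr<Map> map = parent.lock()) {
                    // ... read map data through 'map' ...
                }
            }
        };

        struct Map {
            std::vector<std::shared_ptr<Being>> beings;  // owning direction
        };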

    Read the article

  • Does Apache 2.2 (Windows) have any default bandwidth limit?

    - by igino manfre'
    I'm running Apache on a cloud server (Windows Server 2008 R2 on VMware, 1 Gbps of bandwidth, http://95.110.164.61). I'm streaming many live DVB MPEG transport streams, precompressed in a loop (not Flash), generated by VLC on ports 640xx and then reverse-proxied by Apache on port 80. The server's firewall is open for VLC and Apache on all ports. Above 1.5 Mbps, playback is affected by continuous stop-and-go. Please note that if you request a stream generated by VLC directly, at http://95.110.164.61:64087/mpg2_6.4, you see a correct stream, while if you request http://95.110.164.61/mpg2_6.4 you do not. I know that the Flash streaming server uses Apache to stream on port 80 (and it works). I'm not an expert with Apache; can anyone tell me if any "special" module is required to increase the bandwidth?
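
    For reference, the reverse proxying described above would typically be a mod_proxy mapping of this shape; a sketch, assuming mod_proxy and mod_proxy_http are loaded (the port matches the VLC example in the question):

        # Forward requests for the stream path to the local VLC instance.
        ProxyPass        /mpg2_6.4 http://127.0.0.1:64087/mpg2_6.4
        ProxyPassReverse /mpg2_6.4 http://127.0.0.1:64087/mpg2_6.4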

    Read the article

  • How to make Google Chrome omnibar site searches permanent?

    - by tim
    In Google Chrome, once you have visited a site, say wikipedia.com, you can thereafter search that site directly from the omnibar by typing a few letters and then hitting Tab. However, I noticed that after I clear my cache, Chrome does not remember these search autofills, and I once again have to visit the site manually for it to take effect. Is there a way to make the omnibar searches permanent, even after doing a full cache clear in Google Chrome? Thanks. Update: I tried the suggestion below, but bookmarking the site just allowed me to autocomplete the address; it did not give me the option of hitting Tab to search directly from the omnibar. Any suggestions?

    Read the article

  • Visual Studio - how to create two projects using the same sources

    - by mack369
    My solution consists of two executable projects and a couple of DLLs. Project1 is a Smart Device project; Project2 is a Windows Forms project. Both projects use the same libraries; the reason is that I want to test my libraries on a PC before I deploy them to the device. The problem is that a DLL project's type can be Smart Device Class Library or Class Library, not both, and I cannot add a reference from an SD project to a WF one, or vice versa. I was able to add a reference from the SD project to a DLL file (generated from the Class Library project) instead of to the project itself, but for some reason I got the message "cannot load XXX type from YYY assembly". It doesn't depend on my code, because when I created a separate project from the same sources, everything was fine. The only solution I've found is to create two types of project for each library, but I don't know how to make two projects based on the same sources.
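
    One common way to base two project files on the same sources is MSBuild file links, so each project compiles the shared .cs files without copying them; a minimal sketch for one of the two .csproj files (paths illustrative):

        <!-- Compile the shared library's sources as linked files
             instead of local copies. -->
        <ItemGroup>
          <Compile Include="..\SharedLib\**\*.cs">
            <Link>Shared\%(RecursiveDir)%(Filename)%(Extension)</Link>
          </Compile>
        </ItemGroup>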

    Read the article

  • Problem processing files from a Content Distribution Network

    - by Derek
    From what I understand, CDNs are meant to physically cache your static files in multiple regions, closer to your users. However, I've noticed that on a few websites, when a page is requested from their server, they grab the asset files from their CDN, process them (compress, minify, etc.), cache the results on their server, and then send them to the user requesting the page. This doesn't make much sense to me. Wouldn't processing the files on your server eliminate the gains from using a CDN? Is this a normal way of doing things, or am I not understanding the whole asset-management concept?

    Read the article

  • Debug unstable Apache server under Debian

    - by almo
    Since yesterday, the Apache server that runs on my Debian machine has been very unstable: sometimes my websites load and sometimes they don't. I think it has to do with memory, since my Apache log is full of Out of memory (allocated 262144) (tried to allocate 4480 bytes). I also attached a screenshot of the memory graph. A server restart resolves the problem temporarily. I looked at the processes that are using memory, but the biggest one is MySQL with 6.5%. Where else can I look for the problem? Edit: I did a free -m right after rebooting, and another about 2 hours later. I think the trend is visible:

        root@xxx:~# free -m
                     total       used       free     shared    buffers     cached
        Mem:          4016        731       3284          0         80        200
        -/+ buffers/cache:        449       3566
        Swap:          459          0        459

        root@xxx:~# free -m
                     total       used       free     shared    buffers     cached
        Mem:          4016       2466       1550          0         92        473
        -/+ buffers/cache:       1900       2115
        Swap:          459          0        459

    Read the article

  • How best to modernize the 2002-era J2EE app?

    - by user331465
    I have this friend.... I have this friend who works on a Java EE (J2EE) application started in the early 2000s. Currently they add a feature here and there, but they have a large codebase, and over the years the team has shrunk by 70%. [Yes, the "I have this friend" is me, attempting to humorously inject teenage high-school-counselor shame into the mix.]

    Java, Vintage 2002
    The application uses EJB 2.1, Struts 1.x, DAOs, etc., with straight JDBC calls (a mixture of stored procedures and prepared statements). No ORM. For caching they use a mixture of OpenSymphony OSCache and a home-grown cache layer. Over the last few years, they have spent effort modernizing the UI using Ajax techniques, largely JavaScript libraries (jQuery, YUI, etc.).

    Client Side
    On the client side, the lack of an upgrade path from Struts 1 to Struts 2 discouraged them from migrating to Struts 2. Other web frameworks became popular (Wicket, Spring, JSF); Struts 2 was not the clear winner. Migrating all the existing UI from Struts 1 to Struts 2/Wicket/etc. did not seem to offer much marginal benefit at a very high cost. They did not want a patchwork of technologies-du-jour (subsystem X in Struts 2, subsystem Y in Wicket, etc.), so developers write new features in Struts 1.

    Server Side
    On the server side, they looked into moving to EJB 3 but never had a big impetus. The developers are all comfortable with ejb-jar.xml, EJBHome, and EJBRemote, so "EJB 2.1 as is" represented the path of least resistance. One big complaint about the EJB environment: programmers still pretend the EJB server runs in a separate JVM from the servlet engine. No app server (JBoss/WebLogic) has ever enforced this separation, and the team has never deployed the EJB server on a separate box from the app server. The EAR file contains multiple copies of the same JAR: one for the web layer (foo.war/WEB-INF/lib) and one for the server side (foo.ear/). The app server only loads one JAR, and the duplication makes for ambiguity.

    Caching
    As for caching, they use several cache implementations: OpenSymphony cache and a home-grown cache. JGroups provides clustering support.

    Now What?
    The question: the team currently has spare cycles to invest in modernizing the application. Where would the smart investor spend them? The main criteria: 1) productivity gains, specifically reducing the time to develop new features and reducing maintenance; 2) performance/scalability. They do not care about fashion or techno-du-jour street cred. What do you all recommend? On the persistence side: switch everything (or new development only) to JPA/JPA 2? Straight Hibernate? Wait for Java EE 6? On the client/web-framework side: migrate (some or all) to Struts 2? Wicket? JSF/JSF 2? As for caching: Terracotta? Ehcache? Coherence? Stick with what they have? And how best to take advantage of the huge heap sizes that 64-bit JVMs offer? Thanks in advance.
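
    For reference on the persistence options mentioned above: the kind of annotated mapping with which JPA would replace the hand-written DAO/JDBC layer; a minimal sketch (entity and table names illustrative):

        import javax.persistence.*;

        @Entity
        @Table(name = "ACCOUNT")
        public class Account {
            @Id @GeneratedValue
            private Long id;

            @Column(name = "EMAIL")
            private String email;

            // getters/setters omitted
        }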

    Read the article

  • MongoDB and datasets that don't fit in RAM no matter how hard you shove

    - by sysadmin1138
    This is very system-dependent, but chances are near certain we'll scale past some arbitrary cliff and get into Real Trouble. I'm curious what kind of rules of thumb exist for a good RAM to disk-space ratio. We're planning our next round of systems, and need to make some choices regarding RAM, SSDs, and how much of each the new nodes will get. But now for some performance details:

    During the normal workflow of a single project-run, MongoDB is hit with a very high percentage of writes (70-80%). Once the second stage of the processing pipeline hits, reads are extremely high, as it needs to deduplicate records identified in the first half of processing. This is the workflow for which "keep your working set in RAM" is made, and we're designing around that assumption.

    The entire dataset is continually hit with random queries from end-user-derived sources; though the frequency is irregular, the size is usually pretty small (groups of 10 documents). Since this is user-facing, the replies need to be under the "bored-now" threshold of 3 seconds. This access pattern is much less likely to be in cache, so it will be very likely to incur disk hits.

    A secondary processing workflow is high read of previous processing runs that may be days, weeks, or even months old; it runs infrequently, but still needs to be zippy. Up to 100% of the documents in the previous processing run will be accessed. No amount of cache-warming can help with this, I suspect. Finished document sizes vary widely, but the median size is about 8K.

    The high-read portion of the normal project processing strongly suggests the use of replicas to help distribute the read traffic. I have read elsewhere that a 1:10 RAM-GB to HD-GB ratio is a good rule of thumb for slow disks; as we are seriously considering much faster SSDs, I'd like to know if there is a similar rule of thumb for fast disks. I know we're using Mongo in a way where cache-everything really isn't going to fly, which is why I'm looking at ways to engineer a system that can survive such usage. The entire dataset will likely be most of a TB within half a year, and will keep growing.
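
    To put numbers on the working-set sizing discussed above, one quick check from the mongo shell is to compare the data-plus-index footprint against a node's RAM; a small sketch (the scale argument reports sizes in MB):

        // dataSize + indexSize vs. available RAM is the figure of interest.
        db.stats(1024 * 1024)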

    Read the article

  • Rails: (Devise) Two different methods for new users?

    - by neezer
    I have a Rails 3 app with authentication setup using Devise with the registerable module enabled. I want to have new users who sign up using our outside register form to use the full Devise registerable module, which is happening now. However, I also want the admin user to be able to create new users directly, bypassing (I think) Devise's registerable module. With registerable disabled, my standard UsersController works as I want it to for the admin user, just like any other Rail scaffold. However, now new users can't register on their own. With registerable enabled, my standard UsersController is never called for the new user action (calling Devise::RegistrationsController instead), and my CRUD actions don't seem to work at all (I get dumped back onto my root page with no new user created and no flash message). Here's the log from the request: Started POST "/users" for 127.0.0.1 at 2010-12-20 11:49:31 -0500 Processing by Devise::RegistrationsController#create as HTML Parameters: {"utf8"=>"?", "authenticity_token"=>"18697r4syNNWHfMTkDCwcDYphjos+68rPFsaYKVjo8Y=", "user"=>{"email"=>"[email protected]", "password"=>"[FILTERED]", "password_confirmation"=>"[FILTERED]", "role"=>"manager"}, "commit"=>"Create User"} SQL (0.9ms) ... User Load (0.6ms) SELECT "users".* FROM "users" WHERE ("users"."id" = 2) LIMIT 1 SQL (0.9ms) ... Redirected to http://test-app.local/ Completed 302 Found in 192ms ... but I am able to register new users through the outside form. How can I get both of these methods to work together, such that my admin user can manually create new users and guest users can register on their own? I have my Users controller setup for standard CRUD: class UsersController < ApplicationController load_and_authorize_resource def index @users = User.where("id NOT IN (?)", current_user.id) # don't display the current user in the users list; go to account management to edit current user details end def new @user = User.new end def create @user = User.new(params[:user]) if @user.save flash[:notice] = "#{ @user.email } created." redirect_to users_path else render :action => 'new' end end def edit end def update params[:user].delete(:password) if params[:user][:password].blank? params[:user].delete(:password_confirmation) if params[:user][:password].blank? and params[:user][:password_confirmation].blank? if @user.update_attributes(params[:user]) flash[:notice] = "Successfully updated User." redirect_to users_path else render :action => 'edit' end end def delete end def destroy redirect_to users_path and return if params[:cancel] if @user.destroy flash[:notice] = "#{ @user.email } deleted." redirect_to users_path end end end And my routes setup as follows: TestApp::Application.routes.draw do devise_for :users devise_scope :user do get "/login", :to => "devise/sessions#new", :as => :new_user_session get "/logout", :to => "devise/sessions#destroy", :as => :destroy_user_session end resources :users do get :delete, :on => :member end authenticate :user do root :to => "application#index" end root :to => "devise/session#new" end

    Read the article

  • TCP: 30 small packets per second flood the connection with the server

    - by Denis Ermolin
    I'm testing the connection between a Flash client and a cloud server (boost::asio for the server software) over TCP. My connection to the server is already really poor: 120 ms ping on average. I found that when I start to send packets of 2 bytes (not counting the TCP header) at 30 packets/s, the ping grows to 170-200 on average. I think that is really bad, and that my poor connection and cloud provider are the reason for this high ping under no real load. What do you think? (I have tested my software; it can handle about 50k small packets/s, so the software is not the problem.) I measure the ping through the Flash client: it sends a packet with a timestamp, and the server immediately sends it back to the client.

    Read the article

  • How do SSD drives reduce their latency?

    - by tigrou
    The first time I read about SSDs, I was surprised to learn that they internally use NAND flash chips. This kind of memory is generally slow (low bandwidth) and has high latency, while SSDs are just the opposite. But here is how it works: SSD drives increase their bandwidth by using several NAND flash chips in parallel. In other words, they do data striping (a la RAID 0) across several chips, done by the controller. What I don't understand is how SSD drives manage to reduce latency (or at least do a lot better than a single NAND chip without any controller can).

    Read the article

  • desktop module for existing web application

    - by maxxxee
    My client has a web application which has been online for more than a year. Recently the client introduced smart cards for his employees. Because of the difficulty of integrating the smart card and its API into a web interface (I will post another, more detailed question on this later), we are planning a desktop interface for this. There are 10-20 terminals which will use the desktop interface. Three approaches I have considered:

    1. Direct connection to, and operations on, the DB. Not using this because of data-integrity and consistency issues.
    2. Build web-service endpoints and use them from the desktop interface.
    3. Build a DLL with the common functions and use it from both web and desktop.

    Questions: 1. What are your opinions of the second and third approaches? 2. Is there any other approach I should consider? Note: I am using the .NET Framework; the web application is in ASP.NET.

    Read the article

  • Best USB storage for my router, the Asus RT-AC66U?

    - by Jason94
    I have the ASUS RT-AC66U and I want to add USB storage to it. It has 2x USB, and I'm already using one for my printer, so I want to use the other to attach USB storage. I've read some reviews stating that throughput over the USB port can be up to 18 MB/s. So, with regard to USB storage: should I care about the hard disk cache? Simple USB-powered disks seem to have 8 MB of cache; others (externally powered) have 16 MB, for instance.

    Read the article
