Search Results

Search found 16787 results on 672 pages for 'mod disk cache'.


  • Configure fallback redis server

    - by snøreven
    I am using redis as a cache server. Can I somehow configure multiple redis servers so that the cache stays fully functional (read/write) even if some of them go offline? I looked into master-slave, but the problem I see there is that if the master fails and I allow writes to the slaves, they get overwritten once the master is up again; the master then just serves the old data. The only solution I could come up with was disabling write-to-disk, but that sucks, as I lose everything if I have to restart the master. And I guess the slaves wouldn't be synced anymore if the master is gone. (A client-side fallback sketch follows this entry.)

    Read the article
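    One way to sketch the read side of this (not from the original question; a minimal illustration assuming the Python redis-py client and placeholder hostnames) is to let the application fall back to the next cache server when one is unreachable. For the write/failover side, Redis Sentinel is the usual answer: it promotes a replica when the master fails and reconfigures the old master as a replica when it returns, which avoids the "old master overwrites the new data" problem described above.

      # Minimal sketch: client-side read fallback across several Redis servers.
      # Hostnames, ports and the timeout are illustrative placeholders.
      import redis

      SERVERS = [("cache1.example.com", 6379), ("cache2.example.com", 6379)]

      def get_with_fallback(key):
          """Try each configured Redis server in turn until one answers."""
          for host, port in SERVERS:
              try:
                  client = redis.Redis(host=host, port=port, socket_timeout=0.5)
                  return client.get(key)
              except redis.ConnectionError:
                  continue  # this server is down, try the next one
          return None  # every server was unreachable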

  • CMD Prompt > FTP Size Command

    - by Samuel Baldus
    I'm developing a PHP app on IIS 7.5 which uses PHP FTP commands. These all work, apart from ftp_size(). I've tested from the command line: cmd.exe > ftp host > username > password > SIZE filename = "Invalid command". However, if I access the FTP site through a web browser, the file size is displayed. Do I need to install FTP extensions, and if so, which ones and where do I get them? Here is the PHP code:

      <?php
      // FTP credentials
      $ftpServer = "www.domain.com";
      $ftpUser   = "username";
      $ftpPass   = "password";

      // Unlimited time
      set_time_limit(0);

      // Connect to FTP server
      $conn = @ftp_connect($ftpServer) or die("Couldn't connect to FTP server");

      // Login to FTP site
      $login = @ftp_login($conn, $ftpUser, $ftpPass) or die("Login credentials were rejected");

      // Set FTP passive mode = true
      ftp_pasv($conn, true);

      // Build the file list
      $ftp_nlist = ftp_nlist($conn, ".");

      // Alphabetical sorting
      sort($ftp_nlist);

      // Display output
      foreach ($ftp_nlist as $raw_file) {
          // Get the last modified time
          $mod = ftp_mdtm($conn, $raw_file);
          // Get the file size (-1 for directories or on failure)
          $size = ftp_size($conn, $raw_file);
          // Size is not -1 => regular file
          if ($size != -1) {
              // Output as file
              echo "Filename: $raw_file<br />";
              echo "FileSize: " . number_format($size) . " bytes<br />";
              echo "Last Modified: " . date("d/m/Y H:i", $mod) . "<br />";
          }
      }
      ?>

    (A short illustration of the SIZE quirk follows this entry.)

    Read the article
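    A note on the "Invalid command" above: that message most likely comes from the Windows command-line ftp client itself, which does not implement SIZE as a built-in command (quote SIZE filename sends it raw), so it says little about the server. The other common gotcha is that many FTP servers only honour SIZE in binary (image) mode. A minimal sketch of that check, in Python rather than the poster's PHP, with placeholder host and credentials:

      # Ask an FTP server for a file size, switching to binary mode first,
      # since SIZE is often refused while the transfer type is ASCII.
      from ftplib import FTP

      ftp = FTP("www.example.com")
      ftp.login("username", "password")
      ftp.voidcmd("TYPE I")              # binary mode
      print(ftp.size("somefile.pdf"))    # sends SIZE, returns the size in bytes
      ftp.quit()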

  • Javascript CS-PRNG - 64-bit random

    - by Jack
    Hi, I need to generate a cryptographically secure 64-bit unsigned random integer in JavaScript. The first problem is that JavaScript only allows 64-bit signed integers, so 9223372036854775808 is the biggest supported integer without going into floating point use, I think? To fix this I can use a big number library, no problem. My method:

      var randNum = SHA256( randBigInt(128, 0) ) % 2^64;

    where SHA256() is a secure hash function and randBigInt() is defined below as a non-crypto PRNG; I'm giving it a 128-bit seed so brute force shouldn't be a problem.

      randBigInt(n,s) // return an n-bit random BigInt (n >= 1). If s=1, the most significant of those n bits is set to 1.

    Is this a secure method to generate a cryptographically secure 64-bit random int? And importantly, does taking the result mod 2^64 guarantee 100% that I have a 64-bit number? An abstract example: say this number is prime (it isn't, I know); I will use it in the Galois field GF(2^p), where p must be 64 bits so that every possible 1-63 bit number is a field element. In that case my random int must be larger than any 63-bit number. And I'm not sure I'm correct in taking the mod 2^64 of a 256-bit hash output. Thanks (hope that makes sense). (A sketch of the modulo argument follows below.)

    Read the article
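    On the modulo question above: if the 256-bit hash output is uniform, reducing it mod 2^64 yields a value in the range 0 to 2^64-1 (so it always fits in 64 bits) and introduces no modulo bias, because 2^64 divides 2^256 exactly. It does not, however, guarantee the value is larger than every 63-bit number; roughly half of the results will have the top bit clear. A sketch of both routes in Python (not the poster's JavaScript; in a browser, crypto.getRandomValues on a typed array is the standard CSPRNG):

      # Draw a 64-bit value either by hashing a random seed and reducing
      # mod 2**64, or by asking the CSPRNG for 64 bits directly.
      import hashlib
      import secrets

      seed = secrets.token_bytes(16)                  # 128-bit seed, as in the question
      digest = hashlib.sha256(seed).digest()          # 256-bit hash output
      value = int.from_bytes(digest, "big") % 2**64   # always 0 <= value < 2**64
      direct = secrets.randbits(64)                   # simpler: 64 random bits directly

      print(value, direct)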

  • error while installing binutils in LFS

    - by user53347
    lfs:/mnt/lfs/sources/binutils-build$ ../binutils-2.15.94.0.2.2/configure \
        --target=$LFS_TGT --prefix=/tools \
        --disable-nls --disable-werror
    loading cache ./config.cache
    checking host system type... i686-pc-linux-gnuoldld
    checking target system type... i686-lfs-linux-gnu
    checking build system type... i686-pc-linux-gnuoldld
    checking for a BSD compatible install... /usr/bin/install -c
    checking whether ln works... yes
    checking whether ln -s works... yes
    checking for gcc... no
    checking for cc... no
    configure: error: no acceptable cc found in $PATH

    Read the article

  • How to make google Chrome omnibar search sites permanent?

    - by tim
    In Google Chrome, when you have visited a site, say wikipedia.com, you can thereafter search the site directly from the omnibar by typing in a few letters and then hitting Tab. However, I noticed that after I clear my cache, Chrome does not remember the search autofills, and I once again have to visit the site manually for it to take effect. Is there a way to make the omnibar searches permanent even after doing a full cache clear in Google Chrome? Thanks. Update: I tried the suggestion below, but bookmarking the site just allowed me to autocomplete the address. It did not give me the option to hit Tab to do the search directly in the omnibar. Any suggestions?

    Read the article

  • Are there any libraries for parsing "number expressions" like 1,2-9,33- in Java

    - by mihi
    Hi, I don't think it is hard, just tedious to write: some small free (as in beer) library where I can put in a string like 1,2-9,33- and it can tell me whether a given number matches that expression, just like most programs have in their print range dialogs. Special functions for matching odd or even numbers only, or matching every number that is 2 mod 5 (or something like that), would be nice, but are not needed. The only operation I have to perform on this list is checking whether the range contains a given (nonnegative) integer value; more operations like max/min value (if they exist) or an iterator would be nice, of course. It is important that it does not occupy lots of RAM if anyone enters 1-10000000 but the only number I will ever query is 12345 :-) (To implement it, I would parse the list into several (min/max/value/mod) tuples, like 1,10,0,1 for 1-10, or 11,33,1,2 for 1-33 odd, or 12,62,2,10 for 12-62/10 (i.e. 12, 22, 32, ..., 62), and then check each number against all the intervals, handling open intervals by using Integer.MaxValue etc. If there are no libs, any ideas to do it better/more efficiently?) (A parsing sketch follows this entry.)

    Read the article
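    The question asks for an existing Java library, but the parsing itself is small. A hedged sketch of the idea in Python (an illustration, not a library recommendation): keep each comma-separated part as a (low, high) pair, with open ends mapped to 0 and infinity, and test membership lazily so that "1-10000000" costs no memory.

      # Match expressions like "1,2-9,33-" against a nonnegative integer.
      def parse_ranges(expr):
          ranges = []
          for part in expr.split(","):
              if "-" in part:
                  lo, _, hi = part.partition("-")
                  ranges.append((int(lo) if lo else 0,
                                 int(hi) if hi else float("inf")))
              else:
                  ranges.append((int(part), int(part)))
          return ranges

      def matches(expr, n):
          return any(lo <= n <= hi for lo, hi in parse_ranges(expr))

      print(matches("1,2-9,33-", 12345))  # True, since 12345 >= 33
      print(matches("1,2-9,33-", 20))     # False

    Odd/even or "2 mod 5" filters would extend the tuples with a step and offset, much like the (min/max/value/mod) encoding sketched in the question.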

  • APC serving old code intermittently running with Lighttpd and PHP Fast CGI

    - by APZ
    I recently started facing this problem: APC serves old code when we upload an HTML template file to fix or change something on our websites. We run APC with apc.stat=0 and want to keep it that way, because we seldom make changes to templates. Every time we upload a template we make sure to flush APC, and we execute this script (only part of the script is shown here) to clear the cache:

      apc_clear_cache();
      apc_clear_cache('user');
      apc_clear_cache('system');
      apc_clear_cache('opcode');

    We use lighttpd and PHP FastCGI, and FastCGI has "max-procs" = 2, "PHP_FCGI_CHILDREN" = "5". Even after flushing APC once the upload is complete, it serves the old template intermittently. Any help would be appreciated.

    Read the article

  • How best to modernize the 2002-era J2EE app?

    - by user331465
    I have this friend.... I have this friend who works on a Java EE (J2EE) application started in the early 2000s. Currently they add a feature here and there, but have a large codebase. Over the years the team has shrunk by 70%. [Yes, the "I have this friend" is me, attempting to humorously inject teenage high-school-counselor shame into the mix.]

    Java, vintage 2002: The application uses EJB 2.1, Struts 1.x, and DAOs with straight JDBC calls (a mixture of stored procedures and prepared statements). No ORM. For caching they use a mixture of OpenSymphony OSCache and a home-grown cache layer. Over the last few years they have spent effort modernizing the UI using ajax techniques and libraries, largely JavaScript libraries (jQuery, YUI, etc.).

    Client side: The lack of an upgrade path from Struts 1 to Struts 2 discouraged them from migrating to Struts 2. Other web frameworks became popular (Wicket, Spring, JSF), and Struts 2 was not the clear winner. Migrating all the existing UI from Struts 1 to Struts 2/Wicket/etc. did not seem to offer much marginal benefit at a very high cost, and they did not want a patchwork of technologies-du-jour (subsystem X in Struts 2, subsystem Y in Wicket, etc.), so developers write new features using Struts 1.

    Server side: They looked into moving to EJB 3, but never had a big impetus. The developers are all comfortable with ejb-jar.xml, EJBHome and EJBRemote, so "EJB 2.1 as is" represented the path of least resistance. One big complaint about the EJB environment: programmers still pretend the EJB server runs in a separate JVM from the servlet engine. No app server (JBoss/WebLogic) has ever enforced this separation, and the team has never deployed the EJB server on a separate box from the app server. The ear file contains multiple copies of the same jar file, one for the web layer (foo.war/WEB-INF/lib) and one for the server side (foo.ear/); the app server only loads one jar, and the duplication makes for ambiguity.

    Caching: They use several cache implementations, OpenSymphony OSCache and a homegrown cache. JGroups provides clustering support.

    Now what? The question: the team currently has spare cycles to invest in modernizing the application. Where would the smart investor spend them? The main criteria: 1) productivity gains, specifically reducing the time to develop new subsystems and features, and reduced maintenance; 2) performance/scalability. They do not care about fashion or techno-du-jour street cred. What do you all recommend? On the persistence side: switch everything (or new development only) to JPA/JPA 2? Straight Hibernate? Wait for Java EE 6? On the client/web-framework side: migrate (some or all) to Struts 2? Wicket? JSF/JSF 2? As for caching: Terracotta? Ehcache? Coherence? Stick with what they have? And how best to take advantage of the huge heap sizes that 64-bit JVMs offer? Thanks in advance.

    Read the article

  • Am I using too much memory? (Rails on EC2 with Resque)

    - by Stpn
    I am looking at the memory usage of a Rails application (it uses background processes via Resque), and since the common answer to the question "how many workers is too many?" was "test and see", I ran some memory commands and wonder if someone can help figure out whether the memory usage is already too high, or whether I can still add some extra workers. So (this is all under maximum load):

      $ free -t -m
                   total       used       free     shared    buffers     cached
      Mem:          1756       1532        223          0         12        229
      -/+ buffers/cache:       1291        464
      Swap:          895         10        885
      Total:        2652       1543       1108

      $ vmstat
      procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
       r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
       0  0  10588 156172  13400 326476    1    6     4     0    5    4  1  0 99  0

    If there is any extra info I can provide to help answer this, I would be happy to do so. If the question is strange in some way, please let me know and I'd be glad to fix it.

    Read the article

  • Lots of files being used by blank web page. What are they?

    - by byronyasgur
    I am trying to optimise a website and I was using the network waterfall facility in Google Chrome. When I looked at the results there were lots of files which I didn't recognise. I first thought they might be something to do with Google Chrome itself, so I put a blank HTML file on my desktop and checked, but there was nothing in the waterfall except the file itself. So I put a blank file on my server and I got the output below. What are all these files, are they all necessary, is this normal, and do I need to be in any way concerned? My hosting provider has always been excellent in every regard that I'm aware of. My host is shared hosting, using cPanel, and is based on a LAMP server. I also note that a couple of those files have problems, but I have no idea how to fault-find that or whether it's a concern. EDIT: I have cleared the cache, so I don't think it's a browser cache issue.

    Read the article

  • Ubuntu Linux main utilities

    - by Harry
    I've never used Ubuntu Linux before, but I am researching the main system tools that are included; e.g. Windows has Disk Cleanup, Disk Defragmenter, and so on, but what does Ubuntu Linux have? I need to know the main five utilities included with Ubuntu Linux and what they do.

    Read the article

  • Approximate timings for various operations on a "typical desktop PC" anno 2010

    - by knorv
    In the article "Teach Yourself Programming in Ten Years" Peter Norvig (Director of Research, Google) gives the following approximate timings for various operations on a typical 1GHz PC back in 2001: execute single instruction = 1 nanosec = (1/1,000,000,000) sec fetch word from L1 cache memory = 2 nanosec fetch word from main memory = 10 nanosec fetch word from consecutive disk location = 200 nanosec fetch word from new disk location (seek) = 8,000,000 nanosec = 8 millisec What would the corresponding timings be for your definition of a typical PC desktop anno 2010?

    Read the article

  • Processing files from a Content Distribution Network problem

    - by Derek
    From what I understand, CDNs are meant to physically cache your static files in multiple regions closer to your users. However, I've noticed a few websites where, when a page is requested from their server, they grab the asset files from their CDN, process them (compress, minify, etc.), cache the results on their server and then send them to the user requesting the page. This doesn't make too much sense to me. Wouldn't processing the files on your server eliminate the gains from using a CDN? Is this a normal way of doing things, or am I not understanding the whole asset management concept?

    Read the article

  • MySQL help, counting information on last records

    - by ee12csvt
    I need some advice. I have two tables: one holds unique serial numbers of items (Item) and the other holds status changes and other information for these items (details). The tables are set up as follows:

      Item:     itemID, itemName, itemDate
      details:  detID, itemID, modlvl, status, detDate

    All items have at least one record in the details table, but over time the status or the modification level has changed (both of these are identified by numbers held in other appropriate tables), and a new record is created each time the status/modlvl changes. I want to display a table on my webpage using PHP that identifies the different mod levels of the items and shows a count of the current status of the items at each level. (A sketch of the counting logic follows this entry.)

    EDIT: Hi Ronnis, this is an example of the data in the tables and what I want to achieve. The current mod levels range from 1 to 3. Status representations are: 1 In Use, 2 In Store, 3 Being Repaired, 4 In Transit, 5 For Disposal, 6 Disposed, 7 Lost.

    Item:

      itemID  OrigMod  created
      1000    1        2009-10-01 22:12:12
      1001    1        2009-10-01 22:12:12
      1002    1        2009-10-01 22:12:12
      1003    1        2009-10-01 22:12:12
      1004    1        2009-10-01 22:12:12
      1005    1        2009-10-01 22:12:12
      1006    1        2009-10-01 22:12:12
      1007    1        2009-10-01 22:12:12
      1008    1        2009-10-01 22:12:12
      1009    1        2009-10-01 22:12:12
      1010    1        2009-10-01 22:12:12

    Details:

      detID  itemID  modlvl  detDate     status
      1      1000    1       2009-10-01  1
      2      1001    1       2009-10-01  1
      3      1002    1       2009-10-01  1
      4      1003    1       2009-10-01  1
      5      1004    1       2009-10-01  1
      6      1005    1       2009-10-01  1
      7      1006    1       2009-10-01  1
      8      1007    1       2009-10-01  1
      9      1008    1       2009-10-01  1
      10     1009    1       2009-10-01  1
      11     1010    1       2009-10-01  1
      12     1001    1       2010-02-01  2
      13     1001    1       2010-02-03  4
      14     1001    1       2010-03-01  3
      15     1000    1       2010-03-14  2
      16     1001    2       2010-04-01  4
      17     1006    1       2010-04-01  2
      18     1001    2       2010-04-03  2
      19     1006    1       2010-04-14  4
      20     1006    1       2010-05-01  5
      21     1002    1       2010-05-02  2
      22     1003    1       2010-05-10  2
      23     1010    1       2010-06-01  2
      24     1006    1       2010-06-18  6
      25     1010    1       2010-07-01  7
      26     1007    1       2010-07-02  2
      27     1007    1       2010-07-04  4
      28     1003    1       2010-07-10  2
      29     1007    1       2010-07-11  3
      30     1007    2       2010-07-12  4
      31     1007    2       2010-07-15  2
      32     1001    2       2010-08-31  1
      33     1001    2       2010-09-10  2
      34     1001    2       2010-10-01  4
      35     1008    1       2010-10-01  2
      36     1001    2       2010-10-05  3
      37     1008    1       2010-10-05  4
      38     1008    1       2010-10-10  3
      39     1001    3       2010-10-20  4
      40     1001    3       2010-10-25  2

    Using the tables above, I want to get this result:

      ModLvl  Use  Store  Repd  Transit  Displ  Dispd  Lost  Total
      1       3    3      1     0        0      1      1     9
      2       0    1      0     0        0      0      0     1
      3       0    1      0     0        0      0      0     1
      Total   3    5      1     0        0      1      1     11

    Read the article
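    The asker wants this as SQL/PHP; the following is only a hedged sketch of the aggregation logic in Python, with a placeholder list standing in for the details rows shown above: take each item's most recent detail row, then count items per (modlvl, status). That reproduces the desired pivot, which can then be rendered as the web table.

      from collections import defaultdict

      # (detID, itemID, modlvl, detDate, status) -- fill in the rows from the question
      details = [
          (15, 1000, 1, "2010-03-14", 2),
          (40, 1001, 3, "2010-10-25", 2),
          # ...
      ]

      latest = {}                      # itemID -> that item's most recent detail row
      for row in details:
          item = row[1]
          if item not in latest or (row[3], row[0]) > (latest[item][3], latest[item][0]):
              latest[item] = row

      counts = defaultdict(int)        # (modlvl, status) -> number of items
      for det_id, item, modlvl, det_date, status in latest.values():
          counts[(modlvl, status)] += 1

      for (modlvl, status), n in sorted(counts.items()):
          print(modlvl, status, n)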

  • MongoDB and datasets that don't fit in RAM no matter how hard you shove

    - by sysadmin1138
    This is very system dependent, but chances are near certain we'll scale past some arbitrary cliff and get into Real Trouble. I'm curious what kind of rules-of-thumb exist for a good RAM to disk-space ratio. We're planning our next round of systems, and need to make some choices regarding RAM, SSDs, and how much of each the new nodes will get.

    But now for some performance details! During the normal workflow of a single project-run, MongoDB is hit with a very high percentage of writes (70-80%). Once the second stage of the processing pipeline hits, it's extremely high read, as it needs to deduplicate records identified in the first half of processing. This is the workflow "keep your working set in RAM" is made for, and we're designing around that assumption.

    The entire dataset is continually hit with random queries from end-user derived sources; though the frequency is irregular, the size is usually pretty small (groups of 10 documents). Since this is user-facing, the replies need to be under the "bored-now" threshold of 3 seconds. This access pattern is much less likely to be in cache, so it will very likely incur disk hits.

    A secondary processing workflow is high read of previous processing runs that may be days, weeks, or even months old; it is run infrequently but still needs to be zippy. Up to 100% of the documents in the previous processing run will be accessed. No amount of cache-warming can help with this, I suspect. Finished document sizes vary widely, but the median size is about 8K.

    The high-read portion of the normal project processing strongly suggests the use of replicas to help distribute the read traffic. I have read elsewhere that a 1:10 RAM-GB to HD-GB ratio is a good rule-of-thumb for slow disks. As we are seriously considering using much faster SSDs, I'd like to know if there is a similar rule of thumb for fast disks.

    I know we're using Mongo in a way where cache-everything really isn't going to fly, which is why I'm looking at ways to engineer a system that can survive such usage. The entire dataset will likely be most of a TB within half a year and keep growing.

    Read the article

  • Saving a stream to file

    - by Loadman
    Hi, I have a StreamReader object that I initialized with a stream, and now I want to save this stream to disk (the stream may be a gif or jpg or pdf etc.). So my code so far is:

      StreamReader sr = new StreamReader(myOtherObject.InputStream);

    I need to save this to disk (I have the filename). In the future I may want to store this in SQL Server. I also have the encoding type, which I will need if I store it in SQL Server, right? (A byte-copy sketch follows this entry.)

    Read the article
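    The question is about .NET, where the usual advice is to copy the underlying Stream straight to a FileStream (Stream.CopyTo in .NET 4 and later) rather than reading binary content through a StreamReader, since StreamReader applies a text encoding and will corrupt gif/jpg/pdf data; the encoding only matters once the bytes are interpreted as text. As a language-neutral illustration, here is a minimal Python sketch of the same byte-copy idea, with a placeholder source stream:

      # Copy a binary stream to disk chunk by chunk, with no text decoding.
      import shutil

      def save_stream(source, filename):
          with open(filename, "wb") as out:
              shutil.copyfileobj(source, out)

      # usage sketch:
      # with urllib.request.urlopen(url) as resp:
      #     save_stream(resp, "download.pdf")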

  • Debug unstable Apache server under Debian

    - by almo
    Since yesterday the Apache server that runs on my Debian machine has been very unstable. Sometimes my websites load and sometimes they don't. I think it has to do with memory, since my Apache log is full of: Out of memory (allocated 262144) (tried to allocate 4480 bytes). I also attached a screenshot of the memory graph. A server restart resolves the problem temporarily. I looked at the processes that are using memory, but the biggest one is MySQL with 6.5%. Where else can I look for the problem? Edit: I did a free -m right after rebooting and another about 2 hours later. I think the trend is visible:

      root@xxx:~# free -m
                   total       used       free     shared    buffers     cached
      Mem:          4016        731       3284          0         80        200
      -/+ buffers/cache:        449       3566
      Swap:          459          0        459

      root@xxx:~# free -m
                   total       used       free     shared    buffers     cached
      Mem:          4016       2466       1550          0         92        473
      -/+ buffers/cache:       1900       2115
      Swap:          459          0        459

    Read the article

  • free -m output, should I be concerned about this server's low memory?

    - by Michael
    This is the output of free -m on a production database (MySQL) machine. 83 MB free looks pretty bad, but I assume the buffers/cache will be used instead of swap? (The buffers/cache arithmetic is worked through below.)

      [admin@db1 www]$ free -m
                   total       used       free     shared    buffers     cached
      Mem:         16053      15970         83          0        122       5343
      -/+ buffers/cache:      10504       5549
      Swap:         2047          0       2047

    top output sorted by memory:

      top - 10:51:35 up 140 days, 7:58, 1 user, load average: 2.01, 1.47, 1.23
      Tasks: 129 total, 1 running, 128 sleeping, 0 stopped, 0 zombie
      Cpu(s): 6.5%us, 1.2%sy, 0.0%ni, 60.2%id, 31.5%wa, 0.2%hi, 0.5%si, 0.0%st
      Mem: 16439060k total, 16353940k used, 85120k free, 122056k buffers
      Swap: 2096472k total, 104k used, 2096368k free, 5461160k cached

        PID USER   PR  NI  VIRT   RES  SHR S %CPU %MEM    TIME+ COMMAND
      20757 mysql  15   0 10.2g  9.7g 5440 S 29.0 61.6 28588:24 mysqld
      16610 root   15   0  184m   18m 4340 S  0.0  0.1  0:32.89 sysshepd
       9394 root   15   0  154m  8336 4244 S  0.0  0.1  0:12.20 snmpd
      17481 ntp    15   0 23416  5044 3916 S  0.0  0.0  0:02.32 ntpd
       2000 root    5 -10 12652  4464 3184 S  0.0  0.0  0:00.00 iscsid
       8768 root   15   0 90164  3376 2644 S  0.0  0.0  0:00.01 sshd

    Read the article
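    A quick worked check of why 83 MB of "free" is not alarming on its own: buffers and page cache are reclaimable, so the "-/+ buffers/cache" line is the one to read. A small sketch of the arithmetic using the numbers above:

      # Memory actually committed to applications vs. memory the kernel can
      # hand back on demand (figures in MB, from the free -m output above).
      used, free = 15970, 83
      buffers, cached = 122, 5343

      app_used = used - buffers - cached   # about 10505, matching the 10504 shown
      app_free = free + buffers + cached   # about 5548, matching the 5549 shown

      print(app_used, app_free)

    With roughly 5.5 GB reclaimable and swap untouched, the box does not look starved; the mysqld resident size (9.7 GB, 61.6%) is the number to keep watching.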

  • Round date to 10 minutes interval

    - by Peter Lang
    I have a DATE column that I want to round down to the next-lower 10-minute interval in a query (see example below). I managed to do it by truncating the seconds and then subtracting the last digit of the minutes. (A non-SQL check of the arithmetic follows this entry.)

      WITH test_data AS (
        SELECT TO_DATE('2010-01-01 10:00:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
        SELECT TO_DATE('2010-01-01 10:05:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
        SELECT TO_DATE('2010-01-01 10:09:59', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
        SELECT TO_DATE('2010-01-01 10:10:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
        SELECT TO_DATE('2099-01-01 10:00:33', 'YYYY-MM-DD HH24:MI:SS') d FROM dual
      )
      -- #end of test-data
      SELECT d, TRUNC(d, 'MI') - MOD(TO_CHAR(d, 'MI'), 10) / (24 * 60)
      FROM test_data

    And here is the result:

      01.01.2010 10:00:00    01.01.2010 10:00:00
      01.01.2010 10:05:00    01.01.2010 10:00:00
      01.01.2010 10:09:59    01.01.2010 10:00:00
      01.01.2010 10:10:00    01.01.2010 10:10:00
      01.01.2099 10:00:33    01.01.2099 10:00:00

    Works as expected, but is there a better way? EDIT: I was curious about performance, so I did the following test with 500,000 rows and (not really) random dates. I am going to add the results as comments to the provided solutions.

      DECLARE
        t TIMESTAMP := SYSTIMESTAMP;
      BEGIN
        FOR i IN (
          WITH test_data AS (
            SELECT SYSDATE + ROWNUM / 5000 d FROM dual CONNECT BY ROWNUM <= 500000
          )
          SELECT TRUNC(d, 'MI') - MOD(TO_CHAR(d, 'MI'), 10) / (24 * 60)
          FROM test_data
        ) LOOP
          NULL;
        END LOOP;
        dbms_output.put_line( SYSTIMESTAMP - t );
      END;

    This approach took 03.24 s.

    Read the article
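    The same rounding can be sanity-checked outside SQL; the sketch below only illustrates the arithmetic (truncate the seconds, then drop minutes modulo 10), it is not an Oracle answer. One alternative sometimes suggested for Oracle is TRUNC(d) + FLOOR((d - TRUNC(d)) * 144) / 144, i.e. flooring to one of the 144 ten-minute blocks in a day, though it is not benchmarked here.

      # Floor a timestamp to the next-lower 10-minute boundary.
      from datetime import datetime, timedelta

      def floor_to_10_minutes(d):
          return d.replace(second=0, microsecond=0) - timedelta(minutes=d.minute % 10)

      print(floor_to_10_minutes(datetime(2010, 1, 1, 10, 9, 59)))  # 2010-01-01 10:00:00
      print(floor_to_10_minutes(datetime(2010, 1, 1, 10, 10, 0)))  # 2010-01-01 10:10:00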

  • ASP Fails with 500 Error

    - by VinceM
    We have a server set up as an IIS box with some static pages and a few ASP pages that handle the form submissions. The ASP is really VBScript that sends a CDO message. When moving these pages to the new server, the form will not submit; it gives a 500 error and the following shows in Event Viewer: Error: The Template Persistent Cache initialization failed for Application Pool 'DefaultAppPool' because of the following error: Could not create a Disk Cache Sub-directory for the Application Pool. The data may have additional error codes. I can't seem to find any info on this anywhere... I was thinking it may have something to do with the fact that we created this server from an image of another server. Thanks for your help in advance... Vince

    Read the article

  • Best usb storage for my router, Asus RT-AC66U?

    - by Jason94
    I have the ASUS RT-AC66U and I want to add USB storage to it. It has 2x USB, and I'm already using one for my printer. So I want to use the last one to attach USB storage, and I've read some reviews stating the throughput over USB could be up to 18 MB/s. So in regard to USB storage, should I care about hard disk cache? Simple USB-powered drives seem to have 8 MB of cache; others (externally powered) have 16 MB, for instance.

    Read the article

  • VMware Workstation 7&8&9 does not generate /etc/vmware/network upon installation

    - by dash17291
    When I install VMware Workstation on Arch Linux, virtual ethernet is not working.

      $ sudo tail /var/log/vnetlib
      Aug 28 22:20:33 VNLFileExists - Cannot check for file or directory: /etc/vmware/networking , error: No such file or directory
      Aug 28 22:20:33 VNLNetCfgLoad - Import file does not exist
      Aug 28 22:20:33 VNL_Load - Error loading the vnet configuration, file used: /etc/vmware/networking
      Aug 28 22:20:33 VNLNetCfgUnload - Requested cache is not loaded
      Database file is not present. Failed to initialize
      Aug 28 22:20:41 VNLFileExists - Cannot check for file or directory: /etc/vmware/networking , error: No such file or directory
      Aug 28 22:20:41 VNLNetCfgLoad - Import file does not exist
      Aug 28 22:20:41 VNL_Load - Error loading the vnet configuration, file used: /etc/vmware/networking
      Aug 28 22:20:41 VNLNetCfgUnload - Requested cache is not loaded
      Required modules compiled.

    Previously I copied that file or directory (I don't remember which) from a working installation, but now I need a real solution. It's strange to me; it may even be a hardware issue, because with Ubuntu the same thing happens on the same computer.

    Read the article
