Search Results

Search found 8997 results on 360 pages for 'apt cache'.

Page 10/360 | < Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >

  • How do I handle 3rd party search result data (via cache)

    - by reikyoushin
    I have a search function on my site that pulls data from 6 different 3rd-party resources. The problem is that it takes too long to request the data over and over again on the results page. I've read answers to questions like this on SO saying that sessions are not a good choice, and 'memcache' is not an option for me because the server doesn't have memcached installed and I have no way to install it now. Is there any other approach? Storing the data in the database seems inappropriate because it depends on the search terms requested. What I've been thinking of is writing a file on the server that would act as a cache for these results, but I don't know how I would decide when to delete it afterwards.

    Read the article

  • apt-get : trying to resolve ipv6

    - by llazzaro
    Hello, my problem is that when I do an apt-get update it tries to resolve IPv6 addresses, and it fails to update/install/etc. How do I disable IPv6 for apt-get? A ping to www.google.com.ar resolves to an IPv4 address. This is Debian running in an OpenVZ container. Thanks!
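
    One commonly suggested approach is to force apt onto IPv4, a minimal sketch follows; note that the Acquire::ForceIPv4 option only exists in reasonably recent apt versions, and the sysctl fallback assumes the OpenVZ container kernel allows it.

        # Force apt to use IPv4 only (requires an apt version that supports Acquire::ForceIPv4)
        echo 'Acquire::ForceIPv4 "true";' | sudo tee /etc/apt/apt.conf.d/99force-ipv4

        # Alternative: disable IPv6 system-wide, if the container kernel permits it
        sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1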

    Read the article

  • How secure are third party Ubuntu (APT) repository mirrors

    - by bakytn
    Hello! We have a local Ubuntu mirror to save a lot of traffic (our external traffic is not free), so whenever I apt-get install "program" it comes from that repository. The question is: can they basically substitute any package with their own? Is it 100% at my own risk, so that I could easily be hacked on any apt-get upgrade, apt-get install or apt-get dist-upgrade? For example with very basic packages like "telnet" or any other.
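
    For context, apt checks the GPG signature on each repository's Release file against the locally trusted archive keys, so a mirror cannot silently swap package contents unless signature checking is disabled or a key the client trusts is compromised. A rough way to sanity-check this with standard apt/gpg tooling (the list filename below is a placeholder; the exact names depend on the mirror URL and release):

        # List the archive signing keys apt currently trusts
        apt-key list

        # Verify the mirror's Release file against those keys
        cd /var/lib/apt/lists
        gpg --keyring /etc/apt/trusted.gpg --verify MIRROR_dists_precise_Release.gpg MIRROR_dists_precise_Release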

    Read the article

  • add packages to squid-deb-proxy cache

    - by zpletan
    To save bandwidth and data on my Internet plan, I have installed squid-deb-proxy on a desktop, and the client on it and a few other machines I've got. However, based on the post that put me onto this, it sounds like if I take my laptop to a different network and update it there, the downloaded updates will NOT be automatically copied back to the squid-deb-proxy server when I get back on my network. Assuming that this is correct (I will be testing later), is there a way I can stick these packages into the cache so I don't have to download them again for the other machines on the network?
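
    Not verified against squid-deb-proxy specifically, but one way to warm the cache is to run a download-only apt fetch on a machine behind the proxy, so squid stores the .debs as it retrieves them. A sketch, assuming the default squid-deb-proxy port of 8000 and a placeholder proxy host:

        # Fetch the packages without installing them, so the proxy caches them for everyone else
        sudo apt-get -o Acquire::http::Proxy="http://proxy-host:8000" \
            --download-only --reinstall install some-package

        # The downloaded .debs also land in the local apt cache
        ls /var/cache/apt/archives/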

    Read the article

  • How can I download a package and all of its dependencies with apt-get

    - by Velislav Marinov
    I'm using Ubuntu 12.04 and I would like to use apt-get to download a package and all of its dependencies. Those packages will have to be installed on computers with no internet connection, so in addition to the base package I also need all of the package's dependencies. Is there an easy way to do this (like in the Muon package manager)? I know that I can use the apt-get download command for this, but I don't want to manually specify each package that Muon recommends installing or upgrading.
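
    One frequently suggested approach, as a sketch: let apt compute the dependency closure and download (not install) the result, then copy the .debs out of the archive cache. Note that apt only pulls dependencies that are not already installed on the machine running the command, so this works best on a box similar to the offline targets.

        # Download a package plus everything apt would install alongside it, without installing
        sudo apt-get clean
        sudo apt-get install --download-only some-package

        # The .debs (package + dependencies) are now ready to copy to the offline machines
        ls /var/cache/apt/archives/*.deb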

    Read the article

  • Joomla sometimes messes up URLs, probably cache involved

    - by Bakaburg
    I've been having this problem for a while and I really cannot get the hang of it... Every once in a while my Joomla site messes up link URLs; for example, something like this: http://www.sism.org/index.php?option=com_comprofiler&task=userslist&listid=4&Itemid=123 becomes this: http://www.sism.org/index.php/component/k2/administrator/components/com_dump/assets/css/images/stories/inrilievo/sism/htm/index.php?option=com_comprofiler&task=userslist&listid=4&Itemid=123 The new page has the right content, but there are no CSS or other linked resources. Usually I solve the problem by deleting all the cache and turning it off and on again. Of course this is pretty annoying, especially for my association. Does anyone have any clue about this? Judging from the URLs, the components involved seem to be K2 and Jdump. Thanks

    Read the article

  • Tell the kernel to strongly cache a particular directory

    - by silviot
    This question is a rephrasing of Optimizing EXT4 performance. I have a directory that contains build files, most of them very small, but totaling 5.6G. I usually access the same subset of files (some thousands, totaling some tens of megabytes) over and over again. The subset changes daily (different projects, different versions of libraries). What takes longest when I use it seems to be disk seeks. For example, if I do a du twice, the second time takes as much time as the first, and disk activity is similar. Ideally I'd like to tell the kernel to allocate X MB to the metadata and Y to data in the folder, like the options for the NFS cache. Is it possible in some way, other than mounting NFS from localhost and caching it to a ramdisk?
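
    There is no per-directory knob in the stock kernel, but two things people reach for in this situation are lowering vfs_cache_pressure (so dentry/inode metadata stays cached longer) and vmtouch (which can load and lock a directory's contents into the page cache). A sketch, assuming vmtouch is installed separately and the directory is ~/build:

        # Keep directory metadata and inodes cached more aggressively (system-wide, not per-directory)
        sudo sysctl -w vm.vfs_cache_pressure=10

        # Pin the build directory's file contents into the page cache
        vmtouch -t ~/build        # touch (load) everything once
        vmtouch -dl ~/build       # daemonize and lock the pages in memory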

    Read the article

  • How can I accept the agreement for ttf-mscorefonts-installer?

    - by Magic
    After a recent update, ttf-mscorefonts-installer prompted me to accept its license agreement. For some reason my terminal will not allow me to accept, or for some reason I am pressing the wrong hotkey... I've tried every letter on the keyboard and Enter, among others... I'm sure there is a very simple and obvious solution to this. I've also just tried to remove the package completely; however, the terminal states that because the package is not correctly installed, I should reinstall it before removing it. Very frustrating! Essentially, because I cannot successfully install this package, I can't really ever upgrade my system, because I always end up having to terminate the terminal stuck at the license agreement (and thus the upgrade fails).
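
    For what it's worth, the EULA screen is a debconf prompt (Tab or the arrow keys move focus to the Ok/Yes button, Enter confirms). It can also be pre-answered non-interactively; the template name below is the one commonly cited, and debconf-show ttf-mscorefonts-installer will list the exact names if it differs. A sketch:

        # Pre-accept the Microsoft fonts EULA so the installer never prompts
        echo "ttf-mscorefonts-installer msttcorefonts/accepted-mscorefonts-eula select true" \
            | sudo debconf-set-selections

        # Then let dpkg/apt finish the half-configured package
        sudo dpkg --configure -a
        sudo apt-get install -f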

    Read the article

  • Can't remove the libpcap0.8 package

    - by Yogesh
    I am getting error when running apt-get remove root@System:~/Downloads# sudo apt-get remove The following packages have unmet dependencies: libpcap0.8 : Breaks: libpcap0.8:i386 (!= 1.4.0-2) but 1.5.3-2 is installed libpcap0.8:i386 : Breaks: libpcap0.8 (!= 1.5.3-2) but 1.4.0-2 is installed libpcap0.8-dev : Depends: libpcap0.8 (= 1.5.3-2) but 1.4.0-2 is installed E: Unmet dependencies. Try using -f. and when I ran apt-get remove -f this is what happens: root@System:~/Downloads# sudo apt-get remove -f The following extra packages will be installed: libpcap0.8 The following packages will be upgraded: libpcap0.8 1 upgraded, 0 newly installed, 0 to remove and 365 not upgraded. 2 not fully installed or removed. Need to get 0 B/110 kB of archives. After this operation, 13.3 kB of additional disk space will be used. Do you want to continue? [Y/n] y (Reading database ... 163539 files and directories currently installed.) Preparing to unpack .../libpcap0.8_1.5.3-2_amd64.deb ... Unpacking libpcap0.8:amd64 (1.5.3-2) over (1.4.0-2) ... dpkg: error processing archive /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb (--unpack): trying to overwrite shared '/usr/share/man/man7/pcap-filter.7.gz', which is different from other instances of package libpcap0.8:amd64 dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Processing triggers for man-db (2.6.7.1-1) ... Errors were encountered while processing: /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) root@System:~/Downloads# clear root@System:~/Downloads# sudo apt-get remove -f Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: libpcap0.8 The following packages will be upgraded: libpcap0.8 1 upgraded, 0 newly installed, 0 to remove and 365 not upgraded. 2 not fully installed or removed. Need to get 0 B/110 kB of archives. After this operation, 13.3 kB of additional disk space will be used. Do you want to continue? [Y/n] y (Reading database ... 163539 files and directories currently installed.) Preparing to unpack .../libpcap0.8_1.5.3-2_amd64.deb ... Unpacking libpcap0.8:amd64 (1.5.3-2) over (1.4.0-2) ... dpkg: error processing archive /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb (--unpack): trying to overwrite shared '/usr/share/man/man7/pcap-filter.7.gz', which is different from other instances of package libpcap0.8:amd64 dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Processing triggers for man-db (2.6.7.1-1) ... Errors were encountered while processing: /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) root@System:~/Downloads# root@System:~/Downloads# sudo apt-get check Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: libpcap0.8 : Breaks: libpcap0.8:i386 (!= 1.4.0-2) but 1.5.3-2 is installed libpcap0.8:i386 : Breaks: libpcap0.8 (!= 1.5.3-2) but 1.4.0-2 is installed libpcap0.8-dev : Depends: libpcap0.8 (= 1.5.3-2) but 1.4.0-2 is installed E: Unmet dependencies. Try using -f. 
root@System:~/Downloads# apt-cache policy libpcap0.8:amd64 libpcap0.8 libpcap0.8-dev libpcap0.8: Installed: 1.4.0-2 Candidate: 1.5.3-2 Version table: 1.5.3-2 0 500 http://in.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages *** 1.4.0-2 0 100 /var/lib/dpkg/status libpcap0.8: Installed: 1.4.0-2 Candidate: 1.5.3-2 Version table: 1.5.3-2 0 500 http://in.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages *** 1.4.0-2 0 100 /var/lib/dpkg/status libpcap0.8-dev: Installed: 1.5.3-2 Candidate: 1.5.3-2 Version table: *** 1.5.3-2 0 500 http://in.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages 100 /var/lib/dpkg/status root@System:~/Downloads# root@System:~/Downloads# sudo apt-get -f remove libpcap0.8 libpcap0.8-dev libpcap0.8-dev:i386 libpcap0.8:i386 Reading package lists... Done Building dependency tree Reading state information... Done Package 'libpcap0.8-dev:i386' is not installed, so not removed. Did you mean 'libpcap0.8-dev'? You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: libpcap-dev : Depends: libpcap0.8-dev but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). root@System:~/Downloads# sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: libpcap0.8 The following packages will be upgraded: libpcap0.8 1 upgraded, 0 newly installed, 0 to remove and 365 not upgraded. 2 not fully installed or removed. Need to get 0 B/110 kB of archives. After this operation, 13.3 kB of additional disk space will be used. Do you want to continue? [Y/n] y (Reading database ... 163539 files and directories currently installed.) Preparing to unpack .../libpcap0.8_1.5.3-2_amd64.deb ... Unpacking libpcap0.8:amd64 (1.5.3-2) over (1.4.0-2) ... dpkg: error processing archive /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb (--unpack): trying to overwrite shared '/usr/share/man/man7/pcap-filter.7.gz', which is different from other instances of package libpcap0.8:amd64 dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Processing triggers for man-db (2.6.7.1-1) ... Errors were encountered while processing: /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) root@System:~/Downloads#
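
    The underlying failure here is dpkg refusing to overwrite /usr/share/man/man7/pcap-filter.7.gz because the amd64 and i386 copies of libpcap0.8 are at different versions. A commonly suggested way out, offered as a sketch and to be used with care since --force-overwrite bypasses a safety check, is to unpack the already-downloaded amd64 .deb by hand and then let apt bring the i386 copy to the same version:

        # Force the newer package past the man-page conflict
        sudo dpkg -i --force-overwrite /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb

        # Fix anything left half-configured and level up the i386 copy
        sudo apt-get -f install
        sudo apt-get install libpcap0.8:i386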

    Read the article

  • Is there a debian lenny patch to allow apt-get to work with sftp?

    - by MiniQuark
    I would like to write things like this in /etc/apt/sources.list: deb sftp://[email protected]/path other stuff When I try this, apt-get complains that there is no sftp method for apt: # apt-get update E: The method driver /usr/lib/apt/methods/sftp could not be found. Has anyone written a patch to add the sftp method for apt? All I could find in Google was this spec for Ubuntu. Thanks for your help.
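
    I am not aware of a packaged sftp method for apt in lenny, but one workaround that avoids patching apt is to mount the remote repository over SSH with sshfs and point sources.list at it with the file: scheme. A sketch (host, path and mount point are placeholders):

        # Mount the remote repository locally over SSH
        sudo apt-get install sshfs
        mkdir -p /mnt/remote-repo
        sshfs user@server.com:/path /mnt/remote-repo

        # Then in /etc/apt/sources.list:
        #   deb file:/mnt/remote-repo other stuff
        sudo apt-get update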

    Read the article

  • Open table cache in MySQL

    - by vvanscherpenseel
    I have my open table cache set to 1800 and I have a total of 1112 tables. MySQL Tuning Primer reports that 100% of my table cache is used, yet my table cache hit rate is 5%. I understand that this happens because concurrent connections all open tables. I think I should raise the cache limit. I understand that the cache size is limited by the file descriptor limit of my operating system, but are there any other practical limitations I should be aware of? Searching Google or this very website mostly yields posts explaining the connection factor, or indecisive answers. My question: can I safely increase the open table cache limit? Is there a maximum?
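
    As a data point, the usual approach is to watch Opened_tables versus Open_tables over time and raise the table cache until Opened_tables stops climbing, while keeping open_files_limit and the OS file-descriptor limit comfortably above it. A sketch of the checks (variable names differ slightly between MySQL versions, e.g. table_cache vs. table_open_cache):

        # Table opens that bypassed the cache since startup, vs. tables open right now
        mysql -e "SHOW GLOBAL STATUS LIKE 'Opened_tables'; SHOW GLOBAL STATUS LIKE 'Open_tables';"

        # Current limits to compare against
        mysql -e "SHOW VARIABLES LIKE 'table%cache'; SHOW VARIABLES LIKE 'open_files_limit';"
        ulimit -n    # shell's file-descriptor limit (mysqld's own limit may differ)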

    Read the article

  • Problems installing Trac using apt-get on Ubuntu Jaunty

    - by Ben Waine
    Hi, I'm having some issues getting apt to install trac correctly on my Ubuntu Jaunty box. Using the command 'apt-get install trac' I get the following output: root@myserver:~# apt-get install trac Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. Since you only requested a single operation it is extremely likely that the package is simply not installable and a bug report against that package should be filed. The following information may help to resolve the situation: The following packages have unmet dependencies: trac: Depends: python-setuptools (> 0.5) but it is not installable Depends: python-pysqlite2 (>= 2.3.2) but it is not going to be installed Depends: python-subversion but it is not installable Depends: libjs-jquery but it is not installable Recommends: python-pygments (= 0.6) but it is not installable or enscript but it is not installable Recommends: python-tz but it is not installable E: Broken packages I have successfully used the command on my Karmic Koala desktop machine and am able to create new projects etc. I thought I might be able to solve the problem by installing all Python-related extensions. This produced a very similar output. I have main, universe and multiverse repositories enabled. It's a remote machine and I have no access to the GUI. Hope someone can help; googling failed to solve the issue or find a solution! Thanks, Ben
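
    Without seeing the box this is only a guess, but "not installable" for packages that live in universe usually means the universe component is not actually enabled for this release, or the package lists are stale. A first-pass check, as a sketch:

        # Confirm universe is enabled for jaunty and refresh the lists
        grep -R "jaunty.*universe" /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null
        sudo apt-get update

        # See which repository (if any) apt thinks provides the missing dependencies
        apt-cache policy python-setuptools libjs-jquery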

    Read the article

  • I get a Segmentation fault when doing apt-get util-linux

    - by Adam
    I've found that a lot of upgrade commands and Apache on my system are failing with Segmentation faults. I don't know if this is the main one, but a lot of packages depend on util-linux: root@myUbuntuHardyHeronServer:~# apt-get install util-linux Reading package lists... Done Building dependency tree Reading state information... Done The following packages will be upgraded: util-linux 1 upgraded, 0 newly installed, 0 to remove and 72 not upgraded. 20 not fully installed or removed. Need to get 0B/441kB of archives. After this operation, 0B of additional disk space will be used. (Reading database ... 20547 files and directories currently installed.) Preparing to replace util-linux 2.13.1-5ubuntu2 (using .../util-linux_2.13.1-5ub untu3.1_i386.deb) ... Unpacking replacement util-linux ... Segmentation fault dpkg: warning - old post-removal script returned error exit status 139 dpkg - trying script from the new package instead ... Segmentation fault dpkg: error processing /var/cache/apt/archives/util-linux_2.13.1-5ubuntu3.1_i386 .deb (--unpack): subprocess new post-removal script returned error exit status 139 Segmentation fault dpkg: error while cleaning up: subprocess post-removal script returned error exit status 139 Errors were encountered while processing: /var/cache/apt/archives/util-linux_2.13.1-5ubuntu3.1_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • SSL connection hangs at client hello (curl, openssl client, apt-get, wget, everything)

    - by Niklas B
    Hi, I've run into a problem on my Debian VPS (a xen domU) regarding SSL. Namely, almost all SSL connections hang at client hello. For example: # curl -vI https://graph.facebook.com About to connect() to graph.facebook.com port 443 (#0) Trying 66.220.146.48... connected Connected to graph.facebook.com (66.220.146.48) port 443 (#0) successfully set certificate verify locations: CAfile: none CApath: /etc/ssl/certs SSLv3, TLS handshake, Client hello (1): It's the same when using the openssl client. However, some of the SSL traffic works (for example https://www.nordea.se). Server #uname -a Linux server.com 2.6.26-1-xen-amd64 #1 SMP Fri Mar 13 21:39:38 UTC 2009 x86_64 GNU/Linux It does however work on my Dom 0 (the main xen host). Apt-get I can't even run apt-get update with the debian security sources (hangs on reading headers) Open SSL At the beginning I thought I had an old openssl client (0.9.8o-4) since I appeared to have a newer one on the Dom 0 (0.9.8g-15+lenny8), but doing a manual update on the openssl deb didn't help. Open SSL Client This is the full output of when the openssl client hangs: http://pastebin.com/PAjwMap9 Closing thoughts I've Googled the crap out of this, and I'm not getting any further. I've seen problems with curl, apt-get etc. but they are all specific to the particular application - not general for the system. Any thoughts?
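
    Not a definitive diagnosis, but a hang at exactly the ClientHello on a Xen domU is very often an MTU or segmentation-offload problem: the handshake packet is the first one large enough to get dropped. Two quick checks that have helped in similar setups (the interface name eth0 is an assumption):

        # Does a near-MTU-sized, unfragmentable packet make it out?
        ping -M do -s 1400 graph.facebook.com

        # Try disabling TCP segmentation / generic offload on the domU's interface
        sudo ethtool -K eth0 tso off gso off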

    Read the article

  • Elastic versus Distributed in caching.

    - by Mike Reys
    Until now I hadn't heard about Elastic Caching. Today I read Mike Gualtieri's blog entry. I immediately thought about Oracle Coherence and got a little scared while reading it. Elastic Caching is the next step after Distributed Caching. As we've always positioned Coherence as a Distributed Cache, I thought for a brief instant that Oracle had missed a new trend/technology. But then I started reading the characteristics of an Elastic Cache. Forrester definition: software infrastructure that provides application developers with data caching services that are distributed across two or more server nodes, that consistently perform as volumes grow, that can be scaled without downtime, and that provide a range of fault-tolerance levels. Hey, wait a minute, doesn't Coherence fulfill all these requirements? Oh yes, I think it does! The next definition in the article is about Elastic Application Platforms. This is mainly more of the same with the addition of code execution. Now, there is analytics functionality in Oracle Coherence. The analytics capability provides data-centric functions like distributed aggregation, searching and sorting. Coherence also provides continuous querying and event handling. I think that when it comes to providing an Elastic Application Platform (as in the Forrester definition), Oracle is close, nearly there. And what's more, as the Elastic Platform is the next big thing towards the big C word, Oracle Coherence makes you cloud-ready ;-) There you go! Find more info on Oracle Coherence here.

    Read the article

  • How do I empty Drupal Cache (without Devel)

    - by alexanderpas
    Okay... seems I can't find it with Google... so here you go, SO ;) How do I empty the Drupal caches: without the Devel module, without running some PHP statement in a new node etc., and without going into the database itself? Effectively, how do you instruct a luser to clear their caches?
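
    If installing a command-line tool is acceptable, drush can flush all Drupal caches without Devel, ad-hoc PHP, or direct database access; otherwise the admin UI's Performance page has a "Clear cached data" button. A sketch with drush, assuming it is installed and run from the site root:

        # From the Drupal site root: flush every cache bin
        drush cache-clear all      # 'drush cc all' for short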

    Read the article

  • permission denied: /etc/apt/sources.list

    - by Eli
    I'm trying to install java jre, i usually do it like this sudo echo 'deb http://www.duinsoft.nl/pkg debs all' >> /etc/apt/sources.list sudo apt-key adv --keyserver keys.gnupg.net --recv-keys 5CB26B26 sudo apt-get update sudo apt-get install update-sun-jre exit but when i do sudo echo 'deb http://www.duinsoft.nl/pkg debs all' >> /etc/apt/sources.list i see permission denied: /etc/apt/sources.list When i do ls -l /etc/apt/sources.list i see -rw-r--r-- 1 root root 3360 Aug 26 01:45 /etc/apt/sources.list When i do sudo mv /etc/apt/sources.list /etc/apt/sources.list.old sudo cat /etc/apt/sources.list.old | sudo tee /etc/apt/sources.list i see #deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ dists/precise/main/binary-i386/ #deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ dists/precise/restricted/binary-i386/ #deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ precise main restricted # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to # newer versions of the distribution. deb http://lb.archive.ubuntu.com/ubuntu/ precise main restricted deb-src http://lb.archive.ubuntu.com/ubuntu/ precise main restricted ## Major bug fix updates produced after the final release of the ## distribution. deb http://lb.archive.ubuntu.com/ubuntu/ precise-updates main restricted deb-src http://lb.archive.ubuntu.com/ubuntu/ precise-updates main restricted ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. deb http://lb.archive.ubuntu.com/ubuntu/ precise universe deb-src http://lb.archive.ubuntu.com/ubuntu/ precise universe deb http://lb.archive.ubuntu.com/ubuntu/ precise-updates universe deb-src http://lb.archive.ubuntu.com/ubuntu/ precise-updates universe ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. deb http://lb.archive.ubuntu.com/ubuntu/ precise multiverse deb-src http://lb.archive.ubuntu.com/ubuntu/ precise multiverse deb http://lb.archive.ubuntu.com/ubuntu/ precise-updates multiverse deb-src http://lb.archive.ubuntu.com/ubuntu/ precise-updates multiverse ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. deb http://lb.archive.ubuntu.com/ubuntu/ precise-backports main restricted universe multiverse deb-src http://lb.archive.ubuntu.com/ubuntu/ precise-backports main restricted universe multiverse deb http://security.ubuntu.com/ubuntu precise-security main restricted deb-src http://security.ubuntu.com/ubuntu precise-security main restricted deb http://security.ubuntu.com/ubuntu precise-security universe deb-src http://security.ubuntu.com/ubuntu precise-security universe deb http://security.ubuntu.com/ubuntu precise-security multiverse deb-src http://security.ubuntu.com/ubuntu precise-security multiverse ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. 
## This software is not part of Ubuntu, but is offered by Canonical and the ## respective vendors as a service to Ubuntu users. # deb http://archive.canonical.com/ubuntu precise partner # deb-src http://archive.canonical.com/ubuntu precise partner ## This software is not part of Ubuntu, but is offered by third-party ## developers who want to ship their latest software. deb http://extras.ubuntu.com/ubuntu precise main deb-src http://extras.ubuntu.com/ubuntu precise main and the issue is not solved, i still see that permission error, I'm on a 64 bit laptop
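
    The permission error is actually expected with that command: the shell performs the >> redirection as the ordinary user before sudo ever runs echo, so the append is what gets denied. Writing through a root-owned process such as tee avoids it; a sketch using the same repository line:

        # The file is opened by tee running as root, not by the unprivileged shell
        echo 'deb http://www.duinsoft.nl/pkg debs all' | sudo tee -a /etc/apt/sources.list

        # Or run the whole pipeline inside a root shell
        sudo sh -c "echo 'deb http://www.duinsoft.nl/pkg debs all' >> /etc/apt/sources.list"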

    Read the article

  • How do I fix dependency problems with the kernel in apt?

    - by Jon
    When trying to install new packages, either manually or with muon, I get these errors: jon@jon-desktop:~/Apps/mendeleydesktop-1.5-dev4-linux-x86_64/bin$ sudo apt-get install kupfer [sudo] password for jon: Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: kupfer : Depends: python-keybinder but it is not going to be installed Recommends: python-wnck but it is not going to be installed linux-headers-generic : Depends: linux-headers-3.2.0-20-generic but it is not installable linux-image-generic : Depends: linux-image-3.2.0-20-generic but it is not installable E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). jon@jon-desktop:~/Apps/mendeleydesktop-1.5-dev4-linux-x86_64/bin$ sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: linux-generic linux-headers-generic linux-image-generic The following packages will be upgraded: linux-generic linux-headers-generic linux-image-generic 3 upgraded, 0 newly installed, 0 to remove and 2 not upgraded. 3 not fully installed or removed. Need to get 0 B/6,658 B of archives. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? dpkg: dependency problems prevent configuration of linux-image-generic: linux-image-generic depends on linux-image-3.2.0-20-generic; however: Package linux-image-3.2.0-20-generic is not installed. dpkg: error processing linux-image-generic (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. dpkg: dependency problems prevent configuration of linux-generic: linux-generic depends on linux-image-generic (= 3.2.0.20.22); however: Package linux-image-generic is not configured yet. dpkg: error processing linux-generic (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. dpkg: dependency problems prevent configuration of linux-headers-generic: linux-headers-generic depends on linux-headers-3.2.0-20-generic; however: Package linux-headers-3.2.0-20-generic is not installed. dpkg: error processing linux-headers-generic (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: linux-image-generic linux-generic linux-headers-generic E: Sub-process /usr/bin/dpkg returned an error code (1) As indicated above, I ran sudo apt-get -f install but it still tells me there are dependency issues.
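
    One observation, not a guaranteed fix: "is not installable" usually means apt simply cannot see linux-image-3.2.0-20-generic in any enabled repository (stale package lists, or a mirror that has not caught up), rather than a genuine conflict. A first thing worth trying, as a sketch:

        # Refresh the package lists, then retry the held-back kernel meta packages
        sudo apt-get update
        sudo apt-get install -f
        sudo apt-get install linux-image-generic linux-headers-generic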

    Read the article

  • Buffer cache spillover: only buffers

    - by Liu Maclean(???)
    ?????? (?database open)????recovery?redo??,??????????????????: > Buffer cache spillover: only 32768 of 117227 buffers available$ ????crash recovery?redo apply??????buffer cache,??buffer cache???recovery buffer? ?????recovery buffer,????????age out????????redo change??????????? ????????????recovery(???crash recovery??instance recovery),?????buffer cache ??????????????(recovery)???? ??????(lock claim phase),?????????buffer cache ??spillover??????(???swap)??????????????????,oracle???????redo change???????????buffer cache????????recovery buffer????????,???????????????????? ?????????????(spillover recovery)??????????????(?????????,?????????redo apply),???????????? ??????,??crash recovery???? buffer cache??????????; ?????????, ?????????????buffer cache????????,?????????????????????????,?????????????????db_cache_size???????,??alter database open??????(Buffer cache spillover: only 32768 of 117227 buffers available$),?????????????????? ????????buffer cache spillover,?????????10???????????????,????????alert.log 30??????????,?????spillover???hang,?????????OTN Ask Maclean

    Read the article

  • eAccelerator Issue - Cache Directory Empty.

    - by Tom
    Hi all, Hoping someone can give me a hand with this. I've recently installed eAccelerator 0.9.6.1 on a CentOS LAMP server. Had it working fine, using /tmp/accelerator as the cache directory. php.ini set up: zend_extension="/usr/local/lib/php/extensions/no-debug-non-zts-20060613/eaccelerator.so" eaccelerator.shm_size="200" eaccelerator.cache_dir="/var/cache/eaccelerator" eaccelerator.enable="1" eaccelerator.optimizer="1" eaccelerator.check_mtime="1" eaccelerator.debug="0" eaccelerator.filter="" eaccelerator.shm_max="0" eaccelerator.shm_ttl="3600" eaccelerator.shm_prune_period="180" eaccelerator.shm_only="1" eaccelerator.compress="1" eaccelerator.compress_level="9" php -v output: PHP 5.2.12 (cli) (built: Feb 3 2010 00:34:28) Copyright (c) 1997-2009 The PHP Group Zend Engine v2.2.0, Copyright (c) 1998-2009 Zend Technologies with eAccelerator v0.9.6.1, Copyright (c) 2004-2010 eAccelerator, by eAccelerator with the ionCube PHP Loader v3.3.20, Copyright (c) 2002-2010, by ionCube Ltd. I had to remove the cache directory as I was testing something. Remade it, re-set permissions and found that eAccelerator was no longer creating cache files within the folder. I thought it might be down to ownership rights on the folder so chown'd it apache.apache and this made no difference. I recreated the directory in /var/cache instead and edited php.ini to point to the new cache dir location, chmod'd, chown'd etc., and still eAccelerator is not creating any of the cache files in the directory (it stays empty). Could someone suggest what I might be doing incorrectly here? I've read through numerous pages to try and troubleshoot the issue, to no avail. Any help appreciated.
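
    One detail in the posted php.ini that would explain an empty cache directory: eaccelerator.shm_only="1" tells eAccelerator to cache compiled scripts in shared memory only and never write them to disk, so no files appear in cache_dir regardless of ownership or permissions. If disk caching is wanted, something along these lines (the /etc/php.ini path and apache user are assumptions for a typical CentOS setup):

        # Switch eAccelerator back to disk + shared memory caching, then restart Apache
        sudo sed -i 's/^eaccelerator.shm_only="1"/eaccelerator.shm_only="0"/' /etc/php.ini
        sudo /etc/init.d/httpd restart

        # The cache directory must be writable by the web server user
        sudo chown apache:apache /var/cache/eaccelerator && sudo chmod 750 /var/cache/eaccelerator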

    Read the article

  • Cache Control Headers with IIS 7.5

    - by Brad
    I'm trying to wrap my head around client-side (web browser) caching and how it works in relation to IIS 7.5 cache control headers. In particular: If we want to force clients to reload cached resources, how must IIS be configured? Do we need to set "expire web content immediately" if the resources on the server have a more recent Modified Date (or ETag value)? Right now we're not setting any cache headers. So if I set a cache header of no-cache (which I think is the equivalent of "expire web content immediately"), will that force the web browser to obtain a new version of a particular file? Or will the browser only request a new version after it deems its current copy to be stale, and from that point forward not cache it? Would a best practice be to set a cache control lifetime of 1 week, then 8 days before I know I am going to make a change lower it to, for instance, 30 minutes? But if I do that and then need to immediately expire an item from users' caches because there was an issue with it, how do I do that?

    Read the article

  • Why do apt-get and wget fail on my server when ping is working?

    - by klox
    Yesterday my server was still OK, but today after trying sudo apt-get update I got an error during the update process. I tried: sudo rm /var/lib/apt/lists/* -vf and then tried to update again, but that did not solve my problem; it may still be the same error. I checked my internet connection by pinging google.com and got this result: PING google.com (74.125.235.40) 56(84) bytes of data. From 136.198.117.254: icmp_seq=1 Redirect Network(New nexthop: fw1.jvc-jein.co.id (136.198.117.6)) 64 bytes from sin01s05-in-f8.1e100.net (74.125.235.40): icmp_req=1 ttl=53 time=20.6 ms 64 bytes from sin01s05-in-f8.1e100.net (74.125.235.40): icmp_req=2 ttl=53 time=18.2 ms 64 bytes from sin01s05-in-f8.1e100.net (74.125.235.40): icmp_req=3 ttl=53 time=33.0 ms 64 bytes from sin01s05-in-f8.1e100.net (74.125.235.40): icmp_req=4 ttl=53 time=30.0 ms 64 bytes from sin01s05-in-f8.1e100.net (74.125.235.40): icmp_req=5 ttl=53 time=28.1 ms Some sites said it may be caused by the getdeb server being down. I tried to install: jeinqa@SVRQAR:~$ sudo apt-get install pastebinit Reading package lists... Error! E: Encountered a section with no Package: header E: Problem with MergeList /var/lib/apt/lists/security.ubuntu.com_ubuntu_dists_precise-security_restricted_binary-amd64_Packages E: The package lists or status file could not be parsed or opened. I tried sudo ufw status verbose and the result is: Status: inactive
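
    The "Problem with MergeList" error usually means one of the downloaded list files under /var/lib/apt/lists is truncated or corrupt (which fits an update that died part-way through), so the standard cure is to remove the lists completely, including the partial downloads, and refetch them. A sketch:

        # Wipe the corrupt package lists and rebuild them from scratch
        sudo rm -rf /var/lib/apt/lists/*
        sudo mkdir -p /var/lib/apt/lists/partial
        sudo apt-get clean
        sudo apt-get update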

    Read the article

  • Why is apt-get --auto-remove not removing all dependencies?

    - by Mike
    I just installed a package (dansguardian in this case) and apt told me that I had unmet dependencies. # sudo apt-get install dansguardian Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: clamav clamav-base clamav-freshclam libclamav6 libtommath0 Suggested packages: clamav-docs squid libclamunrar6 The following NEW packages will be installed: clamav clamav-base clamav-freshclam dansguardian libclamav6 libtommath0 0 upgraded, 6 newly installed, 0 to remove and 0 not upgraded. Need to get 0 B/4,956 kB of archives. After this operation, 14.4 MB of additional disk space will be used. Do you want to continue [Y/n]? So I installed it and the dependencies. So far so good. Later on, I decide that this package just isn't the package for me, so I want to remove it and all of the other junk it installed with it since I'm not going to be needing any of it: # sudo apt-get remove --auto-remove --purge dansguardian Reading package lists... Done Building dependency tree Reading state information... Done The following packages will be REMOVED: dansguardian 0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded. After this operation, 1,816 kB disk space will be freed. Do you want to continue [Y/n]? However it is only removing that one specific package. What about clamav clamav-base clamav-freshclam libclamav6 libtommath0? Not only did it not remove them, but clamav was actually running a daemon that loads every time the computer boots. I thought that --auto-remove would remove not only the packages, but also the dependencies that were installed with it. So basically, without going through the apt history log file (if I even remember to do so, or if I even remember that a specific package I installed 3 months ago had dependencies along with it), is there a way to remove a package and all of the other dependencies that were installed like in this case?
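
    Worth noting: apt only auto-removes packages that are both marked as automatically installed and no longer required by anything else, and --auto-remove on a remove command considers only the dependencies of the package being removed in that run. Purging the extras explicitly, or checking how they are marked, usually clears them out; a sketch:

        # See whether the leftovers are marked as automatically installed
        apt-mark showauto | grep -E 'clamav|libclamav|libtommath'

        # Remove dansguardian together with its now-unneeded dependencies
        sudo apt-get purge dansguardian
        sudo apt-get autoremove --purge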

    Read the article

  • What are the hard and fast rules for Cache Control?

    - by Metalshark
    Confession: sites I maintain have different rules for Cache Control mostly based on the default configuration of the server followed up with recommendations from the Page Speed & Y-Slow Firefox plug-ins and the Network Resources view in Google's Speed Tracer. Cache-Control is set to private/public depending on what they say to do, ETag's/Last-Modified headers are only tinkered with if Y-Slow suggests there is something wrong and Vary-Accept-Encoding seems necessary when manually gziping files for Amazon CloudFront. When reading through the material on the different options and what they do there seems to be conflicting information, rules for broken proxies and cargo cult configurations. Any of the official information provided by the analysis tools mentioned above is quite inaccessible as it deals with each topic individually instead of as a unified strategy (so there is no cross-referencing of techniques). For example, it seems to make no sense that the speed analysis tools rate a site with ETag's the same as a site without them if they are meant to help with caching. What are the hard and fast rules for a platform agnostic Cache Control strategy? EDIT: A link through Jeff Atwood's article explains Caching in superb depth. For the record though here are the hard and fast rules: If the file is Compressed using GZIP, etc - use "cache-control: private" as a proxy may return the compressed version to a client that does not support it (the browser cache will hold files marked this way though). Also remember to include a "Vary: Accept-Encoding" to say that it is compressible. Use Last-Modified in conjunction with ETag - belt and braces usage provides both validators, whilst ETag is based on file contents instead of modification time alone, using both covers all bases. NOTE: AOL's PageTest has a carte blanche approach against ETags for some reason. If you are using Apache on more than one server to host the same content then remove the implicitly declared inode from ETags by excluding it from the FileETag directive (i.e. "FileETag MTime Size") unless you are genuinely using the same live filesystem. Use "cache-control: public" wherever you can - this means that proxy servers (and the browser cache) will return your content even if the rest of the page needs HTTP authentication, etc.
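
    A quick way to check how any given response lines up with those rules is simply to inspect the headers the server actually sends; for example (the URL is a placeholder):

        # Inspect the caching-related response headers for a static asset
        curl -sI https://example.com/static/app.js \
            | grep -iE 'cache-control|etag|last-modified|expires|vary'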

    Read the article

  • Difference between RPM (yum) and apt-get

    - by Josh K
    Functional difference between the two? Packages different style or what? I'm dipping my toe in the server pool and playing with an Ubuntu install right now, which is apt-get. I'm also considering FreeBSD and Debian if I do decide to start running my own VPS. So far things have been very easy, sudo apt-get install apache2 and the like with no issues at all. I'd like to know if there is a different learning curve to yum or variants.
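
    For orientation, the day-to-day commands map fairly directly between the two families; a few common equivalents as a rough guide (Debian/Ubuntu commands on the left, RHEL/CentOS/Fedora equivalents in the comments):

        # Debian/Ubuntu (dpkg + apt)         # RHEL/CentOS/Fedora (rpm + yum)
        sudo apt-get update                   # yum refreshes metadata on demand
        sudo apt-get install apache2          # sudo yum install httpd
        sudo apt-get remove apache2           # sudo yum remove httpd
        dpkg -l                               # rpm -qa
        dpkg -L apache2                       # rpm -ql httpd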

    Read the article

< Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >