Search Results

Search found 25196 results on 1008 pages for 'hard drive cache'.


  • Ubuntu on an XPS 14 Ultrabook with mSATA cache and 500GB HD - how to partition for dual boot?

    - by JDS
    I am getting an XPS 14 ( http://www.dell.com/us/p/xps-14-l421x/pd ) and I want to dual-boot Windows and Ubuntu. This machine has a 500GB standard HD and a 32GB mSATA SSD that can be used as cache. Does anyone know how this thing is partitioned? Is the OS installed on the mSATA drive and the data on the big HD? Is there a BIOS controller, or maybe even a Windows driver, that makes the mSATA drive and the 500GB HD appear contiguous? I get the impression that something makes the mSATA act invisibly as a cache, but I can't find any technical documentation on how that works. My primary concern is dual-booting Ubuntu: do I need to partition the mSATA separately, or the big HD, or just the "magic" contiguous disk space that appears available to the OS?
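    A practical way to answer the "what does the OS actually see" part is to boot an Ubuntu live session and list the block devices. The commands below are a generic check, not something specific to the XPS 14; if the SATA controller is in RAID mode with Intel's Smart Response caching enabled, the layout can look different, so treat the output as a starting point rather than a definitive answer.

        lsblk -o NAME,SIZE,TYPE,MOUNTPOINT   # the 500GB HD and the 32GB mSATA normally show up as two separate disks
        sudo fdisk -l                        # print the partition table of each device

    If the two devices do appear separately, Ubuntu can be given its own partitions on the 500GB disk and the mSATA left alone (or the caching disabled first in the Intel Rapid Storage utility under Windows).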

    Read the article

  • Looking for an actual experience of a RAID 5 two-drive failure?

    - by Brian
    I'm wondering if anyone has personal experience of a RAID 5 two-drive failure with large drives? As I understand it, the theory is that with large 1-2TB drives, if one drive fails in the RAID set, the rebuild has to read everything from the remaining drives, hitting them very hard, so the chance of another failure goes up, especially if the drives came from the same manufacturing batch. And if you lose another drive, you lose all the data. This is usually explained right after the statement "RAID is not backup", which I agree with. The theory makes sense, and I understand it, but does it really happen?
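    The back-of-the-envelope numbers behind that theory can be made concrete. Assuming the commonly quoted consumer-drive spec of one unrecoverable read error (URE) per 10^14 bits read (a datasheet figure, not a measurement), rebuilding a four-drive array of 2TB disks after one failure means reading the three survivors end to end:

        bits read ≈ 3 × 2TB ≈ 4.8 × 10^13
        P(at least one URE) ≈ 1 - (1 - 10^-14)^(4.8 × 10^13) ≈ 1 - e^-0.48 ≈ 38%

    A URE is not the same thing as a second mechanical failure, but many controllers abort the rebuild when they hit one, which is where the alarming risk figures for large RAID 5 sets come from.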

    Read the article

  • Using 12.04 installation as a persistent pen drive

    - by Cawas
    Disclaimer: I aim to build a self-contained pen drive with my application inside, so updates don't matter. Maybe I'm looking at the wrong Linux distribution to do this... please let me know if you think so. I've tried Knoppix and even Lubuntu, but they don't come with enough "drivers" for Unity3D to work. Creating a custom persistent live pen drive is a real pain and I've been trying for a day without any success. Sure, being able to do it would probably be ideal and would occupy the minimum space. Using the installation image on a pen drive, however, is good enough and is really easy to create. We can even do it from any OS, using UNetbootin, LiLi USB Creator or some other method. Straightforward. Some recommend installing Ubuntu directly onto a pen drive, but that requires a lot of space and, I believe, it won't behave as well as something meant to be installed on a USB disk, because of memory management. So there are only a few drawbacks to using the installation image that I can think of. My question is how to remove them:
        - Having to press "Try Ubuntu". That's the big one; I couldn't find how to skip it.
        - Being unable to load everything into memory and keep running without the pen drive (like this).
        - Being unable to remove the "Install Ubuntu 12.04 LTS" app.
        - Setting the ISO to use the maximum amount of space for the OS leaves the pen drive with zero space left, and any file saved to it from Ubuntu is inaccessible from the outside (when plugging the pen drive in without booting from it).
    Am I missing something? Can those points be fixed?
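    For what it's worth, two of those points map onto boot parameters understood by casper, the boot machinery on Ubuntu live images: persistent (keeps changes in a casper-rw file or partition on the stick) and toram (copies the compressed filesystem into RAM so the stick can be pulled out). The menu excerpt below is a hedged sketch: the file name and path (syslinux.cfg or txt.cfg) depend on the tool that wrote the stick, and the label and append values are illustrative rather than copied from a real 12.04 image.

        # txt.cfg / syslinux.cfg on the pen drive (exact path varies by writing tool)
        default live
        timeout 10        # tenths of a second, so the live entry boots almost immediately
        label live
          menu label ^Run Ubuntu from this pen drive
          kernel /casper/vmlinuz
          append file=/cdrom/preseed/ubuntu.seed boot=casper persistent toram initrd=/casper/initrd.lz quiet splash --

    The persistent option needs a casper-rw file or partition (with a Linux filesystem) on the stick; UNetbootin and LiLi can both create one when writing the image, and that same area doubles as writable storage reachable from the live session.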

    Read the article

  • LiveCD/USB boot issues with Ubuntu 12.04 on blank drive

    - by Richek
    Not sure how common this issue is, or how badly I may be missing something simple, but I am a first-time user having some serious problems. Some background: my old HDD running Windows 7 developed too many bad sectors and is bricked. I'm attempting to install Ubuntu 12.04 on a fresh 1TB drive by booting from a live USB flash drive. I've not been able to get past the initial menu screen, however, as the process stalls out shortly after selecting an option (both "try from drive" and "install to drive"). I've tried multiple USB drives as well as CDs, modified the boot order, flashed the BIOS, and even tried booting with only the flash drive and the keyboard connected, all with the same results. Typically what I observe is that the OS begins what I think is compiling, listing drivers and components before freezing on one. When the keyboard is plugged in, it's the keyboard driver; before I flashed the BIOS, it was a BIOS-related item; now it's an unknown entry. The computer seems to read the drive (indicated by the USB light flashing or the CD drive revving) for roughly 10 minutes with no progress, after which the drives go quiet. Some spec info: Motherboard: ASUS P5Q Pro, BIOS version 2102 (latest version), Intel chipset; CPU: Intel Core 2 Duo E8400 Wolfdale 3.0GHz. Help would be appreciated!
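    A boot that freezes at a different item each attempt is often a corrupt download or a bad write to the stick, so one cheap check before suspecting the hardware is to verify the image and rewrite it. This is a generic sketch: the ISO filename is a placeholder, and /dev/sdX must be replaced with the stick's real device name (check with lsblk first, because dd overwrites whatever it is pointed at).

        md5sum ubuntu-12.04-desktop-amd64.iso     # compare against the MD5SUMS file published alongside the image
        lsblk                                     # identify the USB stick, e.g. /dev/sdX
        sudo dd if=ubuntu-12.04-desktop-amd64.iso of=/dev/sdX bs=4M conv=fsync

    This assumes the 12.04 desktop image is a hybrid ISO that can be written raw to USB; if not, the usual Startup Disk Creator / UNetbootin route applies. If a verified, freshly written stick still hangs, the boot options behind F6 on the live menu (nomodeset, acpi=off) are the usual next thing to try.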

    Read the article

  • Documents stored on separate internal drive, Ubuntu doesn't notice on startup

    - by PlanoAlto
    My machine has Windows 7 Ultimate x64 and Ubuntu 12.04 LTS running side by side on a single hard drive with the GRUB bootloader, each with 500 GB of storage. I keep my personal documents on a separate 1TB hard drive so they remain isolated from any changes I make to the OS drive, but when Ubuntu starts it does not seem to notice my documents drive. While I've installed and worked with Ubuntu 12.04 Server x32 before, using it as a desktop OS is new to me. I use my documents drive for all of my personal data, including wallpapers and music, so it is imperative that Ubuntu recognize it on startup. Concerning the two specific examples: Ubuntu loads with the default blue-colored desktop instead of my desired picture of the spectacular Carina galaxy. When I right-click the desktop and select "Change Desktop Background", it wakes up from its amnesia and loads the proper background. As for my music, Rhythmbox defaults to an empty library on reboot, forcing me to reload the settings manually each time. This gets quite tedious because I certainly can't work to my full potential without my music. The second thing I would like to address is making the document directories in ~ point to their appropriate counterparts on the 1TB documents drive. I realize that this question is not new, but when I created the symbolic links, they established themselves inside the directories instead of turning the directories themselves into symbolic links. I would also prefer not to move the files from their current location on the 1TB drive. I believe this would help the Rhythmbox library problem as well, since ~/Music is a default directory for the music player. Excerpt from fstab:

        proc                                        /proc  proc  nodev,noexec,nosuid  0  0
        # / was on /dev/sdb6 during installation
        UUID=057ac83e-76ad-460d-86e5-b6d46e9b1d80   /      ext4  errors=remount-ro    0  1
        # swap was on /dev/sdb7 during installation
        #UUID=1183df90-23fc-44e4-aa17-4e7c9865d5cb  none   swap  sw                   0  0
        /dev/mapper/cryptswap1                      none   swap  sw                   0  0

    That's enough content for one question. I really like the Ubuntu experience so far, since it doesn't treat me like an idiot out of the box (can't say the same for Windows), so I can't wait to hear from the community! Thanks for your help in advance.
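    Notably, the fstab above has no entry for the 1TB drive at all, which would explain why nothing on it is available until something (like the file manager) mounts it on demand. A hedged sketch of a fix follows; the mount point is arbitrary, the UUID is a placeholder taken from blkid, and the filesystem is shown as ntfs-3g on the assumption that the drive is shared with Windows 7 (use ext4 and defaults if it is Linux-only).

        sudo blkid                      # note the UUID and filesystem type of the 1TB partition
        sudo mkdir -p /mnt/documents
        # hypothetical /etc/fstab line so the drive is mounted at every boot:
        #   UUID=<uuid-of-1TB-partition>  /mnt/documents  ntfs-3g  defaults  0  0
        sudo mount -a                   # test the new entry without rebooting
        # replace the stock Music folder with a symlink into the documents drive (rmdir only works if it is empty)
        rmdir ~/Music && ln -s /mnt/documents/Music ~/Music

    An alternative to symlinks is to edit the XDG paths in ~/.config/user-dirs.dirs so that XDG_MUSIC_DIR, XDG_PICTURES_DIR and friends point straight at /mnt/documents.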

    Read the article

  • Installing Linux from a USB pen drive

    - by zulu
    I'm new to Linux. I'm using Ubuntu 11.04 and now I want to install Ubuntu 12.04. I have an ISO image of Ubuntu 12.04 Desktop. I put this image onto a pen drive that had been formatted and set the boot option to boot from USB, but nothing happened. I searched the net and the Ubuntu website, but nobody gives the complete steps: some say you can install from within Ubuntu, others say you can do a fresh installation from a USB pen drive but you need to make the pen drive bootable first, and so on. My problem is that I don't know the exact steps to install Ubuntu from a USB pen drive. All I want to do is completely remove my Ubuntu 11.04 and install Ubuntu 12.04 from the pen drive. Can anybody tell me how to make a pen drive bootable, and how to install Ubuntu 12.04 from it? Please give me a step-by-step procedure. Thanks in advance.
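    Since the machine is already running Ubuntu 11.04, the least error-prone route is the Startup Disk Creator that ships with the desktop; simply copying the ISO file onto a formatted stick does not make it bootable. A short, hedged outline (the tool and package are named usb-creator-gtk on these releases; the stick's contents are erased):

        sudo apt-get install usb-creator-gtk   # normally preinstalled on the desktop edition
        usb-creator-gtk                        # choose the 12.04 ISO and the pen drive, then "Make Startup Disk"

    Then reboot, pick the USB stick from the BIOS boot menu, choose "Install Ubuntu", and at the partitioning step select the option that erases the existing 11.04 system (double-check the target disk before confirming).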

    Read the article

  • File copying utility like rsync with error handling like ddrescue, for data recovery from a hard drive with bad sectors or hardware failure

    - by purefusion
    I have a hard drive with either bad blocks or sectors that are failing to read due to potential mechanical issues, such as a bad disk head, bad motor, or some other issue that is causing the hard drive to read data excruciatingly slowly and with lots of read errors. I'm seeing an average of 50 KB/sec, with some reads dropping below 10 KB/sec, and frequently it gets stuck on a file or sector altogether, usually for quite a long time, from 2-10 minutes or more (when using rsync, before it times out). Speed seems to vary wildly, and it gets stuck on files a lot, and when it finally gets "unstuck" it only seems to last for a short burst before it gets stuck again. The drive is also very quiet with only an occasional sound of files copying (usually when it gets stuck/unstuck for a brief time, before getting stuck again). Thus, there are none of those evil sounds that are normally associated with HDD death. Someone suggested that the problems sounded like they might be caused by a misaligned disk head, which requires a lot of re-reads before it finally reads data with success. Sounds plausible, but I digress...

    Anyway, the problem with rsync is that it seems to have no decent error handling support. Obviously, it wasn't meant for use in recovering data from failing hard drives, but all the so-called "data recovery" utilities out there that are meant for such use usually focus on recovery of deleted files or messed up partitions, rather than copying files off dying hard drives. Deleted file recovery is not what I need, obviously, so perhaps you can understand my disappointment in not being able to find what I'm after yet.

    Naturally, this is where you'd probably say "You should use ddrescue!" Well, that's all fine and dandy, but I've already got most of the data backed up, so I just want to recover certain files. I'm not concerned with trying to recover a full partition block-by-block as ddrescue does. I am only interested in rescuing just specific files and directories. Ideally, what I'd like is some sort of cross between rsync and ddrescue: something that lets me specify source and destination as directories of normal files like rsync (rather than two full partitions as ddrescue requires), with a way to skip files with errors in an initial run, and then allows me to attempt recovery of those files with errors in a later run (with a slightly altered command, of course), perhaps even offering an option to specify the number of retry attempts ...just like how ddrescue works with blocks, only I want a utility that works with specific files/directories like rsync does.

    So am I daydreaming here, or does something out there exist that can do this? Or, maybe even a way to make rsync or ddrescue work in such a way? I'm really open to whatever solutions might work, so long as they let me choose which files I want to "rescue", and can skip files with errors in the initial run, and try/retry those errors again later. So far I've tried rsync with the following options, but it often gets stuck on a file for longer than the timeout, and ideally I'd just like it to move on to the next file and come back later to the files it gets stuck on. I don't think that's possible though. Anyway, here's what I've been using up till now:

        rsync -avP --stats --block-size=512 --timeout=600 /path/to/source/* /path/to/destination/
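    One pattern that approximates the rsync/ddrescue hybrid described above is to walk the tree with find and run GNU ddrescue on each file individually (despite the block-device framing, ddrescue accepts ordinary files), keeping a per-file map so later passes retry only the bad regions. This is a sketch, not a polished tool: it assumes GNU ddrescue is installed (the gddrescue package on Ubuntu/Debian), the paths are placeholders, and filenames containing newlines will break the loop.

        #!/bin/sh
        SRC=/mnt/dying-disk/photos
        DST=/mnt/good-disk/photos
        MAPS=/mnt/good-disk/ddrescue-maps

        cd "$SRC" || exit 1
        find . -type f | while read -r f; do
            mkdir -p "$DST/$(dirname "$f")" "$MAPS/$(dirname "$f")"
            # first pass: grab what reads cleanly and skip quickly over errors;
            # the map file records the bad areas for later retries
            ddrescue -n "$SRC/$f" "$DST/$f" "$MAPS/$f.map"
        done

    Re-running the same loop later without -n (or with something like -r3 to retry bad areas a few times) resumes from the map files instead of re-reading the parts that already copied successfully.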

    Read the article

  • Linux - How to control Winbind Authentication cache timeout

    - by cybervedaa
    I have configured my Linux machines (running CentOS 5.2) to authenticate against a Windows server running Active Directory. I have even enabled winbind offline logon. Everything works as expected; however, I'm also looking to impose a TTL on the winbind authentication cache. So far all I have found is the snippet below from the Samba documentation:

        winbind cache time (G)
        This parameter specifies the number of seconds the winbindd(8) daemon will cache user and
        group information before querying a Windows NT server again. **This does not apply to
        authentication requests**; these are always evaluated in real time unless the winbind
        offline logon option has been enabled.
        Default: winbind cache time = 300

    Clearly the winbind cache time parameter does not control the cache TTL for authentication requests. Is there any other way I can implement a cache timeout for winbind authentication requests? Thank you.
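    There is no documented TTL knob for the authentication cache itself; the workaround that usually gets suggested is to flush winbindd's cached entries explicitly, for example from a cron job. A hedged sketch follows: net cache flush is a standard Samba command, but the tdb path below is an assumption about where CentOS keeps winbindd's cache, so verify it before deleting anything.

        # flush Samba's generic cache
        net cache flush

        # heavier hammer: drop winbindd's on-disk cache entirely (path is an assumption; check first)
        service winbind stop
        rm -f /var/lib/samba/winbindd_cache.tdb
        service winbind start

    Note that clearing cached credentials also defeats the offline logon feature until the next successful online authentication, so this is a trade-off rather than a true per-entry timeout.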

    Read the article

  • Detect a USB drive being inserted - Windows Service

    - by Tom Bell
    I am trying to detect a USB disk drive being inserted within a Windows Service, I have done this as a normal Windows application. The problem is the following code doesn't work for volumes.

    Registering the device notification:

        DEV_BROADCAST_DEVICEINTERFACE notificationFilter;
        HDEVNOTIFY hDeviceNotify = NULL;
        ::ZeroMemory(&notificationFilter, sizeof(notificationFilter));
        notificationFilter.dbcc_size = sizeof(DEV_BROADCAST_DEVICEINTERFACE);
        notificationFilter.dbcc_devicetype = DBT_DEVTYP_DEVICEINTERFACE;
        notificationFilter.dbcc_classguid = ::GUID_DEVINTERFACE_VOLUME;
        hDeviceNotify = ::RegisterDeviceNotification(g_serviceStatusHandle, &notificationFilter, DEVICE_NOTIFY_SERVICE_HANDLE);

    The code from the ServiceControlHandlerEx function:

        case SERVICE_CONTROL_DEVICEEVENT:
            PDEV_BROADCAST_HDR pBroadcastHdr = (PDEV_BROADCAST_HDR)lpEventData;
            switch (dwEventType)
            {
            case DBT_DEVICEARRIVAL:
                ::MessageBox(NULL, "A Device has been plugged in.", "Pounce", MB_OK | MB_ICONINFORMATION);
                switch (pBroadcastHdr->dbch_devicetype)
                {
                case DBT_DEVTYP_DEVICEINTERFACE:
                    PDEV_BROADCAST_DEVICEINTERFACE pDevInt = (PDEV_BROADCAST_DEVICEINTERFACE)pBroadcastHdr;
                    if (::IsEqualGUID(pDevInt->dbcc_classguid, GUID_DEVINTERFACE_VOLUME))
                    {
                        PDEV_BROADCAST_VOLUME pVol = (PDEV_BROADCAST_VOLUME)pDevInt;
                        char szMsg[80];
                        char cDriveLetter = ::GetDriveLetter(pVol->dbcv_unitmask);
                        ::wsprintfA(szMsg, "USB disk drive with the drive letter '%c:' has been inserted.", cDriveLetter);
                        ::MessageBoxA(NULL, szMsg, "Pounce", MB_OK | MB_ICONINFORMATION);
                    }
                }
                return NO_ERROR;
            }

    In a Windows application I am able to get the DBT_DEVTYP_VOLUME in dbch_devicetype, however this isn't present in a Windows Service implementation. Has anyone seen or heard of a solution to this problem, without the obvious "rewrite as a Windows application"?

    Read the article

  • How to download media content on demand and reuse from browser cache in silverlight

    - by Andrew
    Hi. I have a problem with a simple Silverlight app. The app has a couple of buttons; each button sets the MediaElement source to a short MP3 file and plays it. My problem is that when I press the same button a second time it re-downloads the MP3 file, but I think it shouldn't; instead it should use the copy of the MP3 that the browser cached when the button was pressed the first time. I'm using SL4 and the MediaElement sources are just simple URIs. I need it to work so that once an MP3 has been downloaded it is cached by the client browser, and further clicks on the button use the cached version of the file instead of downloading it again and wasting my bandwidth. Any ideas?

    Read the article

  • Best way to cache resized images using PHP and MySQL

    - by Chris Hawes
    What would be the best-practice way to handle the caching of images using PHP? Each image is renamed to a GUID on upload, and the GUID is stored in a MySQL database along with the original filename and alt tag. When an image is put into an HTML page it is referenced through a URL such as '/images/get/200x200/{guid}.jpg', which is rewritten to a PHP script. This allows my designers to specify (roughly; the source image may be smaller) the image size. The PHP script then creates a hash of the size (200x200 in the URL) and the GUID filename, and if a file with the name of that hash already exists in the application TMP directory, it sends that file. If the hashed filename does not exist, the resized image is created, written to disk, and served up in the same manner. Is this as efficient as it could be? (It also supports watermarking the images, and the watermarking settings are stored in the hash as well, but that's out of scope for this.)

    Read the article

  • Accessing ruby counter cache

    - by Julian
    Hi all, I'm playing around with a fork of acts_as_taggable_on_steroids as a learning exercise. The version I'm looking at does some stuff I don't understand to calculate Tag counts. So I thought I'd do a version using PORC (Plain Old Rails Counters):

        class Tagging < ActiveRecord::Base #:nodoc:
          belongs_to :tag, :counter_cache => "tagging_counter_cache"
          ...

    I thought tagging_counter_cache was transparently accessed when I access tag.taggings.count but apparently not? Do I really have to access tag.tagging_counter_cache explicitly?

        >> tag.taggings.count
        SQL (0.7ms) SELECT count(*) AS count_all FROM `taggings` WHERE (`taggings`.tag_id = 16)

    Same for size. It's cool if that's the case but just wanted to check.

    Read the article

  • Gravatar server cache

    - by Santa
    Does anyone know how and when Gravatar refreshes their icon caches? I changed my gravatar image for an email about a week ago. For the most part, my profiles that use it have had their avatar icons of various sizes refreshed, except one. In particular, the following URIs, while using the exact same email hash, fetch two completely different images:

        http://www.gravatar.com/avatar/73166d43fc3b2dc5f56669ce27984ad0?d=identicon&s=35
        http://www.gravatar.com/avatar/73166d43fc3b2dc5f56669ce27984ad0?s=35&d=identicon
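    A quick way to confirm that the two URLs really serve different bytes (rather than one of them being a stale copy in the local browser cache) is to fetch both outside the browser and compare checksums. This is only a diagnostic sketch; it does not force Gravatar's edge cache to refresh.

        curl -s 'http://www.gravatar.com/avatar/73166d43fc3b2dc5f56669ce27984ad0?d=identicon&s=35' | md5sum
        curl -s 'http://www.gravatar.com/avatar/73166d43fc3b2dc5f56669ce27984ad0?s=35&d=identicon' | md5sum

    Identical checksums point at the local browser cache; different checksums mean Gravatar's servers are still handing out the old image for one of the query-string orderings.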

    Read the article

  • How do I use the Subversion revision of a CSS file to prevent browser caching?

    - by Clayton
    StackOverflow implements it like this:

        <link rel="stylesheet" href="http://sstatic.net/so/all.css?v=4542">

    Every time the referenced file changes, the href attribute of the link tag is updated in the HTML, which supports long-lived caching while still picking up updated files. My question: how do you retrieve the Subversion revision of that CSS file to include in the link? Subversion keywords only tell you the revision of the file you are currently in. I'm working with PHP/CodeIgniter + jQuery.
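    One approach is to look the revision up at deploy time (or cache it server-side) rather than relying on keyword expansion inside the file itself. A sketch using the stock Subversion client; the grep/awk step is just one way of extracting the number, and the css path is a placeholder:

        # the last revision in which all.css itself changed (not the overall working-copy revision)
        svn info css/all.css | grep '^Last Changed Rev:' | awk '{print $4}'

        # or a single number describing the whole working copy
        svnversion .

    The resulting number can be written to a PHP constant or config value during deployment, and the CodeIgniter view appends it as ?v=<revision> when emitting the link tag.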

    Read the article

  • Is this a cache problem? (JQUERY/OPERA)

    - by Scarface
    Hey guys, quick question. I have this code that brings in data from a PHP script, and it works fine in Firefox and mostly in Opera, except for one problem: in Opera, if I keep refreshing the page, once in a while the information will not appear at all. Is this possible to fix? Thanks in advance for any assistance.

        $.getJSON(files + "comments.php?action=view&load=initial&topic_id=" + topic_id + "&t=" + (new Date()), function(json) {
            if (json.length) {
                for (i = 0; i < json.length; i++) {
                    $('#comment-list').prepend(prepare(json[i]));
                    $('#list-' + count).fadeIn(1500);
                }
            }
        });

    Read the article

  • How to implement a Counter Cache in Rails?

    - by yuval
    I have a posts controller and a comments controller. Post has many comments, and comments belong to Post. The association is set up with the counter_cache option turned on, as such:

        # Inside post.rb
        has_many :comments

        # Inside comment.rb
        belongs_to :post, :counter_cache => true

    I have a comments_count column in my posts table that is defaulted to zero, as such:

        add_column :posts, :comments_count, :integer, :default => 0

    In the create action of my comments controller, I have the following code:

        def create
          @post = Post.find(params[:post_id])
          @comment = @post.comments.build(params[:comment])
          if @comment.save
            redirect_to root
          else
            render :action => 'new'
          end
        end

    My problem: when @comment.save is called, I get the following error:

        ArgumentError in CommentsController#create
        wrong number of arguments (2 for 0)

    Removing :counter_cache => true from comment.rb completely solves the problem, so I'm assuming that it is the cause of this vague error. What am I missing here? How can I save my comment and still have Rails take care of my counter_cache for my post? Thanks!

    Read the article

  • phpThumb cache problems

    - by Cabeludo
    I'm using phpThumb - the PHP thumbnail generator. In 'phpThumb.config.php':

        $PHPTHUMB_CONFIG['cache_maxage'] = 10;
        $PHPTHUMB_CONFIG['cache_maxsize'] = 1000;
        $PHPTHUMB_CONFIG['cache_maxfiles'] = 10;

    but it does nothing... I've got 108 MB in 922 files... and it keeps growing. Thanks for any suggestions.

    Read the article

  • Cache Auth Tokens (or Caching HTTP headers in General) - Best Practices

    - by viatropos
    I'm using the Ruby GData library to access Google Docs, and I recently got a GData::Client::CaptchaError because I was re-logging in with every request. Reading this post, I see it recommends not logging in with every request but caching the authentication token instead. How do I go about doing that correctly? Google says the token expires every 24 hours, and it doesn't seem like I should store it in the session, so what should I do? I'm using Ruby on Rails for all of this. Thanks so much.

    Read the article

  • "conveyor belt" cache architecture

    - by Andrew Matthews
    I'm producing an application with a few peculiar internal communication characteristics that make the usual suspects for data storage and transport (queues and RDBMSs) a poor fit. I'm wondering whether there is a product out there that matches the following characteristics:
        - all data put into it is persistent
        - all reads are delivered out of memory
        - data is universally available
        - data lives where it is most needed
        - data is versioned (nice to have)
        - updates are transactional (I'd like ACID characteristics)
        - data is potentially replicated, but always in sync
        - works on Windows
        - is based on, or has bindings for, .NET
        - is really fast
        - is really robust
        - is redundant
        - is scalable
    I'm looking at things like Microsoft codename "Velocity", but I am not sure whether it fits all of the above characteristics. Likewise, Memcached is not a perfect fit either. The current version of this app opts for an RDBMS with a signaling system for inter-system sync, but latency is too high and versioning of the DB is a pain. I need all the robustness, but with none of the trade-offs.

    Read the article

  • How to get the SMTP response in CACHE

    - by praveenjayapal
    Hi friends, I want to retrieve the SMTP response after sending a mail. I need to fetch the response for the sent mail (whether it was sent properly or not). The response looks like this:

        Return-path: itgigs@4wtech.com
        Envelope-to: [email protected]
        Delivery-date: Fri, 12 Dec 2008 23:54:57 -0500
        Received: from pool-98-109-89-94.nwrknj.fios.verizon.net ([98.109.89.94] helo=Andy-PC)
            by server.4wtech.com with esmtp (Exim 4.69)
            (envelope-from itgigs@4wtech.com)
            id 1LBMWn-0005BH-7u
            for [email protected]; Fri, 12 Dec 2008 23:54:57 -0500
        Date: Sat, 13 Dec 2008 04:55:09 UT
        From: [email protected]
        Subject: Web Deverloper Internship (SoHo) (955259288)
        To: [email protected]
        MIME-Version: 1.0
        Content-Type: text/html; charset="ISO-8859-1"
        Content-Transfer-Encoding: quoted-printable

    How can I retrieve the SMTP response? Please help me.

    Read the article

  • Why doesn't this cache work when using final as a modifier?

    - by Pentium10
    I have this code to get the Cursor once for this instance, but the log shows it is called many times, although I marked it as final. What am I missing?

        private Cursor getAllContactsCached() {
            final Cursor c = this.getList();
            return c;
        }

    The getAllContactsCached method should retrieve the list once, and the second time it should reuse the final object for the return value.

    Read the article
