Search Results

Search found 81493 results on 3260 pages for 'file size'.


  • Deloitte 2013 Global Contact Center Survey

    - by Richard Lefebvre
    "77% of contact centers expect to maintain or grow in size in the next 12-24 months." This is one of the findings of Deloitte's 2013 Global Contact Center Survey, in which there are plenty of great business opportunities for smart CX consultants and integrators using Oracle Service solutions.

    Read the article

  • Is it possible to add your own bookmarks/tabs to a PDF file?

    - by Pure.Krome
    Hi folks, I've purchased a few e-books and love them. Some come with a massive list of bookmarks (kewl!) and some don't. Regardless, is there a way I can create my OWN bookmarks so I can jump to specific pages? I don't want to mess up the current list of official bookmarks that came with the e-books (where they were provided). It's like I want to add my own sticky-note tabs so I can quickly jump between pages, without having to remember the page numbers. Also, this is for Adobe Reader (the free one). If it's available in another program (e.g. Foxit), please say so as well :) Cheers!

    Read the article

  • Problem with Intel 4 Series chipset and CentOS dealing with dual head

    - by Antoine
    I have a Fujitsu Lifebook S7220, and I've been trying for a while to configure it for dual head with CentOS 5.4 x86_64. Every time I try, the X server crashes... I've got an Intel Mobile 4 Series chipset (GMA 4500MHD, if I recall correctly). When I run lspci -v I get this: 00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07) (prog-if 00 [VGA controller]) Subsystem: Fujitsu Limited. Unknown device 1451 Flags: bus master, fast devsel, latency 0, IRQ 177 Memory at f2000000 (64-bit, non-prefetchable) [size=4M] Memory at d0000000 (64-bit, prefetchable) [size=256M] I/O ports at 1800 [size=8] Capabilities: [90] Message Signalled Interrupts: 64bit- Queue=0/0 Enable- Capabilities: [d0] Power Management version 3 00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07) Subsystem: Fujitsu Limited. Unknown device 1451 Flags: bus master, fast devsel, latency 0 Memory at f2400000 (64-bit, non-prefetchable) [size=1M] Capabilities: [d0] Power Management version 3 My question is: has anyone already run into this problem, and how did you fix it? Thank you for your answer!

    Read the article

  • Cron job running successfully suddenly reports script is not found

    - by Ted B
    What might cause cron to suddenly report that a file it is supposed to run is "not found", when the file hasn't been touched and, in fact, the entire system hasn't been touched since it last ran successfully? I define my cron schedule with sudo crontab -e. In it, I have dozens of cron jobs that run successfully. I do not have a PATH specified, and I use absolute paths to call all my scheduled scripts, setting the PATH in them as needed. I do not specify a SHELL in the crontab; all scripts identify the shell on their first line. Without me touching the system, a particular job defined in the middle of the other jobs will suddenly stop running. To debug this, I added an output redirection to a log file. That log clearly shows the script running successfully time after time for weeks, and then suddenly the following appears: /bin/sh: /home/iupress/bin/sync-email_images: not found, with that same line repeated for every scheduled run afterwards. If I run ls on that exact path, copied and pasted from the error message, it clearly reports the file is still there (no surprise). Yet the log continues to report the file is "not found" until I take action. I can run the script manually and it runs just fine. If I do sudo crontab -e and save the file, the job runs at the next scheduled time, putting its output in the log and no longer reporting the script as "not found". It seems to me the contents of the script are irrelevant, since cron never gets as far as processing a file it considers "not found". A job scheduled below the problematic one continues to run (I know because its output is mailed to me), so cron is alive and keeps running at least one other job even after it suddenly reports this job's script as "not found". All my lines end with a newline. I had no periods in the crontab until I added the redirection to a log file. I have now added a PATH specification, but left the absolute paths in the jobs. Unfortunately, I have no idea if and when this problem will recur; it will likely be weeks from now. By the way, I run a script to synchronize the clock, and the time is exactly what it should be.
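    For reference, a minimal sketch of a crontab that pins SHELL and PATH down explicitly, which removes two common sources of /bin/sh surprises. The schedule and log path below are illustrative, not taken from the poster's actual crontab:

        SHELL=/bin/bash
        PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
        # Run the script every 15 minutes, appending stdout and stderr to a log
        */15 * * * * /home/iupress/bin/sync-email_images >> /tmp/sync-email_images.log 2>&1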

    Read the article

  • How to make sure rsync sets proper file permissions?

    - by BetaRide
    I'm transferring data from a Synology box to a Debian box with rsync. Unfortunately, the permissions of all transferred files end up as rwxrwxrwx on the Debian box. I want to make sure these files can be seen by the owning user only. Is there a way to tell the Debian box to set the permissions to something like rwx------? The rsync job is set up through the DSM GUI. If possible I'd rather avoid hacking the Synology box and doing things on its command line, which means I'm looking for a way to set the permissions on the receiving side (the Debian box). I'm using the latest DSM version (4.1).
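    As an illustration of one possible approach: when rsync is invoked from a shell (rather than the DSM GUI), its --chmod option can force owner-only permissions on whatever lands on the receiving side. A sketch with placeholder paths and host names:

        # Force directories to 700 and files to 600 on the destination (Debian) side
        rsync -av --chmod=D700,F600 /volume1/share/ backupuser@debian-box:/srv/backup/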

    Read the article

  • How to resize the disk of a Fedora guest VM in VMware ESXi

    - by Cerin
    How do I resize (specifically, increase) the disk of a Fedora guest VM running under VMware ESXi 4.1? I have a Fedora 16 VM with an ext4-formatted disk, and I've increased its disk size using the vSphere client from 50GB to about 250GB. I rebooted the guest, and it correctly shows the new size in fdisk -l /dev/sda. However, df -H still shows the old size. I've found a few KB articles explaining how to resize partitions for some flavors of Linux, but nothing for Fedora with ext4. Those articles seem to imply that I have to create a completely new partition and can't simply expand the existing one. GParted also prevents me from simply resizing the existing partition. Is this impossible to do under Linux?
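    For what it's worth, a rough sketch of the usual non-LVM sequence: grow the partition first, then grow the ext4 filesystem. This assumes the filesystem sits directly on /dev/sda2 (the partition number is only a guess) and that you have a backup; an LVM layout needs lvextend instead:

        # Grow partition 2 to the end of the disk (growpart is in the cloud-utils package;
        # the classic alternative is deleting and recreating the partition in fdisk with
        # the same start sector and a larger end)
        sudo growpart /dev/sda 2
        # Make the kernel re-read the partition table, then grow ext4 online
        sudo partprobe /dev/sda
        sudo resize2fs /dev/sda2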

    Read the article

  • Find actual CentOS 6 path for %{_includedir} in spec file?

    - by Dayo
    I am trying to find out which path %{_includedir} actually resolves to on a CentOS 6 installation. I understand that this is normally "/usr/include", but where can I find where it is actually set, or somehow "echo" it? Basically, a spec file I am using has "%dir %{_includedir}/someFolder/someFile". Everything runs fine, but I can't find "/usr/include/someFolder". I assume it has been created somewhere else, and I am trying to find out where that is.
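    As an aside, rpm can expand macros on the spot, which is an easy way to see what %{_includedir} resolves to on a given machine; a quick sketch:

        # Print the expanded macro (typically /usr/include on CentOS 6)
        rpm --eval '%{_includedir}'
        # The system-wide macro definitions live here; per-user overrides go in ~/.rpmmacros
        grep _includedir /usr/lib/rpm/macros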

    Read the article

  • Which Large File System Format to use for USB Flash drive compatible with Ubuntu/Mac/Windows?

    - by wajiw
    I've had this problem for a long time and can't find a solution. I switch between the three OSes all the time and use a 1TB USB drive to do so. I can't seem to find a format that is compatible across all of them and handles large files (at least 8-9 GB). Does anyone have a solution for this? Recently I tried exFAT, but that messes up the filesystem when I read it on Windows after adding files from Ubuntu (using the FUSE driver). The OSes I'm currently using are Windows Vista/7, Mac OS X (10.6.5) and Ubuntu 10.10.
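    For illustration only: NTFS is one common compromise here, since it handles multi-gigabyte files and is readable on all three systems (writing from OS X needs a third-party driver). A sketch of formatting the drive from Ubuntu, with the device name as a placeholder:

        # Quick-format the partition as NTFS with a volume label (double-check the device name first!)
        sudo mkfs.ntfs -Q -L BIGDRIVE /dev/sdX1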

    Read the article

  • How can I remove unallocated space from a SQL Server database?

    - by Dynamo
    I have a database that was recently shrunk, and when I run sp_spaceused I see that it has 500MB of unallocated space. I'm trying to keep this database under a certain size (due to MSDE size restrictions for my desktop users) and I'm not sure whether the unallocated space counts toward the overall database size. Is there a way to remove this unallocated space from the database?
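    As one hedged example, DBCC SHRINKFILE with TRUNCATEONLY releases unallocated space at the end of the data file back to the OS. A sketch using the MSDE-era command-line tool, where the database and logical file names are placeholders (sp_helpfile lists the real ones):

        # List the logical file names for the database
        osql -E -d MyDatabase -Q "EXEC sp_helpfile"
        # Release unused space at the end of the data file without moving any pages
        osql -E -d MyDatabase -Q "DBCC SHRINKFILE (MyDatabase_Data, TRUNCATEONLY)"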

    Read the article

  • Trying to edit brightness

    - by Martin Mobbs
    I'm trying to edit the /sys/class/backlight/max_brightness file to stop Ubuntu 12 from returning to maximum brightness on each reboot. gedit won't save the file after I have modified it. I used chown to change the ownership to me, which was successful. I then changed gedit's settings so it won't save a backup, but it still won't save. It returns this error: Could not save the file /sys/class/backlight/acpi_video0/max_brightness. Unexpected error: Error writing to file: Input/output error. Is this yet another bug?
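    For what it's worth, files under /sys are generated by the kernel and generally can't be saved from a text editor; max_brightness in particular is normally read-only, while brightness is the writable knob. A hedged sketch of writing a value from the shell (the value 5 is just an example):

        # Write a backlight level directly to the writable sysfs attribute
        echo 5 | sudo tee /sys/class/backlight/acpi_video0/brightness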

    Read the article

  • What does the @ symbol mean in a file's permission settings?

    - by Shiki
    I'm on Mac OS X. I did ln -s on a directory and these are the results: -rwxrwxr-x@ 1 shiki admin 970332 Mar 6 16:38 apc.so -rwxrwxr-x@ 1 shiki admin 653884 Mar 6 16:38 eaccelerator.so -rw-rw-r--@ 1 shiki admin 60064 Mar 6 16:38 gettext.a -rwxrwxr-x@ 1 shiki admin 80320 Mar 6 16:38 gettext.so -rw-rw-r--@ 1 shiki admin 514784 Mar 6 16:38 imap.a -rwxrwxr-x@ 1 shiki admin 3886132 Mar 6 16:38 imap.so What do those @ symbols mean?
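    For reference, the trailing @ indicates the file carries extended attributes; a short sketch of inspecting them on OS X:

        # Show which extended attributes each file carries, and their sizes
        ls -l@ apc.so
        # List the attribute names and values for a single file
        xattr -l apc.so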

    Read the article

  • What are the drawbacks of sending XML to browsers and let them apply XSLT?

    - by MainMa
    Context Working as a freelance developer, I have often made websites based entirely on XSLT. In other words, on every request an XML file is generated, containing everything we need to know about the page content: the name of the user currently logged in, the top menu entries, whether this menu is dynamic/configurable, the text to display in a specific area of the page, etc. XSLT then processes it (with caching, etc.) into the HTML/XHTML page sent to the browser. This makes it easier to create small-scale websites, especially with PHP. It is a sort of template engine, but one I prefer to other template engines because it's much more powerful than most of them, and because I know it better and like it. It also makes it possible, when needed, to give access to the raw XML data on demand for automated consumption, without the need to create separate APIs. Of course, it will fail completely on any medium- or large-scale website, since, even with good caching techniques, XSLT still degrades overall website performance and requires more CPU server-side. Question Modern browsers can take an XML file and transform it with an associated XSL file declared in the XML like <?xml-stylesheet href="demo.xslt" type="text/xsl"?>. Firefox 3 can do it. Internet Explorer 8 can do it too. This means it is possible to migrate XSL processing from the server to the client side for 50% of users (according to browser statistics on several websites where I may want to implement this). It means that those 50% of users would receive only the XML file on each request, reducing both their bandwidth and the server's (the XML file being much shorter than its processed HTML analog), and reducing the server's CPU usage. What are the drawbacks of this technique? I thought about several, but they don't apply in this situation: Difficult implementation and the need to choose, based on the browser request, when to send raw XML and when to transform it to HTML instead. Obviously, the system will not be much more difficult than the current one; the only change to make is to add an XSL file link to every XML document and to add a browser check. More I/O and bandwidth usage, since the XSLT file will be downloaded by the browsers instead of staying cached on the server. I don't think it will be a problem, since the XSLT file will be cached by the browsers (just as images, CSS and JavaScript files are). Possibly some problems on the client side, like difficulties saving a page in some browsers. Harder debugging: it is impossible to obtain the HTML source the browser is actually using, since the only downloadable source is the XML. On the other hand, I rarely look at HTML code on the client side, and in most cases it is unusable directly anyway (whitespace being removed).

    Read the article

  • Can't access external hard drives or thumb drives

    - by calden
    I am not a complete Linux noob, but I don't know a lot either, and I would greatly appreciate some help with this. I just installed Ubuntu 10.10 onto my laptop. Everything is working great, however USB devices such as thumb drives and external hard drives won't show up. I have been looking around a bit, and when I run sudo fdisk -l it displays this: Disk /dev/sda: 250.1 GB, 250059350016 bytes 255 heads, 63 sectors/track, 30401 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00065684 Device Boot Start End Blocks Id System /dev/sda1 * 1 29255 234983424 83 Linux /dev/sda2 29255 30402 9212929 5 Extended /dev/sda5 29255 30402 9212928 82 Linux swap / Solaris Disk /dev/sdb: 16.0 GB, 16026435072 bytes 64 heads, 32 sectors/track, 15283 cylinders Units = cylinders of 2048 * 512 = 1048576 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000df90d Device Boot Start End Blocks Id System /dev/sdb1 * 1 15283 15649776 7 HPFS/NTFS It does seem to list my 16 GB thumb drive, but other than seeing it here I can't access it to read and write files. It does the same with my external hard drive. I know those devices work, as I have tried them on my other computer and they work fine. Also, here is what is in fstab, if that helps anybody help me: proc /proc proc nodev,noexec,nosuid 0 0 /dev/sdb1 / ext4 errors=remount-ro 0 1 /dev/sdb5 none swap sw 0 0 Thank you very much for the help, everyone.
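    As a diagnostic sketch (hedged, since the fstab above also claims /dev/sdb1 as the root filesystem, which is worth double-checking): mounting the partition by hand often surfaces a useful error message.

        # Try mounting the NTFS partition reported by fdisk and watch the kernel log
        sudo mkdir -p /media/usbstick
        sudo mount -t ntfs-3g /dev/sdb1 /media/usbstick
        dmesg | tail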

    Read the article

  • SD Card only mounted after a reboot

    - by hattenn
    I have a Kingston 2GB MicroSD card and I plug it in via an inconix MicroSD adapter into the internal card reader of my Samsung N210 netbook running Ubuntu 10.10, but it doesn't show up. It only shows up if I reboot the system with the card plugged in. Why does it need a reboot in order to mount? sudo fdisk -l gives the output below, but again, I can only see the drive when I reboot the computer while the card is plugged in. Disk /dev/sda: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x9a5a7990 Device Boot Start End Blocks Id System /dev/sda1 1 1959 15728640 27 Unknown Partition 1 does not end on cylinder boundary. /dev/sda2 * 1959 1972 102400 7 HPFS/NTFS /dev/sda3 1972 18992 136718750 83 Linux /dev/sda4 18992 19458 3738625 5 Extended /dev/sda5 18992 19458 3738624 82 Linux swap / Solaris Disk /dev/sdb: 1973 MB, 1973420032 bytes 60 heads, 59 sectors/track, 1088 cylinders Units = cylinders of 3540 * 512 = 1812480 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sdb1 1 1089 1927100+ 6 FAT16

    Read the article

  • How can I delete a specific file from a set of results using the find command in Linux?

    - by PeanutsMonkey
    I have the following command that lists all files with the extensions doc, docx, etc.: find . -maxdepth 1 -iname \*.doc\* The command returns numerous files, some of which I would like to delete. For example, the results returned are Example.docx Dummydata.doc Sample.doc and I would like to delete Sample.doc and Dummydata.docx. How do I delete those files using the -exec option? Am I able to pass in the names of the files, e.g. rm Dummydata.docx Sample.doc, so that the command would look as follows: find . -maxdepth 1 -iname \*.doc\* -exec rm Dummydata.docx Sample.doc Or can I pass the names of the files within {} after rm? e.g. find . -maxdepth 1 -iname \*.doc\* -exec rm {Dummydata.docx} Sample.doc Is there a better way of doing it?
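    A hedged sketch of two common ways to do this, using the filenames from the question:

        # Narrow the match so find itself only returns the files to delete
        find . -maxdepth 1 \( -iname 'Sample.doc' -o -iname 'Dummydata.doc*' \) -exec rm {} \;
        # Or keep the broad match and confirm each deletion interactively
        find . -maxdepth 1 -iname '*.doc*' -exec rm -i {} \;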

    Read the article

  • Diff 2 files while ignoring parts of lines

    - by Millianz
    I would like to diff a file system. Currently my bash script prints the file system recursively into a file (ls -l -R) and diffs it against an expected output. An example of a line in this file would be: drw---- 100000f3 00000400 0 ./foo/ My current diff command is diff "$TEMP_LOG" "$DIFF_FILE_OUT" --strip-trailing-cr --changed-group-format='%' --unchanged-group-format='' "$SubLog" As you can see, I ignore additional lines in the current output file; I only care about lines that match the master output. I now have the problem, though, that some files may differ in size, or a folder might even have a different name, but due to its location I know what access rights it should have. For example: Output: ------- 00000000 00000000 528 ./foo/bar.txt Master: ------- 00000000 00000000 200 ./foo/bar.txt Only the size differs here, and it doesn't matter; I would like to just ignore certain parts of the diff, kind of like an ANSI C comment. Master: ------- 00000000 00000000 /*200*/ ./foo/bar.txt -- OR -- Master: d------ 00000000 00000000 /*10*/ ./foo//*123123*///*76456546*//bar.txt Output: d------ 00000000 00000000 0 ./foo/asd/sdf/bar.txt And still have it diff correctly. Is this even possible with diff, or will I have to write a custom script for it? Since I'm fairly new to Cygwin, I might be using the completely wrong tool altogether; I'm happy for any suggestions. Update: Taking a step back, here is the general task I want to achieve. I want to write a script that checks the file system to see if the read/write permissions are set up correctly. The structure of the file system is under my control, so I don't have to worry about it changing too much. Sometimes folders/files might not be present, but if they are, their permissions must be checked. For example, assume that the following is a snapshot of the current file system structure: drw ./foo drw ./foo/bar -rw ./foow/bar/bar.txt drw ./foo/baz -rw ./foo/baz/baz.txt And this is what the file system structure might dictate, i.e. if these folders/files are present, the permissions must match: drw ./foo drw ./foo/bar -rw ./foo/bar/bar.txt --- ./foo/bar/foobar.txt drw ./foo/baz -rw ./foo/baz/foobaz.txt In this case the file system checks out OK, since all files present match their expected values. The situation becomes more complicated as soon as certain folders can have arbitrary names; only due to their location do I know what their permissions should be. Assume that the directory ./foo/bar in the above example is such a case, i.e. instead of bar the folder could have any name, but must still match the -rw permissions. This seems like a very complicated situation, and I'm not even sure I can solve it with bash scripting alone. I might have to write an actual application.
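    One common approach, sketched here under the assumption that only the permission field and the path matter: normalize both listings before diffing, so the size (and any other ignorable column) never reaches diff. The wildcard-folder-name case from the update would still need a custom script.

        # Keep only the first (permissions) and last (path) fields of each listing, then diff
        diff <(awk '{print $1, $NF}' "$TEMP_LOG") <(awk '{print $1, $NF}' "$DIFF_FILE_OUT")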

    Read the article

  • "Can't find root filesystem / error mounting /dev/root" when booting to new kernel

    - by salparadise
    I am trying to upgrade my kernel from 2.6.18-274 to 2.6.39 for some wireless card drivers. When I boot into the new kernel I get the "Can't find root filesystem / error mounting /dev/root" error. Googling led me to this page: http://fedoraproject.org/wiki/Common_kernel_problems#Can.27t_find_root_filesystem_.2F_error_mounting_.2Fdev.2Froot From what I am reading, it seems to be an issue with a driver for my SATA controller or hard drive, but I can't find what option I need to add to the kernel. Doing a diff from the old initrd to the new one gives me the following: root-> diff /tmp/kafter /tmp/kbefore 6a7,8 > lib/dm-message.ko > lib/dm-region_hash.ko 8a11 > lib/dm-raid45.ko 13d15 < lib/dm-region-hash.ko 16a19 > lib/dm-mem-cache.ko Do I need any of those? I'm not sure I would need dm-raid45.ko, as I am not running a RAID. I have the same SATA and IDE options configured for both kernels, so I'm not sure what else to look for; any help is appreciated. Additionally, here is the hardware info: 00:1f.2 IDE interface: Intel Corporation 82801FB/FW (ICH6/ICH6W) SATA Controller (rev 03) (prog-if 8f [Master SecP SecO PriP PriO]) Subsystem: Hewlett-Packard Company Unknown device 3006 Flags: bus master, 66MHz, medium devsel, latency 0, IRQ 233 I/O ports at 1818 [size=8] I/O ports at 1830 [size=4] I/O ports at 1820 [size=8] I/O ports at 1834 [size=4] I/O ports at 14f0 [size=16] Capabilities: [70] Power Management version 2 root-> smartctl -a /dev/sda ... === START OF INFORMATION SECTION === Device Model: WDC WD5000AADS-00S9B0
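    For comparison, the usual fix on RHEL/CentOS-era systems when the initrd lacks a storage driver is to rebuild it with the module forced in. A hedged sketch, where the module name (ata_piix, guessed from the ICH6 controller shown above) and the image filename are assumptions:

        # Rebuild the initrd for the 2.6.39 kernel, forcing the SATA/IDE driver into it
        mkinitrd --with=ata_piix -f /boot/initrd-2.6.39.img 2.6.39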

    Read the article

  • Web Development - How to access custom host, defined in my hosts file, from another device in the same network

    - by Neara
    OK, I hope I'll be able to explain the issue I'm experiencing. I'm working on a project that has two parts: one takes all requests from the usual localhost, the other handles requests to myhost.local. When I access both addresses from my computer, everything works. But now I need to test myhost.local on mobile devices connected to the same network. Usually I would just run the server on my computer's IP on the network: python manage.py runserver 10.0.0.8:8000 and then, from any device, going to 10.0.0.8:8000 would show the project I'm working on. However, accessing that IP address now routes me straight to localhost. So my question is: how do I access myhost.local from another device on the same network? I don't want to change router settings if that can be avoided, because sometimes I work from places where I can't access the router admin. Are there any network settings on my computer that I can change to fix the routing to myhost.local without losing access to localhost as well?

    Read the article

  • How to connect to a Windows PPTP VPN?

    - by Behzadsh
    The VPN server gave me an .exe file, a connection manager, to connect to the server. I created a PPTP VPN connection under nm-applet; I only entered the host, username and password, but later I figured out there are more options to set. I extracted the .exe file, and in a .cms file I found some options, but I don't know how to set them in Ubuntu. Here is the file content: http://pastebin.com/FmgkFBcS Sorry for my bad English.

    Read the article

  • Outlook Signature Broken in Entourage

    - by Eric J.
    Some of our company uses Windows with Outlook 2010, and the rest use Mac with Entourage. When our standard signature line is included in an email that goes to Entourage, the result does not display correctly. It appears that Entourage is mangling the HTML. My working theory is that Entourage encounters inline CSS styles it does not know about and stops processing styles, but I'm really not sure. Question: How can I enter a signature into Outlook 2010 that will render correctly in Entourage? For example, can I specify somehow the exact HTML to use? Here's an example of how the HTML is being changed. Original on Outlook, as received by another Outlook client: <span style='font-size:9.0pt;font-family:"Century Gothic","sans-serif"; color:#1785C5'>My Company<br> </span></b><span class=apple-style-span><span style='font-size:9.0pt; font-family:"Century Gothic","sans-serif";color:#666666'>123 Main St.</span></span><span class=apple-style-span><span style='font-size:9.0pt;font-family:"Century Gothic","sans-serif"; color:#AFAFAF'>&nbsp;</span></span><span class=apple-style-span><span style='font-size:9.0pt;font-family:"Century Gothic","sans-serif";color:#666666'>Suite 100</span></span> Note the use of spans, color #1785C5 and color #666666. Same original email, as displayed in an Entourage client: <span style='font-size:9.0pt;font-family:"Century Gothic","sans-serif"; mso-fareast-font-family:"Times New Roman"'><br> <span style='color:#656565'>My Company<br> 123 Main St Suite 100<br> </span> Note the use of br tags rather than spans, and the color #656565.

    Read the article

  • Can somebody please recommend a good local file backup utility that will be Windows & OSX Compatible?

    - by JAG2007
    I have an external hard drive that I keep all of my work files on and transfer them back and forth between my Windows 7 box at work and my Mac at home (I work from home frequently). Can someone recommend a really good backup utility that I can use on that external drive, to back the files up to my work computer locally, or to the other external drive on my machine at work? I'm looking preferably for free or open-source software, and I'd prefer it to be cross-platform, although I would also consider software that only works on the Windows box. I'd also consider software that has a price, assuming it is really good and the price is reasonable (under $50 or so). I checked out CrashPlan a bit, but I'm not sure that's really what I'm looking for. To reiterate, I'm not looking for online backup solutions, just a piece of software that can back up my data to another drive locally. CrashPlan Free seems to offer this, but I'm not sure how good it is (considering their goal is to get me to buy the paid version). NOTE: I'm running Windows 7 64-bit, so I need software that is compatible with a 64-bit OS. My previous software, PC Backup, is not. That's partly why I'm in this boat.

    Read the article

  • How to re-add RAID-10 dropped drive?

    - by thiesdiggity
    I have a problem that I can't seem to solve. We have an Ubuntu server set up with RAID-10, and two of the drives dropped out of the array. When I try to re-add them using the following command: mdadm --manage --re-add /dev/md2 /dev/sdc1 I get the following error message: mdadm: Cannot open /dev/sdc1: Device or resource busy When I do a "cat /proc/mdstat" I get the following: Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [r$ md2 : active raid10 sdb1[0] sdd1[3] 1953519872 blocks 64K chunks 2 near-copies [4/2] [U__U] md1 : active raid1 sda2[0] sdc2[1] 468853696 blocks [2/2] [UU] md0 : active raid1 sda1[0] sdc1[1] 19530688 blocks [2/2] [UU] unused devices: <none> When I run "/sbin/mdadm --detail /dev/md2" I get the following: /dev/md2: Version : 00.90 Creation Time : Mon Sep 5 23:41:13 2011 Raid Level : raid10 Array Size : 1953519872 (1863.02 GiB 2000.40 GB) Used Dev Size : 976759936 (931.51 GiB 1000.20 GB) Raid Devices : 4 Total Devices : 2 Preferred Minor : 2 Persistence : Superblock is persistent Update Time : Thu Oct 25 09:25:08 2012 State : active, degraded Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Layout : near=2, far=1 Chunk Size : 64K UUID : c6d87d27:aeefcb2e:d4453e2e:0b7266cb Events : 0.6688691 Number Major Minor RaidDevice State 0 8 17 0 active sync /dev/sdb1 1 0 0 1 removed 2 0 0 2 removed 3 8 49 3 active sync /dev/sdd1 Output of df -h is: Filesystem Size Used Avail Use% Mounted on /dev/md1 441G 2.0G 416G 1% / none 32G 236K 32G 1% /dev tmpfs 32G 0 32G 0% /dev/shm none 32G 112K 32G 1% /var/run none 32G 0 32G 0% /var/lock none 32G 0 32G 0% /lib/init/rw tmpfs 64G 215M 63G 1% /mnt/vmware none 441G 2.0G 416G 1% /var/lib/ureadahead/debugfs /dev/mapper/RAID10VG-RAID10LV 1.8T 139G 1.6T 8% /mnt/RAID10 When I do an "fdisk -l" I can see all the drives needed for the RAID-10. The RAID-10 is part of /dev/mapper; could that be the reason why the device is reported as busy? Does anyone have suggestions on what I can try to get the drives back into the array? Any help would be greatly appreciated. Thanks!
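    As a diagnostic sketch: the /proc/mdstat output above shows /dev/sdc1 currently active in md0, which by itself would explain the "Device or resource busy" error; mdadm can confirm which array the partition's superblock belongs to:

        # Show which array /dev/sdc1's superblock says it is a member of
        sudo mdadm --examine /dev/sdc1
        # And show what md0 is currently assembled from
        sudo mdadm --detail /dev/md0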

    Read the article

  • How to know disk quota on a network share in Windows?

    - by myforwik
    I connect to a share on a Windows server and have a quota of unknown size. All the tools I have seen report the disk size and free space, not my own quota size. The only way I can figure out my quota is to keep writing junk until I reach it. There must be a better way than this? My PC runs Windows XP and the servers are mainly Windows Server 2003.

    Read the article

  • Can I list file names (or their parent directories) that were recently deleted using rm in OS X?

    - by Andrew Grimm
    Is it possible to find out which files and directories have recently been deleted by rm in OS X? Or, failing that, is it possible to find which parent directories have had files or directories deleted from them? The OS version is Snow Leopard. Background: last night, rvm (Ruby Version Manager) did an rm -rf of the ~/ruby directory in my home directory (this bug has since been fixed). Ideally, I'd like to know which files within ~/ruby were deleted, but failing that, I'd like to know whether rvm deleted anything outside of ~/ruby. In case anyone's wondering about backups: just about everything within ~/ruby is a git project with a remote repo, and I have a fairly recent Time Machine backup (only 20 days old).
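    One hedged sketch, leaning on the Time Machine backup mentioned above: a dry-run rsync from the backup copy of ~/ruby against the (now emptied) live directory lists everything the backup still has that the live tree no longer does, which approximates what was deleted. The backup path below is a placeholder for your actual Time Machine volume:

        # Nothing is copied (-n); the output lists files present in the backup but missing locally
        rsync -avn "/Volumes/Time Machine Backups/Backups.backupdb/MyMac/Latest/Macintosh HD/Users/$USER/ruby/" ~/ruby/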

    Read the article
