Search Results

Search found 9017 results on 361 pages for 'efficient storage'.

  • What are some good methods to improve personal password management?

    - by danilo
    I want to improve my personal password management. I usually use secure passwords, but reuse them across too many different places. My questions:

    - What methods do you use to create passwords, e.g. for different online sites/logins?
    - What methods do you use to remember those passwords? Memory? Pen and paper? Software storage?
    - Is there a good way to store my passwords somewhere so I always have access to them when I need them (e.g. a web-based solution on my own server), while still keeping them away from unwanted access?

    Edit: Someone on another site mentioned http://passwordmaker.org/. Have you had any good or bad experiences with that software?
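    Since the question is partly about creating passwords, here is a minimal sketch of one common generation approach, assuming OpenSSL is installed (the 16-byte length is an arbitrary choice):

        # 16 random bytes, base64-encoded: roughly a 22-character password
        openssl rand -base64 16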

  • Why does DBAN crash on my HDDs?

    - by John Watson
    I am using DBAN to erase an HDD. DBAN is loaded from a CD, and the BIOS boot order has been set to favour the CD drive. On starting the laptop, the system boots from the CD and the DBAN interface appears. DBAN detects two storage devices, the HDD and the SD card. My HDD is 320GB, but DBAN says 298GB. It erases the SD card, but when I try to erase the HDD, it gives the following error:

        DBAN finished with non-fatal errors.
        *ERROR /dev/sdb (process crash)
        *ERROR /dev/sda (process crash)
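    (As an aside, the 320GB-vs-298GB discrepancy is not an error: drive vendors count decimal 10^9-byte gigabytes, while DBAN reports binary 2^30-byte units. A quick check in any shell with bc:

        echo '320 * 10^9 / 1024^3' | bc    # prints 298

    The process crash itself is a separate issue.)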

  • Use network drives as mount points during installation?

    - by ajsie
    Is it possible to use network storage locations as mount points during installation? I want to separate the system (Ubuntu) from the data (personal files). E.g. if I have 5 computers, I don't want to recreate /home/david 5 times, so I want to mount networkdrive/home to /home on the local Ubuntu server, so that ALL users' home folders could be used, and maybe also networkdrive/projects to /projects. That way it's OK if I accidentally repartition the local Ubuntu server, because the data is not on that server but on the data server. Is separating "data" from "logic" good in this case? And is it possible? What protocol should I use for the mapping over the Internet? (Maybe the server is in Sweden, and the data is in Norway.) Thanks.
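    For a sense of what the post-install version of this looks like, here is a minimal sketch of NFS mounts in /etc/fstab, assuming a host named dataserver that exports /export/home and /export/projects (all names hypothetical):

        # /etc/fstab on the local Ubuntu server
        dataserver:/export/home      /home      nfs  defaults  0  0
        dataserver:/export/projects  /projects  nfs  defaults  0  0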

  • Linux compilers for C/C++ on AMD "Bulldozer" CPUs like the Interlagos [closed]

    - by jstarek
    I am looking for a Linux compiler for C/C++ code that supports AMD's new "Bulldozer" architecture and produces efficient binaries for the Interlagos-series Opterons. This seems to be a bit difficult because of the peculiarities of the Bulldozer microarchitecture. While AMD has a whitepaper with some details, I would like to see some independent analyses. The relevant paper from HeCToR focuses mostly on job placement and scheduling, which is an area we already investigate. So, who can recommend a good compiler comparison for Bulldozers running Linux? Does anyone have well-described benchmarks?
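    As one baseline for such a comparison: GCC 4.6 and later accept a Bulldozer target directly, so a reference build might look like this (file names are placeholders):

        gcc -O2 -march=bdver1 -mtune=bdver1 -o bench bench.c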

  • SQL 2008 R2: Data\Log partitions

    - by Reese Hirth
    I have a SQL Server setup that a previous IT person set up with a 2TB data partition and a 1TB log partition. The OS partition is 244GB, and SQL is installed on a separate 1TB partition. We have an additional 8TB of storage that I would like the new IT staff to bring online. He wants to create 4 new 2TB data partitions. I see this as confusing. Can't we just back up the current data partition, blow it away, and create a new 10TB data partition? I'm responsible for administering the data on the server but am not allowed to do the setup myself. This is a GIS server running ArcGIS Server with around 60 geodatabases, ranging from 20GB up to a couple that may grow to over a TB. So: five 2TB data partitions, or one 10TB partition? Thanks for the advice.

  • File creation time on Windows vs Linux

    - by Sergei
    We have the following setup:

    - mountserver - Debian Linux
    - fileserver1 - Windows 2008 R2 storage server
    - fileserver2 - Celerra NS20 exporting a CIFS share
    - workstation - Windows 7 with a mapped drive to the share on fileserver2

    What we are doing:

    - mounted the share from fileserver1 on mountserver, e.g. /shared/fileserver1
    - mounted the share from fileserver2 on mountserver, e.g. /shared/fileserver2
    - ran rsync on mountserver to sync data from fileserver1 to fileserver2, using atime as a parameter to sync only data not older than X
    - after a while, tried to delete data older than Y on /shared/fileserver2

    From what I see, the Linux stat command on mountserver returns the following when querying a file on /shared/fileserver2: [stat output missing from post]. At the same time, when I open Properties for the same file using the mapped drive connected to fileserver2, I see the following: [screenshot missing from post]. As you can see, the Created date of 12 August shown in Windows Explorer is nowhere to be seen in the stat output. Am I missing something here?
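    A likely-relevant detail when comparing the two views: the traditional Linux/POSIX stat call only exposes access, modify, and change (ctime) timestamps, so there is no creation-time field for Windows' Created date to map to. GNU stat does print a birth-time field, but it is usually empty, especially on network mounts (the path below is hypothetical):

        stat --format='birth: %w  modified: %y  accessed: %x' /shared/fileserver2/somefile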

  • Preferred OS for hosting Tomcat servlet container

    - by dacracot
    I know that I'm taking a risk, pitting the differing OS religions against each other, but I would like professional opinions about hosting a servlet container. In my case the container is set: we will be using Tomcat. What is in question is the host operating system. We have administrators experienced in Windows Server 2003. We have developers experienced in Solaris, OS X, and Linux. There is no warring between these groups, just a question of who will ramp up through the learning curve of an OS they are unfamiliar with. So, given all the cooperative spirit, we are struggling with how to find the most efficient path. I have already cross-posted this question here.

  • Routing traffic to specific web sites through Ethernet, rest via wifi on Mac OS X 10.6?

    - by user32448
    Hi, I have two separate Internet connections connected to a Mac, and I'd like one of them (via Ethernet, eth0, gateway 192.168.2.1) to serve just for backing up to a remote online storage, and the other one (via AirPort, en1, gateway 192.168.1.1) for all other Internet traffic. I tried using "route" from the terminal as follows (just for testing against the site www.whatismyip.org, whose IP is 98.207.226.113, to see through which gateway the traffic is routed):

        sudo route add -host 98.207.226.113 -interface eth0

    I can see using netstat that the route is added:

        $ netstat -rn -f inet
        Routing tables

        Internet:
        Destination       Gateway        Flags   Refs   Use   Netif  Expire
        default           192.168.1.1    UGSc      49     0   en1
        98.207.226.113    192.168.2.1    UGSc       0     0   eth0

    However, the traffic in this case does NOT get routed through Ethernet, as if the routing definition I made is ignored. Any ideas? Thanks!
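    One thing worth double-checking when reproducing this: on Mac OS X the wired interface is normally named en0 rather than eth0, so a hedged variant of the same test would be:

        ifconfig -a                                           # confirm the real interface names
        sudo route add -host 98.207.226.113 -interface en0    # wired port, if it is en0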

  • Can someone correct me about Windows 2008 DFS implementation

    - by cwheeler33
    I have two Windows 2003 file servers using DoubleTake at the moment. They are in a 2003 AD domain, and each server has its own disk set. It is time to replace the servers... I want to use Windows 2008 64-bit Enterprise. I was thinking of using DFS-R to replace DoubleTake. The part I'm not sure about is clustering: do I need to have shared storage if each server has a copy of the data? I want to have the data available under the same share name, so maybe I don't fully understand how DFS is set up. I currently have about 6TB of data and expect it to grow by 3TB a year on these file servers. Any resources/books that could teach me would be good to know as well.

  • Backup Exec backup-to-disk folder creation - Access denied

    - by ewwhite
    I'm having a difficult time creating a backup-to-disk folder in Symantec Backup Exec 12.5 and Backup Exec 2010. The backend storage is a Nexenta/ZFS-based NAS filer sharing the volume via CIFS. I've also seen the issue on other *nix-based NAS devices. I've attempted mapping the drive, providing the full path to the folder, etc. I can browse to the share just fine from within Windows, but Backup Exec fails to create the B2D folder with different variants of an "Unable to create new backup folder. Access denied." error. I've attempted creating service accounts in Backup Exec to handle the authentication, but nothing seems to work. What's the key to making this work?

  • File upload fails for no apparent reason

    - by sufoid
    Hello, I want to implement a file upload. The script should take the image, resize it, and store it, but there seems to be an error somewhere in the upload that I can't identify. Here is the code (note: $id, $success, getExtension() and make_thumb() are defined elsewhere in the script):

        define ("MAX_SIZE","2000"); // maximum size for uploaded images, in KB
        define ("WIDTH","107");     // width of thumbnail
        define ("HEIGHT","107");    // alternative height of thumbnail (portrait 107x80)
        define ("WIDTH2","600");    // width of (compressed) photo
        define ("HEIGHT2","600");   // alternative height of (compressed) photo (portrait 600x450)

        if (isset($_POST['Submit'])) {
            // iterate through all upload fields
            foreach ($_FILES as $key => $value) {
                // read name of user file
                $image = $_FILES[$key]['name'];
                // if it is not empty
                if ($image) {
                    // get original name of file from client's machine
                    $filename = stripslashes($_FILES[$key]['name']);
                    // get extension of file in lower-case format
                    $extension = getExtension($filename);
                    $extension = strtolower($extension);
                    // if extension not known, output error; otherwise continue
                    if (($extension != "jpg") && ($extension != "jpeg") && ($extension != "png") && ($extension != "gif")) {
                        echo '<div class="failure">Error with file '. $_FILES[$key]['name'] .': unknown file type: only .gif, .jpg or .png files can be uploaded.</div>';
                    } else {
                        // get size of image in bytes;
                        // $_FILES[$key]['tmp_name'] is the temporary name under which the uploaded file was stored on the server
                        $size = getimagesize($_FILES[$key]['tmp_name']);
                        $sizekb = filesize($_FILES[$key]['tmp_name']);
                        // if image size exceeds defined maximum size, output error; otherwise continue
                        if ($sizekb > MAX_SIZE*1024) {
                            echo '<div class="failure">Error with file '. $_FILES[$key]['name'] .': the file could not be uploaded: the file size exceeds the 2MB limit.</div>';
                        } else {
                            $rand = md5(rand() * time());               // create random file name
                            $image_name = $rand.'.'.$extension;         // unique name (random number)
                            // new name contains full path of storage location (images folder)
                            $consname = "photos/".$image_name;          // path to big image
                            $consname2 = "photos/thumbs/".$image_name;  // path to thumbnail
                            $copied = copy($_FILES[$key]['tmp_name'], $consname);
                            $copied = copy($_FILES[$key]['tmp_name'], $consname2); // note: this overwrites the result of the first copy()
                            $sql = "INSERT INTO photos (galery_id, photo, thumb) VALUES (". $id .", '$consname', '$consname2')";
                            $query = mysql_query($sql) or die(mysql_error());
                            // if image has not been uploaded successfully, output error; otherwise continue
                            if (!$copied) {
                                echo '<div class="failure">Error with file '. $_FILES[$key]['name'] .': the file could not be uploaded.</div>';
                            } else {
                                $thumb_name = $consname2; // path for thumbnail creation & storage
                                // call to function: create thumbnail
                                // parameters: image name, thumbnail name, specified width and height
                                $thumb = make_thumb($consname,$thumb_name,WIDTH,HEIGHT);
                                $thumb = make_thumb($consname,$consname,WIDTH2,HEIGHT2);
                            }
                        }
                    }
                }
            }
            // current image could be uploaded successfully
            echo '<div class="success">'. $success .' photo(s) uploaded successfully!</div>';
            showForm(); // call to function: create upload form
        }

  • Calculating IOPS for a single HDD - what am I doing wrong?

    - by red888
    So I know there is no standardized way of calculating IOPS for an HDD, but from everything I have read it appears one of the most accurate formulas is the following:

        IO time (ms) = {seek time} + {rotational latency} + ({block size} / {data transfer rate})

    This is the time per IO in milliseconds, or what the book I've been reading calls "disk service time". Rotational latency is calculated as half of one rotation, in milliseconds. This was taken from the EMC book "Information Storage and Management" - arguably a pretty reliable source, right/wrong?

    Putting this formula into practice, consider this Seagate data sheet. I am going to calculate IOPS for the ST3000DM001 model with a block size of 4KB:

    - Seek Average (Write) = 9.5ms - I'll be measuring IOPS for writes
    - Spindle speed = 7200rpm
    - Average Data Rate = 156MB/s

    So my variables are:

    - Seek Time = 9.5ms
    - Rotational latency = (.5 / (7200rpm / 60)) = 0.004s = 4ms
    - Data Rate: 156MB/s = (0.156MB/ms / 0.004MB) = 39

    9.5ms + 4ms + 39 = 52.5 IO/ms

    1 / (52.5 * 0.001) = 19 IOPS

    19 IOPS for this drive clearly is not right, so what am I doing wrong?
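    For comparison, here is a minimal sketch of the same formula as I read it; the transfer term is block size divided by data rate, which comes out at a few hundredths of a millisecond rather than 39:

        awk 'BEGIN {
            seek = 9.5                        # average write seek, ms
            rot  = 0.5 / (7200 / 60) * 1000   # half a rotation at 7200rpm, ms
            xfer = (4 / 1024) / 156 * 1000    # 4KB at 156MB/s, ms
            t = seek + rot + xfer
            printf "service time %.2f ms -> %.0f IOPS\n", t, 1000 / t
        }'
        # prints: service time 13.69 ms -> 73 IOPS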

  • Is it possible to create a delta VM?

    - by iTayb
    I have a VM of approx. 18GB. Sometimes I need temporary clones of it, so I clone it. The problems are:

    - It takes a while until it's done.
    - Sometimes I happen to need dozens of clones, and I'm running out of storage.

    I wonder if there's a way to create a VM that saves only the delta (difference) since deployment from the source machine. That way each new VM's file size should be 100MB at most, and creating it would be much faster. I've heard that VMware View uses this concept. Is such a thing possible with ESXi as well? I'm using ESXi 4.1 with vSphere 4.1. Thanks!

  • my sweet old VHS collection

    - by microspino
    What is the best procedure and digital format to resurrect my old VHS library in a way that lets me watch it on my LCD TV?

    - I have a not-so-big collection of about 100 VHS tapes.
    - I have plenty of storage.
    - I have a network media tank (an A-110 Popcorn Hour, but I can also purchase a new media center if needed).
    - I have an old working VCR (but again, I can pick up a specific new one if you think that's better for preserving quality).

    The VHS cassette collection seems to have retained good quality over the years. Of course I have computers (both Mac and PC) to do the processing. Which software do I need / am I missing? Please give me some advice.

  • Migrating an Active Directory domain controller to AWS

    - by Xavier Hutchinson
    I am required to migrate an Active Directory server into AWS along with a couple of other servers (SQL and IIS) to create a dev and test environment for our network / development. My plan at this time is simply to rebuild the Active Directory server in AWS from scratch - which is quite time-consuming indeed! I was wondering if anyone had a recommendation for a better and more efficient approach to migrating a copy of a physical Active Directory server to the cloud? The server is Windows Server 2012. Thank you!

  • FreeNAS - how to "Exclude from file" in Rsyncd (GUI)

    - by user179181
    I am trying to set up rsync tasks to pull user profiles from 11 Windows machines running DeltaCopy Server, and then configure ZFS periodic snapshot tasks as a backup solution. So far this has been working fine, although I would like to exclude certain file types like .DAT or NTUSER.DAT. My exclusion file resides on the local ZFS dataset (the receiving side) and is as follows:

        Temp
        Temporary Internet Files
        NTUSER.DAT
        NTUSER.DAT.LOG
        *.dat
        *.tmp
        *.DAT.log
        *.ost
        *.pst

    The line I typed under Auxiliary Parameters (Rsyncd Global Conf, under Services) is as follows:

        exclude from = /mnt/Storage/User_Profiles/exclude.txt

    I've tried deleting the .DAT files from the receiving end, and just as I start to get excited, I click refresh and there they are again.
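    One thing that may be worth ruling out is the scope of the parameter: in rsyncd.conf terms, "exclude from" is normally a per-module parameter, so a module-scoped sketch of the generated config might look like this (the module name is hypothetical, and whether the GUI's global section propagates the setting is an assumption to verify):

        [profiles]
            path = /mnt/Storage/User_Profiles
            read only = false
            exclude from = /mnt/Storage/User_Profiles/exclude.txt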

  • Too much free space on FreeNAS - ZFS

    - by Guillaume
    I have a FreeNAS server with 3 x 2TB disks in raidz1. I would expect to have about 4TB of space available. When I run zpool list I get:

        [root@freenas] ~# zpool list
        NAME          SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
        main_volume   5.44T  3.95T  1.49T  72%  ONLINE  /mnt

    I was expecting a size of 4TB. Also, used space as reported by zpool list does not match what's reported by du:

        [root@freenas] ~# du -sh /mnt/main_volume/
        2.6T    /mnt/main_volume/

    There are quite a few things that I don't yet completely understand about ZFS, but at the moment I am mostly worried that I misconfigured my system and that I don't have any storage redundancy. How can I make sure I did not make a horrible mistake... For the sake of completeness, here is the output of zpool status:

        [root@freenas] ~# zpool status
          pool: main_volume
         state: ONLINE
         scrub: none requested
        config:

                NAME                                            STATE   READ WRITE CKSUM
                main_volume                                     ONLINE     0     0     0
                  raidz1                                        ONLINE     0     0     0
                    gptid/d8584e45-5b8a-11d9-b9ea-5404a6630115  ONLINE     0     0     0
                    gptid/d8f7df30-5b8a-11d9-b9ea-5404a6630115  ONLINE     0     0     0
                    gptid/d9877cc3-5b8a-11d9-b9ea-5404a6630115  ONLINE     0     0     0

        errors: No known data errors
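    A comparison that may help reconcile the numbers: zpool list reports raw pool capacity and usage including raidz parity (3 x 2TB is about 5.44TiB raw), while zfs list reports usable space after parity, so the two commands are expected to disagree:

        zfs list main_volume     # usable space, after raidz1 parity
        zpool list main_volume   # raw capacity, parity included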

  • How do you passthrough native SATA drives to a guest on ESXi?

    - by John
    I have ESXi 4.0 running on an Intel DX58S0 motherboard with an Intel Core i7 930 processor. VT-d is also enabled. I have three drives in the system; drive 0 is used for ESXi. Drives 1 and 2 contain data from an older machine and show up under the "Storage Adapters" section in configuration. I would like to allow a guest machine to access the data on these drives (as natively as possible). I have enabled passthrough of the motherboard's built-in SATA controller (Intel/Marvell 88SE6121). This controller shows up in my guest OS, but the guest shows no drives aside from the normal virtual drive. I have tried a Linux guest and Windows 7. I have also configured the host machine to try IDE/RAID/AHCI modes for the SATA controller. Any ideas how I can configure one of my guests to get at the raw data on these drives?

  • Symmetrix gatekeepers on Solaris 10

    - by Milner
    I have some Solaris machines that are connected to an EMC Symmetrix for SAN storage. Apparently the Symm has a gatekeeper device that is used with the Symmetrix CLI. We don't need the CLI, but I have these gatekeeper devices that constantly fill /var/adm/messages and the like with corrupt-label errors. Is there anything I can do (short of deleting the devices on machine start) to get rid of them? Or should I just try to get our SAN guy to provide the installer for the CLI? These things are getting annoying, and the devfsadmd daemon keeps rediscovering them on boot.

  • How to Mirror or Clone a Spanned Volume in Windows 2008

    - by Matt
    I have a spanned volume (3 x 6+ TB disks spanned into one 20+ TB volume) that I need to mirror or clone to a new 20+ TB (unspanned) volume. Once it is mirrored or cloned, I'm going to destroy the original volume and reuse the storage elsewhere. Windows 2008 will not allow me to mirror it because the original is a spanned volume. I cannot simply copy the data, because there are sparse files on the volume, so the OS thinks there is 150+ TB used on the disk when there is really only around 18TB physically used. When I try to use the copy command, it won't run because it thinks the destination volume needs to be 150+ TB to hold it all. A conundrum, but I figure someone here has the answer. Thanks, Matt

  • Smallest footprint for a web application server?

    - by edgardodelamanta
    There are times when you need to spare hardware resources (either to keep using legacy hardware, to go the embedded route, or just to be efficient because a large footprint is thrashing CPU caches, leading to unacceptable levels of idle states). In this spirit, some efforts have been made to produce 'light' ports of Java or Mono (C# for Linux), which fall in the 50-80 MB range (instead of 100-200 MB). Add a web server (Apache, IIS, etc.) to the scripting engine and you can happily dive into the GB range (IIS + .NET) just to load the tooling into memory. Does anybody know of more modest tools in this area?

  • Mounting root failed. Dropping into basic maintenance shell

    - by vmsystem
    Hi, I have purchased an AMD Phenom X4 955 3.2GHz processor with a supporting Gigabyte GA-MA785GM-US2H motherboard / 6GB DDR2 RAM / 500GB SATA drive for learning the VMware ESX 3.5 product. On this configuration I have installed the Windows XP 64-bit operating system and then installed VMware Workstation 6.5. From VMware Workstation, I am able to install ESX 3.5 Update 2, but it does not start properly; please refer to the error below:

        Mounting root failed. Dropping into basic maintenance shell. To collect logs for VMware, connect a USB storage device and run 'bin/vm-support'. Machine will be rebooted when you exit from this shell.

    The same was tested on Windows 2003 Enterprise Edition Server / Windows 7 32-bit / Windows 7 64-bit as well. Please help me to resolve the issue.

  • Why is a single thread spread across CPUs?

    - by Marcus Lindblom
    I'm just curious why the scheduler constantly moves an app between CPUs, rather than keeping it on one. It looks a bit silly to have 4 cores at 25% rather than one at 100%. Does it have to do with heat, or is it more efficient somehow? Do other OSes do it differently? Insights or links to in-depth material would be nice. (I couldn't find much myself.) Update: by "spread out" I don't mean that it executes on several CPUs at once, but that it is moved from one to the other several times per second, creating the effect that it looks spread out.
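    For anyone who wants to observe the difference directly, Linux lets you pin a process to one core and compare against the scheduler's default behaviour (the program name and PID below are placeholders):

        taskset -c 0 ./myapp    # start the app pinned to core 0
        taskset -cp 0 12345     # or pin an already-running process by PID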

  • SAS instead of SATA 2 for my hard drives?

    - by jasondavis
    I am building a new system soon that will have multiple 1-2TB hard drives for storage. I only have experience using SATA II drives, but I saw somewhere that I should be using something like SAS. I read that if I were going to have 20 drives, I could use 4 SAS cables vs 20 SATA cables. Can someone help me understand this better? If it were only 4 cables, then how would 20 drives hook up? Also, can a regular SATA II drive hook up to that?

  • HDD situation - what would be best - data and backup

    - by Sam Johnson
    I just installed Windows 8 on an Intel 330 180GB SSD. I have three 1TB HDDs: one will be external for backup, and the other two are available for my PC. I do not need 2TB of storage, so I thought I'd set these two up as exact clones of one another, so that if one dies I have a backup in the computer to go along with my external drive. Is this a good setup? How would this best be accomplished? I've heard people suggest RAID, but I've never done RAID, have no idea what it is, and have no idea how to set it up in my BIOS. Thanks in advance.
