Search Results

Search found 82005 results on 3281 pages for 'cost based data structure'.


  • 2 HDs in RAID 1 with 2 partitions?

    - by Prix
    Hi, I am not very familiar with RAID partitions and am not even sure if this is the right place to ask, but I hope that if it is not, someone can point me in the right direction. This is my situation: I have two 500 GB HDs and a 3ware PCI-e hardware RAID card, and I want to make a RAID 1, but I don't know if I can create more than one partition on it. For instance: MAIN HD: OS partition: 100 GB; data partition: the rest of the space. Can the RAID 1 cover either the whole HD or just the data partition? This is on Windows XP SP3, and the 3ware card allows bootable RAID.
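
    A hardware RAID 1 unit is normally presented to Windows as a single disk, which can then be partitioned like any other drive, so two partitions on one mirrored pair is a common setup. A rough sketch with diskpart, assuming the 3ware unit appears as disk 0 and using the 100 GB figure from the question (when booting XP from the array, the same split can be done during XP setup instead):

      DISKPART> select disk 0
      DISKPART> create partition primary size=102400
      DISKPART> create partition primary

    The first create makes the 102400 MB (100 GB) OS partition; the second takes the remaining space for data. Both partitions live on the mirror, so both are protected by the RAID 1.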

  • SmartOS HPC config suggestion

    - by Andrew B.
    I'm configuring a brand new HPC server and am interested in using SmartOS because of its virtualization control and ZFS features. Does this configuration make sense for a SmartOS HPC, or would you recommend an alternative? System specs: 2x 8-core Xeon, 384 GB RAM, 30 TB in HDs plus 2x 512 GB SSDs. Uses: ZFS for serving data to different VMs and over the network (1 SSD for L2ARC and 1 for ZIL); typically 1-2 Ubuntu instances running R and custom C/C++ code. My biggest concerns as a newbie to SmartOS and ZFS are: (1) will I get near-metal performance from Ubuntu running on SmartOS if it is the only active VM? (2) how do I serve data from the global ZFS pool to the containers and other network devices?
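
    For the second concern, one common pattern is to carve datasets out of the global pool and export them over NFS, which both the KVM guests and other network devices can mount. A sketch from the global zone, assuming the default "zones" pool; the dataset name and addresses are illustrative:

      zfs create zones/shared
      zfs set sharenfs=on zones/shared
      # from inside an Ubuntu guest (the global-zone IP is an example):
      #   mount -t nfs 192.168.1.10:/zones/shared /data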

  • Why are the sizes different, and what do they mean?

    - by Ramy
    I have a 1 TB hard drive with a single NTFS partition which I use to back up my data (no operating system). The size of all the data on it is 726 GB, the size on disk is 728 GB, and the used space when I check the properties is 731 GB. Why is there a 5 GB difference between the size and the used space? What's the difference between these figures (size, size on disk, and used space)? Is there a way to calculate the difference and be sure the HDD is not misbehaving? Is this normal?
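
    For context: "size" is the sum of the files' byte counts, "size on disk" rounds each file up to a whole number of allocation clusters, and "used space" additionally counts file-system metadata (the MFT, logs, and any shadow copies). A worked example, assuming the NTFS default 4 KB cluster size for a volume this large:

      size on disk per file = ceil(size / 4096) x 4096
      a 100-byte file therefore occupies 4,096 bytes on disk
      ~1,000,000 files x ~2 KB average slack ≈ 2 GB of "size on disk" above "size"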

  • Accessing a storage-side snapshot of a cluster-shared volume

    - by syneticon-dj
    From time to time I am in the situation where I need to get data back from storage-side snapshots of cluster shared volumes. I suppose I just never figured out a way to do it right, so I always needed to:

    1. expose the shadow copy as a separate LUN
    2. offline the original CSV in the cluster
    3. un-expose the LUN carrying the original CSV
    4. make sure my cluster nodes have detected the new LUN and no longer list the original one
    5. add the volume to the list of cluster volumes and promote it to be a CSV
    6. copy off the data I need
    7. undo steps 5-1 to revert to the original configuration

    This is quite tedious and requires downtime for the original volume. Is there a better way to do this without involving a separate host outside of the cluster?

  • nginx caching per user agent

    - by Tuinslak
    I'm currently using nginx as a reverse proxy with caching enabled. However, the main site has two different layouts, depending on the user agent (mobile or not). I've tried something similar to this:

      # mobile users
      if ($http_user_agent ~* '(iPhone|iPod|mobile|Android|2.0\ MMP|240x320|AvantGo|BlackBerry|Blazer|Cellphone|Danger|DoCoMo|Elaine/3.0|EudoraWeb|hiptop|IEMobile)') {
          set $iphone_request '1';
      }
      if ($iphone_request = '1') {
          proxy_cache mobile;
      }
      if ($iphone_request = '') {
          proxy_cache site;
      }
      proxy_cache_key "$scheme://$host$request_uri";
      proxy_pass http://real-site.tld;

    However, nginx gives an error, stating proxy_cache can't be used inside an if block. Is there another way to serve from a different cache depending on the browser? Thanks, Tuinslak
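
    One way around the restriction is to keep a single cache zone and fold the device class into the cache key instead, so no proxy_cache inside an if block is needed. A sketch, assuming the map goes in the http context (the user-agent regex is abbreviated from the question):

      map $http_user_agent $device_class {
          default                                            desktop;
          ~*(iPhone|iPod|mobile|Android|BlackBerry|IEMobile) mobile;
      }
      ...
      proxy_cache site;
      proxy_cache_key "$scheme://$host$request_uri/$device_class";
      proxy_pass http://real-site.tld;

    Mobile and desktop responses are then cached under different keys in the same zone.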

  • Activating SharePoint Server Publishing Infrastructure returns "Column Limit Exceeded"

    - by Haydar
    I am getting the following error when attempting to activate the SharePoint Server Publishing Infrastructure. There are only about 140 columns defined in the entire site collection. I had previously activated and subsequently deactivated this feature; I don't know whether that might be causing the problem, but I really have no idea. I am a developer and this is my first question on Server Fault, so please be gentle. Error message: "Column Limit Exceeded. There are too many columns of the specified data type. Please delete some other columns first. Note that some column types like numbers and currency use the same data type. Troubleshoot issues with Microsoft SharePoint Foundation. Correlation ID: 2ec9ae3a-5464-4922-9760-874099f51420. Date and Time: 11/8/2010 3:43:44 PM"

  • Getting existing tables with binaries to use FILESTREAM

    - by user1098487
    I've got a few tables for which I want to use FILESTREAM storage. These tables already contain binary data and have rowguids. However, at the time they were created, the tables were not added to a FILESTREAM-enabled filegroup. What is the best way to have these tables use FILESTREAM at this point? Do I need to drop and recreate the tables and migrate the data, or is there an easier way? The database already has FILESTREAM enabled and there are other tables which are using it.
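
    One common approach is to migrate in place by adding a FILESTREAM column and copying the data across, rather than dropping the tables, given they already have the required rowguid columns. A hedged T-SQL sketch; the table, column, and filegroup names are illustrative, and for very large tables creating a new table and bulk-copying may be faster:

      -- point the table at the FILESTREAM filegroup, then migrate in place
      ALTER TABLE Documents SET (FILESTREAM_ON = fs_group);
      ALTER TABLE Documents ADD Content_fs varbinary(max) FILESTREAM NULL;
      UPDATE Documents SET Content_fs = Content;  -- moves the blobs into FILESTREAM storage
      ALTER TABLE Documents DROP COLUMN Content;
      EXEC sp_rename 'Documents.Content_fs', 'Content', 'COLUMN';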

  • nginx: URL rewrites and performance

    - by j0nes
    I have a website where I need to change the URL structure. The old URLs look like /olddir/part1_de.htm; the new ones will look like /newdir/sub/category/anotherpage.htm. There are a lot of URL rewrites I need to do; I assume about 500 distinct rewrites in the end. As my website gets quite a lot of traffic, my main concern at the moment is performance. My questions are:

    - I assume that for each request, the rewrites block will be parsed and the regexes will be evaluated. Am I right?
    - Will there be a performance penalty if I use these rewrites? Can nginx handle this?
    - Are there any best practices to follow when doing a lot of rewrites?
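
    For a large set of one-to-one rewrites, a map is usually recommended over 500 sequential rewrite rules: map does a single hash lookup per request instead of evaluating each regex in turn. A sketch using the URLs from the question (the map block goes in the http context):

      map $uri $new_uri {
          /olddir/part1_de.htm  /newdir/sub/category/anotherpage.htm;
          # ... one line per old URL ...
      }
      server {
          if ($new_uri) {
              return 301 $new_uri;
          }
          ...
      }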

  • Using robocopy and excluding multiple directories

    - by GorrillaMcD
    I'm trying to copy some directories from a server before I restore from backup (my latest backup was corrupt, so I have to use an older one :( ). I'm in the Windows Recovery Environment and have access to the server's file system G:\ and my backup media C:\. But since I'm more familiar with Linux, I'm having a bit of trouble with the command line in Windows, specifically robocopy. I want to copy multiple directories (maintaining the same directory structure) from G:\ to C:\ while excluding others (namely, the Windows and Program Files folders). I can't figure out the syntax for the /XD option. I was hoping to do something like: robocopy G: C:\backup /CREATE /XD "dir1","dir2", ...
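
    For what it's worth, /XD takes a space-separated (not comma-separated) list, and paths containing spaces need quotes. A sketch of the likely intent; note that /CREATE only produces a zero-length skeleton of the tree, so /E (copy subdirectories, including empty ones) is probably what's wanted for a real copy:

      robocopy G:\ C:\backup /E /XD G:\Windows "G:\Program Files"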

  • Can't redirect the root of my domain

    - by JRameau
    My issue: I can access http://exampledomain.com/any/thing/I/want/2type, but I can not access http://exampledomain.com or http://www.exampledomain.com; those get redirected to the default vhost, which is a generic construction page. I run a Plesk setup, and originally "exampledomain" was its own Plesk vhost domain. I run everything in Drupal, so I want to consolidate it onto a multisite; usually this is as simple as setting the correct folder structure in Drupal and simply making the new domain an alias of the bigger vhost. I checked /etc/httpd/conf.d/zz010_psa_httpd.conf to see if there were any remnants of the old settings. Any suggestions? Thanks in advance.

  • List existing file server permission groups/users

    - by Patrick
    So we have taken over a new client and their existing file server is frankly a mess. We have migrated their old file server from a 2k box to a 2k8 DFS cluster and now I'm looking at rebuilding both the folder structure and the permissions. Unfortunately it's been half done with AD groups (poorly named, no descriptions or notes) and half with individuals named directly in the security settings on the folders themselves. What I'm looking to do is dump a complete list of all the folders with their security permissions (ideally I'd like to ignore files, but that's not essential). CACLS got me half way there, but it fails with an odd error message, its output isn't particularly user friendly, and I'm working with roughly 2 TB / 250,000 files here, so I really need something that gives me a bit more functionality. Question: does anyone have any experience of something similar, or know of a bit of software that might help me out?
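
    A minimal PowerShell sketch of that kind of dump, assuming PowerShell 2.0 on the 2k8 box; the share root and output path are placeholders:

      Get-ChildItem -Path D:\Shares -Recurse |
        Where-Object { $_.PSIsContainer } |
        ForEach-Object {
          $folder = $_.FullName
          (Get-Acl $folder).Access | ForEach-Object {
            "{0}`t{1}`t{2}" -f $folder, $_.IdentityReference, $_.FileSystemRights
          }
        } | Out-File C:\temp\folder-acls.txt

    This walks the tree, skips files, and writes one tab-separated line per access-control entry (folder, identity, rights).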

  • Crap, hard disk failure. Can I recover my "moved" folders?

    - by Doug
    I am in the process of moving all my files from an old laptop to a new one. I just moved 11 GB of data from my old laptop to an external hard drive, and upon moving it onto the new laptop, the external drive is giving a CRC error (Data Error: Cyclic Redundancy Check). Now I am looking for a solution to recover the files that I moved off my old laptop (not the external drive). I understand they are just marked for potential overwriting to free up space. I was getting ready to test out GetDataBack, but it says to install it on a healthy Windows system and attach the drive that needs recovery as an external. However, I don't want to turn off my computer without first getting the okay, since it is in a "moved" state. Please help! What can I do to recover the moved files? I haven't touched the computer since the move. What can I use to recover them?

  • RAID 10: how does the layout work?

    - by Bastien974
    I'm trying to figure out exactly how RAID 10 works in Linux with mdadm. I want to create a RAID 10 out of 4 partitions; let's call them a, b, c and d. a and b are on array 1, c and d on array 2. What I want is to have the couples a+b and c+d in RAID 0, and on top of that a RAID 1. The option in the mdadm command to configure the layout is -p, --layout, with the values near, far or offset (see here). I want to keep my data safe if array 1 fails, for example; that would mean every chunk of data is always copied on both arrays. How do I have to set up my RAID 10: near or far?
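
    For what it's worth, with mdadm's near-2 layout consecutive devices form the mirror pairs, so interleaving the devices from the two enclosures keeps one full copy on each. A sketch with hypothetical device names (a=/dev/sda1 and b=/dev/sdb1 on array 1, c=/dev/sdc1 and d=/dev/sdd1 on array 2):

      # near-2: mirror pairs are (sda1,sdc1) and (sdb1,sdd1),
      # so each pair spans both enclosures
      mdadm --create /dev/md0 --level=10 --layout=n2 \
            --raid-devices=4 /dev/sda1 /dev/sdc1 /dev/sdb1 /dev/sdd1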

  • How do you get more space on Live Mesh/Live Sync?

    - by Narcolapser
    I'm looking into corporate data backups for my company's dealers. Each dealership will have data backup demands ranging from 100 MB to 20 GB. We are an entirely Microsoft solution, so when I was asked to look into backups, of course I would look to Micro$oft. Even if we have to buy this space, is there a way to get more space on Live Mesh/Live Sync (Live Mync, hehe)? The 5 GB that Mesh provides or the 2 GB that Sync provides isn't enough for our larger dealerships. The 25 GB that SkyDrive provides is probably enough for now, but I don't know if it will be in the future. However, SkyDrive is not automatically synced, so it isn't a viable option anyway.

  • Have a server, need to figure out a method of backup

    - by PolishHurricane
    My company has an older Dell 2650 server running Arch Linux x64: http://www.dell.com/downloads/global/products/pedge/en/2650_specs.pdf (2x 2.4 GHz Intel Xeon with around 3287 MB of RAM according to "free -m"). We use it to host our internal company site and to post some information from our orders to it, and we'd like the ability to keep it up as much as possible. What we require:

    - It needs to always be functional from 8am to 4pm for our data entry person and others to use.
    - If it goes down, we need a quick way to get the machine running again.
    - If it goes down, we would like to have the data backed up.

    Some of the major problems include:

    - The server's old and it may have memory issues.
    - We don't know when one of the hard drives could fail.
    - Our power goes out here once in a while. We have a battery backup, but that's pretty much it, and it's not for the long term.

    If the server does go down, we have another system in place to store order information that comes in while it's down and repost it when it's back, but we need it up during the day. So we're wondering what our options are. These are the things we thought of, sort of:

    - Set up RAID 1, but that would involve wiping everything, right? If we do that, how would we transfer the data over without messing up the server?
    - We could buy an extra server or two off eBay for $100, the same model; is that practical, or should we get something else?
    - Should we buy a PC or another, better server and host off that, because it would be easier to exchange parts?
    - Should we keep extra parts handy in case it implodes?
    - Should we buy/use backup software? We hear Drobos are cool, but suck. Perhaps there is a software solution to this problem that backs up to another machine or gets us up and running again quickly. Also, if we are to purchase hardware, what is decent? Does anybody know of a solution for Arch Linux/Linux?

    We both know a ton about computers, but we're kind of unsure what step to take with this, especially with this type of server. Thanks

  • Is rsync --delete safe in case of disk failure

    - by enedene
    I have two data hard drives in my Linux server and I use the second as a backup of the first. I use rsync for that purpose. An example would be: rsync -r -v --delete /media/disk1/ /media/disk2/ What this does is copy every file/directory from /media/disk1/ to /media/disk2/, and also delete any difference. For example, say files A and B but not file C are on disk1, and on disk2 there are no A and B files, but there is C. The result is that after the command, disk2 has files A and B, but file C is deleted, just like on disk1. Now, a rather disastrous scenario has crossed my mind: what if disk1 dies? The system continues to work, since the system files are on my system disk, but when rsync tries to back up my data to disk2 from the broken disk1, would it delete all the files from disk2 because it can't read anything on disk1? Is this a possible scenario, or is there protection against it built into rsync?
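
    For the record, rsync's documented behaviour is to disable deletions on the receiving side when the sender hits I/O errors (unless --ignore-errors is given), but that safeguard does not help if a dead disk1 simply shows up as an empty mount point. A defensive variant, where the cap below is an arbitrary example:

      # preview what would be deleted, without changing anything
      rsync -r -v --delete --dry-run /media/disk1/ /media/disk2/
      # cap how many files one run may delete
      rsync -r -v --delete --max-delete=1000 /media/disk1/ /media/disk2/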

  • Reliability of VMware ESXi for backup

    - by Laurent
    Currently, I'm using a server as an online backup and to run some VMs with VMware Server. I'm interested in converting it to VMware ESXi, but have some concerns about the possible corruption of my VMDKs if I choose to store my data on them. I was also thinking of storing the data directly on the datastore, but can't find any way to mount a VMFS volume with a live CD if ESXi is unable to start. What are my options? Is continuing to use VMware Server a good idea, knowing that I DO want to use the server for both virtualization and backup purposes? Thanks.

  • Windows 2003 Server Caching

    - by pablomedok
    We're experiencing almost daily table index corruption on Windows Server 2003. We are running an old application which uses DBF/CDX tables. Everything was fine for ages, but six months after we installed Advantage Database Server (which allows access to some tables from our website) we started to get index corruption problems, and we don't know what to blame.

    We've tried to exclude all possible causes of this corruption. Now all users work in terminal mode, so no network problems can cause it, and oplocks can't be a reason either. We changed hardware, network cards and switches, reinstalled the server, and even moved to a new dedicated server. The only thing we can't exclude is ADS, because it needs to keep working.

    Is it possible that local read/write caching causes the problem? E.g. one user or process uses cached data, later another user/process changes it, and later the first user changes it again without knowing about the earlier change. Is that possible theoretically? Could this problem be caused by improper file server or caching settings? Is it possible that normal users use non-cached data while ADS uses cached data, or vice versa? Does each terminal user have their own cache? Or maybe RAID caching somehow interferes with Windows Server caching? Are there special Windows Server settings for DBF tables that are written simultaneously by several terminal users? Maybe there is a way to turn off caching for certain files to check?

    Sometimes we get an index crash twice a day; sometimes everything is fine for 5 days in a row. Today only one user was working with the database in the evening (usually 30-50 users work simultaneously during working hours), so the load on the server was almost zero. Synchronization with the website is performed every 5 minutes during work hours and every 15 minutes in the evening and on weekends. We've done file access auditing and it shows that during website synchronizations the ADS server opens the table and index files for ReadEA and WriteEA, though it performs only SELECT queries. ADS does run UPDATE/INSERT queries, but less frequently; not during regular synchronizations, but only when an order is placed by a website visitor.

    Please help me. We have been struggling with this problem for almost a year and still can't find any pattern or any clue. Here is my previous question about this issue on DBA: http://dba.stackexchange.com/questions/8646/foxpro-dbf-index-corruption

  • Examples of applications where messages are pushed from a server and displayed on clients immediately

    - by James Hay
    I'm trying to find examples where data is sent from a server and pushed to one or more clients, which are updated immediately, i.e. the client doesn't make requests for updates. It doesn't matter whether we're talking mobile, desktop or whatever. An even better example would be one where there are multiple recipients for the same message. It doesn't matter what the data is or the context it's used in, only the immediacy of receiving it. I was thinking that there might be some example in finance and the stock markets, but I haven't been able to find any through googling. IM clients are a great example of this and are already on my list ;) If anyone works on applications of this nature or knows of particular implementations, can you give me a quick rundown of the use case and, if it's commercial software, the name of the software? This is all basically for research purposes, so it doesn't have to be particularly detailed. If anyone can help, thanks.

  • MapReduce job is hung after 1 of 5 reducers completed in a single-node environment

    - by Marboni
    I have only one data node in my dev environment on EC2. I ran a heavy MR job and after 6 hours noticed that 100% of the mappers and 20% of the reducers had finished (1 reducer shows 100% completion, the others 0%). It looks like the job is hung between two reducer runs. I don't see any errors in the log files. What could it be? P.S. The last logs of the successfully finished reducer:

      2012-11-09 11:29:21,576 INFO org.apache.hadoop.mapred.Task: Task:attempt_201211090523_0004_r_000000_0 is done. And is in the process of commiting
      2012-11-09 11:29:22,692 INFO org.apache.hadoop.mapred.Task: Task attempt_201211090523_0004_r_000000_0 is allowed to commit now
      2012-11-09 11:29:22,719 INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Saved output of task 'attempt_201211090523_0004_r_000000_0' to /data/output/1352457275873/20121109-053433-common
      2012-11-09 11:29:22,721 INFO org.apache.hadoop.mapred.Task: Task 'attempt_201211090523_0004_r_000000_0' done.
      2012-11-09 11:29:22,725 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1

  • How to migrate a Lotus Notes mail database to an Exchange public folder?

    - by elsni
    I need to migrate a Lotus Notes mail database to Exchange. In the past, users got mail in their Notes mail accounts and sorted it manually by drag and drop into a specific folder structure in a separate Notes mail database. This should be mapped to Exchange. I considered using Outlook and a SharePoint library mapped to Outlook, but Outlook does not support drag and drop from the mail account to a standard document library, and the discussion libraries do not support folders. So I think the easiest way is to use an Exchange public folder instead of a SharePoint library, which should work as in Notes (correct me if I'm wrong). But how do I migrate the old Notes database to the public folder, including all subfolders? Thank you!

  • What could cause a file system to spontaneously unmount or become invalid for a short time?

    - by Ichorus
    We've got DB2 LUW running on a RHEL box. We had a crash of DB2, and IBM came back and said that a file DB2 was trying to access (through open64()) had unmounted or become invalid. We have done nothing but restart the database and things seem to be running fine. Also, the file in question looks perfectly normal now:

      $ cd /db/log/TEAMS/tmsinst/NODE0000/TEAMS/T0000000/
      $ ls -l
      total 557604
      -rw------- 1 tmsinst tmsinst 570425344 Jan 14 10:24 C0000000.CAT
      $ file C0000000.CAT
      C0000000.CAT: data
      $ lsattr C0000000.CAT
      ------------- C0000000.CAT
      $ ls -l
      total 557604
      -rw------- 1 tmsinst tmsinst 570425344 Jan 14 10:24 C0000000.CAT

    With those facts in hand (please correct me if I am misinterpreting the data), what could cause a file system to 'spontaneously unmount or become invalid for a short time'? What should my next step be? This is on Dell hardware; we ran their diagnostic tools against the hardware and it came back clean.
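
    One possible next step is to sweep the kernel and system logs around the crash window for device or mount events; the pattern list here is just a starting point:

      # RHEL keeps kernel/syslog messages here by default
      grep -iE 'i/o error|scsi|ext3|remount|unmount' /var/log/messages
      dmesg | grep -iE 'i/o error|scsi'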

  • Are there any tools for monitoring individual Apache virtual hosts in real-time?

    - by Dave Forgac
    I'm looking for a way to monitor and record Apache traffic, separated by virtual host. I am currently using Munin to capture this and other data for the entire server; however, I can't seem to find a way to do this per vhost. This link describes using a module called mod_watch which is apparently no longer in development: http://www.freshnet.org/wordpress/2007/03/08/monitoring-apaches-virtualhost-with-munin/ The file that is listed as being compatible with Apache 2.x is reported to have problems with missing vhosts and reporting data correctly. Does anyone know of a reliable way to determine real-time traffic per vhost? If I can find this, it should be easy enough to write a new Munin plugin.
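
    One module-free approach is to log the canonical vhost name (%v) with every request and aggregate the access log; a sketch, with illustrative paths:

      # prefix each log line with the vhost that served the request
      LogFormat "%v %h %l %u %t \"%r\" %>s %b" vhost_combined
      CustomLog /var/log/httpd/access_log vhost_combined

    A Munin plugin could then derive per-vhost counts from the log tail, e.g. with awk '{print $1}' /var/log/httpd/access_log | sort | uniq -c.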

  • The best way to make a full system dump on CentOS

    - by tester3
    I am on CentOS 6.5 with a lot of software and services installed and working. I also have a lot of configs which damaged my brain, and I don't want to do all that again :) So, can anyone please advise the best way to make a full system dump with all data, so that I only need to copy it onto a new system to get everything ready on the other machine? Or something like that? P.S. The data on my HDD is encrypted, and I'd like an encrypted dump too. Please help :)
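
    A minimal file-level sketch (not a block-level image), assuming pseudo-filesystems are excluded and a symmetric GPG passphrase is acceptable; the output path and file name are illustrative:

      tar --exclude=/proc --exclude=/sys --exclude=/dev \
          --exclude=/tmp --exclude=/backup \
          -czpf - / | gpg -c > /backup/centos-full-$(date +%F).tar.gz.gpg

      # restore: boot rescue media, partition the new disk, then
      #   gpg -d centos-full-<date>.tar.gz.gpg | tar -xzpf - -C /mnt/target
      # and reinstall the bootloader afterwards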

  • MS-Outlook add-on to move a new message to the same folder as the rest of the thread

    - by Guss
    I'm forced to use MS Outlook in my job. While I very much like the feature that shows all the messages of a discussion thread (stored in different folders) in the inbox when a new message is received for that thread, if the previous messages are in a different data file (which I'm forced to have, as the MS Exchange server quota is very, very small), the message list only shows the name of the data file and not the name of the folder where the messages are stored. Because I file my messages by context (i.e. all the emails for project A go into a "Project A" folder, etc.), and it's important for me to have all the messages in a single thread in the same folder, it is sometimes hard to figure out into which folder I should file the new message. It would be a great help if there was some add-on or VBA script that I could add to my setup offering a shortcut key or a button to "file this message to the same folder as the previous messages in the conversation thread".
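
    As a starting point, a rough and untested VBA sketch of such a button: it walks the folder tree for an already-filed message with the same ConversationTopic and moves the selected message there. All names are illustrative; handling topics containing apostrophes and skipping the message's current folder are left as refinements:

      Sub MoveToThreadFolder()
          Dim sel As Outlook.MailItem
          Set sel = Application.ActiveExplorer.Selection.Item(1)
          Dim dest As Outlook.MAPIFolder
          Set dest = FindThreadFolder(Application.Session.Folders, sel.ConversationTopic)
          If Not dest Is Nothing Then sel.Move dest
      End Sub

      ' depth-first search for a folder holding a message from the same thread
      Function FindThreadFolder(fs As Outlook.Folders, topic As String) As Outlook.MAPIFolder
          Dim f As Outlook.MAPIFolder
          For Each f In fs
              If f.Items.Restrict("[ConversationTopic] = '" & topic & "'").Count > 0 Then
                  Set FindThreadFolder = f
                  Exit Function
              End If
              Set FindThreadFolder = FindThreadFolder(f.Folders, topic)
              If Not FindThreadFolder Is Nothing Then Exit Function
          Next
      End Function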
