Search Results

Search found 14047 results on 562 pages for 'ctrl alt delete'.

Page 468/562

  • Ubuntu 8.04 won't reboot from script

    - by Littlejon
    I have a script that backs up a server via rsync; after that script runs I want the server to reboot. The script is run as root from the crontab at 3am:

      #!/bin/bash
      HOST="email"
      RSYNC_OPTS="-a -v -v --progress --stats --delete"
      RSYNC_DEST="10.0.0.10::$HOST"
      BACKUP_LIST="/etc /home /root"
      TIMESTAMP="/timestamp-bkup-start.chk"
      TIMESTAMP2="/timestamp-bkup-stop.chk"

      touch $TIMESTAMP
      rsync $RSYNC_OPTS $TIMESTAMP $RSYNC_DEST

      for BACKUP_ITEM in $BACKUP_LIST; do
          rsync $RSYNC_OPTS $BACKUP_ITEM $RSYNC_DEST
      done

      /etc/init.d/zimbra stop
      sleep 60s
      rsync $RSYNC_OPTS /opt $RSYNC_DEST

      touch $TIMESTAMP2
      rsync $RSYNC_OPTS $TIMESTAMP2 $RSYNC_DEST

      echo `date +%Y%m%d%H%M` >> /var/log/reset
      reboot

      # $# shows number of args passed
      # $1 to access first variable
      #if [ $# -eq 1 ]; then
      #    if [ $1 = "withreboot" ]; then
      #        echo "rebooting...";
      #        echo `date +%Y%m%d%H%M` >> /var/log/reset
      #        /sbin/reboot
      #    fi
      #fi

    I have tried using init 6 rather than reboot, and I have tried /sbin/reboot. I also have another basic script that just echoes to the reset log and runs reboot without issue; it is only with the script above that the server won't restart. If anyone has any theories that would be great, as I have run out of ideas. Thanks, Jon
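
    Since a bare-bones reboot script works, one way to narrow this down is to make the long script show its work. A minimal debugging sketch (the trace log path is an assumption): trace every command and capture all output, so a step that hangs or exits before the reboot line becomes visible in the log.

      #!/bin/bash
      set -x                                  # echo each command as it runs
      exec >>/var/log/backup-trace.log 2>&1   # send stdout and stderr to a log
      # ... the backup steps from the script above ...
      echo `date +%Y%m%d%H%M` >> /var/log/reset
      /sbin/shutdown -r now                   # absolute path; cron's PATH is minimal
      echo "shutdown returned $?" >> /var/log/reset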

  • Unable to resize ec2 ebs root volume

    - by nathanjosiah
    I have followed many of the tutorials, which pretty much all say the same thing:

    1. Stop the instance
    2. Detach the volume
    3. Create a snapshot of the volume
    4. Create a bigger volume from the snapshot
    5. Attach the new volume to the instance
    6. Start the instance back up
    7. Run resize2fs /dev/xxx

    However, step 7 is where the problems start. Running resize2fs always tells me the filesystem is already xxxxx blocks big and does nothing, even with -f passed. So I continued with the tutorials, which again all say the same thing:

    1. Delete all partitions
    2. Recreate them as they were, except with bigger sizes
    3. Reboot the instance and run resize2fs

    (I have tried these steps both from the live instance and by attaching the volume to another instance and running the commands there.) The main problem is that the instance won't start back up again, and the system error log provided in the AWS console doesn't show any errors. (It does, however, stop at the grub bootloader, which to me indicates that it doesn't like the partitions; yes, the boot flag was toggled on the partition, with no effect.) The other thing that happens, regardless of what changes I make to the partitions, is that the instance the volume is attached to says the partition has an invalid magic number and the super-block is corrupt. However, if I make no changes and reattach the volume, the instance runs without a problem. Can anybody shed some light on what I could be doing wrong?

    Edit: On my new volume of 20GB with the 6GB image, df -h says:

      Filesystem      Size  Used Avail Use% Mounted on
      /dev/xvde1      5.8G  877M  4.7G  16% /
      tmpfs           836M     0  836M   0% /dev/shm

    And fdisk -l /dev/xvde says:

      Disk /dev/xvde: 21.5 GB, 21474836480 bytes
      255 heads, 63 sectors/track, 2610 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x7d833f39

         Device Boot      Start         End      Blocks   Id  System
      /dev/xvde1              1         766     6144000   83  Linux
      Partition 1 does not end on cylinder boundary.
      /dev/xvde2            766         784      146432   82  Linux swap / Solaris
      Partition 2 does not end on cylinder boundary.

    Also, sudo resize2fs /dev/xvde1 says:

      resize2fs 1.41.12 (17-May-2010)
      The filesystem is already 1536000 blocks long.  Nothing to do!
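
    For what it's worth, the fdisk output explains why resize2fs has nothing to do: /dev/xvde1 is still a 6GB partition on the 20GB disk, and resize2fs can only grow a filesystem to the edge of its partition. A hedged sketch of growing the partition in place (practice on a copy first; the device name and the "same start" detail are the critical assumptions):

      sudo fdisk -u /dev/xvde    # -u = work in sectors; note xvde1's exact Start value
      # inside fdisk: d (delete partition 1), n (recreate it with the SAME
      # start sector and a larger end), a (re-set the boot flag), w (write)
      sudo e2fsck -f /dev/xvde1  # check the filesystem before resizing
      sudo resize2fs /dev/xvde1  # now there is room to grow into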

  • Which is the best smart automatic file replication solution for cloud-storage-based systems?

    - by TORr0t
    I am looking for a solution for a project I am working on. We are developing a web system where people can upload their files and other people can download them (similar to the rapidshare.com model). The problem is that some files are demanded much more than others. The scenario is: I upload my birthday video to myproject.com and share it with all of my friends, and it is stored on one of the cluster nodes, which has a 100mbit connection. Once all of my friends want to download the file, they can't, because the bottleneck is the 100mbit link: that is roughly 12.5MB per second, so 1000 friends downloading at once get only about 12.5KB per second each. (I am not even taking into account that the same disk is serving the same file.) My network infrastructure is as follows: a 1gbit (client) server connected to 4 storage nodes that have 100mbit connections. The 1gbit server could handle the 1000 users' traffic if the storage side could stream more than 15MB per second to it, which is more than a single 100mbit node can deliver, with visitors streaming directly from the client server instead of the storage nodes. I can achieve that by replicating the file onto 2 nodes. But I don't want to replicate all files uploaded to my network, since that costs much more. So I need a cloud-based system that automatically pushes files to replica nodes when demand for them is high, and, when demand is low, deletes them from the other nodes so they stay on only 1 node. I have looked at gluster and asked in their IRC channel; gluster can't do such a thing. It can replicate all of the files or none of them, but I need the cluster software to do it automatically. Any solutions? (Other than recommending Amazon S3.)

  • Back up an existing Linux server to a VirtualBox virtual machine

    - by user146526
    I have some servers and VPSs with many companies across the world, and I want to back them up locally, onto a computer at home (I already have backup solutions going to remote hosts). What I am thinking is:

    1. Create a VirtualBox virtual machine and install the same Linux version as the server.
    2. Use rsync to back up the server to the local VirtualBox machine, something like:

         rsync -av --delete --progress --exclude '/dev/' --exclude '/proc/' root@server_ip:// /

    3. Repeat the command every few days to update the files.
    4. In case of a hard disk failure, or any other bad event, reverse the rsync command, get the files back, and continue my business.

    I tried it with 2 OpenVZ VPSs, one being a backup of the other, and I also transferred a normal Linux server host to an OpenVZ machine; it worked great. That approach looks pretty clean and easy to me, and it is the kind of solution I am looking for. However, I need to be sure this will work before I commit to it. Will it work OK? Does anyone see any problem with it? Do you have any other suggestions? Thanks
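
    As an aside on step 2: a whole-system rsync usually needs a few more excludes than /dev and /proc, or the copy drags in pseudo-filesystems and mount points. A hedged sketch (the exclude set is a common convention, not gospel; adjust for the distro):

      # -aAX also preserves ACLs and extended attributes
      rsync -aAXv --delete \
          --exclude={'/dev/*','/proc/*','/sys/*','/tmp/*','/run/*','/mnt/*','/media/*','/lost+found'} \
          root@server_ip:/ /backup/server/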

  • Matching up HomeGroup users created in different versions of Windows

    - by pdr
    In our household, we have 2 laptops and a desktop, all currently on Windows 8.1. The desktop has been upgraded in stages from Windows 7; laptop 1 was bought with Windows 8 and has now been upgraded; laptop 2 is new and came with Windows 8.1. We've been trying to set up a HomeGroup such that we have one user on laptop 1, one on laptop 2, and both able to log into the desktop, with each user able to edit their files on their laptop and the desktop while the other user has read-only access. And we now have that situation... except that because the user on laptop 1 was created in Windows 8 but the same user on the desktop was created in 8.1, they have different names in Windows Explorer (say, firstuser and first_000). Likewise, the other user was created on laptop 2 in Windows 8.1 and on the desktop in Windows 7 (so let's say secon_000 and seconduser). This is ultimately confusing: if we expand the HomeGroup, we get four users (firstuser, first_000, seconduser and secon_000), each with a single computer inside it. What I'd like to see is firstuser = laptop1, desktop and seconduser = laptop2, desktop. An acceptable alternative is first_000 = laptop1, desktop and secon_000 = laptop2, desktop. But what I don't want to do is delete firstuser and seconduser and rebuild them. Is there a better way?

  • Cannot access new folders created in my Apache2 document-root

    - by user235101
    I have tried to create a new folder 'test' in the DocumentRoot of my Apache2 installation; however, whenever I try to access it from a web browser it gives me a 403 (Forbidden) error. My virtualhosts file:

      <VirtualHost *:80>
          ServerAdmin webmaster@localhost
          ServerName REMOVED
          DocumentRoot /var/www/

          <Directory />
              Options FollowSymLinks
              AllowOverride All
              AuthType Digest
              AuthName "documentroot"
              AuthDigestProvider file
              AuthUserFile /etc/apache2/htpasswd
              Require user REMOVED
              AllowOverride Indexes
          </Directory>

          <Directory /var/www/>
              Options FollowSymLinks
              Options -Indexes FollowSymLinks MultiViews
              AllowOverride All
              Order allow,deny
              allow from all
          </Directory>

          <Directory /var/www/share/>
              Order Deny,Allow
              Allow from all
              Satisfy any
          </Directory>

          <Directory /var/www/REMOVED/>
              Order Deny,Allow
              Allow from all
              Satisfy any
          </Directory>

          <Directory /var/www/stream/>
              Order Deny,Allow
              Allow from all
              Satisfy any
          </Directory>

          <Directory /var/www/test>
              Order Deny,Allow
              Allow from all
              Satisfy any
          </Directory>

          ErrorLog /var/log/apache2/error.log

          # Possible values include: debug, info, notice, warn, error, crit,
          # alert, emerg.
          LogLevel warn

          CustomLog /var/log/apache2/access.log combined

          <Directory /var/www/REMOVED>
              AuthType Digest
              AuthName "rutorrent"
              AuthDigestDomain /var/www/REMOVED/ http://46.105.127.19/REMOVED
              AuthDigestProvider file
              AuthUserFile /etc/apache2/htpasswd
              Require valid-user
              SetEnv R_ENV "/var/www/REMOVED"
              AllowOverride Indexes
          </Directory>
      </VirtualHost>

    Image of the permissions: (screenshot not included)

    Other information: if I create a new folder (and use chmod --reference to ensure it has the same permissions as an accessible folder), I get a 403 client-side. If I copy the folder 'rapidleech' to the name 'rapidleech1', I can access 'rapidleech1' but no longer 'rapidleech', until I delete the copy. In my logs I found nothing in error.log, and access.log only shows that it delivered a 403. All the appropriate users are members of www-data.
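
    Two checks that resolve most 403s of this kind, sketched for the Debian/Ubuntu defaults assumed here (www-data as the Apache user):

      # the new directory must be readable and traversable by the Apache
      # user, and every parent directory needs the execute bit as well
      sudo ls -ld /var/www /var/www/test
      sudo chown -R www-data:www-data /var/www/test
      sudo chmod 755 /var/www/test
      # then watch the error log while reproducing the 403; a denied-by-rule
      # 403 and a filesystem-permissions 403 log differently
      sudo tail -f /var/log/apache2/error.log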

  • Why is only one Excel spreadsheet crippled, but others are fine?

    - by Dallas
    I have an inherited spreadsheet that I really don't want to rebuild at the moment. It's a simple workbook, small (< 200 rows that don't even reach column AA), that does nothing more than calculate some totals within its own worksheets. No macros, no external data sources, nothing beyond basic formatting of dates, numbers and strings. Importing data from CSV/text has created many, many workbook connections over time, but even after I deleted them all (there were hundreds) it made no difference in performance. Even clicking to simply change focus from cell to cell takes 10+ seconds, adorned by the spinning cursor, "(Not Responding)" appended to the title bar, and the application locking up. The program "recovers" every time, but editing this file is obviously seriously handicapped. All other files seem fine in Excel, and other programs have no apparent performance issues. I can see Excel chewing up CPU, but I'm not sure how to narrow down which process or service is clashing with it. I tried the same file on other computers and performance is fine. If I turn off all startup services and run only Excel, performance is restored... until I start using other programs, and then it bogs down again. At this point, I would entertain almost any idea, theory or suggestion that helps pinpoint, solve or work around the issue.

  • Logfile deleted on Oracle database; how to re-create it?

    - by Daniel
    For my database assignment we were looking into 'database corruption', and I was asked to delete the second redo log file, which I did with:

      rm log02a.rdo

    (in the $HOME/ORADATA/u03 directory). I then started up my database using startup pfile=$PFILE nomount and mounted it with alter database mount;. Now when I try to open it with alter database open; it gives me the error:

      ORA-03113: end-of-file on communication channel
      Process ID: 22125
      Session ID: 25 Serial number: 1

    I am assuming this is because the second redo log file is missing. log01a.rdo is still there, just not the one I deleted. How can I go about recovering so that I can open my database again? I looked in the database creation scripts, and they specify the log02a.rdo file to be size 10M and part of group 2. If I do select group#, member from v$logfile; I get:

      1  /oradata/student_db/user06/ORADATA/u03/log01a.rdo
      2  /oradata/student_db/user06/ORADATA/u03/log02a.rdo
      3  /oradata/student_db/user06/ORADATA/u03/log03a.rdo
      4  /oradata/student_db/user06/ORADATA/u03/log04a.rdo

    So it is part of group 2. If I try to add the log02a.rdo file again: "already part of the database". If I drop group 2 and then add it again with:

      ALTER DATABASE ADD LOGFILE GROUP 2 ('$HOME/ORADATA/u03/log02a.rdo') SIZE 10M;

    Nothing. It supposedly alters the database, but it still won't start up. Any ideas what I can do to re-create this and be able to open my database again?
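
    Since the file is gone but the group still exists in the control file, the usual first move is to clear the group rather than drop and re-add it. A hedged sketch (it assumes the lost log was not the CURRENT group and that you can connect as SYSDBA):

      sqlplus / as sysdba <<'SQL'
      STARTUP MOUNT
      -- re-creates the missing member file for group 2 in place
      ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;
      ALTER DATABASE OPEN;
      SQL

    If the deleted log was the CURRENT group, clearing is not enough and you are into incomplete recovery (RECOVER DATABASE UNTIL CANCEL, then OPEN RESETLOGS).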

  • ASUS laptop doesn't charge/use the battery after reinstalling Windows 7

    - by Stan
    I've done a clean install of Windows 7 x64 on an ASUS X501A laptop. The battery is detected and shows in the system tray as "plugged in, charging", but the charge level stays at 76%, and if the AC cord is unplugged the laptop turns off. The laptop does not turn on without being plugged in, either. Everything worked perfectly prior to the reinstall. I've tried:

    - Downloading and installing all the ASUS drivers, including the ATK ACPI driver
    - Checking the BIOS; there do not seem to be any battery-related settings
    - Flashing the BIOS to the latest version
    - Uninstalling "Microsoft ACPI-Compliant Control Method Battery" in Device Manager, as suggested on the internet
    - A full power discharge/ATX reset as suggested by ASUS support: remove the mains charger, remove the battery, press and hold the power button for 10 seconds, reconnect battery and mains and turn on

    I have a feeling all this may have something to do with the EFI BIOS that comes on the laptop. During the reinstall I had to delete all partitions and start anew, because the Windows installer complained about the improper order of GPT partitions. The EFI System Partition was recreated by the installer, and I am guessing that it may be missing the particular ACPI driver needed to make the battery work. I've tried researching this but could not come up with any useful info. I am hoping someone here may know a bit more about this and maybe help me understand what's going on and how to fix it. Barring that, I'll have to re-image the drive from an identical ASUS laptop with a stock install and hope that fixes things.

  • As an admin, what tools do you use to log what you do to your boxes?

    - by Jerry
    I am more of a Linux applications developer than an admin. Over time, I've built servers and maintained them, sometimes to offer services, mostly just to develop the applications I work on. Way back when, I would create a file in my account to keep notes on what I did on each machine, so that I could replicate that when I migrated to other machines. Nowadays, I install a private trac instance, install its blog plugin, and then use that to make notes of everything I install and most commands that I run, as well as their output. This gives me a combined wiki and blog that I find very useful as a "captain's log". I do this mostly so that when I migrate to a new clean machine, I have a much easier time bringing it up. And yet, I am always amazed when I see others just install this, delete that, run this, set up this config... without seeming to note what they are doing in any way. What do YOU do, and what tools are available? I am especially interested in the transition between maintaining a few machines for a few people and maintaining several to dozens of machines providing a real service. What are the best practices, and where can I find good resources? Thanks!
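
    One lightweight trick in the same "captain's log" spirit, sketched here for bash (the syslog facility and tag are arbitrary choices): have every interactive command logged as it runs, so the record keeps itself.

      # e.g. appended to /etc/bash.bashrc; each command line goes to syslog
      export PROMPT_COMMAND='history -a; logger -p local6.info -t bash "$USER: $(history 1)"'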

  • Computer hangs at BIOS screen. Cannot enter setup.

    - by d2jxp
    I have an HP Pavilion a6500f (it's a year out of warranty) and it hangs on the blue HP BIOS screen. If I mash F10 while it's starting up, it says "Entering Setup..." but nothing happens; it just hangs there. If I actually wait until I can see the screen and then hit F10, there's no response at all and the computer sits at the BIOS menu. I've dusted and cleaned it out, reseated the memory, switched the RAM slots, and reset the CMOS battery using the reset jumper. I'm out of ideas. I'm pretty sure it's not a hard drive issue, since the problem is at the BIOS. After this post, I'll disconnect the hard drive and try to boot without it. Anyone have any other ideas?

    Edit: Okay, so I tried disconnecting the hard drive and now I can get back into the BIOS. I reconnected it and I'm locked out again. So the problem is my hard drive. I guess I should delete this post, unless someone has any ideas as to what's wrong with the drive?

  • Backing up a Windows 7 partition from a MacBook with no OS X

    - by mattcodes
    I have a 3-year-old MacBook with an 80GB HD, split into Windows 7 (40GB) and OS X (40GB). I want to remove OS X, as I'm at the limit of the 40GB on Windows and I have not logged into OS X since installing Win7 (don't flame me). So I want to delete the OS X partition and expand my Windows partition to 80GB, BUT I would still like to be able to back up my Windows 7 partition regularly (once a week/month). It took a while to set everything up right, not just docs and programs, so when the hard drive dies I want to be able to restore the partition and boot away. (The daily volatile bits I can pull down from Dropbox, and projects from source control.) With OS X I could use Winclone, and this worked flawlessly the last time the HD failed with XP, but in the absence of OS X I will need something else. I'm thinking: can I use a Linux live CD along with an external USB hard drive, booting from the CD and then dd-ing the partition to the USB drive? Which Linux distro live CD should I use? I say dd as if I know what I'm talking about (I don't): is this the best way to back up a partition (when it will be restored to the same hardware as bootable)? What command?
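
    A hedged sketch of the dd approach from any mainstream live CD (the device names are assumptions, so confirm with fdisk -l before copying anything):

      sudo fdisk -l                # identify the Windows partition, e.g. /dev/sda2
      sudo mount /dev/sdb1 /mnt    # the external USB drive
      # image the partition, compressing on the fly; conv=noerror,sync keeps
      # going over read errors instead of aborting
      sudo dd if=/dev/sda2 bs=4M conv=noerror,sync | gzip -c > /mnt/win7.img.gz
      # restore later by reversing the direction:
      # gunzip -c /mnt/win7.img.gz | sudo dd of=/dev/sda2 bs=4M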

  • Troubles with Apache and virtual hosts

    - by xZero
    I have a BIG problem. I have a VPS with Debian and a fresh LAMP install, and for the control panel I am using Webmin. I am trying to set up multiple subdomains on my server using Webmin, for example:

      downloads.my-domain.com
      cpanel.my-domain.com
      forum.my-domain.com

    The problem is this: with no virtual hosts, everything works perfectly when I access the server as my-domain.com. But when I add a virtual host, I can access it, and my-domain.com becomes unavailable because it redirects to the virtual host I added. With 2 or more virtual hosts the problem is still there; also, when I try to access one virtual host, for example downloads.my-domain.com, it redirects to cpanel.my-domain.com. When I delete the virtual hosts, access to my-domain.com works again. What I know:

    - It is not a problem with my domain provider. I correctly added the subdomains, with host records pointing to my VPS IP.
    - I gave a unique name to every single virtual host; no two virtual hosts are the same.
    - Every virtual host has its own directory; for example, downloads.my-domain.com has its own WWW dir: /var/downloads.

    Can somebody help me? Thanks.
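
    This is the classic symptom of name-based virtual hosts without a ServerName in each block (and, on Apache 2.2, without a NameVirtualHost *:80 directive): Apache then serves the first matching vhost for every request. A hedged sketch for one subdomain (the file name and DocumentRoot are assumptions):

      sudo tee /etc/apache2/sites-available/downloads.my-domain.com >/dev/null <<'CONF'
      <VirtualHost *:80>
          ServerName downloads.my-domain.com
          DocumentRoot /var/downloads
      </VirtualHost>
      CONF
      sudo a2ensite downloads.my-domain.com
      sudo apache2ctl configtest && sudo /etc/init.d/apache2 reload

    A matching vhost for my-domain.com itself, loaded first, keeps the bare domain working.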

  • Can I configure Thunderbird 3 to refresh the folder list for an Exchange IMAP account?

    - by Howiecamp
    Background: when used as an IMAP client against Gmail, Thunderbird 3 (it may be the case in v2 also, not sure) will refresh its list of folders (the folders correspond to Gmail labels) when you do "Download/Sync Now..." or restart the Thunderbird client. Any new folders (labels) created in Gmail sync to the client, and any folders moved/changed/deleted in Gmail move/change/delete on the client as well. (Note: Thunderbird has the concept of "subscribing" to IMAP folders, presumably letting you determine which folders you want rather than bringing all of them down and dragging loads of data across the wire. When used against Gmail, Thunderbird appears to automatically subscribe to all folders, including newly created ones, so this might be why the refresh happens properly.) This behavior is what I want with Exchange. When using Thunderbird with Exchange (2007), the folder list doesn't refresh when folders are added/changed/deleted on the server and/or from a different mail client. When I look at the subscription options, some are checked and some are not (not sure why Thunderbird picked some and not others). And when I add new folders on the server and/or from another client, they never even appear in Thunderbird's list of folders, preventing me from subscribing to them.

  • Possible attack on my SQL server?

    - by erizias
    Checking my SQL Server log, I see several entries like this:

      Date: 08-11-2011 11:40:42
      Source: Logon
      Message: Login failed for user 'sa'. Reason: Password did not match for the login provided. [CLIENT: 56.60.156.50]

      Date: 08-11-2011 11:40:42
      Source: Logon
      Message: Error: 18456. Severity: 14. State: 8.

      Date: 08-11-2011 11:40:41
      Source: Logon
      Message: Login failed for user 'sa'. Reason: Password did not match for the login provided. [CLIENT: 56.60.156.50]

      Date: 08-11-2011 11:40:41
      Source: Logon
      Message: Error: 18456. Severity: 14. State: 8.

    And so on. Is this a possible attack on my SQL Server? I looked up the IP address at ip-lookup.net, which stated it is Chinese. And what should I do?

    - Block the IP address in the firewall?
    - Delete the user sa?

    And how do I best protect my web server? :) Thanks in advance!
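
    Blocking the address at the host firewall is the quick stopgap; a sketch using the IP from the log (run from an elevated prompt):

      netsh advfirewall firewall add rule name="Block SQL brute-forcer" dir=in action=block remoteip=56.60.156.50

    Longer term, the pattern (repeated failed 'sa' logins) is usually answered by not exposing port 1433 to the internet at all, and by disabling or renaming 'sa' rather than deleting it, with a strong password either way.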

  • Ubuntu 12.04 very slow, especially with Android Studio

    - by Nada
    I have an old laptop with the following specification: Memory: 485 MiB, Processor: Genuine Intel CPU T2300 @ 1.66 GHz ×2, OS type: 32-bit, Disk: 78.1 GB. I installed Ubuntu 12.04 LTS on it and noticed that the overall system is very slow to respond. I searched the internet and found some articles about making Ubuntu 12.04 LTS run fast; I applied everything they said, including installing the LXDE desktop environment, and the system response time did not change. Then I needed to develop some Android applications, so I downloaded Android Studio (Beta) 0.8.6. The problem became worse than before: whenever I try to open Android Studio, the screen freezes for some minutes, it takes a long time to load the project and initialize the workspace, and even the cursor moves very slowly. When I tried to run my first application on the AVD, it took three hours and still hadn't run. I deleted Android Studio and reinstalled it several times trying to solve the problem, but nothing changed. Please, if you have any suggestions that may help make my laptop and Android Studio work faster, I will appreciate it. Thank you in advance.

  • Unable to remove invalid (orphaned?) SPNs

    - by Brent
    tl;dr version: renamed domain from internal.domain.com to domain.com; have 4 SPNs that I am unable to remove from the DC.

    So my domain was internal.domain-name.com and I renamed it to domain-name.com, and I thought everything was good. Several days later, I start setting up my RD Gateway and notice issues surrounding group policy. I run dcdiag and the SystemLog part fails:

      Starting test: SystemLog
      A warning event occurred.  EventID: 0x00001796
        Time Generated: 08/25/2014 02:48:30
        Event String: Microsoft Windows Server has detected that NTLM authentication is presently being used between clients and this server. This event occurs once per boot of the server on the first time a client uses NTLM with this server.
      An error event occurred.  EventID: 0xC0001B70
        Time Generated: 08/25/2014 02:49:18
        Event String: The SQL Server (MSSQLSERVER) service terminated with the following service-specific error:
      An error event occurred.  EventID: 0xC0001B70
        Time Generated: 08/25/2014 02:49:48
        Event String: The SQL Server (MSSQLSERVER) service terminated with the following service-specific error:
      An error event occurred.  EventID: 0xC0001B70
        Time Generated: 08/25/2014 02:52:47
        Event String: The SQL Server (MSSQLSERVER) service terminated with the following service-specific error:

    This made me check my AD for possible connections to the .internal domain. I found four, which I removed with:

      setspn -D E3514235-4B06-11D1-AB04-00C04FC2DCD2/d79fa59c-74ad-4610-a5e6-b71866c7a157/internal.domain-name.com ServerName
      setspn -D HOST/ServerName.domain-name.com/internal.domain-name.com ServerName
      setspn -D GC/ServerName.domain-name.com/internal.domain-name.com ServerName
      setspn -D ldap/ServerName.domain-name.com/internal.domain-name.com ServerName

    Also, checking my DNS records, there's an internal subdomain that I can delete, but it comes back. I've tried removing the SPNs to no avail. Is there something I'm missing?
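
    To confirm whether the deletions actually took, and whether any other account still carries stale entries, setspn can also query; a quick sketch:

      rem list the SPNs still registered on the account
      setspn -L ServerName
      rem search the forest for any SPN that still references the old name
      setspn -Q */internal.domain-name.com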

  • Why is XSP/Mono giving me file/write errors with Lucene?

    - by acidzombie24
    I am using Mono 2.6.7, XSP 2.8.? and I am not sure what else; I'll try downgrading XSP and whatever else. Today I noticed my server randomly gets HTTP Error 500, internal server error. I looked in my log files http://www.pastie.org/1426236 and it appears that the problem is locking Lucene, which it does by creating a file called write.lock. This used to work with no problem but now it gives me pain. It will fix itself if I refresh the page several times, but it will break just as easily. My other site will not work every time I restart Apache, and I need to delete the file by hand (at least it doesn't get random 500 errors). However, now it just says

      Lock obtain timed out: NativeFSLock@/var/www/SITE_2/App_Data/LuceneIndex_a/write.lock

    no matter what. I even chmod -R 777 the directory and still no luck; write.lock doesn't even exist. I have no clue what's going on. Any ideas?

  • Is it possible to trace someone using Google during an online exam?

    - by George
    I happen to be a professor at a reputed college. I want to design an online exam for over 1000 students via around 50 computers right after the vacation ends. Now, the problem is that I have heard that many students use Google in a different tab to find answers when no invigilator is around. I want to know if there is a way to trace this after the exam, via some kind of history or any other possible means. In our university there is a standard system; I am not good with computers, but I will try to explain. Each computer uses Mozilla to connect to a centrally located server via an IP. The students open it and enter a unique ID and password to start the exam. Many questions are jumbled, and different groups of students take the exam in different time slots. Is there any way to trace it? I want to set an example for students so they won't cheat and will take exams honestly.

    Additional details: since the number of computers is smaller than the number of students, more than 10 students are going to use a single computer on a single day over a period of 10 hours. After this, if I check the history (and let's say someone even forgot to delete their history and I see it), will I be able to figure out who among the 10 did it? Moreover, is this even practical and feasible?

  • Server 2012 - transparent SMB failover without shared discs, possible?

    - by TomTom
    Here is the scenario: there is a small set of data (around 200GB) that I HAVE to keep available. These are basically shared VHD images that serve as master images for a lot of our VMs, which then run on differencing disks off them. The whole set is "mostly read only". In more detail: a file that IS there and IS used will NEVER change. I may delete files (when absolutely not in use) and add new files, but a file that is there gets read protection set, and that's it until it is retired. Obviously, I need as much uptime as possible. SO FAR we have run this by keeping the directory local on every Hyper-V server. Now I am thinking of moving it into our storage fabric. Due to the "it HAS to be there" requirement, I pretty much want a shared-nothing architecture. DFS would be perfect for this: a file never changes, so replication would work nicely. Folders could be replicated to a number of servers, and all would reference them from there. Now that Hyper-V supports SMB, it could be a good idea to isolate these on a number of servers; we are trying to move to a scenario where storage is more centralized. Server 2012 supports continuously available ("always on") shares, but it seems this only works with a clustered disk behind it. Is there any way around this for read-only file stores? All documentation points to things like a shared JBOD, but that would leave me open to file system corruption. I really plan to go quite separate here, vertically: 2 servers, both with SSD only for this, both with their own separate 2000W UPS, both with enough bandwidth to handle everything thrown at them (note to everyone thinking this is 10G: that would be SLOW and EXPENSIVE compared to a nice InfiniBand backbone). The real crux is that this is obviously an edge case, as the files are read only once in use.

  • De-duplicating backup tool on a block basis? [closed]

    - by SST
    I am looking for an (ideally free as in speech or beer) backup tool for Unix-like OSes which can store deduplicated backups, i.e. where only non-redundant content takes up additional space. I already looked at dirvish (my first candidate) and rsnapshot, which use hardlinks to achieve deduplication on a per-file level. However, as I want to back up large files (Thunderbird mailboxes of 3GB, VMware images of 10GB), such files are stored again entirely even if just a few bytes change. Then there are rsync-based tools like rdiff-backup which only store deltas and a current mirror. However, as the deltas are generated against each previous mirror, it is difficult to fine-tune the retention granularity (only keep one backup after a week, etc.) because the deltas would have to be re-evaluated. Another approach is to partition content into blocks and store each block only if it is not stored yet, otherwise just linking it to the first occurrence. The only tool I know of that does this is obnam (http://liw.fi/obnam), and it even supports zlib compression and gpg encryption. Nice! But it is very slow, AFAICT. Does anyone know any other solid backup software which supports deduplication on a sub-file level, ideally with at least some management options (show/select/delete generations...)?
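
    For reference, the obnam workflow mentioned above looks roughly like this (a sketch from memory; check the flag spellings against your version's --help):

      obnam backup --repository /srv/backup/repo $HOME       # chunk, dedup, store
      obnam generations --repository /srv/backup/repo        # list stored generations
      obnam forget --keep 30d --repository /srv/backup/repo  # thin out old generations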

  • Dropbox picture sync: Skip RAW files?

    - by Steven Lu
    I like the convenience of having Dropbox keep track of my photos, because it tends to work with my devices over 3G (I am often tethering to my phone with my iPad and MacBook) as well as WiFi, but it's a waste of network traffic to sync the raw files from my camera or memory card. They clutter up the Dropbox list and the files are just huge. Is there a way to configure the Dropbox client so that it ignores a certain file extension for the picture sync? Also, I suspect that if I just go and delete the raw files, then the next time I plug in the memory card and tell Dropbox to sync, it will re-download the raw files. Which would be terribad. I could switch to iCloud for Photo Stream, I suppose, but there will be no access via 3G that way, and I've already got years of experience with Dropbox, so I know it's going to just work. I think any method that works for excluding files from Dropbox sync in general should work here too. Edit: wow, there are 19k votes for this exact feature request.

  • Disable all the idiot-checking in Mac OS X

    - by Fake Name
    I am a Windows/Linux user who is learning Mac OS X out of interest in doing dev work for the iPad which I recently purchased. However, OS X is driving me nuts by trying to protect all its system files, hiding all of the important OS components I want to tweak, and generally making it impossible to do any modification to the OS to make it more usable. Therefore: is there a way to turn off all the idiot-checking in Finder? On XP, I can disable "Hide protected operating system files" and set "Show hidden files". On Linux, there really aren't many hidden files, and changing the configuration for dotfiles is easy enough in GNOME and Xfce. How can I set up OS X in a similar way? I am not new to computers, and I am fully aware that deleting system files can damage or even irreparably disable an OS install. Therefore, if I intentionally try to delete a file or move something, it's probably intentional, and I am willing to accept the consequences in any case. At this point, I have fallen back to doing everything through the command line (which takes forever), because Finder is practically unusable. (As for what I am attempting to do, I also asked about GUI changes here.)
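
    The Finder half of this is a two-liner in Terminal (the com.apple.finder preference is long-standing, though the expected value casing has varied across OS X versions):

      defaults write com.apple.finder AppleShowAllFiles -bool true
      killall Finder    # relaunch Finder so the setting takes effect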

  • Kerberos issues after new server of same name joined to domain

    - by MentalBlock
    Environment: Windows Server 2012, 2 domain controllers, 1 domain. A server called Sharepoint1 was joined to the domain (running SharePoint 2013 using NTLM). A fresh install of Sharepoint1 (OS and SharePoint) was then performed, set up for Kerberos, and joined to the domain using the same name. Two SPNs were added, HTTP/sharepoint1 and HTTP/sharepoint1.somedomain.net, for the account SPFarm. Active Directory shows a single, non-duplicate computer account with a create date from the first server and a modify date from the second server's creation. A separate server, also on the domain, has the server added to All Servers in Server Manager; this server shows a local error in the events exactly like this one from TechNet (Kerberos error 4, KRB_AP_ERR_MODIFIED).

    Question: can someone help me understand whether the problem is:

    - The computer account is still the old account, causing a Kerberos ticket mismatch (granted, some housekeeping in AD might have prevented this), or
    - (in my limited understanding of Kerberos and SPNs) the SPFarm account used for the SPNs is somehow mismatched with the HTTP calls made by the remote server management tools services in Windows Server 2012, or
    - something completely different?

    I am leaning towards the first, since I tested the same SPNs on another server and it didn't seem to cause the same issue. If this is the case, can it be easily and safely repaired? Is there a proper way to either reset the account or, better yet, delete and re-add it? Although it sounds simple enough with some PowerShell or clicking around in AD Users and Computers, I am uncertain what impact this might have on an existing server, particularly one running SharePoint. What is the safest and simplest way to proceed? Thanks!
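
    If the stale computer-account password is indeed the cause, the account can usually be repaired in place rather than deleted; a hedged PowerShell sketch, run on Sharepoint1 itself:

      # check whether the machine's secure channel to the domain is healthy
      Test-ComputerSecureChannel -Verbose
      # if it reports False, reset the machine account password in place
      Test-ComputerSecureChannel -Repair -Credential (Get-Credential)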

  • Weird Outlook Behavior; Creating its own file folder

    - by Carol Caref
    Outlook is doing a very strange thing. It has created a folder on its own, which, whenever I completely delete it, comes back with a different name. Mail that goes into this folder will not go to any other folder unless I forward it. If I move the email, or create a rule to always move mail from particular senders to the Inbox, it moves for a while but then goes back into the created folder. The first folder was called "junk", but it existed in addition to my normal junk email folder. When I forwarded all the messages (some were junk, but most were not) and totally deleted that folder, a new one called "unwanted" appeared that acted the same way. It seems that once one email goes into this folder, all email from that sender also goes into the folder. I have discussed this with the tech person at work; there is no evidence of a virus or any other identifiable reason for this to happen. We have searched the internet and not found anything like this either.
