Search Results

Search found 7855 results on 315 pages for 'solutions'.


  • Use external display from boot on Samsung laptop

    - by OhMrBigshot
    I have a Samsung RV511 laptop, and recently the built-in screen broke. I connected an external screen and it works fine, but only after Windows starts. I want to use the external screen right from boot, so I can set the BIOS to boot from DVD, install a different OS, and format the hard drive. Right now the screen only works once Windows loads.
    What I've tried:
    - Opening up the laptop and disconnecting the internal display, hoping it would only find the external one and use the VGA as default -- didn't work.
    - Using the Fn+key combo in the BIOS to connect the external display -- nothing.
    - Looking for ways to change the boot sequence without entering the BIOS, but it doesn't look like that's possible.
    Possible solutions:
    - A way to change the boot sequence without entering the BIOS?
    - Someone with the same brand/similar model who can help me blindly keystroke the correct arrows/F5/F6 buttons while in the BIOS to change the boot sequence?
    - A way to force the external display to work from boot, whether by modifying the internal connections (I have no problem taking the laptop apart if needed, but please no soldering), through the BIOS, or with a program?
    Also, if I change the boot sequence without access to the external screen, would the Ubuntu 12.10 installer attempt to use the external screen, or would I only be able to use it after Linux is installed and running? I'd really appreciate help; I can't afford to fix the screen for a few months, and I'd really like to get my computer back to decent performance. Thanks in advance!

    Read the article

  • Why has my computer started to make noises when I turn it on after I put it into sleep mode for the first time a week ago?

    - by Acid2
    I usually have my PC on all day and fully shut it down at night before bed. The other day I put it into sleep mode instead, and everything was fine until I woke it: I was presented with the blue screen of death, and the machine started making a weird noise, as if some spinning part were off balance or periodically hitting something. It sounds like it could be a fan, or maybe the HDD. I'm not sure why sleep mode would mess up the hardware. Anyway, now sometimes, randomly, when I turn the computer on from a previous shutdown, I still hear the noise even though start-up is otherwise normal. Sometimes I don't hear anything for the entire session, sometimes it goes away after a few minutes, and sometimes it doesn't and I have to restart -- like right now: it isn't going away, and I can hear the noise as I type this. Anyone got possible solutions? I don't want to open the system and mess up other stuff. I'm also not sure whether I should take it somewhere to have it fixed -- it might not make the noise then, work like normal, and seem like nothing needs fixing. Add: I'm running Windows 7, if that's of any relevance.

    Read the article

  • Offline productivity

    - by Frank Meulenaar
    On some days I commute 2 hours (one way) on the train. I don't have mobile internet, and there isn't always WiFi service on the train. For security reasons I can't do any real work on the train, so I'm trying to make it geek time. I'm looking for general solutions for working offline (I'm on Firefox/Windows, but I don't think it matters).
    Email works perfectly with offline Gmail: it syncs as soon as I'm online and remembers the complicated stuff. So far I've used the ScrapBook plugin to store a website. It works well, but I have to download my favorite news page again every day -- I want it to sync as soon as possible. It would be even better if I could click a page on my desktop and have my laptop sync it as soon as it gets the chance. (Edit: maybe the autosave plugin for ScrapBook can do this.) Similarly, I use the DownloadHelper plugin to download YouTube videos, but I'd like something that automatically downloads videos from a given channel, as sketched below. Any tips are welcome.
    So far my early-morning schedule is: wake up, power on laptop, make coffee, power off laptop, and leave within 10 minutes (enough time for Gmail to sync). But I can imagine a system where my laptop stays on during the night (or boots before I wake (and makes me coffee :])).
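    For the channel-download part, a minimal sketch, assuming a command-line environment with youtube-dl installed (the channel URL and file paths are placeholders):

        # fetch anything new from a channel; --download-archive records the IDs of
        # videos already fetched, so re-running only grabs what hasn't been seen yet
        youtube-dl --download-archive "$HOME/videos/downloaded.txt" \
            --output "$HOME/videos/%(title)s.%(ext)s" \
            'https://www.youtube.com/user/EXAMPLE_CHANNEL'

    Run from a scheduled task while the machine is online (e.g., overnight), this would give the "sync as soon as it has the chance" behavior for videos.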

    Read the article

  • Allowing access to company files across the internet

    - by Renaud Bompuis
    The premise: I've been tasked with finding a solution to the following scenario:
    - our main file server is a Linux machine;
    - on the LAN, users simply access the files using SMB;
    - each user has an account on the file server and his/her own access rights;
    - user accounts are simple passwd/group security accounts, not NIS/LDAP.
    The problem: we want to give users (or at least some of them, say those belonging to a particular group) the ability to access the files from the Internet while travelling. Ideally I'd like a seamless solution; something that allows the user to access a mapped drive would be ideal. A web-oriented solution is also fine, but it should present files in a way that is familiar to users, in an explorer-like fashion for instance. Security is a must, of course: users would be expected to log in, and the connection to the server should be encrypted. Anyone have pointers to neat solutions? Any experiences? Edit: the client machines are Windows only.
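    For what it's worth, one direction that fits these constraints (my sketch, not from the post) is SFTP with a per-group chroot: it reuses the existing passwd/group accounts, is encrypted end to end, and needs no extra account database. A minimal sketch, assuming OpenSSH 5.x or later and placeholder names:

        # put the travelling users in a dedicated group
        groupadd remotefiles
        usermod -a -G remotefiles alice

        # then confine that group to SFTP under the share in /etc/ssh/sshd_config:
        #   Match Group remotefiles
        #       ChrootDirectory /srv/share
        #       ForceCommand internal-sftp
        #       AllowTcpForwarding no
        service ssh restart

    Windows clients could then use an SFTP client such as WinSCP, which gives an explorer-like view; note that ChrootDirectory requires the chroot path itself to be owned by root and not group-writable.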

    Read the article

  • Hosting multiple websites from home

    - by dean nolan
    I have just been accepted into Microsoft's WebsiteSpark program, which I mainly joined for the tools: Visual Studio and Blend. I also have a few websites of my own, personal and a couple of business ones, and I work freelance, so sometimes I'd like a place to put up a demo of a client's project. The websites I currently have are spread across different hosting providers and domain registrars. WebsiteSpark comes with Windows Server 2008 and SQL Server 2008, so it would be really advantageous for me to have all of this in one place, with complete control over the database and the environment. I am therefore thinking of migrating everything over the next 4-6 months to my own server, hosted from home -- or maybe set up at home and then moved into a proper datacentre. I was wondering what steps I should take and what to be aware of, specifically:
    1) Having all these different websites on one computer, with each URL going to the proper place.
    2) Cost-effectiveness: having the server at home as opposed to a datacentre. Most solutions I see charge over £1000 a month for a machine in a datacentre. This is mostly for my own ease of management; the shared hosting I currently have is very limited configuration-wise. Would getting a server in-house be beneficial as a stepping stone to upgrading to the cloud?
    3) What measures should I take with my ISP?
    I know this is a lot to ask, but even links to good articles would be appreciated. Thanks

    Read the article

  • Combining AD permissions with FTP

    - by user64204
    We're using Windows Server 2008 with Active Directory controlling access to a network share. We've set up FTP so that people can access that share from outside (we used to use the PPTP VPN, but for various reasons we need to switch to FTP). So far, here is what we've managed to implement on the FTP side:
    - The network share is used as the FTP root (defined as a UNC), and that is working fine.
    - AD authentication is working fine (wrong password and you stay out, good password and you're in; password management in AD is correctly synched with the FTP).
    - AD permissions are failing: the AD permissions on the content of the FTP root are ignored. A user has either read or write access, but it applies to the whole FTP root, which obviously isn't suitable, since that FTP root is our network share and its files/folders have different AD permissions depending on people's groups. Whether we set the permissions through the share OR the FTP management interface, AD permissions are never enforced.
    Q1: Is that normal?
    Q2: If so, what solutions exist to combine AD permissions with FTP on MS Server 2008?
    Q3: If not, where should I look to fix the configuration?

    Read the article

  • Losing internet connection after a few minutes (5-10 maybe)

    - by Korchkidu
    I took over a computer that had not been updated for months. Internet was working just fine, so basically I updated ZoneAlarm and Avast and installed all Windows updates, especially SP3. After that, when I reboot, Internet works fine at first, but after a few minutes Firefox says the connection was reset, and IE does not work either. However, my connection is still up and running, as I can still ping www.google.com, for example. Here are the solutions I have tried, with no success so far:
    1) Uninstalling SP3;
    2) Uninstalling IE8 and IE7;
    3) Manually setting DNS and IPs;
    4) Removing proxy settings from Firefox and IE;
    5) Restarting DNS- and DHCP-related services;
    6) Resetting TCP/IP with netsh int ip reset c:\resetlog.txt;
    7) Updating my ethernet card driver;
    8) Restarting and tweaking all the connections in every direction and every configuration possible, I believe;
    9) Disabling ZoneAlarm and Avast.
    Also, update KB981793 always fails to install. Please help; I have already spent two days on this and cannot find any solution. If I cannot fix this problem tomorrow, I will have to format and reinstall everything. Thanks for any help. Regards.

    Read the article

  • Ubuntu 10.04 on VirtualBox gives error: Target filesystem doesn't have /sbin/init -- No init found. Try passing init= bootarg

    - by Philip
    I'm a Linux newbie, and the only reason I have it installed is so I can stop having Windows incompatibility issues with Ruby on Rails. Having said that, it sure has been nice, and much faster, and I don't think I'll be doing any Win-Rails stuff anytime soon. So I created a virtual machine using VirtualBox and have had Ubuntu on it for the last 3 weeks. Recently Ubuntu asked if it could update a few things; I clicked 'OK'. Now it won't boot, and I get this error:

        mount: mounting /dev on /root/dev failed: No such file or directory
        mount: mounting /sys on /root/sys failed: No such file or directory
        ...
        Target filesystem doesn't have /sbin/init.
        No init found. Try passing init= bootarg
        BusyBox v1.13.3... (initramfs) _

    So I cruised the forums, and there are a variety of solutions, but they all have to do with booting from the live CD (which I assume is the ISO image I used to install Ubuntu in the first place). But when I boot from that CD, it just hangs on the Ubuntu screen, with the little dots cycling from white to red; it hung there for an hour, so I think it was stuck. Not sure what I can do; can I do anything from the BusyBox shell (or whatever that is) to fix things? The thing is, it took about 10 hours to get everything the way I needed, with all the gems and whatnot. And I didn't really write down what I tweaked, and I'm middle-aged, so all that information has leaked out by now, and I don't want to do it again. I'd really like to repair my existing install. One question you might have is whether there is something wrong with the ISO. I don't think so, because I made a new virtual machine and used that same ISO file to install a fresh Ubuntu. Any help much appreciated. Phil
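    For reference, the usual live-CD repair route for this error (a sketch, not from the post; it assumes the VM's root filesystem is on /dev/sda1) is to mount the installed system, chroot in, and let the interrupted update finish:

        # from a live CD/USB session
        sudo fsck -f /dev/sda1            # check the filesystem first
        sudo mount /dev/sda1 /mnt
        sudo mount --bind /dev  /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys  /mnt/sys
        sudo chroot /mnt
        dpkg --configure -a               # finish the interrupted upgrade
        update-initramfs -u               # rebuild the initramfs
        update-grub
        exit

    (The live CD hanging is a separate problem: the "Check disc for defects" boot option, or a freshly written ISO/USB, is the usual advice there.)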

    Read the article

  • Cacti Login Page: Infinite loop occurs

    - by beicha
    Apache 2.2.3 | PHP 5.1.6 | MySQL 5.0.77
    I followed the Cacti installation guide to install the latest Cacti 0.8.7h on CentOS 5.5 (64-bit). The installation of PHP/Apache/MySQL went smoothly, until I finished the setup and came to the login page. I can log in to http://.../cacti/index.php with the admin account, but the new page redirects to the same login page with the message "Please enter your Cacti user name and password below". This is an infinite loop! If I use a wrong admin password, I get the correct error message "Invalid User Name/Password Please Retype". [Same problem here] If I log in with the Guest/guest account, "Error: Access Denied, user account disabled." is displayed. The Cacti log file (./cacti/log/cacti.log) is empty. I Googled, and it seems this problem has existed for a long time, but no follow-up solutions were given in the forum posts I found. Can anyone help me with this problem? If more information is needed, please let me know.
    Nov 18, 2011 UPDATE: I re-installed Cacti; this question remains UNSOLVED.
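    A login loop like this is often a PHP session problem rather than a Cacti bug: if PHP cannot write its session files, every request after login looks anonymous again. A quick check worth running (the paths are typical CentOS defaults, so treat them as assumptions):

        # where does PHP keep sessions, and can the web server write there?
        php -i | grep -i session.save_path
        ls -ld /var/lib/php/session

        # typical fix if the directory isn't writable by Apache:
        chown root:apache /var/lib/php/session
        chmod 770 /var/lib/php/session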

    Read the article

  • External Dell Display doesn't work with MacBook Pro (2011) after Thunderbolt Firmware Update (1.0 and 1.2)

    - by tom
    Today two Thunderbolt firmware updates (1.0 and 1.2) became available for my MacBook Pro (Early 2011). After installing both, my external monitor, a Dell U2713HM, no longer works. The system detects the display, but the display shows only black. An Apple Thunderbolt Display works fine, and a MacBook Air can use the Dell monitor without problems. My MacBook Pro can also use the Dell monitor just fine when I boot from a USB stick. Therefore, the Thunderbolt firmware update clearly seems to be the problem. Does anyone have the same problem? Any solutions or workarounds? I guess there is no way to remove a Thunderbolt firmware update once it's installed, right?
    Update 24.10.2013: Is there no one else with this problem? In the meantime I have tried three different cables -- none worked. My colleague with the same-generation MacBook Pro also can't use my display after installing the firmware update. All colleagues with MacBook Airs and newer MacBook Pros (none of which received the firmware update) can use the display.
    Update 29.10.2013: Wow, OK, today my new MacBook Pro Retina 13" (Late 2013) arrived. Guess what: I cannot use the display with it either. Only HDMI works -- and not at the full resolution.

    Read the article

  • PHP on several servers with session-sharing

    - by Etu
    There are certainly other threads about this, but I have one more question. We are about to scale the website at work to more than one server, and we need to share sessions between the servers. We have been looking into different solutions; one is to use Memcached as the session handler in PHP. That will probably work. The idea would be to run memcached on every machine and let all webservers access all the other servers' memcached instances, and then we have shared sessions between the machines, yay. (We have no resources to set up sticky sessions yet; that's a later project. We need this running, and we need it running now, and we will load-balance with DNS for a start.) But then... if I want to take one server down, say for maintenance, or a server crashes, or whatever, I don't want the users to just lose their sessions and have to start from the beginning. That's why we need some kind of replication, which memcached does not support. Then I found repcached (http://repcached.lab.klab.org/), which has multi-master replication of memcached -- which is great, and is what I want. But does it work with 2 machines? Say 3, 5, 10, for future scaling? I also looked into Redis (http://redis.io/), which also seems great, but it is a bit more "shaky" with the PHP session-handler support, and has no multi-master replication. The thing is, I like memcached, but I want to be able to power down one of two boxes without losing half of the sessions. Any suggestions?
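    For the session-handler piece itself, the pecl memcache extension accepts a list of servers in session.save_path, so each web node can reach both caches; a minimal sketch (the IPs and the ini path are placeholders, and note this gives failover between the listed servers, not replication by itself):

        printf '%s\n' \
            'session.save_handler = memcache' \
            'session.save_path = "tcp://10.0.0.1:11211, tcp://10.0.0.2:11211"' \
            >> /etc/php.d/memcache-sessions.ini
        service httpd restart

    The 3.x branch of the extension also advertises a memcache.session_redundancy setting that writes each session to several of the listed servers, which gets closer to what repcached promises.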

    Read the article

  • How to speed up request/response to Django using Apache or another solution?

    - by jbcurtin
    Hey all, I'm mainly a developer, but every now and again I jump into the sysadmin position. For the most part I've gotten away with deploying PHP and Python apps using Apache. I write today because I'm starting to research faster alternatives to Apache that still have some of the core features I require, like PUT and DELETE methods and the ability to connect to a socket (this I have not tried, but it might be a nice whistle if I ever employ Comet on my apps). As you've probably guessed, I use JavaScript exclusively for all my websites, utilizing deep linking for SEO support. The main area where I'm looking to increase performance is the connection from the Django apps through the web server to the client response. Every day I do my best to keep the smallest memory footprint possible; however, I am getting to the end of my rope when it comes to working with Apache. In general, keep in mind that I'm just starting this research, so I'm looking more for material to read than for solutions at this moment. My main questions:
    - Am I missing something about Apache that makes it faster than everything else?
    - What would be a good server environment for deploying just static files?
    - What are some of the leading open-source and paid alternatives?
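    For the Django side, one widely used direction (my illustration, not from the post) is to move the app out of Apache into a dedicated WSGI server and let a lightweight front end such as nginx serve static files and proxy the rest; a minimal sketch, assuming a Django 1.4+ project package named mysite:

        pip install gunicorn
        # run the app on a local port; the front-end server proxies dynamic
        # requests here and serves /static/ directly from disk
        gunicorn mysite.wsgi:application --bind 127.0.0.1:8000 --workers 4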

    Read the article

  • How can laptop keyboard keys be removed and replaced?

    - by Lord Torgamus
    I'm trying to fix a laptop keyboard that has issues with the keys on its left side. Just by feel, it's clear that something sticky got under there. There could be something crunchy too, but that might just be the sound of a key's spring releasing itself from the stickiness. I don't know the cause, because it's not my computer and the owner isn't sure, but I'm guessing a soda spill for now. The computer is an HP dv2500. I've removed the keyboard and blown under it, but that hasn't helped. I didn't use compressed air, because I just don't have any available, but I suspect it wouldn't help with something sticky anyway. So, I'd like to pop the keys off and clean with damp cotton swabs or similar. Is there a proper way to remove the keys? I've found some instructions via Google for non-laptop keyboards, but they don't seem like they'd work for me. Alternative solutions to the problem are also welcome, but I've been curious about how to remove the keys for some time, for other reasons too.

    Read the article

  • Ubuntu + Win7 -- "Disk Error. Press any key to restart"

    - by Siddharth
    Apparently, none of the solutions in any other posts and forums worked for me. For some reason I decided to remove Ubuntu from my hard disk drive. My partition table (presently):

        (/dev/sda1) (fat32)  900 MiB    --- (MBR boot files, I suppose)
        (/dev/sda2) (ntfs)   70 GiB     --- (Windows 7)
        (/dev/sda3) (ntfs)   314.88 GiB --- (personal file storage)
        (/dev/sda4) (ext4)   80 GiB     --- (Ubuntu 13.04)
        (unallocated)        1.31 MiB

    So, after moving (cut-paste) everything (for backup) from the fat32 partition using Win7, I booted into Ubuntu and copied the remaining 3 files (hidden in the Win7 file explorer) -- bootmgr, bootsect.bak, and one more that I do not remember. TERRIBLE MISTAKE. After this I again booted into Windows, deleted the ext4 partition, formatted it to ntfs, and shut down the PC. Then I put in a Win7 bootable USB and, using the command prompt, ran bootrec /fixmbr and bootrec /fixboot. Restarting showed me GRUB; choosing Windows 7 showed me "Disk Error. Press any key to restart." I also installed a fresh Win7 installation on the 80 GiB partition, expecting a Windows legacy bootloader with two Win7 options, but that did not work either. Then I used an Ubuntu live USB to put everything back to the present configuration (above), since all methods to restore the MBR failed. I copied back the fat32 partition's backup files, but couldn't copy those 3 files: somehow they had been recreated and were non-replaceable. I do not want to format the Win7 partition for a fresh install. I have used Boot-Repair; its Restore MBR option brings me back to "Disk error..." without even going through GRUB, so I reinstalled GRUB, and I'm able to boot into Ubuntu. The GRUB menu shows the Win7 option as "Windows 7 (loader) (on /dev/sda1)".
    paste.ubuntu.com/5753710
    paste.ubuntu.com/5775999
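    One small thing worth checking from the working Ubuntu side (my suggestion, not from the post): GRUB's "Windows 7 (loader)" entry chainloads the partition holding the Windows boot files, and some BIOSes/boot sectors expect that partition to carry the boot flag. Assuming the layout above:

        sudo parted /dev/sda print           # look for a "boot" flag on the fat32 partition
        sudo parted /dev/sda set 1 boot on   # set it on /dev/sda1 if it is missing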

    Read the article

  • xrandr fails when 3rd monitor has higher resolution

    - by Pi3cH
    I tried many combinations with the xrandr command under Lubuntu 12.04 to set up my three monitors:
    - DVI (Dell 1), left, detected as HDMI1
    - HDMI (LG E2290), middle, detected as HDMI2
    - VGA (Dell 2), right, detected as VGA1
    I can get a display with a fixed 1280x1024 on all the monitors. But once I set up 1280x1024 + 1920x1080 + 1280x1024, I get a blank screen on all the monitors. Sometimes it throws a CRTC failure error instead of blanking out. Does anyone have similar issues? Any solutions/workarounds?
    P.S. I can set up two monitors using 1280x1024 and 1920x1080.
    P.P.S. HDMI2 requires at least 1920x1080 to display a sharp picture.
    Output of xrandr (it seems the graphics card supports up to 8192x8192):

        Screen 0: minimum 320 x 200, current 3840 x 1024, maximum 8192 x 8192
        VGA1 connected 1280x1024+2560+0 (normal left inverted right x axis y axis) 338mm x 270mm
           1280x1024 60.0*+ 75.0
           1152x864 75.0
           1024x768 75.1 60.0
           800x600 75.0 60.3
           640x480 75.0 60.0
           720x400 70.1
        HDMI1 connected 1280x1024+0+0 (normal left inverted right x axis y axis) 376mm x 301mm
           1280x1024 60.0*+ 75.0
           1152x864 75.0
           1024x768 75.1 60.0
           800x600 75.0 60.3
           640x480 75.0 60.0
           720x400 70.1
        HDMI2 connected 1280x1024+1280+0 (normal left inverted right x axis y axis) 477mm x 268mm
           1920x1080 60.0 +
           1680x1050 60.0
           1280x1024 75.0 60.0*
           1152x864 75.0
           1024x768 75.1 60.0
           800x600 75.0 60.3
           640x480 75.0 60.0
           720x400 70.1
        DP1 disconnected (normal left inverted right x axis y axis)
        DP2 disconnected (normal left inverted right x axis y axis)

    The CRTC assignments (0,1 for VGA; 0,1,2 for the HDMI outputs), from xrandr --verbose:

        VGA1 connected 1280x1024+2560+0 (0x47) normal (normal left inverted right x axis y axis) 338mm x 270mm
           Identifier: 0x42  Timestamp: 51324  Subpixel: unknown  Gamma: 1.0:1.0:1.0  Brightness: 1.0  Clones:  CRTC: 0  CRTCs: 0 1
        ...
        HDMI1 connected 1280x1024+0+0 (0x47) normal (normal left inverted right x axis y axis) 376mm x 301mm
           Identifier: 0x43  Timestamp: 51324  Subpixel: unknown  Gamma: 1.0:1.0:1.0  Brightness: 1.0  Clones:  CRTC: 1  CRTCs: 0 1 2
        ...
        HDMI2 connected 1280x1024+1280+0 (0x47) normal (normal left inverted right x axis y axis) 477mm x 268mm
           Identifier: 0x44  Timestamp: 51324  Subpixel: unknown  Gamma: 1.0:1.0:1.0  Brightness: 1.0  Clones:  CRTC: 2  CRTCs: 0 1 2

    This command fails:

        xrandr --output VGA1 --auto --output HDMI2 --auto --left-of VGA1 --output HDMI1 --auto --left-of HDMI2

    This command passes:

        xrandr --output VGA1 --auto --output HDMI2 --mode 1280x1024 --rate 60.0 --left-of VGA1 --output HDMI1 --auto --left-of HDMI2

    Graphics card: VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09)
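    One workaround worth trying (my sketch, not from the post): request the mixed modes explicitly, one output per invocation, and pre-grow the framebuffer so the 1280+1920+1280 layout fits; the output names follow the post, the rest is assumption:

        # total width 1280 + 1920 + 1280 = 4480; height = tallest mode
        xrandr --fb 4480x1080
        xrandr --output HDMI1 --mode 1280x1024 --pos 0x0
        xrandr --output HDMI2 --mode 1920x1080 --pos 1280x0
        xrandr --output VGA1  --mode 1280x1024 --pos 3200x0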

    Read the article

  • Large scale file replication with an option to "unsubscribe" from a replicated file on a given machine

    - by Alexander Gladysh
    I have 100+ GB of files per day incoming on one machine (file size is arbitrary and can be adjusted as needed), and several other machines that do some work on these files. I need to reliably deliver each incoming file to the worker machines, and a worker machine should be able to free its HDD of a file once it is done working with it. It is preferable that a file be uploaded to the worker only once, processed in place, and then deleted, without being copied anywhere else -- to minimize the already high HDD load. (The worker itself requires quite a bit of bandwidth.) Please advise a solution that is not based on Java. None of the existing replication solutions that I've seen can do the "free the HDD of the file once processed" part -- but maybe I'm missing something... A preferable solution should work with files (from the POV of our business-logic code), not require the business logic to connect to some queue or other. (Internally the solution may use whatever technology it needs to -- except Java.)
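    At its simplest, the pull-process-delete cycle can live in a worker-side loop over rsync; this is a sketch only -- the host, paths, and the process command are placeholders, and it leaves out the hard part of assigning each file to exactly one worker:

        #!/bin/sh
        INCOMING=ingest-host:/data/incoming/   # placeholder source
        WORK=/scratch/work                     # local working area

        while true; do
            # pull new files; --partial guards against re-sending after interruption
            rsync -a --partial "$INCOMING" "$WORK/"
            for f in "$WORK"/*; do
                [ -f "$f" ] || continue
                ./process "$f" && rm -- "$f"   # free the disk only on success
            done
            sleep 30
        done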

    Read the article

  • Google Apps, SPF, softfail problem (validates with validation tools, but still softfails otherwise)

    - by mq.chen
    Hi, I guess this is probably a commonly asked and boring question, but I'm really at a loss and I don't know what else to do. This might be a duplicate of other questions, but none of their solutions worked for me. I've Googled around and read just about anything I could find, but I'm still puzzled as to why it doesn't work. The gist of my problem: I have set up Google Apps for a client of mine with the domain fintan.dk. Everything works just excellently, except that emails sent from *@fintan.dk (either with the Gmail web interface or a desktop client) to a non-Google-Apps address get a softfail (I have sent to my university email, an email hosted at MediaTemple, and even Hotmail). The emails get a pass when sent to a Google Apps or Gmail address, though... (All emails from that domain are sent via email clients.) So this is what I have done so far:
    - I've added the SPF record Google recommends (v=spf1 include:_spf.google.com ~all) and waited several days, hoping it was a DNS update delay problem. Now, three days later, there is no change.
    - I have verified the settings in the desktop clients several times.
    - I have validated the record with validation tools like the SPF Query Tool, [email protected] and [email protected]. All of them validate and give a pass, saying there shouldn't be a problem -- but strangely there still is.
    So, I really don't know what else to do. Any help is very much appreciated. Thank you in advance!
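    One way to narrow this down: a softfail means the receiving server matched the connecting IP against the published record and fell through to ~all, so the useful comparison is between the record as the world sees it and the Received-SPF header of a failing message. The domain below is from the post; the rest is standard tooling:

        dig +short TXT fintan.dk           # the record actually being served
        dig +short TXT _spf.google.com     # what Google's include expands to

    In particular, if the desktop clients relay through any SMTP server other than smtp.gmail.com, the sending IP will not be covered by the include, which would produce exactly this pattern of passing the validators but softfailing in practice.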

    Read the article

  • Upgrade CentOS 5 to PHP 5.2 or 5.3 [recommended way?]

    - by solid
    We are using Zend Framework, and in version 2, PHP 5.2 will be the minimum requirement. We love CentOS and we'd like to keep using it, but PHP 5.1 just won't do anymore for developing web applications with Zend Framework. I found several links to solutions that upgrade using external repositories:
    http://serverfault.com/questions/106801/recommended-method-to-upgrade-php-5-1-6-to-5-2-x-on-centos-5
    http://www.webtatic.com/blog/2009/05/installing-php-526-on-centos-5/
    http://www.webtatic.com/blog/2009/06/php-530-on-centos-5/
    We'd like to see another solution using an "official?" CentOS repository, if any is available. We only need to upgrade PHP; the rest of the CentOS setup is fine the way it is. For us, however, it's important to keep the yum cycle intact, using the normal repositories. So, in short: is it even possible to upgrade only PHP, via an external repo or otherwise, while still upgrading all our other packages safely through normal yum usage? Thanks for your help!
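    For what it's worth, CentOS 5.6 and later ship parallel php53 packages in the stock base repository, which stays entirely inside the normal yum cycle; a sketch of the swap (the module list is abbreviated -- check which php-* packages you actually have installed):

        yum remove  php php-cli php-common php-gd php-mysql
        yum install php53 php53-cli php53-common php53-gd php53-mysql
        service httpd restart
        php -v    # should now report 5.3.x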

    Read the article

  • How to recover deleted files on ext3 fs

    - by Mike
    I have a drive that was using the ext3 filesystem. I am told that about 10 GB of data was deleted off the drive (probably via rm). The drive is currently mounted read-only to preserve all data. Does anyone know of a method to restore some or all of the data? Also, if it helps, the OS was Fedora. I've also been told that the data is mostly ASCII Fortran source code and MATLAB files.
    Conclusion: I finally managed to get the data back, and by the simplest means ever! After weeks of trying and failing to bring back much of any data, I brought someone in today to take a look at it and offer suggestions; he simply cd'd to the directory, and everything was there! It was never lost in the first place!!! Needless to say, I feel really dumb right now, but I learned quite a lot from this whole fiasco. At any rate, while I was looking through data-forensics solutions, I found that Autopsy, or more specifically The Sleuth Kit, was the most helpful, so I will accept that as the final answer. I would also like to note, for anyone who comes across this later, that the most up-voted (currently) answer by sekenre was also helpful and I learned a lot, but ultimately it did not help with the type of files I was dealing with (very many, and some of them very large). So thanks to all of you who provided suggestions, and I wish you all the best!
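    For readers who do face a real ext3 undelete, one commonly suggested tool besides The Sleuth Kit is extundelete, which recovers files via the filesystem journal; a minimal sketch, assuming the affected partition is /dev/sdb1:

        # never write recovered files onto the disk you are recovering from
        umount /dev/sdb1
        extundelete /dev/sdb1 --restore-all   # results land in ./RECOVERED_FILES/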

    Read the article

  • Recent DDE / file open issue with Office 2007 affecting only a few machines, is a Windows Update to blame?

    - by kafka
    All our workstations run Windows 7 Professional 64-bit. It started with one machine, then another, then another couple of machines having a problem accessing Word files locally and on the network. This doesn't happen on my machine, though. Affected users get the error message 'There was a problem sending the command to the program'. I've Googled for solutions, but none of the answers worked: they suggested deleting certain registry keys, unregistering and re-registering the program for DDE, resetting the way the shell opens .docx files, etc., each to no avail. As it affects local files and network shares alike, I believe the problem lies with the clients and not the server, and I'm starting to suspect that a recent Windows update could have caused this. I've tried comparing the updates on my working machine with those on an affected machine, but I can't immediately see any major differences. Has anyone else recently encountered this problem? What are the best steps to take to further isolate what could be causing it?

    Read the article

  • 150 TB and growing, but how to grow?

    - by seandavi
    My group currently has two largish storage servers, both NAS boxes running Debian Linux. The first is an all-in-one 24-disk (SATA) server that is several years old; we have two hardware RAIDs set up on it, with LVM over those. The second server has 64 disks divided over 4 enclosures, each a hardware RAID 6, connected via external SAS; we use XFS with LVM over that to create 100 TB of usable storage. All of this works pretty well, but we are outgrowing these systems. Having built two such servers, and still growing, we want to build something that allows us more flexibility in terms of future growth and backup options, that behaves better under disk failure (checking the larger filesystem can take a day or more), and that can stand up in a heavily concurrent environment (think small compute cluster). We do not have system administration support, so we administer all of this ourselves (we are a genomics lab). So, what we seek is a relatively low-cost storage solution with acceptable performance that will allow future growth and flexible configuration (think ZFS, with different pools having different operating characteristics). We are probably outside the realm of a single NAS. We have been thinking about a combination of ZFS (on OpenIndiana, for example) or btrfs per server, with GlusterFS running on top of that, if we do it ourselves. What we are weighing that against is simply biting the bullet and investing in Isilon or 3PAR storage solutions. Any suggestions or experiences are appreciated.
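    To make the "different pools, different operating characteristics" idea concrete: in ZFS these are simply separate zpool definitions on the same box; a sketch with made-up device names:

        # capacity pool: wide raidz2 vdevs, each tolerant of two disk failures
        zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0

        # scratch pool: mirrored pairs for faster, concurrent small I/O
        zpool create scratch mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0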

    Read the article

  • Thunderbird: export email account settings

    - by zpea
    I'd like to create a new profile for Thunderbird using the same mail accounts I already configured in my old profile. As it is quite a number of accounts, it would be great to have a way to export/import them, instead of writing down the settings just to fill them in again in the new profile. Using web search, and searching here, I mainly found the following suggestions, none of which match what I need:
    - Copy the whole profile: not possible for me, as I don't want to copy other settings, the downloaded mail data, etc. -- and the old profile broke when it ran out of space in the home folder anyway.
    - Use MozBackup: there seem to be several programs by that name (forks?). In any case, it's Windows-only and hence not an option (I am mostly on Linux and prefer platform-independent solutions anyway).
    - Use accountex: seems to do what I want, but it is not compatible with the current Thunderbird version (it supports only up to version 3.1).
    - Posts with various tips from 4 years ago: top results in a web search with the big G, but they do not work in current versions of Thunderbird either.
    Did I overlook anything? After all, it doesn't sound like I'm looking for something nobody ever looked for.
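    A low-tech angle (my sketch, not from the thread): account definitions live as ordinary preference lines in the profile's prefs.js, so the mail.* keys can be pulled out into a reviewable file. The profile directory name is a placeholder, and the numeric cross-references between account, server, and identity entries must stay consistent, so treat this as a starting point only:

        grep -E '^user_pref\("mail\.(account|server|identity|smtpserver)' \
            ~/.thunderbird/xxxxxxxx.default/prefs.js > account-prefs.js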

    Read the article

  • SQL Server 2012: Maintenance Plans can't be modified or created

    - by Crazyd
    Clicking on any created maintenance plan gives:

        TITLE: Microsoft SQL Server Management Studio
        Value cannot be null. Parameter name: component (System.Design)
        BUTTONS: OK

    Creating a new plan gives this error:

        TITLE: Maintenance Plan Wizard Progress
        Saving maintenance plan failed.
        ADDITIONAL INFORMATION: The SaveToSQLServer method has encountered OLE DB error code 0x80004005 (Unspecified error). The SQL statement that was issued has failed.
        The SaveToSQLServer method has encountered OLE DB error code 0x80004005 (Unspecified error). The SQL statement that was issued has failed.
        BUTTONS: OK

    Editing an already-created backup plan gives:

        Error 1: Error loading 'BackupDb': The LoadFromSQLServer method has encountered OLE DB error code 0x80004005 (Unspecified error). The SQL statement that was issued has failed. server=SERVER;package=Maintenance Plans\BackupLeadsDb;

    Attempted solutions: I've changed the password for the SA account; I use Windows Authentication to log in; and I've registered C:\Program Files (x86)\Microsoft SQL Server\110\DTS\Binn\DTS.dll. I have also repaired SQL Server 2012 and uninstalled/reinstalled SQL Server 2012.

    Read the article

  • Windows Server 2003 IPSec Tunnel Connected, But Not Working (Possibly NAT/RRAS Related)

    - by Kevinoid
    Configuration: I have set up a "raw" IPSec tunnel between a Windows Server 2003 (SBS) machine and a Netgear FVG318, according to the instructions in Microsoft KB 816514. The configuration is as follows (using the same conventions as the article):

        NetA        | SBS2003   | FVG318  | NetB
        10.0.0.0/24 | 216.x.x.x | 69.y.y.y | 10.0.254.0/24

    Both the Main Mode and Quick Mode security associations are successfully completed and appear in the IP Security Monitor. I am also able to ping the SBS2003 server on its private address from any computer on NetB.
    The problem: any traffic sent from a computer on NetA to NetB, or from SBS2003 to NetB (excluding ICMP ping responses), is sent out on the public network interface outside the IPSec tunnel (no encryption or header authentication, as if the tunnel were not there). Pings sent from a computer on NetB to a computer on NetA successfully reach computers on NetA, but the responses are silently discarded by SBS2003 (they do not go out in the clear and do not generate any encrypted traffic).
    Possible explanations:
    - Incorrect configuration: I could have mistyped something somewhere, or KB 816514 could be incorrect in some way. I have tried very hard to eliminate this option: I have re-created the configuration several times and tried tweaking and adjusting all the settings I could, without success (most changes prevent the SA from being established).
    - NAT/RRAS: I have seen multiple posts elsewhere suggesting that this could be due to interaction between NAT and the IPSec filters. Possibly the NetA private addresses get rewritten to 216.x.x.x before being compared with the Quick Mode IPSec filters, and don't get tunneled because of the mismatch. In fact, The Cable Guy article from June 2005, "TCP/IP Packet Processing Paths", suggests that this is the case (see steps 2 and 4 of the transit traffic path). If this is the case, is there a way to exclude NetA-NetB traffic from NAT?
    Any thoughts, ideas, suggestions, and/or comments are appreciated.

    Read the article

  • Are ZFS snapshots + S3 a viable backup system for several VMs and general fileserver storage?

    - by AllanA
    I've been tasked with setting up a backup system for my small office (around 12 people). Most of our production stuff is on the AWS cloud, so what I need to back up are some small office/development files (under 100 GB right now), plus our operational VMs and development machines, which round out to a bit under 1 TB. I just need something reliable, convenient, and straightforward. I'm comfortable with Linux, FreeBSD, and to some extent Solaris 10, so I'm leaning toward a full server rather than an appliance system à la Openfiler or FreeNAS. What I'm contemplating is a small fileserver for general storage and nightly backups of the virtual machines, followed up by an offsite backup to Amazon's S3 storage service -- the usual incremental backups nightly and a full backup weekly. My question is whether using ZFS snapshots, both locally and dumped to S3 via 'zfs send [-i]', is a viable backup tool, or whether I should stick to Duplicity or some other method entirely. ZFS snapshots on the internal fileserver/backup machine sound like a perfect way to provide quick and convenient data recovery, so I'm likely to go with that for local redundancy. (If you folks see scenarios where relying on ZFS snapshots would be worse than a more traditional archiving backup, feel free to convince me.) But are snapshots flexible enough to lean on for recovery from the loss of my backup server, or am I better off with something more traditional? (Feel free to recommend free or commercial backup solutions you favor.)
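    The "dump to S3" half can be sketched in a few lines; the dataset and bucket names are placeholders, and it assumes an upload client that accepts a streamed input (the AWS CLI is used here):

        # weekly full
        zfs snapshot tank/office@weekly-2013-01-06
        zfs send tank/office@weekly-2013-01-06 | gzip | \
            aws s3 cp - s3://office-backup/office-weekly-2013-01-06.zfs.gz

        # nightly incremental against the previous snapshot
        zfs snapshot tank/office@nightly-2013-01-07
        zfs send -i @weekly-2013-01-06 tank/office@nightly-2013-01-07 | gzip | \
            aws s3 cp - s3://office-backup/office-nightly-2013-01-07.zfs.gz

    One caveat: zfs receive aborts on any corruption in the stream, so a single damaged object in S3 can make a whole incremental chain unrestorable -- test restores regularly before leaning on this.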

    Read the article
