Search Results

Search found 17782 results on 712 pages for 'questions and answers'.

  • External HDD incorrectly detected as internal - how to change this to enable hot swap/eject?

    - by Sam
    Hi all, I have Win 7 x64 Home Premium. The HDD is a Seagate Barracuda 7200.7 ST3120827AS, 3.5", serial 3ms006n6, firmware 3.42 (no further updates), in a NexStar CX external case (drivers installed). I have three drives:
    - WD320 with the OS installed
    - WD750 data storage (internal)
    - Seagate 120 (external), connected via an eSATA board wired to a SATA port on the motherboard (MSI P43 Neo)
    I tried uninstalling the HDD in Device Manager to no effect. Also, the internal WD750 is detected as an external drive and the taskbar icon allows it to be ejected (unlike the Seagate). All drives are configured Online, Simple, Basic, NTFS, Active, Primary Partition (except the C drive). The Seagate was previously used as a primary disk with an XP operating system, so I deleted the volume and recreated/reformatted it (not quick). The HDD is no longer "Active", but that did not fix the problem.
    Background: originally I installed Win 7 with the BIOS set to IDE and forgot to install the chipset drivers. Then I changed Win 7 to install the AHCI drivers, changed the BIOS to AHCI and rebooted. Win 7 loaded the drivers, but the WD HDD gave problems/crashed. I installed the chipset drivers and the latest Intel storage matrix software (in safe mode). Everything worked fine after that, except for the problem of not correctly detecting the external drive.
    I have noticed that under the driver properties (and similarly in the registry) the two drives are configured differently: in the driver details the Capabilities property for the WD is set to 0000006, CM_DEVCAP_REMOVABLE & CM_DEVCAP_EJECTSUPPORTED, whereas the Seagate shows 0000080, CM_DEVCAP_SURPRISEREMOVALOK. Any easy way to configure things? I tried physically swapping the SATA connections on the mainboard without success. So far I have found that a solution to my problem might be to perform some registry changes: http://superuser.com/questions/12955/how-do-i-remove-the-option-to-eject-sata-drives-from-the-windows-7-tray-icon

    Read the article

  • Is git-annex appropriate for my scenario?

    - by Karel Bílek
    I have a git repository with source code I want to put in the open on GitHub. However, I also have gigabytes of data that I don't want to have in the open and in the repo - they are big, they are proprietary, they are "burdened" with copyrights and so on. However, they are also logically part of the same project, and I wish to have some control over their history (basically, what git already does). Right now they are in the directory "data" in the repository, I have that directory ignored, and I have given up on getting them into git. However, I have read about git-annex and it seems it can do what I want. So, I have two questions:
    - Is git-annex appropriate for me?
    - How exactly should I use git-annex for my scenario? Meaning: which commands should I use, and how? I have tried to read the official documentation, but it talks about use cases that I don't care about. I have the data on one computer only and I don't think I will be moving it soon (it's nice to have the possibility, but it's not why I want to use git-annex). Also, the documentation is pretty hard to read.
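
    A minimal sketch of what such a setup might look like (the annex description string is illustrative, and the "data" directory would first have to come out of .gitignore):

        cd /path/to/repo
        # remove the "data" entry from .gitignore before this
        git annex init "main workstation"
        git annex add data      # large files go into the annex, not git's object store
        git commit -m "track data/ with git-annex"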

    Read the article

  • Managing persistent data on an Amazon EC2 web server

    - by Derek
    I've just started trying out Amazon's EC2 service for running an ASP.NET web app which uses a SQL Server 2005 Express database. I have some questions about how to configure and operate it best for reliability, and I'm hoping to tap into some collective wisdom here as this is my first foray into EC2. Here's how I have it configured currently:
    - OS: Windows 2003
    - SQL Server Express 2005
    - Web content stored on an EBS volume (E drive)
    - Database data stored on an EBS volume (E drive)
    - Database backups to the C drive, then copied off to S3
    - Elastic IP address attached to the production instance
    Now, when I make a change to the OS configuration, I make a new AMI using the bundle feature. Unfortunately, I found that this results in significant downtime while the bundle is created and the new instance is started. It seems that when I'm ready to make a new AMI, I should (see the command sketch below):
    - Start up a new temporary instance.
    - Detach the EBS volume from the production instance.
    - Detach the IP address from the production instance.
    - Attach the IP address to the temporary instance.
    - Attach the EBS volume to the temporary instance.
    - Create an AMI from the production instance.
    - After the production instance restarts, reverse the attach/detach steps to put it back in production.
    Is this the right order of events to prevent any chance of corrupting the EBS volume? Will the EBS volume become corrupt if I detach it while a database write is taking place? Should I snapshot the EBS volume of the production instance and attach the snapshot to the temporary instance instead? Or could taking a snapshot of the EBS volume while it's in use cause corruption? Any suggestions to improve reliability and operations?
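
    A sketch of the volume/address shuffle using the EC2 API tools of that era, with a snapshot taken first as a safety net (all IDs, the address, and the device name are illustrative):

        ec2-create-snapshot vol-12345678              # point-in-time copy before touching anything
        ec2-detach-volume vol-12345678                # ideally after quiescing SQL Server writes
        ec2-attach-volume vol-12345678 -i i-87654321 -d xvdf
        ec2-disassociate-address 203.0.113.10
        ec2-associate-address 203.0.113.10 -i i-87654321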

    Read the article

  • FreeBSD jail for a small company - checklist - what shouldn't I forget?

    - by cajwine
    Looking for a checklist for a small-company FreeBSD/jail server. A pretty common starting point: a FreeBSD jail (remote/headless) for the company: public web, email, and FTP server, plus a private (maybe in the future partially public) wiki (Foswiki). 4 physical persons (6 email addresses) plus one admin; the others will never use ssh. I have already done the usual hardening on the host side (pf, sshguard, etc.). My major components are: dovecot, exim, apache22, proftpd, perl5.14. Looking for a checklist of what I shouldn't forget. My plan:
    - openssl self-signed certificates for exim, dovecot and proftpd (wildcard keys)
    - openssl self-signed certificate for apache (later I will go for a "trusted-signed" key)
    My questions are:
    - Is it good practice to have one pair of wildcard SSL certificates for many programs (exim, dovecot, proftpd), or should I generate one key for each service? (See the sketch below.)
    - Should I add all 4 persons as standard (unix) users, or should I go with virtual users? Asking because I have only a small number of users, and it is simpler to configure everything (exim, dovecot) for local users ($HOME/Maildir), plus the ability to set $HOME/.forward/vacation and so on.
    - Are there any (special) things I should consider? (E.g., maybe in the future we want to set up our own webmail - will this make any difference?)
    - Any other recommendations?
    Thank you. Hoping that this question fits into the serverfault.com FAQ under "Server and Business Workstation operating systems, hardware, software" and "Operations, maintenance, and monitoring". Looking for a checklist, but please explain why you're recommending each item. See Good Subjective, Bad Subjective. Related: What's your suggested mail server configuration for a FreeBSD server?
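
    A sketch of generating one self-signed wildcard pair that several daemons could share (domain and file paths are illustrative):

        openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
          -subj "/CN=*.example.com" \
          -keyout /etc/ssl/private/wildcard.key \
          -out /etc/ssl/certs/wildcard.crt
        chmod 600 /etc/ssl/private/wildcard.key   # every service reading the key needs access to it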

    Read the article

  • Create a PDF that defaults to flip on short edge when printed double-sided

    - by user568458
    We're creating a 2-page PDF brochure for a target audience who will print it on their regular office or home printers. If it is printed on a double-sided printer (common in offices), it'll come out correctly if the user manually selects "Flip on short edge", but the second page will come out upside down if the default settings are used (flip on long edge). Our target audience isn't very tech-literate, and we've found that even within our own office network the location of the "Flip on short edge" setting varies - so it isn't realistic to give everyone who downloads the PDF instructions on how to change this setting, or to expect everyone to find it on their own. So, when creating a PDF (ideally using Adobe InDesign or Acrobat, but if other software or hacking is needed, that's fine...), is there a way to configure the PDF file itself so that when printed double-sided with default settings, it flips on the short edge? If possible, it would be useful supplementary info to know how reliable any such method is across different PDF readers (e.g. Adobe Reader, Acrobat, Mac Preview, built-in browser readers (e.g. Chrome), Foxit, etc.). If questions about content creation like this aren't a great fit here, feel free to migrate it to the Graphic Design Stack Exchange site - this question seems to fall halfway between the two sites.
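
    For what it's worth, the PDF specification does define a viewer preference for exactly this; whether a given reader honours it when printing is a separate question. A conceptual fragment of the document catalog (object numbering illustrative):

        1 0 obj
        << /Type /Catalog
           /ViewerPreferences << /Duplex /DuplexFlipShortEdge >>
        >>
        endobj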

    Read the article

  • Why does my microwave kill the Wi-Fi?

    - by Ohlin
    Every time I start the microwave in the kitchen, our home Wi-Fi stops working and all devices lose connection with our router! The kitchen and the Wi-Fi router are at opposite ends of the apartment, but devices are used a little here and there. We'd been annoyed by the instability of the Wi-Fi for some time, and it wasn't until recently that we realized it was correlated with microwave usage. After some testing with the microwave on and off, we could narrow the problem down to occurring only when the router is in b/g/n mode and uses a fixed channel. If I change to b/g mode or set the channel to auto, there is no problem any more... but still! The router is a Zyxel P-661HNU ("802.11n Wireless ADSL2+ 4-port Security Gateway", with the latest firmware) and the microwave is made by Neff, rated at 1000 W (in case this information is useful to anyone). There is an "internet connection" light on the router and it doesn't go out when the interruption occurs, so I think this is purely an internal Wi-Fi issue. Now to my questions:
    - What parts of the Wi-Fi can possibly be affected by microwave usage? Frequency? Disturbances in the electrical system?
    - How can setting the channel to Auto make a difference? I thought the different channels were just a kind of separation scheme within the same frequency spectrum?
    - Could this be a sign that the microwave is malfunctioning and slowly roasting us all at home? Is there any need to be worried?
    Since we were able to find router settings that cooperate well with our microwave's demand for attention, this question is mainly out of curiosity. But like most people out there... I just can't help the fact that I need to know how it's possible :-)

    Read the article

  • Capistrano + Nginx + Passenger = 403

    - by slimchrisp
    I asked this over at Stack Overflow as well, but still haven't received any answers that have helped me solve this problem. I have spent almost a week at this point trying to solve the issue, and I'm just not making any headway. It seems that this issue is pretty common, but none of the solutions I found online work for me. A buddy of mine is actually creating the same setup, and he is having the same issue. After a few days stuck with the 403 error I started over using this tutorial: http://blog.ninjahideout.com/posts/a-guide-to-a-nginx-passenger-and-rvm-server I had hoped starting from scratch using this tutorial would work, but no dice. Either way, if you view the tutorial you can see what steps I have taken. Here is essentially what I have going on:
    - I have a VPS account on linode.com
    - Server OS is Ubuntu 10.04
    - Local OS (shouldn't matter, but just so you know) used to deploy with Capistrano is Snow Leopard 10.6.6
    - I use RVM on the server, version 1.2.2
    - I was previously on ruby-1.9.2-p0 [ i386 ], but per the tutorial listed above I switched to ree-1.8.7-2010.02 [ i386 ]. Running 'which ruby' from the command line verifies that I am using 1.8.7 with the following output: /usr/local/rvm/rubies/ree-1.8.7-2010.02/bin/ruby
    - 'passenger -v' prints: Phusion Passenger version 3.0.2
    - Running 'nginx -v' gives me a message that the command nginx could not be found. The server is definitely there and running, as I can use nginx to serve static files, but this could have something to do with my problem.
    - I have two users dealing with the install: root, which I used to install everything, and deployer, a user I created specifically for deploying my applications
    - My web app directory is in the deployer user's home directory: /home/deployer/webapps/mysite.com/public
    - Per the Capistrano default deploy, a symbolic link called current is created in the public folder, and points to /home/deployer/webapps/mysite.com/public/releases/most_current_release
    - I have chmodded the deployer directory recursively to 777
    - /opt/nginx permissions: rwxr-xr-x
    - /usr/local/rvm/gems/ree-1.8.7-2010.02/gems/passenger-3.0.2 permissions: rwxrwsrwx
    My nginx config file has gone through just short of an eternity of variations, but currently looks like this:

        worker_processes 1;

        events {
            worker_connections 1024;
        }

        http {
            passenger_root /usr/local/rvm/gems/ree-1.8.7-2010.02/gems/passenger-3.0.2;
            passenger_ruby /usr/local/rvm/bin/passenger_ruby;
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            keepalive_timeout 65;

            server {
                # listen *:80;
                server_name mysite.com www.mysite.com;
                root /home/deployer/webapps/mysite.com/public/current;
                passenger_enabled on;
                passenger_friendly_error_pages on;
                access_log logs/mysite.com/server.log;
                error_log logs/mysite.com/error.log info;
                error_page 500 502 503 504 /50x.html;
                location = /50x.html {
                    root html;
                }
            }
        }

    I bounce nginx, hit the site, and boom: 403, and the logs say "directory index of /home/deployer... is forbidden". As others with a similar problem have said, you can drop an index.html into public/releases/current_release and it will render. But Rails no worky. That's basically it. At this point I have just about completely exhausted every possible solution attempt I can think of. I am a programmer and definitely not a sysadmin, so I am 99% sure this has something to do with permissions that I have hosed, but for the life of me I just can't figure out where. If anyone can help I would really, really appreciate it. If there are any specific permission things you want me to check (i.e. groups/permissions), can you please include the commands to do so as well? Hopefully this will help others in the future who read this post. Let me know if there is any other information I can provide, and thanks in advance!
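
    A few commands one might use to inspect the permissions in question (paths taken from the post; run on the server):

        namei -l /home/deployer/webapps/mysite.com/public/current   # mode of every path component
        ls -ld /home/deployer /home/deployer/webapps
        ps aux | grep nginx                                         # which user the workers run as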

    Read the article

  • How to handle files that don't need version control in mercurial

    - by richardh
    I am new to Mercurial, and for the most part I do LaTeX reports and statistical calculations in R using .csv and/or .sqlite files. Re LaTeX, all I really care about is the .tex file. Re R, I don't need version control on the .csv or .sqlite files because they are static. When I do 'hg add' for a repo with a .csv and/or .sqlite file, I get a warning like:

        rev2.sqlite: up to 3070 MB of RAM may be required to manage this file
        (use 'hg revert rev2.sqlite' to cancel pending addition)

    So I revert and subsequently use adds like 'hg add -X *.sqlite'. I guess I really have two questions:
    - Should I ignore these warnings? Because these large files are static, can I just add them to the repo, knowing that the diffs will always be empty, and not worry about wasted resources?
    - If I should keep excluding these files from the repo, is there a way that I can make this option stick? I.e., add something to my .hgrc file that always appends options like -I *.tex -I *.R to my 'hg add' commands?
    Thanks!
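
    One way this might be done is via the [defaults] section of a per-repository .hg/hgrc, which Mercurial releases of that era supported (it was later deprecated in favour of aliases); the glob patterns here are assumptions:

        [defaults]
        add = -X **.sqlite -X **.csv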

    Read the article

  • Booting Ubuntu as VM with KVM on Ubuntu 12.04

    - by CrazycodeMonkey
    I am trying to boot my very first VM using KVM. I have Ubuntu 12.04 installed, and I made sure the BIOS has the right virtualization flag enabled for the Intel processor by running kvm-ok. I have researched this on Google, and all the instructions I have found so far are outdated. E.g., most instructions talk about booting a virtual machine with the following commands:

        qemu-img create -f qcow2 foo.img 100G    # create a virtual disk for your VM
        kvm --name foo -m 1024 -hda foo.img -cdrom whatever.iso -boot d    # run kvm

    This command line is incomplete. First, you need to be root to run it. Second, it is missing an option for the video device. When you run it you get the following error: "Could not initialize SDL (No available video device) - exiting". I googled this error and looked it up on Stack Overflow: http://stackoverflow.com/questions/4841908/sdl-init-failure-reason-is-no-available-video-device The answer provided there does not work on Ubuntu 12.04. I googled the problem further and found out that I need to specify a video device, so I finally ran the following command:

        sudo kvm --name mymachine -m 8096 -hda myimage.img --cdrom ubuntu.iso -boot d -vga cirruss -k en-us -vmc :0

    This was after I had created the myimage.img image on the drive. Now this command does not give me an error, but it just hangs. Does anyone have clear instructions on how to run a VM using KVM on Ubuntu?
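
    For comparison, a sketch of the libvirt route that Ubuntu's own documentation steers toward instead of raw kvm invocations (package names, sizes, and paths are assumptions for 12.04):

        sudo apt-get install qemu-kvm libvirt-bin virtinst
        sudo virt-install --name mymachine --ram 2048 \
          --disk path=/var/lib/libvirt/images/mymachine.img,size=20 \
          --cdrom ubuntu.iso --graphics vnc
        # then connect with virt-viewer or any VNC client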

    Read the article

  • Can a website company that builds 4-5 websites a year afford dedicated hosting?

    - by Petras
    We manage about 30 websites that use shared ASP.NET SQL Server web hosting. These are typical small/medium business websites and they perform fine in this environment. Recently I was looking at VPS hosting in this thread: http://serverfault.com/questions/128329/how-do-you-host-multiple-public-facing-websites-on-a-vps After contacting a provider in one of the replies, I was told that VPS hosting is not recommended for 30 sites, even if they are small; the resource requirements might be too great even for a VPS. So I should turn to dedicated hosting. The lowest-cost dedicated hosting is $219 per month (see http://www.serverintellect.com/dedicated/pentiumdservers.aspx), but this is only for a single processor, which seems too light for a machine running both IIS and SQL. In our office all the developers work on quad cores, so I assume I'd really need the quad processor option. However, this starts at $599 monthly. Now, I won't be able to transfer all of our 30 sites to this machine - I'd only be able to transfer, say, 5 or 6. However, moving forward, I'd be able to host all future sites on this machine, which amounts to 4-5 per year. Let's look at the economics. Shared hosting costs are typically $16.95 monthly (see http://www.crystaltech.com/dotnet.aspx). So here's the dilemma:
    - First month's costs: $599
    - First month's revenue: 6 x $16.95 = $101.70
    - Loss in first month: $497.30
    - First year's costs: $599 x 12 = $7,188
    - First year's revenue: 6 x $16.95 x 12 + 5 x $16.95 x 6 (averaged) = $1,728.90
    - Loss in first year: $5,459.10
    Clearly it is going to take years for this server to pay for itself. It just doesn't seem economical! Am I missing something here, or is dedicated not the way to go with the number of sites we build?

    Read the article

  • Slackware - Assigning routes (IP address ranges) to one of many network adapters

    - by Dogbert
    I am using a Slackware 13.37 virtual machine within VirtualBox (current). I currently have a number of Ubuntu VMs on a single server, along with this Slackware VM. All VMs have been set up to use "Internal Network" mode, so they are all on a private LAN and can see each other (i.e. share files amongst themselves), but they remain private from the outside world. One of these VMs (the Slackware one) needs access to both this private network and the internet at large. The first suggestion I found for handling this is to add another virtual network adapter to the VM and set it to NAT. This results in the Slackware VM having the following network adapter setup:
    - NIC#1: Internal Network
    - NIC#2: NAT
    I want to set up the first network adapter (NIC#1) to handle all traffic on the following subnets (see the sketch below):
    - 10.10.0.0/255.255.0.0
    - 192.168.1.0/255.255.255.0
    And I want the second virtual network adapter (NIC#2) to handle everything else (i.e. internet access). May I please have some assistance in setting this up on my Slackware VM? Additionally, I have searched for similar questions on Super User and Stack Overflow, but none of them quite pertains to my situation (i.e. they all refer to OS X, or to Ubuntu via the use of some UI-based tool). I'm trying to do this on Slack specifically, via the command line. Thanks!
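
    A command-line sketch of what that split might look like (interface names are assumptions; VirtualBox's NAT mode defaults to a 10.0.2.0/24 network whose gateway is 10.0.2.2):

        # eth0 = internal network, eth1 = NAT
        route add -net 10.10.0.0 netmask 255.255.0.0 dev eth0
        route add -net 192.168.1.0 netmask 255.255.255.0 dev eth0
        route add default gw 10.0.2.2 dev eth1
        # Slackware keeps its interface setup in /etc/rc.d/rc.inet1.conf;
        # static routes are often added from rc.local to persist across reboots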

    Read the article

  • How to diagnose storage system scaling problems?

    - by Unknown
    We are currently testing the maximum sequential read throughput of a storage system (48 disks total, behind two HP P2000 arrays) connected to an HP DL580 G7 running RHEL 5 with 128 GB of memory. Initial testing has mainly been done by running dd commands like this, in parallel for each disk:

        dd if=/dev/mapper/mpath1 of=/dev/null bs=1M count=3000

    However, we have been unable to scale the results from one array (maximum throughput of 1.3 GB/s) to two (almost the same throughput). Each array is connected to a dedicated host bus adapter, so they should not be the bottleneck. The disks are currently in a JBOD configuration, so each disk can be addressed directly. I have two questions:
    - Is running multiple dd commands in parallel really a good way to test maximum read throughput? We have noticed very high SWAPIN% numbers in iotop, which I find hard to explain because the target is /dev/null.
    - How should we proceed in trying to find the reason for the scaling problem? Do you think the server itself is the bottleneck here, or could there be some Linux parameters that we have overlooked?
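
    For reference, a sketch of how the parallel run described above might be scripted (device names illustrative):

        #!/bin/sh
        # fire one sequential reader per multipath device, then wait for all of them
        for dev in /dev/mapper/mpath*; do
          dd if="$dev" of=/dev/null bs=1M count=3000 &
        done
        wait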

    Read the article

  • Automated Syslog Error Solution Finder

    - by Dru
    Are there any automated syslog solution-finding frameworks? I want my central syslog server to email a list of problems, their severity, and suggested solutions. There have been several questions about centralising system logs and alternative log analysis systems, but I don't get the impression that any of them help with issue resolution. A little background: at work I am now literally doing the work of two people, and both jobs have expanded beyond their initial frameworks. It is not so bad, as I have helpers, but they are little more than smart monkeys. While one of my predecessors [I have two, that is how I know I have the jobs of two people] set up logwatch to email its results out, my monkeys don't have the skills necessary to identify unimportant data. This has caused all of them, and myself sadly, to set up email filters and ignore the whole thing until something goes "bang". It would be handy to have someone else tell them what is important, what is connected, and to suggest a few ways to resolve the issue (I could train them to research the solution first, ha!). My reading of the Splunk and Octopussy sites indicates that I still need to bring my own highly trained monkey to the party, which I am several years from having.

    Read the article

  • vSphere - datastore falling off a host

    - by Chadddada
    Recently we have been running the vCheck PowerShell script daily to help monitor our vSphere ESX 4.0 environment. One of the oddities we have been seeing is that some of the datastores on the SAN don't always show up on every host. Our hosts are connected redundantly, via FC, to Brocade FC switches, which then connect via fiber to our EMC AX4 SAN. While all the datastores are presented to each host we have, and the hosts see them initially, the datastores sometimes seem to fall off and are no longer visible. It's easy enough to rescan for datastores and add them back to the hosts, but this seems to be an error. Has anyone else seen this, or does anyone know why it may be happening? Response to questions:
    1. Is it always the same ESX servers that lose their connection? - Scott Warren
    No, this happens randomly on random hosts. If a VM is running on a particular host, and the VM's disks are on a SAN datastore, then that datastore won't disappear. It seems to happen if a host doesn't touch a datastore for a while and just forgets about it.
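
    A sketch of the rescan from the ESX 4.0 service console, as an alternative to clicking through the vSphere client (the adapter name is an assumption):

        esxcfg-rescan vmhba1      # rescan one FC adapter for devices/LUNs
        esxcfg-scsidevs -m        # list the VMFS volumes the host currently sees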

    Read the article

  • Davical + LDAP + NTLM

    - by slavizh
    I have set up a DAViCal server on CentOS, configured to use LDAP; the users authenticate to the DAViCal server with their usernames and passwords. I am using Lightning as the client software for calendaring. Using Lightning requires entering a username and password every time, so I decided to set up NTLM. I want my users who log in to the domain to use the calendar server through Lightning without entering a username and password. I've set up NTLM on the DAViCal server, but when a user tries to reach the calendar through Lightning, the server first asks for the NTLM username and password and then asks for the LDAP username and password. It becomes something like double authentication. The problem is that NTLM requires domain\username and password, while DAViCal through LDAP requires only username and password. So my questions are:
    - Is there a way to change something in DAViCal so that DAViCal through LDAP requires domain\username-and-password authentication? That way, the second authentication may proceed silently through NTLM, and the users will use Lightning without entering usernames and passwords.
    - Is there a way I can make this double authentication become one, using only NTLM?
    P.S. We have a Samba domain with an LDAP server, and our users use Thunderbird for their mail; I want to add Lightning too, so they will have a calendar service. But I don't want them to enter a username and password for the calendar every time they log in. I know they can save the password, but that is not an option for my organization.

    Read the article

  • ssh Prompts For Password After Account Unlocked - Despite ssh key?

    - by user1011471
    Here's what happened:
    1. I set up an ssh key so that user could ssh from A to B without a password.
    2. I got user's password wrong in some other context too many times, and user's account got locked out. (IT uses Active Directory here.)
    3. IT unlocked the account.
    4. Concurrent with the unlocking, a script was running, calling something like 'ssh user@B some-health-check-command' every 5 seconds or so - which seemed to work fine before I caused user to get locked out in step 2.
    5. IT reports user reliably gets locked out a short time after each unlock attempt.
    I thought the ssh key would allow 'ssh user@B some-command' as long as the account is not locked. But it behaves as if, when user gets unlocked, B suddenly asks for a password, and since my command repeatedly runs without supplying a password, the account gets locked out after 5 attempts: "Account cannot be accessed at this time. Please contact your system administrator." My questions are:
    - Is that what's happening? Or: what's happening?
    - More importantly: how can I reconfigure things such that my script doesn't cause problems? Can I accomplish what I want without having to install Expect? (I don't know if I have permission to do so.)
    Other notes: Not using ssh-agent currently. The ssh command is running on our Jenkins master, a Linux box. A and B are Mac OS X. user is managed in Active Directory and normally can sign into all three machines. Other than these things and the ssh key I set up, everything else has the default configuration as far as I know.
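
    Two standard OpenSSH options that might help diagnose and contain this:

        ssh -v user@B true                 # verbose output shows which auth methods are offered and tried
        ssh -o BatchMode=yes user@B true   # never prompt for a password; fail instead, so a looping
                                           # script cannot rack up failed password attempts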

    Read the article

  • HTTP header 304 and caching?

    - by Royi Namir
    Our company uses these settings (don't ask me why): for every request, they want a new request to the server. This is an intranet system which uses only IE. They defined it in the IE cache settings [screenshot not included]. We also have Windows authentication (NTLM) in IIS 7. I have 2 questions, please.
    Question #1: When the browser makes a request for a CSS file (leave the 401 response for now - this is how NTLM works), it requests it with an If-Modified-Since header. Why is it adding this header? How can I configure that? Why doesn't it follow the IE settings and try to download the file each time, as shown in the first picture?
    Question #2: The response (after the NTLM negotiation) was a 304 Not Modified, and I assume that's because we sent the request with the If-Modified-Since header. But there is a problem: it is actually telling the browser to load from my cache, yet I told it explicitly in the IE settings not to load from cache. What am I missing here? Thanks a lot.
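
    The exchange being described, sketched as raw HTTP (URL and date are made up):

        GET /styles/site.css HTTP/1.1
        Host: intranet.example.com
        If-Modified-Since: Tue, 01 May 2012 10:00:00 GMT

        HTTP/1.1 304 Not Modified

    A 304 carries no body; it is the server's way of saying "your cached copy is still current", which is why the browser then serves the file from its cache.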

    Read the article

  • Converting a Windows 2003 server

    - by Jim Bass
    We have a legacy database system based upon MS SQL running on Windows Server 2003. The client software will only run on Windows XP. We have recently had success converting a client into a virtual machine and running it under Fusion on Mac minis. So far, it is working incredibly well - so well, in fact, that we are now considering trying to convert the server to a virtual machine. This raises several questions, though:
    1. The server uses a RAID array. Does the VM virtualize the RAID array? I only ask because in my experience Windows products don't like it when you change core hardware.
    2. Is there any reason why running SQL Server on a virtual machine won't work? It will be up 24/7.
    3. Is there a different converter for servers?
    4. Will I have to track down the licensing for MS SQL and Server 2003, or will they come across OK?
    5. The company that designed the software is no longer in business. There is some fear that the software is somehow tied to the hardware configuration. We bought the hardware, but their engineers came out and configured the system. Will the virtual machine be able to spoof particular chipsets?
    Thanks! Jim Bass

    Read the article

  • Choice of an OS for a home ZFS NAS

    - by OlafM
    I am preparing a home NAS with an old Athlon 64 X2 3800+, 4 GB ECC RAM, an Asus M2V-MX motherboard, and a single 3 TB WDC Green (another one may be installed as a mirror in the future). It's the cheapest solution I found that includes ECC memory, and the higher energy consumption is offset by the lower (zero) cost of acquisition. The system will be used for:
    - music storage and streaming to other desktop computers;
    - storage of scanned dia slides (3-4k slides, 180 MB TIFF each, plus reduced-quality JPEG versions);
    - streaming these photos to a local iPad 2 (maybe the Plex app? not yet sure);
    - (one additional) remote backup via rsync/ssh or ZFS send/receive (sketched below).
    It will be controlled via remote ssh, maybe VNC, with no monitor attached. The absolute requirement is a reliable ZFS solution, plus the ability to easily install packages/software/virtual machines and to update remotely (I will be the admin and I don't live near the NAS). I have mainly three options:
    - NAS4Free/FreeNAS
    - OpenIndiana
    - Solaris Express 11 (yeah, yeah, I know the license requirements; I will write a Perl script on it to count it as a development machine).
    Problems: NAS4Free/FreeNAS (I tested only NAS4Free) requires an embedded installation for remote upgrading, but a full install for easy addition of software packages. Since I need at least AirVideo Server (Linux/Win) and the Plex app (Win/Linux) to stream the photos and some videos to the iPad (they both require VirtualBox), but I cannot be there to install updates, NAS4Free/FreeNAS are excluded. http://www.nas4free.org/general_information.html explains the issue: embedded can be remotely updated, full cannot. Solaris has another advantage: the CrashPlan client supports Solaris and I'm already using it for other backups. I would like to leave that option open, even though I will probably be doing backups through ZFS send/receive. NexentaStor was left out because zfs send/receive are not included in the free version.
    The question is now Solaris 11 Express versus OpenIndiana. To ease the management, I will be using http://www.napp-it.org Which one would you suggest, and why? I found lots of information and it's difficult for me to decide. I think (from the napp-it manual) that Solaris has some additional options for SMB shares, but are they really needed at home? I think I won't even use ACLs, since normal unix-style permissions are enough. OpenIndiana maybe has more frequent updates (Solaris offers only security updates between releases), but again, do I need them? I don't think so. Moreover, this is a NAS that has to work and nothing else; I cannot risk having problems that require me to access the server. Isn't OpenIndiana a bit more... cutting edge (in the Solaris world)? I'm just asking; no need to focus on this for the answer :-) I would limit myself to these two options (SE11.1/OI) also because I will be making a NAS for myself in the future (where high performance with Mac shares is also required) and Solaris has kernel support for AFP. I will use this server to gather experience as well. After this long question, thanks in advance! If you need additional info, let me know and I will update this post.
    UPDATES: Given the first answers, I will strongly suggest that the person paying for the hardware insert a second HD. Better 2x2TB than 1x3TB (3 TB is oversized anyway). I was trying to keep the initial costs down to spread them over a longer period, but it's better to have something good from the beginning.
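
    A sketch of the incremental remote backup mentioned above, using ZFS snapshots (pool, dataset, and host names are illustrative):

        zfs snapshot tank/photos@2012-06-01
        # first run: send the full stream; later runs: send only the delta between snapshots
        zfs send -i tank/photos@2012-05-01 tank/photos@2012-06-01 | \
          ssh backuphost zfs receive -F backup/photos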

    Read the article

  • How to embed/hardcode SRT subtitles into mp4 videos with VLC?

    - by Jens Bannmann
    I'm looking for a way to "burn in" (render/re-embed/hardcode) subtitles from an SRT file into an MP4 video with VLC. But no matter what options I use, it never works properly. I get a file that plays video way too fast (audio is normal), or one that plays normally but actually does not have embedded subtitles. Also, with some options (like the ones below) the result does not play in QuickTime, only in VLC. So the main question is: how can I make this work in VLC? Secondary questions are:
    - How do I decide which options I should set?
    - Which settings are best if I want to leave the file bitrate etc. as unchanged as possible and only embed the subtitles? It seems I cannot leave the fields empty or Video/Audio unchecked, so I guess I would first need to figure out the original audio and video bitrates.
    - What do the "Scale" and "Channels" options mean?
    ... none of which are answered in the VLC documentation. For example, this is one set of options I used in the "Advanced Open File…" dialog:

        Advanced Open File… myFileName.mp4
        [ ] Treat as a pipe rather than as a file
        [x] Load subtitles file: mySubtitleFileName.srt
        [ ] Play another media synchronously
        [x] Streaming/Saving
        Streaming and Transcoding Options
        [ ] Display the stream locally
        (o) File [outputFileName.mp4]
        [ ] Dump raw input
        Encapsulation Method: (MPEG 4)
        Transcoding options
        [x] Video (mp4v)  Bitrate (kb/s) [256]  Scale [1]
        [x] Audio (mp3)   Bitrate (kb/s) [128]  Channels [1]
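
    The equivalent operation from the command line might look like this; the soverlay flag is what actually burns the subtitles into the picture (bitrates and filenames are illustrative, and option names can vary between VLC versions):

        vlc myFileName.mp4 --sub-file=mySubtitleFileName.srt \
          --sout '#transcode{vcodec=mp4v,vb=1024,acodec=mp3,ab=128,soverlay}:std{access=file,mux=mp4,dst=outputFileName.mp4}' \
          vlc://quit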

    Read the article

  • Windows 7 access denied to executables... by what?

    - by stijn
    Ever since I started using Windows 7 this problem has been bothering me. From time to time I see similar questions popping up on misc forums, but never did I see an answer. Here are two scenarios that nearly always reproduce it.
    The Explorer way:
    1. With Explorer, navigate to a directory containing at least one exe file.
    2. Go one directory up.
    3. Immediately delete the directory just navigated to.
    This yields a "Folder Access Denied" dialog stating "You need permission to perform this action. You require permission from Administrators to make changes to this folder", with the buttons Try Again and Cancel. Hitting Try Again never works immediately; waiting a minute or so and then clicking it again does work. Note: if in step 2 I wait a minute or more before going up one directory, the problem does not occur and the folder can be deleted.
    The Visual Studio way:
    1. Build a project producing an exe file.
    2. Run the executable, then close it immediately.
    3. Build the project again (by changing a single character in a source file, for example).
    This yields "fatal error LNK1168: cannot open /path/to/the.exe for writing". Note: if in step 2 I wait a minute or more before building again, the problem does not occur.
    Some specs: happens on both Windows 7 32-bit and 64-bit, with VS2008/2010/2011; happens on 3 different machines; I do not have a virus scanner of any kind; I do have a bunch of services disabled, but nothing that prevents Windows from running normally, and UAC is disabled as well; happens on any type of disc; I always use a user account that is in the Administrators group.
    Obviously both scenarios are very similar and extremely reproducible. So I figured some process must have the file open for some reason, and release it again later. However, using Sysinternals' handle -a, the exe file in question never shows up. (That is the correct way to use handle, right?) So while Explorer/VS are reporting they cannot access the file, handle.exe says it's not in use anywhere. This leaves me rather clueless, so I'm wondering if someone can come up with a solution: why does this happen, and how do I solve it?
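
    For reference, the Sysinternals invocations one might try from an elevated prompt (the -u flag adds the owning user name, which can reveal a service holding the handle; the file name here is illustrative):

        handle.exe -a -u the.exe
        handle.exe -a -u \path\to\directory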

    Read the article

  • Setting up VPN with Snow Leopard Server and Linksys router

    - by SueP
    I'd like to get VPN going so I can log in to the office securely from home. I'm using Snow Leopard machines everywhere, and currently have AirPort Extremes set up at home and at the office. I have a Mac mini with Snow Leopard Server that I'm going to move to the office to act as my server. I just bought a Linksys 4-port router because it says it does VPN (model RVS4000). My problem is, I don't have a clue how to set this thing up, and the more reading I do, the more confused I get. Do I need two of these routers, one at each end? My laptop and iPad claim they can do VPN, so I was assuming I only needed one VPN router? At this point, I literally don't know what questions to ask, or where to plug this thing in. Presumably between the modem and the AirPort, but...? If somebody can walk me through some really basic setup, I'd be very grateful. Right now, I feel like going outside and screaming for a while. But that might attract the local cougar, and after the prints I saw on the arena this afternoon, I don't want to draw its attention. :-)

    Read the article

  • Tables in the SQL Server "master" database, will they cause problems?

    - by pepoluan
    Folks, please be kind to me... I'm just an 'accidental' DBA: our DBA resigned, so I'm a total newbie at DBA work. You see, I have this application, "ESET Remote Administration Server" (ERAS), that stores its logs and analysis in (originally) a local Access database. The decision was made to migrate its database to a SQL Server 2008 R2 machine. ESET (the maker of the software) helpfully provides tools to perform such a migration; unfortunately, being the DBA neophyte that I am, I didn't realize that I had to first create my own database (on the SQL Server side) and assign that database as the 'default' database for ERAS' ODBC connection. Now the migration tool has successfully created a whole bunch of tables inside the "master" database. My questions:
    - Should I leave things as they are, or should I re-migrate the ERAS database to a different database?
    - If you suggest I perform a re-migration, my plan is to (1) create a new instance, (2) create a new database within the new instance, (3) create a new ODBC System DSN on the ERAS server pointing to the new DB in step 2, and (4) use ESET's migration tool to migrate from the current DSN to the new DSN. Do you think I missed a step there?
    Thanks beforehand for any guidance.
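
    A T-SQL sketch for taking stock of what the tool created before deciding (read-only; run against the master database):

        USE master;
        SELECT name, create_date
        FROM sys.tables
        ORDER BY create_date DESC;   -- lists user tables; genuine system objects live in the sys.* views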

    Read the article

  • RPC Server Unavailable on Hyper-V cluster when moving resources after the host adapter has failed

    - by Doug Luxem
    On a Windows 2008 R2 SP1 cluster running Hyper-V, a node lost network connectivity on its primary host interface. The interface was rapidly flapping up and down, which was later determined to be caused by a faulty switch port. As this was a clustered server, the host interface was not fault-tolerant (seeing as how the whole server was fault-tolerant), so connectivity to the host was going up and down. The Hyper-V guests were completely unaffected by the network outage, as they used a dedicated trunk on the server separate from the host interface. Additionally, the dedicated interfaces for the cluster and live migration networks were fine. In order to diagnose the server, I tried to move all resources (Hyper-V guests) to other nodes through Failover Cluster Manager. These moves failed with the error "RPC Server Unavailable". The only way to move resources was by shutting down the guests, stopping the cluster service on node A, allowing other nodes to take ownership of the resources, and restarting the guests. A few other notes:
    - All nodes have Client for Microsoft Networks and File & Printer Sharing enabled on the cluster and LM networks.
    - Node A was accessible over the cluster and LM networks from other nodes (these are private, cluster-only networks); pingable, CIFS, etc.
    - Accessing \\NODEA is done over the host adapters, as you would expect in this case, and is the reason for the RPC Server Unavailable error with that adapter being down.
    My questions here are:
    - Is there a way to still use live migration in a failure scenario such as this, to avoid shutting down the Hyper-V guests?
    - How can the network be reconfigured in the future so that the cluster service attempts to use the cluster and/or live migration networks to issue the RPC requests?
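
    A PowerShell sketch for reviewing how the cluster networks are prioritized (the FailoverClusters module ships with 2008 R2; Role 1 = cluster traffic only, 3 = cluster and client; the network name is an assumption):

        Import-Module FailoverClusters
        Get-ClusterNetwork | Format-Table Name, Role, Address
        # e.g. restrict a network to internal cluster traffic only:
        (Get-ClusterNetwork "Live Migration").Role = 1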

    Read the article
