Search Results


  • Hyper-V VM Lab + RRAS + RDP

    - by Dennis Evans
    My background is primarily .NET development with some system administration skills. I'm trying to set up a VM lab to test the system applications I'm developing, but I've only ever done system administration in environments that were already set up; I've never built my own. My current setup: a Server 2008 R2 Hyper-V host on a physical machine (only that role enabled) with two NICs. The first NIC is dedicated to management, with a DHCP address from the company's network. The second NIC is dedicated to the RRAS VM, also with a DHCP address from the company's network. The RRAS VM has two NICs: one is a virtual, internal-only private NIC with a static entry; the other is the physical NIC mentioned above. I've joined it to my VMLab.net internal domain. My Active Directory domain controller server (ADCT) also runs DNS, DHCP, and Certificate Services, which I'm familiar with but don't understand completely. RRAS is already set up with NAT to provide the private internal network with Internet access. What I would like to do is be able to RDP into the servers/computers on the VMLab.net domain from my computer. Do I need to add the Remote Desktop Services role and enable the Remote Desktop Gateway service on RRAS in order to do this, or is there a way to set up port forwarding on RRAS to just allow a direct connection to the internal servers... or both? What would the best practices be here? Network diagram: http://i.stack.imgur.com/4qfnk.png
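
    If plain port forwarding turns out to be enough, one lightweight option on Server 2008 R2 is netsh's portproxy, which relays a TCP port on the RRAS VM's external interface to an internal host. A minimal sketch, assuming a hypothetical external address of 10.0.0.50 and an internal server at 192.168.10.10 (substitute your own addresses and ports):

        :: Relay RDP arriving on the external address to an internal server
        netsh interface portproxy add v4tov4 listenaddress=10.0.0.50 listenport=3389 connectaddress=192.168.10.10 connectport=3389

        :: Confirm the mapping
        netsh interface portproxy show all

    Note that this needs one external port per internal machine, so if you want to reach many hosts, the Remote Desktop Gateway role scales better and adds SSL on top.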

  • Managing an application across multiple servers, or PXE vs cfEngine/Chef/Puppet

    - by matt
    We have an application that is running on a few boxes (5 or so, and this will grow). The hardware is identical in all the machines, and ideally the software would be as well. I have been managing them by hand up until now and don't want to anymore (static IP addresses, disabling unnecessary services, installing required packages...). Can anyone balance the pros and cons of the following options, or suggest something more intelligent? 1: Individually install CentOS on all the boxes and manage the configs with Chef/cfengine/Puppet. This would be good, as I have wanted an excuse to learn to use one of these applications, but I don't know if this is actually the best solution. 2: Make one box perfect and image it. Serve the image over PXE, and whenever I want to make modifications I can just reboot the boxes from a new image. How do cluster guys normally handle things like having MAC addresses in the /etc/sysconfig/network-scripts/ifcfg* files? We use InfiniBand as well, and it also refuses to start if the hwaddr is wrong. Can these be correctly generated at boot (see the sketch below)? I'm leaning towards the PXE solution, but I think monitoring with munin or nagios will be a little more complicated with it. Anyone have experience with this type of problem? All the servers have SSDs in them and are fast and powerful. Thanks, matt.
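
    On the generated-at-boot question: one common trick with a shared PXE image is an early init script that rewrites the HWADDR= line from the real hardware before the network starts, so the same image boots on any box. A minimal sketch, assuming CentOS-style ifcfg files and eth0 (interface names and paths are illustrative):

        #!/bin/sh
        # Rewrite HWADDR in ifcfg-eth0 from the actual NIC address so one
        # PXE image can boot on machines with different MAC addresses.
        IF=eth0
        CFG=/etc/sysconfig/network-scripts/ifcfg-$IF
        MAC=$(cat /sys/class/net/$IF/address)
        sed -i "s/^HWADDR=.*/HWADDR=$MAC/" "$CFG"

    Hook it in before the network service starts (e.g. from rc.sysinit or an init script ordered ahead of network); the same pattern applies to the InfiniBand ifcfg file.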

  • Convert a cassette tape recording to digital format

    - by Optimal Solutions
    Has anyone been successful with transferring audio cassette tape recordings to a digital format? I would like to preserve old cassette tape recordings of my grandparents in some digital format: MP3, WAV, etc. The quality of the tapes is mediocre; I think I can handle the quality restoration, but getting the audio from tape to digital is my question. Below is a list of the hardware that I can work with. Cassette deck: I have a Technics stereo cassette deck, model RS-B12. It has separate left and right IN and OUT RCA-type jacks on the back; on the front it has a headphone phono jack, plus left and right mic input phono jacks. On the computer side: I have a Windows Vista PC with no additional software other than what came with the machine from Costco (no sound editing software that I can see) and no separate sound card; on the front panel there is a mini-phono mic input jack, and there are several different types of in/out mini-phono jacks on the back, in addition to USB and FireWire. I also have access to a new (2009) iMac with a mini-phono input jack for a powered mic or other audio source, and GarageBand, which came with the computer; it also has USB and FireWire. What are my options for getting these cassette recordings into a digital format? What's the best format? What sort of wires would I need, and will I want to use the USB or FireWire, or can I simply use the audio inputs on the PC (or Mac) to receive the audio stream?
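
    Once the deck's line-out is cabled to the computer's line-in (RCA to 3.5mm mini-phono is the usual lead), any recorder will do; GarageBand covers the iMac, and as one command-line illustration the free SoX tool can capture from the default input. A sketch only; device selection and input levels vary by machine:

        # Record the default input to an uncompressed stereo WAV master
        rec -c 2 -r 44100 -b 16 side_a.wav

    A common approach is to keep the uncompressed WAV as the archival master and encode MP3 copies from it after restoration.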

  • Options for small Windows network setup without dedicated server?

    - by Mitch
    I'm very weak on networking and hope someone can point me in the right direction. I have written some Windows client/server software which incorporates a database located on a Windows server. I have a test installation running at a customer's office where the server has a static IP address, and in this case it's easy for the clients to access the database because of the fixed IP address. Also, customers with network servers generally have specialist support staff to set up my software, so it's not such a problem for me. However, I also need to offer the software to customers who have small offices with fewer than 10 PCs and no dedicated network server. In this case I want the customer to be able to nominate one PC as the database "server", install my software, and have the clients access it. But in this situation I believe the "server" PC may not have a fixed IP address. Q1: What is the best way to set this up simply and make it work? Can I reliably reference the "server" by using its name, or is there a way to assign dummy fixed IP addresses? Ideally this needs to be workable on small networks running a mixture of XP/Vista/Windows 7, as my target market may well have mixed OSes etc. I guess this would be akin to home networking? Many thanks, Mitch
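
    On a single small subnet, Windows name resolution (NetBIOS broadcasts) usually lets clients reach the machine by name, so a connection string using the computer name is often enough. If you want a guaranteed address instead, the nominated "server" PC can be given a static IP outside the router's DHCP pool (or a DHCP reservation in the router). As a sketch, this can even be scripted on XP through Windows 7 from an elevated prompt; the addresses and adapter name below are examples only:

        :: Pin the database PC to a fixed address outside the DHCP pool
        netsh interface ip set address name="Local Area Connection" static 192.168.1.200 255.255.255.0 192.168.1.1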

  • Multiple static WAN IP addresses to single LAN subnet

    - by Jessy Houle
    Below is my home network topology. I currently have 5 static IP addresses, 3 of which are in use by 3 routers. These routers in turn subnet internal networks and port forward. I use my SSL VPN appliance to remote home from work or on the road; at this point I can remotely administer my Windows server. I know the network is set up wrong; I was matching existing hardware the best I knew how. http://storage.jessyhoule.com.s3.amazonaws.com/network_topology.jpg OK, this said, here is the problem... One of my websites on my Windows server now needs to be secure (SSL using port 443); however, I'm already port forwarding port 443 to my VPN appliance. Furthermore, if I'm going to have to reconfigure the network, I would really like to be able to use the SSL VPN to remotely administer all machines. I mentioned this to a friend of mine, who said that what I was looking for was a firewall, explaining that a firewall would take in multiple static (WAN) IP addresses and still allow all internal devices to be on the same network. So, basically, I could give my SSL VPN appliance its very own static (WAN) IP address, and yet have it on the same internal network (192.168.1.x) as all my other devices. The first question is: does this sound right? Secondly, would you suggest anything different? And, finally, what is the cheapest way to do this? I have started down the road of downloading/installing Untangle and Smoothwall to see if they will do the job, hoping they take multiple static (WAN) IP addresses. Thank you in advance for your answers. -Jessy Houle
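
    What the friend describes is usually called one-to-one NAT: the firewall holds all of the static WAN addresses on its external interface and maps each one to an internal host on the shared 192.168.1.x network. Untangle and Smoothwall both wrap this in their UIs; on a plain Linux firewall the underlying rules look roughly like this sketch (all addresses are examples):

        # Bind an extra WAN address to the external interface
        ip addr add 203.0.113.12/29 dev eth0

        # Send everything arriving on that WAN address to the VPN appliance
        iptables -t nat -A PREROUTING -d 203.0.113.12 -j DNAT --to-destination 192.168.1.12

        # Make the appliance's outbound traffic leave from the same WAN address
        iptables -t nat -A POSTROUTING -s 192.168.1.12 -j SNAT --to-source 203.0.113.12

    This frees port 443 on the main address for the web server while the VPN appliance keeps 443 on its own WAN address.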

  • Which DNS settings are used when setting up your server?

    - by Saif Bechan
    I have a server and want to run my own name server. I have it set up already and it works now, but I do not know where the exact settings are stored. On my server I use Plesk, and when I edit DNS settings there I think they are stored in named.conf; BIND's named daemon is installed on the server. I also have a panel from my registrar, which is separate from my server. In both places I can add the normal MX, A, CNAME, etc. records. Now, where is the best place to put these settings? Currently I have the same records in both places, on the server and in the registrar's panel. Am I correct to just add all the records in the registrar's panel, remove everything from within Plesk, and simply not run DNS on my server, because it is already handled in the registrar's panel? Or should I add the records in both places?
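
    Which copy of the records actually matters is decided by the NS delegation for the domain: resolvers only ever ask the name servers the registry delegates to, and the other copy is simply never consulted. A quick way to check is dig (example names below; substitute your domain and hosts):

        # Which name servers is the world actually sent to?
        dig NS example.com +short

        # Compare the answers from the registrar's DNS and from your own BIND
        dig @ns1.registrar-dns.example example.com MX +short
        dig @your.server.ip.address example.com MX +short

    If the NS records point at the registrar's servers, the Plesk/BIND zone is dead weight and can be removed; if you want the server to be authoritative instead, the delegation has to be changed at the registrar.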

  • Monitoring whether Google Apps email address is reachable

    - by Acorn
    Backstory: I bungled things a bit the other day and inadvertently deleted the DNS overrides for my domain, including the MX records that point to Google Apps, causing 2 days of lost emails. What I want: I want to be able to monitor the email address/account so that I can be alerted if for any reason something has gone wrong and emails aren't arriving. Thoughts: I was thinking there might be a way to test the email without having to send an actual message. Does this exist? It wouldn't help if the DNS had reset itself to a different mail server, would it? The other idea was sending periodic emails to check that the address is working. How would you automate this? You'd need to somehow check that the email had arrived, as well as checking whether it had bounced. Are there any scripts that exist that would do something like this? What would be the best method? Maybe a combination of checking that the MX records for the domain are set to what they're supposed to be set to, and sending automatic test emails to check that things are still functioning on the Google Apps end?
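
    The MX half can be automated without sending any mail: a cron job can compare the domain's published MX set against the expected Google Apps servers and alert on any drift, which is exactly the failure that caused the outage. A minimal sketch (domain and alert address are placeholders):

        #!/bin/sh
        # Alert if the MX records for the domain stop pointing at Google.
        DOMAIN=example.com
        MX=$(dig +short MX "$DOMAIN" | tr 'A-Z' 'a-z')
        case "$MX" in
          *aspmx.l.google.com*) : ;;   # expected Google MX present, do nothing
          *) echo "MX for $DOMAIN changed: $MX" | mail -s "MX alert for $DOMAIN" [email protected] ;;
        esac

    For the end-to-end half (message actually delivered, not bounced), the usual pattern is a second script that sends a timestamped message to the monitored address and then checks, e.g. via IMAP, that it arrived within some window.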

  • How do I install git/git-svn on RHEL5 with a custom perl install?

    - by kbosak
    I've had nothing but trouble trying to install Git on RHEL5. First I tried from source, but ran into several issues with installing the docs: there appeared to be missing libs and such for parsing XML that I couldn't figure out how to get installed and recognized. Then I tried using the EPEL yum repository and was able to install git and its docs, but now git-svn is not working: it complains about not finding the Perl modules Git.pm and SVN/Core.pm, and when I set the GITPERLLIB environment variable to the location of those libs, it segfaults. Some background: RHEL5 came with Perl 5.8.8, but we wanted to use 5.10, so I installed that from source (to a custom location). Someone then symlinked the system perl binary to this newer version of Perl to make sure nobody uses the wrong version. Each developer also has their own build of Perl. So I'm wondering what's the best way to install Git on this system and have both the docs and git-svn working correctly for each user. Unfortunately I'm a developer and not as good with system administration, so take it easy on me.
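
    The segfault is consistent with mixing Perls: the EPEL git-svn package is presumably built against the stock 5.8.8 and its subversion-perl bindings, so pointing GITPERLLIB at modules compiled for a different Perl loads incompatible binary code. One way out is to build Git from source against the Perl you actually want; the Makefile takes the interpreter path as a variable. A sketch, assuming the custom Perl lives at /opt/perl510:

        # Build and install Git against a specific Perl interpreter
        make prefix=/usr/local/git PERL_PATH=/opt/perl510/bin/perl all
        make prefix=/usr/local/git PERL_PATH=/opt/perl510/bin/perl install

    The harder half is that git-svn also needs the SVN::Core bindings (built from the Subversion sources) for that same Perl; with one shared team Perl this is done once, but per-developer Perls would each need their own bindings.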

  • Monospace font which supports at least Korean hangul and the Georgian alphabet?

    - by hippietrail
    Being both a language enthusiast and a programmer, I often find myself doing programming or text processing involving foreign alphabets and scripts. One annoyance, however, is that CJK fonts (those which support Chinese, Japanese, and/or Korean) usually only contain glyphs for Latin, Greek, and Cyrillic at best; often the Asian glyphs will be beautiful but the other glyphs can be quite ugly. Just as often, text editors only let you choose a single font, rather than one for CJKV and another for everything else, each used to render the appropriate characters. Korean is one of the languages I'm most interested in currently. I only need hangul/hangeul for monospaced editing; hanja isn't common enough to be a problem. Another of the languages I'm currently involved in is Georgian, which has its own alphabet that is a little exotic but has pretty good support in common fonts on Windows and *nix. But I have so far been unable to find a font with good Korean glyphs and also Georgian glyphs. My editor of choice is gVim, so an answer telling me how to set it to use two fonts together would be just as good. Currently I'm using it mostly under Windows 7, so a vim-specific solution would be needed rather than a *nix-specific solution.
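
    gVim can in fact split the job across two fonts: 'guifont' draws normal-width glyphs and 'guifontwide' draws double-width (CJK) glyphs, so hangul can come from a Korean font while Latin, Cyrillic, and Georgian come from another. A sketch for _vimrc on Windows; the two font names are examples, so substitute installed fonts and check their Georgian coverage:

        " Wide-glyph routing needs a Unicode-aware session
        set encoding=utf-8
        " Normal-width glyphs: Latin, Greek, Cyrillic, Georgian...
        set guifont=DejaVu_Sans_Mono:h11
        " Double-width glyphs: hangul and other CJK
        set guifontwide=GulimChe:h11

    GulimChe here stands in for any Korean monospace font installed on the system.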

  • What to look for in a reliable backup hard disk?

    - by Senthil
    I want to buy an internal hard disk and use a docking station with it for backing up important data. The size will be around 500GB to 1TB. I have a budget, and several models fit into it; so far they only seem to vary in size, speed, and brand, and these are the only things I can compare from the specs. I guess asking which brand is best is completely subjective, so I won't do that. I want my disk to have a long life and be reliable; it doesn't matter if it is somewhat slow. Size: should I go for the one with the highest capacity within my budget, or will higher density cause problems? Should I go for a moderately sized one instead? Does the number of platters have an impact? Speed: I do not want high performance; I want the disk to be reliable and last long, and I am definitely not going to choose the expensive 10,000 rpm ones. Should I go for 5400 or 7200 rpm, and do these numbers affect longevity and reliability? Are there any other technical and objective factors that I should look for?
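
    One objective factor that applies to any model: the drive's SMART data. Whatever you buy, a burn-in test followed by a look at the reallocated and pending sector counts catches early failures before the disk holds your only backup. A sketch with smartmontools, which runs on Windows and *nix alike (the device name is an example):

        # Run the drive's long self-test, then review the health attributes
        smartctl -t long /dev/sdb
        smartctl -a /dev/sdb

    Non-zero Reallocated_Sector_Ct or Current_Pending_Sector values on a new drive are a reason to return it.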

  • Managing disk in a VM

    - by dst
    I'm replacing my two old rack servers with a new one that has plenty of power to take over the functionality of my current servers. The server is a 4U rack mount with 16 3.5" SAS drive bays, two 2.5" bays, a Xeon E3-1230v2 CPU, and 32GB of ECC RAM. My issue is the following: I would like to have a FreeBSD file server with ZFS managing the disks, but I also need other VMs, e.g. a shell/git server, a mail server, etc. I'm wondering how to deal with two issues. (1) I want ZFS to fully manage the disks, so I'm not using any hardware RAID; should I pass the SAS controller directly to the FreeBSD system as a PCI passthrough device? (2) I want to maximize the reliability of the setup; on what disks should I install the hypervisor and keep the VM system disks? For (2) I have the option of having a RAID setup on the SAS controller and using that as a system disk to store the hypervisor as well as the VM images; however, this makes PCI passthrough to the file server impossible. Another option is using the two 2.5" bays. In terms of reliability, how do SSDs compare to e.g. WD RE4 disks? Would it make sense to have two SSDs in software RAID as boot disks for the hypervisor, or should I just go with e.g. WD RE4 disks in a software RAID setup? I also need to think about where to store the mail for the mail server, but this could be done over NFS between the VMs. BTW, this is for home use, so the load is not really that big. What I'm looking for is best practices for splitting up a server.
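
    For reference, once the controller is passed through, the FreeBSD guest sees the raw da* devices and building the pool is straightforward; a sketch with a mirrored pair plus a hot spare (device names and dataset layout are examples):

        # Mirrored pool from two raw SAS disks, with a third as hot spare
        zpool create tank mirror da0 da1 spare da2

        # A dataset for the mail store, exported to the mail VM over NFS
        zfs create -o compression=lz4 tank/mail
        zpool status tank

    This is also the argument for passthrough: ZFS gets the raw disks, sees real errors, and handles redundancy itself, which is lost if a hardware RAID volume sits in between.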

  • Configuring DNS and IIS for multiple domains on a single server

    - by RichardS
    I might be overcomplicating this, but... I am hosting several websites, and DNS for their domains, on a single server: domain1.net, domain1.com, domain2.net. There are three things I'm trying to work out whether to achieve by DNS, by IIS host names (bindings), or by IIS redirects. 1. Where I have domain1.net and domain1.com, I want everything from both (all emails and web requests) to just point to domain1.net. Can I do this at the DNS level, or do I have to set up the email addresses as forwarders on the email server and the domain as a host name in IIS? For example: [email protected], [email protected], www.domain1.com, www.domain1.net. 2. I want to make sure that requests for domain1.net and www.domain1.net both resolve to the same place. Should this be done with DNS, with multiple host names, or with IIS redirects? 3. If I then want to have one webmail site serving all of the domains (webmail.domain1.net, webmail.domain2.net), is it best to do this with a CNAME in DNS or with host headers in IIS?
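
    A useful way to split the question: DNS can only get every name to the right IP; which site then answers is decided by the IIS bindings (host headers), and whether the visitor ends up on domain1.net is decided by a redirect. A sketch of the DNS half as zone fragments (addresses and names are illustrative):

        ; zone domain1.net - bare domain, www and webmail all reach the server
        @        IN  A      203.0.113.10
        www      IN  CNAME  @
        webmail  IN  CNAME  @

        ; zone domain1.com - alias the web names onto the .net site
        @        IN  A      203.0.113.10
        www      IN  CNAME  www.domain1.net.

    Mail is the exception: DNS cannot rewrite recipient addresses, so domain1.com still needs its own MX record, and the mail server has to accept mail for domain1.com and forward it to the .net mailboxes. For (3), one CNAME per domain plus a single webmail site in IIS with a host-header binding for each name is the usual arrangement.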

  • Install and enforce a scheduled task across a Windows domain

    - by Ricket
    We have a small domain of about 70 Windows computers (XP and 7). We want to schedule a command (an update mechanism) to run on all computers periodically, and we want the task to run regardless of the computer's connection to our network (i.e. the task should run even on a laptop that isn't connected to our VPN). We have a Microsoft System Center Essentials 2010 server, so that might come in handy. The options I see are these. Option 1: do it completely manually; install the scheduled task by hand or remotely using psexec (and the at command?) for each computer in our network, and enforce that newly imaged computers have the task installed before being deployed to the employee, or build the task into the image. That has a high initial cost (having to do this for each of 70 computers), but building it into the image might work... though there is some maintenance in making sure the task is added to everything, and I fear that a year or two down the road we will have forgotten about it, or gotten sloppy, or hired new IT employees who miss this step, and some computers won't have the task. Option 2: have one of our servers run a script that loops through all computers and psexec's the command on each computer in the network; but that would only run on powered-on, connected computers, so this solution wouldn't work. I suspect SCE could do something like this too, but again it is not a good solution. Neither of these is ideal, and I'm certain there is a better way to do it... right? What is the best way to accomplish this task?
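
    For the push-once variant, schtasks can create the task remotely, so the initial rollout over existing machines is a short loop; a batch-file sketch with placeholder names and credentials:

        :: push-task.cmd - create the update task on every host in computers.txt
        for /f %%H in (computers.txt) do (
            schtasks /Create /S %%H /U DOMAIN\admin /P secret /TN "NightlyUpdate" /TR "C:\tools\update.cmd" /SC DAILY /ST 03:00
        )

    The weakness the question anticipates still applies (offline machines are missed), which is why a domain-level mechanism such as a Group Policy startup script that installs the task locally tends to be the durable fix: every machine converges on the task the next time it touches the domain, including newly imaged ones.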

  • How can I recover a Fedora 12 installation that is showing signs of disk errors?

    - by Bob Cross
    I am currently overseas (i.e., very far from my normal library of tools), and my primary machine, which would normally act as the data server in the performance test we're trying to run, is failing to boot to Fedora 12 properly. This is a machine that, as of yesterday, was booting fine. This morning, however, very strange portions of the boot process were complaining with messages such as "unexpected 0x0 in rpcbind" and "bad file descriptor" (I don't have the errors in front of me; I scavenged a Windows installation to get onto Server Fault). Eventually the boot hung for a long time at the NFS service and then brought up what looked like the KDE login screen, but neither the mouse nor keyboard functioned. In olden days, I would try to get to a point where I could manage to run fsck and pray that the bad sectors would come back into alignment just long enough for me to scrape the critical data off of the machine. However, now that we live in the future, it seems like our options in situations like this should be a little more varied. Is there a way to recover a Fedora 12 installation with bad disk sectors that won't boot properly? For completeness, I am comfortable working with bootable recovery distros-on-CD and such, but I don't know which one is likely to work best with modern Fedora. In the absence of guidance, I'm frantically torrenting the Fedora 12 Live CD and DVD, hoping to try rescue mode before tomorrow morning.
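
    If the disk really is developing bad sectors, the safest sequence from any live/rescue CD is image first, repair the copy: GNU ddrescue reads everything it can and retries the bad spots gently, instead of stressing the dying drive the way a repeated fsck can. A sketch, assuming the data lives on /dev/sda1 and an external drive is mounted at /mnt/ext (adjust device names to the real layout):

        # First pass: grab everything readable, skipping bad areas quickly
        ddrescue -n /dev/sda1 /mnt/ext/sda1.img /mnt/ext/sda1.log

        # Second pass: retry the bad areas a few more times
        ddrescue -r3 /dev/sda1 /mnt/ext/sda1.img /mnt/ext/sda1.log

        # Repair and mount the image, not the failing disk
        e2fsck -fy /mnt/ext/sda1.img

    The Fedora DVD's rescue mode or the Live CD both work for this; note that a default Fedora 12 install usually puts the root filesystem inside LVM, so the volume to image may be an LVM logical volume rather than a plain sdaN partition.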

  • SQL Server replication and load balancing

    - by Ahmed Galal
    I'm running a web service that serves a mobile app on IIS 8 and SQL Server 2014. My service is under massive load and I'm trying to improve performance; most of the load is happening on SQL. I don't think I have a bottleneck as such: my processor and RAM are up to the max, I think my code is not that bad, and I'm already using memcached and other things to avoid hitting SQL too much. I know I can always upgrade the server hardware, but I already have a spare server that I would like to use, so I was thinking of splitting the SQL load across the 2 servers. What I was thinking of is to set up replication on the other server and do some load balancing, but I'm not sure how to do the load balancing. I know I can adjust my code to hit the other server for some queries, but I was hoping to find a solution that avoids changing my code. So my question is: what are the ways of doing load balancing between 2 SQL servers? I would appreciate suggestions, best practices, or some directions. Thanks.
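
    For what it's worth, on SQL Server 2014 the built-in route to read scale-out is an AlwaysOn Availability Group (Enterprise edition) with a readable secondary: the connection string, rather than the query code, decides what lands on the replica. A sketch of the two strings, with a placeholder listener name:

        # Writes and transactional reads go to the primary
        Server=tcp:ag-listener,1433;Database=AppDb;Integrated Security=SSPI

        # Connections marked read-only are routed to a readable secondary
        Server=tcp:ag-listener,1433;Database=AppDb;Integrated Security=SSPI;ApplicationIntent=ReadOnly

    It isn't entirely code-free (some connections must be pointed at the read-only string, e.g. via a second connection-string entry in config), but no individual queries change. Fully transparent load balancing of a single writable SQL Server database doesn't really exist; replication plus app-level routing carries the same caveat.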

  • Dynamic DNS Updates with Wireless and Wired interfaces

    - by Phaedrus
    We have offices full of Windows and Mac users who obtain IP addresses from a Windows DHCP server, which in turn updates dynamic DNS entries. We are noticing major inconsistencies in the entries, and have found that the problem occurs more on Macs than on Windows, and even more when users frequently switch between the wired and wireless adapters, which makes sense, as this sequence occurs: the user enables the wired adapter, which registers a proper DNS entry; the user enables the wireless adapter, which registers a second proper DNS entry; the user then switches off wireless manually, and the second entry improperly remains until scavenging. Our help desk folks rely heavily (maybe more than they should) on the dynamic entries as part of their business process; for example, a user submits a help desk ticket, and the staff member expects to be able to remote desktop to their machine by hostname, which is hyperlinked in the help desk ticketing app. We have implemented multiple solutions and band-aids for different symptoms of the problem, such as using DNS reservations for the Macintosh PCs, using DNS scavenging to remove old records, and switching from a Cisco DHCP server to the Windows DHCP server. But no matter what we do, it seems impossible to maintain perfect records. Has anyone encountered this problem before? What is industry best practice? Comments and suggestions are much appreciated. /P
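
    One knob that targets exactly the stale-second-record pattern is the zone's aging configuration: with aging enabled and a short scavenging period, an abandoned record can be removed within hours instead of days. dnscmd exposes this from the command line; a sketch with example server and zone names, and the interval in hours (hex):

        :: Enable aging on the zone, then tighten the server's scavenging period
        dnscmd dns1 /Config corp.example.com /Aging 1
        dnscmd dns1 /Config /ScavengingInterval 0x8

    Scavenging can never be fully instantaneous, though, which is why shops that depend on name-to-machine mapping for remote support often move that dependency to an agent-based inventory (e.g. the SCE/SCCM client reporting its own current address) rather than DNS.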

  • Puppet - Is it possible to use a global var to pull in a template with the same name?

    - by Mike Purcell
    I'm new to Puppet, and as such I am trying to work out the best way to set up my manifests so that they make sense. Following the DRY (don't repeat yourself) principle, I am trying to load common directives in one template, then load in environment-specific directives from a file matching the environment. Basically like this:

        # nodes.pp
        node base_dev { $service_env = 'dev' }
        node 'service1.ownij.lan' inherits base_dev {
            include global_env_specific
        }

        class global_env_specific {
            include shell::bash
        }

        # modules/shell/bash.pp
        class shell::bash inherits shell {
            notify{"Service env: ${service_env}": }
            file { '/etc/profile.d/custom_test.sh':
                content => template('_global/prefix.erb', 'shell/bash/global.erb', 'shell/bash/$service_env.erb'),
                mode    => 644
            }
        }

    But every time I run puppet agent --test, Puppet complains that it can't find the shell/bash/$service_env.erb file, even though I double-checked that it exists. I know the var is accessible, because the notify statement outputs the expected value, so I suspect I am doing something which is not allowed. I know I could have a single template.erb and pass variables into the template, which would work in this case because the custom.sh file is small and doesn't change much across environments, but for more complex configs (httpd, solr, etc.) I'd prefer to access environment-specific files. I am also aware that I can specify environment-specific module paths, but I'd prefer to handle this behavior at the template level instead of having several closely named directories. Thanks.
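
    Worth checking first: in Puppet, variables only interpolate inside double-quoted strings, so the single-quoted 'shell/bash/$service_env.erb' is handed to the template function literally, dollar sign and all, which matches the "can't find the file" error exactly. A sketch of the quoting that makes the lookup dynamic:

        # Double quotes (with braces) let Puppet expand the variable
        content => template('_global/prefix.erb',
                            'shell/bash/global.erb',
                            "shell/bash/${service_env}.erb"),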

  • Cutting Ubuntu to the bone for a VirtualBox VM

    - by user32853
    I've been looking around for a Linux variant which will install only the software I need, rather than everything Ubuntu (for example) puts in by default. This is to create a virtual machine in VirtualBox which has bash, Apache, Python, Perl, SQLite, OpenSSH, and a few other programs, but nothing else. I'd prefer to go with Ubuntu if possible, but another modern distro would do as well (I like using apt-get and yum rather than downloading/compiling etc.). So far I've tried: SuseStudio.com, which is probably the best so far; pressing F4 to get the boot options on Ubuntu 9.10, but there is no minimal installation (I think there was once); Arch Linux, whose install procedure I found slightly confusing, though I might go back and try again; and Gentoo, which started well, but fairly soon the HD on the virtual machine went to 2GB, even before the installation had started in earnest (I'd partitioned the disks is all). I realise there are various "small" Linuxes around like Puppy, Feather, DSL, etc., but they seem to be aimed at desktop users or at being a techie's toolkit, and I want a small-as-possible server distro which can be managed with tools like apt or yum or similar. TIA for any advice you can offer! -- Monty
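
    Ubuntu itself can be pared down this way with debootstrap, which installs only the base system into a target partition, after which you add just the packages you want. A sketch against the 9.10 (karmic) archive, run from an Ubuntu live CD with the VM's disk mounted at /mnt/target (paths and package names are examples):

        # Install a minimal Ubuntu base system
        sudo debootstrap --variant=minbase karmic /mnt/target http://archive.ubuntu.com/ubuntu

        # Inside the chroot, add only what the VM actually needs
        sudo chroot /mnt/target apt-get install bash apache2 python perl sqlite3 openssh-server

    Two caveats: a kernel and bootloader (linux-image plus grub) still have to be installed inside the chroot before the VM will boot, and apt continues to manage everything afterwards, which keeps the apt/yum workflow the question asks for.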

  • Paid antivirus solutions for Windows

    - by AP Erebus
    NOTE: If you're looking for recommendations on free antivirus, check this question: http://superuser.com/questions/2/free-antivirus-solutions-for-windows. Much like the above, I'm curious about opinions on the best PAID antivirus solution, personal or commercial. Enterprise solutions are welcome, and as much detail regarding costs as possible is welcome. Personally, I'm looking for a licence that will grant me more than 1 computer install and quality technical support, for personal use. As in the free antivirus question: see if your antivirus of choice is already listed; chances are it is. If you spot an answer that mentions one you already use, vote it up if you think it's a good solution. If you know of a feature or drawback not listed, or can include experiences in dealing with it, please edit the answer accordingly. If you know of any that can also be used at work, please point this out. This covers all Windows platforms: XP, Vista, and Windows 7. If you see an existing entry that needs an update, or want to add your testimonial, please do.

  • SQL Server architecture - they want to move my database to new instance...Why?

    - by O'MALLEY
    Our current production database environment contains about 10 similarly managed databases. Our agency has just purchased, and is installing, new blade chassis, and wants to move my database to a new instance (leaving the other 9 on another). This decision is being driven by one of our IT staff, not a DBA. I am a project manager, not a DBA, but I know enough not to necessarily have a good feeling about this decision, and I am urging our IT department to make a sound decision based on what is best for the database. Our IT department has stated that it is not good to have all our eggs in one basket, and has also stated that my database contains "regulatory data", so it should be on its own instance. A couple of truths: none of the databases on the current instance are OLTP databases, nor are any of them data warehouses; and my database currently has joins/views against a couple of the other databases in the production environment. So my questions are as follows. Am I wrong to disregard a statement about eggs in baskets? (Hello, this is why we have maintenance plans/disaster recovery plans.) I'll mention that other databases also contain regulatory data. What types of questions do I need to ask to determine if this is a sound decision? (A DBA friend mentioned that if the service level agreement of said database does not radically differ from the others, then why do they want to do this?) I have done some research on linked servers. What arguments should I bring forth about the fact that I have views set up that rely on data from other DBs currently?
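
    On the views point, the argument is concrete: today those views use cheap three-part names (OtherDb.dbo.Table) inside one instance; after the move, every one of them has to become a linked-server four-part name, with distributed-query performance, security mapping, and another failure mode added on top. A sketch of what the rewrite looks like (server and object names are placeholders):

        -- On the new instance: register the old instance as a linked server
        EXEC sp_addlinkedserver @server = N'OLDINSTANCE', @srvproduct = N'SQL Server';

        -- Every cross-database view goes from three-part to four-part names
        SELECT c.CaseID, r.RuleName
        FROM dbo.Cases AS c
        JOIN OLDINSTANCE.RegDb.dbo.Rules AS r ON r.RuleID = c.RuleID;

    Asking IT to cost out that rewrite, alongside the SLA comparison the DBA friend suggests, frames the decision in terms of what is best for the database rather than slogans.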

  • What are the steps needed to set up and use security for AWS command line tools?

    - by chris
    I've been trying to set up the AWS command-line tools following Eric's most useful guide at http://alestic.com/2012/09/aws-command-line-tools. I can't seem to find a good how-to for generating the x509 certificate and private key, and for how that relates to the various security files the guide creates. Update: I have found a couple of links that describe some of the steps. These steps seem to work, but I'm not sure whether this is secure and the best way to do it. 1) Create a private key:

        openssl genrsa -out my-private-key.pem 2048

    2) Create the x.509 cert:

        openssl req -new -x509 -key my-private-key.pem -out my-x509-cert.pem -days 365

    Hit enter to accept all of the defaults. Then, from the IAM dashboard, select a user and click on the "Security Credentials" tab. Click on "Manage Signing Certificates", then "Upload Signing Certificate", paste in the contents of my-x509-cert.pem, click OK, and it should be accepted. One step that is discussed, but was not required for me, was the addition and subsequent removal of a pass phrase on the private key. Should I have been prompted for one, and is my cert potentially unsafe because of this?
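
    On the pass phrase question: openssl genrsa only encrypts the key (and therefore prompts for a pass phrase) when asked to, e.g. with -des3, so the absence of a prompt is expected with the command above, and the certificate itself is unaffected. What it means is that the .pem file on disk is the whole secret, protected only by file permissions. For reference, the add-then-remove dance some guides describe is a sketch like:

        # Add a pass phrase (writes an encrypted copy of the key)
        openssl rsa -des3 -in my-private-key.pem -out my-private-key-enc.pem

        # Strip it again so command-line tools can read the key unattended
        openssl rsa -in my-private-key-enc.pem -out my-private-key.pem

    Keeping the unencrypted key at mode 600 in a private directory is the usual compromise for unattended CLI use.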

  • Django + Apache + mod_wsgi with virtualenv

    - by ArgsKwargs
    I have some questions about running multiple Django sites on a VPS. I have a server that uses openPanel to automatically create VirtualHosts within apache2. My ideal situation is that I would have multiple virtualenvs with different dependencies installed, so the Python dist-packages directory isn't contaminated across different Django sites. For example: /home/user/virtualenv1, /home/user/virtualenv2. My Django applications reside at /var/www; for example: /var/www/djangosite1, /var/www/djangosite2. Now I've read up on the openPanel docs and figured the best thing to do is create a django.conf file inside the mydomain.com.inc folder, which looks something like this:

        # /etc/apache2/openpanel.d/mydomain.com.inc/django.conf
        DocumentRoot /var/www/djangosite1/project
        WSGIScriptAlias / /var/www/djangosite1/project/wsgi.py
        WSGIDaemonProcess mydomain python-path=/home/user/virtualenv1/lib/python2.6/site-packages
        <Directory /var/www/djangosite1/project>
            Order allow,deny
            Allow from all
        </Directory>
        Alias /static /var/www/djangosite1/project/static-root

    Now my problem is that this setup seems unable to find the virtualenv site-packages, and thus doesn't recognize any dependencies available in the given virtualenv. Also, commenting out the following line doesn't seem to break or change a thing:

        WSGIDaemonProcess mydomain python-path=/home/user/virtualenv1/lib/python2.6/site-packages

    For example:

        > service apache2 start
        ImportError: No module named South

    When I install South outside the virtualenv, everything works.
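
    One detail worth checking in that config: WSGIDaemonProcess only defines a daemon process group; nothing actually runs in it until a matching WSGIProcessGroup directive assigns the application to it. That would explain why commenting the line out changes nothing, since the app is running in mod_wsgi's embedded mode (with the system Python path) either way. A sketch of the pair together, reusing the names from the question as examples:

        WSGIDaemonProcess mydomain python-path=/var/www/djangosite1:/home/user/virtualenv1/lib/python2.6/site-packages
        WSGIProcessGroup mydomain

    With one WSGIDaemonProcess/WSGIProcessGroup pair per VirtualHost, each site gets its own daemon and its own virtualenv on the path, which is the isolation this setup is after.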

  • Deciphering Seagate drive model numbers?

    - by Stefan Lasiewski
    I'm comparing Seagate's Enterprise and Desktop drives for a variety of old and new servers. These servers come from different generations, so options like size (73GB, 2TB) and interface (SATA vs SAS 3.0Gbps vs SAS 6Gbps vs SCSI Ultra320) vary widely. I'm trying to compare the sizes, speeds, and interfaces, but I'm getting thrown off by the different model numbers; also, their website is not the best. Does anyone know of a documented explanation of the Seagate model numbers? And is there a single spreadsheet which compares the features for all drives (or all 'Enterprise' drives)? Seagate drives have model numbers like this:

        Model ST3600057SS    6-Gb/s SAS     600 GB    None       Cheetah® 15K Hard Drives
        Model ST373455LW     Ultra320 SCSI  73.4 GB   68-Pin LW  Cheetah® 15K Hard Drives
        Model ST32000644NS   SATA 3Gb/s     2 TB      None       Constellation™ ES Hard Drives
        Model ST973452SS     6-Gb/s SAS     73 GB     None       Savvio® 15K Hard Drives
        Model ST9200011FS    SATA 3Gb/s     200 GB               Pulsar™ Solid State Drives

    I understand the model numbers read something like this: ST - SOMETHING1 - SIZE - SOMETHING2 - INTERFACE, where the fields mean something like this:

        ST         : for 'Seagate'? 'Seagate Technologies'?
        SOMETHING1 : this field has a number, but I'm not sure what it represents
        SIZE       : size in gigabytes; a number like '73', '300' or '2000'
        SOMETHING2 : this field also has a number, but I'm not sure what it means
        INTERFACE  : seems to indicate the interface: 'SS' means SAS, 'FC' means Fibre Channel, but I don't see how to distinguish between 6Gbps and 3Gbps SAS, or different SATA or FC speeds

    I don't see a field which indicates the RPM (15K, 10K, 7.2K) etc. Is this part of the model number?

  • Apache HTTPd FollowSymLinks path permission

    - by apast
    Hi, I'm configuring my development environment with a basic Apache HTTPd configuration, and to avoid a common problem I want to map my test URL to my development folder. I'm using Ubuntu. My development path is located under the following example path: /home/myusername/myworkspace/hptargetpath/src/pages. Consider the following symbolic link mapping:

        # ls -l /opt/share/www/mydevelopmentrootpath
        lrwxrwxrwx 1 root root 77 2011-02-13 18:53 /opt/share/www/mydevelopmentrootpath -> /home/myusername/myworkspace/hptargetpath/src/pages

    With this folder mapping, I configured Apache HTTPd with the following configuration:

        <VirtualHost *:*>
            ServerName local.server.com
            ServerAdmin [email protected]
            DirectoryIndex index.html
            DocumentRoot /opt/share/www/mydevelopmentrootpath
            <Directory /opt/share/www/mydevelopmentrootpath/ >
                Options +Indexes
                Options +FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    But I'm receiving a 403 Forbidden error when I try to access index.html at http://local.server.com/index.html:

        403 Forbidden
        You don't have permission to access /index.html on this server.

    In the httpd error log, I see the following message:

        [Sun Feb 13 19:34:47 2011] [error] [client 127.0.1.1] Symbolic link not allowed or link target not accessible: /opt/share/www/mydevelopmentrootpath

    I think this problem is caused by some path permission: not a permission on the directory itself, but on some intermediate directory in the path. There is a directive in httpd core Options, SymLinksIfOwnerMatch ("The server will only follow symbolic links for which the target file or directory is owned by the same user id as the link"), but I tested it without effect. Can somebody help me? I think this is a trivial configuration for a development environment. Best regards, And Past
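
    That "link target not accessible" message usually comes down to directory traverse (execute) permission for the Apache user somewhere along the link target's path: every component from /home down must carry the x bit for the web server's user, and home directories are often created without it for others. The namei utility walks the path and shows exactly which component blocks access; a sketch:

        # Show the mode of every component along the target path
        namei -m /home/myusername/myworkspace/hptargetpath/src/pages

        # Grant traverse permission where it is missing, for example:
        chmod o+x /home/myusername

    An alternative that avoids opening up the home directory is to keep the working tree somewhere like /srv or /var/www and symlink the other way around.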

  • Cross-OS data recovery question, USB drive involved.

    - by Moshe
    Here's the story: a MacBook had OS X 10.4 and Windows XP dual-booting using rEFIt. Then the Windows partition got corrupted and wouldn't boot, presumably due to a virus. There were sensitive files there, and those were successfully copied to a USB drive, after which 10.5 was installed on the hard drive, formatting the drive in the process. The USB drive's contacts then cracked, and the data is lost from there unless it can be resoldered; the issue is that there is too much solder there already. So, how can the data in question be recovered? The files were Microsoft Money files (not the latest version) for the Windows version of the program. Right now, only OS X is installed on the MacBook. Is there a Mac-based program that can recover the Windows data, or am I better off trying to resolder the drive? Does anyone know how best to resolder a USB drive more than once, where the first solder is there but detached from the silicon? Also, what format (extension) are Microsoft Money files? In need of help!
