Search Results

Search found 15160 results on 607 pages for 'boot disk'.


  • Performance data collection for short-running, ephemeral servers

    - by ErikA
    We're building a medical image processing software stack, currently hosted on various AWS resources. As part of this application, we have a handful of long-running servers (database, load balancers, web application, etc.). Collecting performance data on those servers is quite simple - my go-to recipe of Nagios (for monitoring/notifications) and Munin (for collecting performance data and displaying trends) will work just fine.

    However, as part of this application we are constantly starting up and terminating compute instances on EC2. In typical usage, these compute instances start up, configure themselves, receive a job from a message queue, and then get to work processing that job, which takes anywhere from 15 minutes to over 8 hours. After job completion, these instances get terminated, never to be heard from again.

    What is a decent strategy for collecting performance data on these short-lived instances? I don't necessarily need monitoring on them - if they fail for whatever reason, our application will detect this and handle re-starting the job on another instance or raising a flag so an administrator can take a look. However, it would still be useful to collect information like CPU (user, idle, iowait, etc.), memory usage, network traffic, and disk reads/writes. In our internal database we track the instance ID of the machine that runs each job, and it would be quite helpful to be able to look up performance data for a specific instance ID when troubleshooting and profiling.

    Munin doesn't seem like a great candidate, as it requires maintaining a list of Munin nodes in a text file - far from ideal for an environment with this much churn - and, given the short time each node runs, I'd rather keep the full-resolution data indefinitely than have RRD water it down over time. In the end, my guess is that this will require a monitoring engine that:

    - uses a database (MySQL, SQLite, etc.) for configuration and data storage
    - exposes an API for adding/removing hosts and services

    Are there other things I should be thinking about when evaluating options? Perhaps I'm over-thinking this, though, and just ought to run sar at 1-minute intervals on these short-lived instances and collect the sar data files prior to termination.
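
    A minimal sketch of the sar idea floated at the end, assuming sysstat and the AWS CLI are baked into the instance image (the bucket name and job command are placeholders, not part of the original setup):

        #!/bin/bash
        # Record one sample per minute into a binary sar data file for the
        # life of the job, then ship it to S3 keyed by instance ID.
        INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
        SARFILE="/var/log/sa/job-${INSTANCE_ID}.sa"

        sar -o "$SARFILE" 60 >/dev/null 2>&1 &    # samples continuously until killed
        SAR_PID=$!

        run-job-from-queue                        # placeholder for the actual work

        kill "$SAR_PID"
        aws s3 cp "$SARFILE" "s3://my-perf-bucket/sar/${INSTANCE_ID}.sa"

    The file can later be replayed on any machine with sar -f, which keeps the full-resolution data with no RRD-style decay.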

  • External drive and CentOS - Reset high speed USB device number

    - by Phil
    I have two external drives (3 TB each) and both will not work with my CentOS box (kernel 2.6.32-279.9.1.el6.i686). I tested them in Windows on a different machine with no problems. dmesg reports:

        usb 2-2: new high speed USB device number 3 using ehci_hcd
        usb 2-2: New USB device found, idVendor=2109, idProduct=0700
        usb 2-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
        usb 2-2: Product: USB 3.0 SATA Bridge
        usb 2-2: Manufacturer: VIA Labs, Inc.
        usb 2-2: SerialNumber: 0000000000006121
        usb 2-2: configuration #1 chosen from 1 choice
        scsi6 : SCSI emulation for USB Mass Storage devices
        usb-storage: device found at 3
        usb-storage: waiting for device to settle before scanning
        usb-storage: device scan complete
        scsi 6:0:0:0: Direct-Access ST3000DM 001-9YN166 CC4B PQ: 0 ANSI: 2
        sd 6:0:0:0: Attached scsi generic sg3 type 0
        sd 6:0:0:0: [sdd] Very big device. Trying to use READ CAPACITY(16).
        sd 6:0:0:0: [sdd] 5860533165 512-byte logical blocks: (3.00 TB/2.72 TiB)
        sd 6:0:0:0: [sdd] Write Protect is off
        sd 6:0:0:0: [sdd] Mode Sense: 00 06 00 00
        sd 6:0:0:0: [sdd] Assuming drive cache: write through
        sd 6:0:0:0: [sdd] Very big device. Trying to use READ CAPACITY(16).
        sd 6:0:0:0: [sdd] Assuming drive cache: write through
        sdd: sdd1
        sd 6:0:0:0: [sdd] Very big device. Trying to use READ CAPACITY(16).
        sd 6:0:0:0: [sdd] Assuming drive cache: write through
        sd 6:0:0:0: [sdd] Attached SCSI disk

    Trying to use cfdisk, fdisk, gdisk, or even fdisk -l results in the program hanging, and dmesg reports:

        usb 2-2: reset high speed USB device number 3 using ehci_hcd
        usb 2-2: reset high speed USB device number 3 using ehci_hcd
        usb 2-2: reset high speed USB device number 3 using ehci_hcd

    I have the same two drives physically installed in the computer via SATA. Any ideas?
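
    One workaround worth testing (an assumption, not a confirmed fix for this bridge chipset) is to shrink the USB mass-storage transfer size, which sometimes stops the reset loop on flaky enclosures:

        cat /sys/block/sdd/device/max_sectors         # note the default (often 240)
        echo 64 > /sys/block/sdd/device/max_sectors   # as root
        fdisk -l /dev/sdd                             # retry with smaller transfers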

  • How to set up RAID-0 first time on new PC?

    - by jasondavis
    I have built basic PCs in the past but have never used a RAID array at all. Now I am buying parts to build my new PC; it will have an Intel i7 processor. My motherboard has RAID support, which I will use instead of an aftermarket RAID controller for now. I plan to use 2 SSD drives in RAID-0 for my Windows 7 OS. (Please note that I am aware of the issues with doing this, including the lack of TRIM support when using RAID with SSD drives. I am OK with that, as I can just replace the drives in a year or so, or whenever they become more sluggish.)

    So here is my question. If I assemble the motherboard, PSU, processor, RAM, video card, etc., and then turn the PC on with the 2 SSD drives hooked up, I assume I will then see the BIOS screen before I install Windows? How do I go about making the 2 drives work in RAID-0 at that point? I do the RAID part before installing my OS, right? Please walk me through the steps involved, from assembling the parts of the PC and turning it on, to getting RAID-0 set up between the 2 drives and then installing my Windows 7 OS from an optical drive.

    All advice, instructions, and tips are appreciated as long as they're on topic. I do not need to be told that this is a bad idea because losing 1 drive loses it all - I plan on keeping a disk image so that I can restore my OS and software to a new set of drives at any time in the event of drive failure. Same goes for the lack of TRIM support. Thanks for reading and for any help =)

  • Configure samba server for Unix group

    - by Bird Jaguar IV
    I'm trying to set up a Samba server with access for users in the Linux (RHEL 6) "wheel" group. I am basing smb.conf off of the example here, where it goes through the [accounting] example. In my smb.conf I have:

        [tmp]
        comment = temporary files
        path = /var/share
        valid users = @wheel
        read only = No
        create mask = 0664
        directory mask = 02777
        max connections = 0

    (The rest of the output from $ testparm /etc/samba/smb.conf is here.) And `groups $(whoami)` returns user01 : wheel. When I use the following command from another machine (Mac OS) as the Linux user (user01):

        $ smbclient -L NETBIOSNAME/tmp

    it asks for a password; I hit return without a password and get:

        Enter user01's password:
        Anonymous login successful
        Domain=[DOMAIN] OS=[Unix] Server=[Samba 3.6.9-151.el6_4.1]

            Sharename       Type      Comment
            ---------       ----      -------
            tmp             Disk      temporary files
            IPC$            IPC       IPC Service (Samba Server Version 3.6.9-151.el6_4.1)

    But when I try:

        $ smbclient //NETBIOSNAME/tmp

    and enter the password I use for the Linux login, I get a bunch of stuff logged, including:

        check_sam_security: Couldn't find user 'user01' in passdb.
        ...
        session setup failed: NT_STATUS_LOGON_FAILURE

    (I can give more logging information if it would be helpful.) I can't find a reference in the resource to further steps needed to add group users. Should I be manually adding Samba users from the group somehow? Thank you
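
    For what it's worth, the check_sam_security error suggests the account exists in Unix but not in Samba's own password database, which standalone Samba keeps separately from /etc/passwd; a minimal sketch of the usual fix:

        smbpasswd -a user01    # add the Unix user to Samba's passdb (prompts for a password)
        pdbedit -L             # verify user01 is now listed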

  • Directory tree in a Resource without extraction...

    - by Corelgott
    Hi all, I am looking for a way to store a complete directory, including subdirectories, in an application's resources and not have to extract it to use it. Details: we would like to use GeckoFX (Gecko as a C# component) in one of our applications. GeckoFX needs the XULRunner and needs to find its folder structure. We also have some other data which I would prefer not to extract onto the customer's PC - at least not onto something persistent like a HDD. Getting the complete directory into the resources is not that big a deal: compress to one file and done. But using it without writing it to disk is something else. I have a strong dislike for temp folders and such things. Would anything like a RAM drive be possible - some part of RAM being mounted? Does something like this even exist as a library, or would it only be possible with a device driver? Any thoughts on this? Thanks in advance! Corelgott

  • Why might my Fedora 15 live USB persistent storage not work?

    - by Richard J Foster
    I created a Fedora 15 "live" USB stick using the live USB creator found at https://fedorahosted.org/liveusb-creator/ and the Fedora 15 i686 Desktop ISO image, with the persistent storage space set to 4096 MB. (The USB stick I have available has an 8 GB capacity, so there should be plenty of space.) Fedora appears to boot correctly; however, it seems that the persistent storage is not working. To verify this, I opened a terminal, did su -, then ran yum update yum. As expected, I was informed that a new version was available (the live CD contains version 3.2.29-4; at the time of typing, 3.2.29-6 is the current version). After installing, I verified that the new version was installed by typing yum --version. I then shut down the system using shutdown now. After the system had shut down, I rebooted and returned to a terminal prompt. On typing yum --version, I was informed that the version was 3.2.29-4 (i.e. the original version). Why might the persistent storage not be working? Is there anything I can do to fix it?
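
    A quick way to check whether the overlay is actually in use (a sketch; live-rw is the usual device-mapper name on Fedora live images, and the stick's mount point varies by release):

        dmsetup status live-rw    # expect a "snapshot" target with sector-usage figures
        ls /mnt/live/LiveOS/      # should show squashfs.img plus an overlay-* file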

  • Varnish, Nginx, Apache, APC, Meteor, Cpanel & Wordpress On A Single Server, Any Good?

    - by Aahan
    Yes, I have read many similar questions, but I need a specific answer, hence this question. First, these are my new server's specifications: Linux server (CentOS), Intel Xeon 3470 quad-core (2.93 GHz x 4) processor, 4 GB DDR3 memory, 1 TB hard disk space, 10 TB bandwidth, and 9 dedicated IPs.

    AIM: to speed up my WordPress blog and increase the server's capacity to handle heavy load.

    PLAN: this is how I am planning to set up my server:

    - VARNISH (in the front, to cache server responses)
    - NGINX (to effectively handle static content and overcome the C10k problem)
    - APACHE (behind Nginx, to effectively deliver dynamic content)
    - APC (PHP page, database & object caching)
    - CPANEL (which requires Apache, and I require it)
    - WORDPRESS + W3 TOTAL CACHE (caching plugin for WordPress)

    So, will the setup work? Has anyone tried it? Please shower your thoughts and knowledge. NOTE: I can't do without Apache because I am used to that .htaccess and cPanel stuff, so it's not optional; everything else is. Please try to help. I hope I am clear in what I wanted to ask.
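
    A hedged sketch of how such a chain is commonly wired and verified - the port numbers are assumptions, not cPanel defaults: Varnish listening on :80, nginx on :8080, Apache on :8081.

        curl -sI http://127.0.0.1/      | grep -i -e via -e x-varnish   # Varnish in front
        curl -sI http://127.0.0.1:8080/ | grep -i server                # nginx behind it
        curl -sI http://127.0.0.1:8081/ | grep -i server                # Apache at the back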

  • Ext3 fs: Block bitmap for group 1 not in group (block 0). Is the fs dead?

    - by ip
    My company has a server with one big partition holding a MySQL database and PHP files. Now this partition seems to be corrupted, as reported by kernel messages when I tried to mount it manually:

        [329862.817837] EXT3-fs error (device loop1): ext3_check_descriptors: Block bitmap for group 1 not in group (block 0)!
        [329862.817846] EXT3-fs: group descriptors corrupted!

    I've tried to recover it by running tools from a PLD live CD. These are the tools I have tested, without any success:

    - e2retrieve
    - testdisk
    - photorec
    - dd_rescue/dd_rhelp
    - ddrescue
    - fsck.ext2
    - e2salvage

    dumpe2fs reports:

        dumpe2fs 1.41.3 (12-Oct-2008)
        Filesystem volume name:   /dev/sda3
        Last mounted on:          <not available>
        Filesystem UUID:          dd51610b-6de0-4392-a6f3-67160dbc0343
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal filetype sparse_super
        Default mount options:    (none)
        Filesystem state:         not clean with errors
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              9502720
        Block count:              18987570
        Reserved block count:     949378
        Free blocks:              11555345
        Free inodes:              11858398
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         16384
        Inode blocks per group:   512
        Last mount time:          Wed Mar 24 09:31:03 2010
        Last write time:          Mon Apr 12 11:46:32 2010
        Mount count:              10
        Maximum mount count:      30
        Last checked:             Thu Jan 1 01:00:00 1970
        Check interval:           0 (<none>)
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               128
        Journal inode:            8
        Journal backup:           inode blocks
        dumpe2fs: A block group is missing an inode table while reading journal inode

    And e2fsck reports:

        e2fsck 1.41.3 (12-Oct-2008)
        fsck.ext3: Group descriptors look bad... trying backup blocks...
        fsck.ext3: A block group is missing an inode table while checking ext3 journal for /dev/sda3

    I tried the backup superblocks too, with the same error as a result. Are there any other tools I should test before considering this disk definitely unrecoverable? Many thanks, ip
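
    For reference, the usual backup-superblock attempt (which the poster says was already tried) looks like this; mke2fs -n only prints the layout and does not format:

        mke2fs -n /dev/sda3                 # list where the backup superblocks live
        e2fsck -b 32768 -B 4096 /dev/sda3   # 32768 = first backup for 4k-block filesystems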

  • Which Vista services can be disabled with impunity?

    - by GwenKillerby
    I use Vista on an HP Pavilion DV2 laptop. When I look through all the services my laptop starts, it really seems there's way too much of it. I multi-boot with XP and 7; both start up in 40 seconds, while Vista takes four minutes. Is there some software that can determine which services I don't need? On 7 there's no proprietary HP stuff at all, yet it seems to run fine. There are a LOT of these services, and some just sit there doing nothing, monitoring for updates I don't really need or want, or don't need to know about the second they're available. My laptop is the only computer I use at home; there's no home network aside from the modem-router, which is cabled, not Wi-Fi. Take, for instance, parental controls, features for people with bad eyesight, or Tablet PC support - I really never use any of that stuff. Hope this question is specific enough. I've looked at the other questions but they didn't answer me. Thanks, Gwen.

  • Squid bypass for a domain

    - by krisdigitx
    I am using Squid with adzap. Is it possible to have squid/adzap not cache a particular domain, e.g. cnn.com? This is my squid.conf file:

        #
        # Recommended minimum configuration:
        #
        acl manager proto cache_object
        acl localhost src 127.0.0.1/32
        #acl localhost src ::1/128
        acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
        #acl to_localhost dst ::1/128

        # Example rule allowing access from your local networks.
        # Adapt to list your (internal) IP networks from where browsing
        # should be allowed
        acl localnet src 192.168.1.0/24
        acl localnet src 192.168.2.0/24

        acl SSL_ports port 443
        acl Safe_ports port 80          # http
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 443         # https
        acl Safe_ports port 70          # gopher
        acl Safe_ports port 210         # wais
        acl Safe_ports port 1025-65535  # unregistered ports
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 488         # gss-http
        acl Safe_ports port 591         # filemaker
        acl Safe_ports port 777         # multiling http
        acl CONNECT method CONNECT

        #
        # Recommended minimum Access Permission configuration:
        #
        # Only allow cachemgr access from localhost
        http_access allow manager localhost
        http_access deny manager

        # Deny requests to certain unsafe ports
        http_access deny !Safe_ports

        # Deny CONNECT to other than secure SSL ports
        http_access deny CONNECT !SSL_ports

        # We strongly recommend the following be uncommented to protect innocent
        # web applications running on the proxy server who think the only
        # one who can access services on "localhost" is a local user
        #http_access deny to_localhost

        #
        # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
        #
        # Example rule allowing access from your local networks.
        # Adapt localnet in the ACL section to list your (internal) IP networks
        # from where browsing should be allowed
        http_access allow localnet
        http_access allow localhost

        # And finally deny all other access to this proxy
        http_access deny all

        # Squid normally listens to port 3128
        http_port xxx.xxx.xxx.yyy:3128 transparent
        visible_hostname proxyserver.local

        # We recommend you to use at least the following line.
        hierarchy_stoplist cgi-bin ?

        # Uncomment and adjust the following to add a disk cache directory.
        cache_dir ufs /var/spool/squid 1024 16 256

        # Leave coredumps in the first cache dir
        coredump_dir /var/spool/squid

        # Add any of your own refresh_pattern entries above these.
        refresh_pattern ^ftp:           1440    20%     10080
        refresh_pattern ^gopher:        1440    0%      1440
        refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
        refresh_pattern .               0       20%     4320

        access_log /var/log/squid/squid.log squid
        access_log syslog squid
        redirect_program /usr/local/adzap/scripts/wrapzap

    Fixed using:

        acl allow_domains dstdomain www.cnn.com
        always_direct allow allow_domains
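
    Note that always_direct controls whether a request goes through a cache peer rather than whether the reply is cached; if the goal is to skip caching entirely, the usual recipe is the cache directive (a sketch, Squid 2.6+ syntax - older versions spell it no_cache deny):

        # append to /etc/squid/squid.conf, then run: squid -k reconfigure
        acl bypass_cache dstdomain .cnn.com
        cache deny bypass_cache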

  • How to serve media across home network?

    - by TK Kocheran
    I'm looking to share my media across my home network. My router fully supports running a DLNA server, but I don't know if it'd be better to run the server from my main server computer instead, since the router would have to operate off a network share while my server can operate directly off the files. Here's what I need to serve, in order of importance: ISO 1:1 DVD rips (4-8 GB files), MP4/H.264-encoded videos, MKV videos, MP3 files, and JPEG/CR2 images. Maybe I'm completely ludicrous for wanting to push full DVD files across my network, but in reality I would assume that only the parts of the actual file needed (i.e. the menu and the main video payload for the main title) would be served at any one time. Plus, encoding takes time and precious disk space, so why not stream it 1:1 ;) Does anyone know the best way to accomplish this? The main goal is to serve it to a Logitech Revue downstairs, and the secondary goal is to serve it to other computers in the house. For music, I assume I could run a DAAP server, but I don't think the Revue supports that (and I can't exactly throw together an app that does it just yet).
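
    One way to do this from the server rather than the router is minidlna; a sketch, assuming a Debian-style distro (the package name, paths, and media layout are examples):

        sudo apt-get install minidlna
        # append to /etc/minidlna.conf, then restart the minidlna service;
        # media_dir types: V = video, A = audio, P = pictures
        media_dir=V,/srv/media/video
        media_dir=A,/srv/media/music
        media_dir=P,/srv/media/photos
        friendly_name=HomeMediaServer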

  • KeePass lost password and/or corruption due to Dropbox/KeePassX

    - by GummiV
    I started using KeePass about a month ago to hold my passwords and online account info. Everything was stored in a single .kdb file, protected only with a password. I'm using Windows 7. Now KeePass can't open my .kdb file, giving the error "Invalid/wrong key". I'm fairly confident I have the right password. Although I might have mixed up a few letters, I've tried about two dozen different combinations to minimize that possibility - but I can't rule it out. My guess is, however, that the .kdb file got corrupted, either due to Dropbox syncing (though I'm only using it on one computer) or because I edited the file using KeePassX on Ubuntu (dual boot on the same computer, accessing a mounted Win7 NTFS partition), or possibly a combination of both. I have tried restoring older versions (even the original one) from Dropbox and trying out all possible passwords, without any luck. (This does seem to rule out KeePassX as the culprit, since the oldest copies predate my editing the file from Ubuntu.) I have tried opening the file with the "Repair KeePass Database file" option, which always gives "0xA Invalid/corrupt file structure" (the same error as for a wrong password). I was wondering if there is any way for me to salvage my hard-gathered data. I know that brute-force cracking is generally not feasible, but since I can remember probably more than half of the usernames/passwords, and maybe the fact that one of them comes up fairly often (my go-to password for trivial stuff), that might simplify the brute-force process to a doable time frame. The brute-force attempt might also incorporate the fact that I know the password length and what characters it's made from (if we assume corruption, not a password blackout on my part). I could do some programming if there are any libraries or routines that I could use. Other people seem to have had a similar problem:

    http://forums.dropbox.com/topic.php?id=6199
    http://forums.dropbox.com/topic.php?id=9139
    http://www.keepassx.org/forum/viewtopic.php?t=1967&f=1

    So hopefully this question will become a suitable resource for people searching the web. Feel free to tell me if you think this should rather be a community wiki.
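
    A sketch of the constrained brute-force idea, assuming a John the Ripper jumbo build whose keepass2john tool understands the 1.x .kdb format, and that the half-remembered candidate passwords are written into candidates.txt:

        keepass2john passwords.kdb > keepass.hash
        john --wordlist=candidates.txt --rules keepass.hash   # rules mutate each candidate
        john --show keepass.hash                              # print any recovered password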

  • Windows 8 x64 with VMWare Workstation or inside ESXi

    - by Dommer
    I need to run several virtual machines on a Core i7-920 box with 12 GB of RAM and a 256 GB SSD to host the VMs. It also has a HighPoint RocketRAID 2720SGL RAID controller with a 12 TB RAID 5 array. I want one of my VMs to run Windows 8 x64, to have access to the RAID array as a native disk (not as networked drives, and it needs to run at full speed), and to be able to send files quickly across the network. Initially I thought I'd try to do this using ESXi 5, but I have been unable to find any working RAID drivers for the RR2720SGL, and it is not on the HCL for ESXi 5. In light of this, I have installed Windows 8 x64 on the hardware and am thinking of installing VMware Workstation and running my VMs inside there. I guess my questions are these:

    - How does VMware Workstation 9 perform compared to ESXi 5, in the real world? Presumably installing Win 8 as the host OS will give me far better performance for that Win 8 machine than Win 8 running under ESXi?
    - I should stick with Windows 8 x64 as the host OS, right?
    - If I install a domain controller VM inside my Win 8 box and join the Win 8 machine to that domain, am I insane? (I would guess the Win 8 machine wouldn't see the domain controller until it finished starting everything up, but I don't think that matters.)
    - Is it feasible to give metrics like "Win 8 under ESXi runs x% as fast as Win 8 installed bare metal", and if so, what is the likely value of x? 25%? 50%? 75%?

  • SSH session becomes unresponsive when logged into Ubuntu Server virtual machine using VirtualBox

    - by nickbart
    Hi everyone, I'm really at my wits' end here, so I'm hoping someone can help me. I have a virtual machine running Ubuntu Server 9.10. It's just a small development environment so I can keep my code separate from the test and production environments. I am running it through VirtualBox 3.1.6 on a laptop running Ubuntu Desktop 9.10. I have it set up with a bridged network connection, bridged to my laptop's wireless adapter (we have no wired connections in this office).

    I boot up the VM and everything is fine. I can SSH into it using gnome-terminal, and for a while everything is kosher. Then, seemingly at random, the SSH terminal session will hang. No error message, nothing; it just becomes unresponsive. If I go to the VirtualBox terminal I find the VM itself is perfectly fine: it can ping, and I can SSH out with it. If I restart the networking on the VM, the SSH session in my gnome-terminal will most of the time become responsive again.

    Here's an interesting point: the SSH session will sometimes die right in the middle of me typing something (which suggests it's not an idle-session issue), and if I go to the VirtualBox terminal, restart the networking, and then return to my gnome-terminal SSH session, I find that it comes back to life and what I typed when the session originally hung magically types itself into the buffer. So my input is getting stored somewhere and just can't make its way to the VM until the networking on the VM is restarted.

    I've tried different versions of VirtualBox and used vmdk and vdi images, and nothing seems to work. I can't tell if the problem is with my laptop, VirtualBox, or the Ubuntu Server image. Is there any way to debug this issue? Or has anyone out there seen anything similar? Your help is much appreciated. Nick
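
    As a mitigation (not a root-cause fix), client-side keepalives sometimes stop a bridged-VM session from hanging silently, and at least make the client notice a dead path:

        ssh -o ServerAliveInterval=15 -o ServerAliveCountMax=3 user@vm-address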

  • Scaling databases with cheap SSD hard drives

    - by Dennis Kashkin
    Hey guys! I hope that many of you are working with high-traffic database-driven websites, and chances are that your main scalability issues are in the database. I noticed a couple of things lately:

    Most large databases require a team of DBAs in order to scale. They constantly struggle with the limitations of hard drives and end up with very expensive solutions (SANs or large RAIDs, frequent maintenance windows for defragging and repartitioning, etc.). The actual annual cost of maintaining such databases is in the $100K-$1M range, which is too steep for me :)

    Meanwhile, several companies like Intel, Samsung, and FusionIO have just started selling extremely fast yet affordable SSDs based on SLC flash technology. These drives are 100 times faster in random reads/writes than the best spinning hard drives on the market (up to 50,000 random writes per second). Their seek time is pretty much zero, so the cost of random I/O is the same as sequential I/O, which is awesome for databases. These SSDs cost around $10-$20 per gigabyte, and they are relatively small (64 GB).

    So there seems to be an opportunity to avoid the HUGE costs of scaling databases the traditional way by simply building a big enough RAID 5 array of SSDs (which would cost only a few thousand dollars). Then we don't care if the database file is fragmented, and we can afford 100 times more disk writes per second without having to spread the database across 100 spindles. Is anybody else interested in this? I've been testing a few SSDs and can share my results. If anybody on this site has already solved their I/O bottleneck with SSDs, I would love to hear your war stories!

    PS. I know that there are plenty of expensive solutions out there that help with scalability, for example the time-proven RAM-based SANs. I want to be clear that even $50K is too expensive for my project. I have to find a solution that costs no more than $10K and does not take much time to implement.
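
    A repeatable random-write test of the kind described can be run with fio (a sketch; point it at a scratch file on the SSD under test, since raw-device runs are destructive):

        fio --name=randwrite --ioengine=libaio --direct=1 --rw=randwrite \
            --bs=4k --iodepth=32 --numjobs=4 --size=4g --runtime=60 \
            --group_reporting --filename=/mnt/ssdtest/fio.dat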

  • SharePoint, Exchange and Incoming Emails Without Directory Management Services

    - by Nariman
    Trying to keep this as simple as possible: we've already created the email accounts that we need (e.g. account[1-20]@domain.com) on Exchange/AD. We'd now like to enable incoming email on SharePoint 2007 lists corresponding to these accounts. My thinking is we don't need to configure Directory Management Services [2] - the architecture will be simpler without it, and the application doesn't require those services. However, we still need to route messages from Exchange to either the local SMTP service (via the connector described in the articles below) or via user-specific drop-folder settings (if permitted by Exchange). So the question is: can we instruct Exchange to use a drop folder just for the accounts account[1-20]@domain.com? Or do we need to change the accounts to account[1-20]@sharepointsmtp.domain.com and re-route those messages to the local SMTP service that will drop them on disk? I've read the material below.

    [1] http://www.combined-knowledge.com/Downloads/2007/How%20to%20configure%20Email%20Enabled%20Lists%20in%20Moss2007%20RTM%20using%20Exchange%202007.pdf
        http://social.msdn.microsoft.com/Forums/en/sharepointdevelopment/thread/91e0c3d2-afe6-469d-b1bc-6ae7a9aa287e
        http://gj80blogtech.blogspot.com/2009/12/configure-incoming-email-setting-in.html
        http://www.jasonslater.co.uk/2007/08/10/configuring-incoming-mail-on-moss-2007-and-exchange-2007/
        http://technet.microsoft.com/en-us/library/cc262947%28office.12%29.aspx
        http://technet.microsoft.com/en-us/library/cc263260%28office.12%29.aspx
    [2] http://graycloud.com/sharepoint/incoming-mail-configuration-what-permissions-are-require-t39483.html

  • Attaching 3.5" desktop drive to MacBook SATA

    - by Kyle Cronin
    I have a mid-2007 MacBook that, according to the Apple Store, has suffered some liquid damage and requires a new logic board to operate correctly - a ~$750 repair, I've been told (it would normally be around ~$300 were it not for the "liquid damage"). The unit itself works fine; the only problem I've been having is that the system does not recognize the battery and will not charge it. Curiously, the system can still be powered by the battery, and even recognizes when the power cord is detached by dimming the backlight, but I digress.

    Now that this laptop will likely become a desktop, I'm wondering if it might be possible to attach a desktop drive. I recently purchased a 2 TB SATA drive, and I'm wondering if it's possible to somehow attach it where the current internal drive connects. Obviously the drive itself will not fit inside the device, but as the unit will spend the rest of its days on my desk, that's not really much of an issue. My main questions are:

    - Is this possible? If so, how would I connect the drive? Would a SATA extender cable work?
    - Is the SATA port on my MacBook capable of powering a desktop drive? Or should I just get a SATA male-to-female cable and see if I can power the drive through other means (a cheap power supply, for example)?

    The disk I'm referring to is the Hitachi Deskstar HD32000. Though I couldn't find that exact model on Hitachi's support site, these are the power requirements for a similar drive, the 7K2000 (2 TB, 7200 RPM, SATA II):

        Power requirement:        +5 VDC (+/-5%), +12 VDC (+/-10%)
        Startup current (A, max): 1.2 (+5V), 2.0 (+12V)
        Idle (W):                 7.5

    From what I've read, 2.5" drives require 5 V, meaning that my MacBook obviously is capable of producing it. The specs seem to suggest that this drive is capable of accepting 5 V instead of the typical 12 V - is this an accurate interpretation of the power requirements? Or does it need both 12 V and 5 V?

  • I've just set up FreeBSD 8.0 and can't log in with SSH

    - by Matt
    /etc/hosts.allow is set to allow any protocol from anywhere. I can ssh localhost and it works. I simply get "connection refused" from PuTTY on another machine. Any ideas? I will try to get a copy of the sshd_config file as soon as I can find a flash disk to copy it to, but I thought someone might know what you need to set initially to permit login.

    EDIT: I think I can see why it's not working now. If I telnet to the IP address of the server, I'm seeing "MGE UPS SYSTEMS SNMP Web/Agent configuration menu. Enter Password:" Doh. OK, so the IP address is assigned by DHCP, but it seems there is already a device statically assigned to that address. I'll put in a reservation and try again.

    EDIT: OK, sorted now. It was an IP address conflict.

  • Using my own Postfix, filtering spam and getting all the mail into my ISP's inbox

    - by djechelon
    Hello, I currently own a domain bought via GoDaddy.com, which provides a basic email setup for the most common needs. I configured it to forward all mail to [email protected] to [email protected]. I also own a virtual server with a running Postfix that I use for a specific website (all mail to somedomain.com gets forwarded via LMTP to a program written by me). Since I've recently been harassed by spammers, since GoDaddy doesn't seem to filter spam, and since my Windows Phone's Pocket Outlook cannot filter spam, I would like to use SpamAssassin to filter inbound spam by changing my domain's MX records to point at my server.

    My ideal setup is the following:

    1. All mail delivered to somedomain.com gets redirected via LMTP as usual (virtual transport) without any spam check.
    2. All mail to [email protected] gets redirected to [email protected] after a severe spam check.
    3. I don't care about [email protected], since I use just one address for now.
    4. I would like to train SpamAssassin with customized spam rules, possibly based on the presence of certain keywords (links to certain unsubscribe pages I found recurring).

    I currently have Postfix configured with:

        transport:
            somedomain.com    lmtp:[127.0.0.1]:8025
            .somedomain.com   error: Cannot accept mail for this domain

        relay:
            somedomain.com    OK    (I guess I should add mydomain.com OK too)

        virtual:
            @mydomain.com     [email protected]    (looks like a catch-all rule; it's OK per requirement 3)

    I installed SpamAssassin; I can do rcspamd start and set it to start at boot, but I don't know if there is anything else to do to use it from Postfix, or how to apply requirement 1 (only mail to mydomain.com gets filtered). I also tried sending an email via telnet to make sure my settings are ready for the MX change. I received the message in my account, but I found that it went through secureserver.net, as if Postfix didn't rewrite the destination but simply relayed the message. Thank you in advance. I'm no expert in SpamAssassin, and I have little experience with Postfix (enough to avoid making my server an open relay).
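
    The common Postfix/SpamAssassin glue is a content_filter entry in master.cf (a sketch; the service user and paths vary by distro, and as written it filters all inbound mail - limiting it to one domain would need something like a check_recipient_access table returning a FILTER action):

        # append to /etc/postfix/master.cf, then run: postfix reload
        spamassassin unix -     n       n       -       -       pipe
          user=nobody argv=/usr/bin/spamc -f -e /usr/sbin/sendmail -oi -f ${sender} ${recipient}
        # and add this option to the existing "smtp inet ... smtpd" line:
        #   -o content_filter=spamassassin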

  • Having trouble ripping some CDs

    - by James
    Hi, when I buy CDs I tend to rip them to FLAC right away. When ripping I use foobar2000 or Exact Audio Copy and enable secure ripping, which uses error correction. Recently I bought a 2-CD compilation album brand new, but when I tried to rip the second CD on my laptop using foobar2000, it struggled with the last 2 tracks and was unable to finish. EAC was also unable to get an accurate rip and reported read errors. Ripping in fast mode results in audible errors in the output track. I have tried another computer and had similar problems. I cannot see any damage to the disc, and it has not been dropped or anything. The weird thing is that I had similar problems with a different album and a different PC a while back. That other CD was also a compilation disc, so it was also right up against the CD capacity limit, and again it was the last few tracks that would not rip. Dozens of other discs have ripped fine. So I am wondering if the CD is simply defective, or whether it is something else. How common are defective CDs? Do some CD drives struggle with CDs of this capacity? Or is this some kind of copy protection? I'm thinking of asking Amazon for a replacement, but it would be annoying if I get the same problem again.
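
    One way to tell a defective disc from drive trouble is to image it with GNU ddrescue and see how much genuinely will not read (a sketch; /dev/sr0 is assumed to be the optical drive):

        ddrescue -b 2048 /dev/sr0 disc.iso disc.map           # first pass, skip bad areas
        ddrescue -b 2048 -d -r 3 /dev/sr0 disc.iso disc.map   # retry bad areas with direct reads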

  • Windows File Access Denied

    - by Tom
    I seem to have a general problem with "access denied" on Windows. It manifests itself every time, for example, when:

    - my bat file calls a compiler that creates a file on disk
    - my bat file renames a file

    But I also have files downloaded with Firefox to the Windows desktop where Windows gives me "access denied" if I try to delete the file. I have tried disabling AVG and making an exception in the AVG resident shield. (I have checked with Task Manager and Winternals Process Explorer that it is not a still-running process causing the locks.) This is Windows 7; my user account is an administrator, and all files are created by the same user account. The problem is recent, but I first noticed some of it yesterday (when I started calling .bat files again, which I have used for many years).

    I have tried:

    - Starting e.g. Windows Explorer with "run as administrator", but that makes no difference.
    - Right-click - Properties - Security and changing permissions/ownership (I also get "access denied" when trying this, so it does not help).

    Here is a screenshot of trying to change the security of a "locked" file. (The problem is that the locking recurs every time the file is created.) If I click on it, it states I am not the owner - which baffles me, as I just created it. (Yes, through a .bat file calling executables that create the file, but all running under my administrator user account. Interestingly, after having this dialog open, the file somehow sometimes suddenly allows me to delete it.)

  • Windows 7 computer name changing on its own?

    - by DC
    Very odd problem... I have a Dell Latitude D830 with XP Pro that has been running on my local domain for many years. I recently installed Windows 7 Enterprise on the D830 using a brand-new HDD, so that I could still use XP if I needed to by just swapping out the HDDs. I added the W7 install to my domain using a completely different machine name than that used for the XP system, and everything seemed to be functioning as it should. Over the last 2 weeks or so I have occasionally (3 times now) gotten to the login screen on boot, tried to log in to the domain, and received an error saying that the computer name is not a trusted machine in the domain I'm trying to log in to. Come to find out, the machine name on the W7 system has somehow been changed to that of my old XP system. If I then change the name back to the correct one on the W7 system, leave the domain, reboot, and add the machine back into the domain, all is well for an unknown period of time until this happens again. This last time, I know for a fact that everything was fine the day before when I shut down the system. I came in today, powered up the system, and the machine name had been changed to that of my old XP system again. Has anybody else seen this behavior, or have any ideas on what could be causing it? Thanks!

  • Sync desktop Mac environment to laptop

    - by Andrew Vit
    I spend the majority of my time working at my desktop Mac, which I have configured for my web development environment. My spouse has a MacBook for casual use, and I occasionally steal it back when I need to work off-site or when travelling. The question is how to best synchronize the two so I can switch between them more readily. I've solved a few obvious things by using online services:

    - Email is hosted on IMAP.
    - Working files are in Dropbox.
    - Source code is managed in git.

    However, the following are things I always miss when jumping on the laptop:

    - Installed applications (current versions)
    - Installed libraries & utilities (/usr/local)
    - Apache VirtualHosts & other configurations (/etc)
    - Disk image files for VMs

    My current method is to connect the MacBook via FireWire target mode and rsync the /Users/me home directory, and then cherry-pick the other items I need from Applications, /etc, and /usr/local. The problem with this method is that it can be very time-consuming due to things like my virtual machine image files and cached email. How can I make this faster and easier? Can you recommend a solution for configuration management (so I can repeatably install & configure the same software on both), or synchronization (so I can bring the MacBook up to date nightly, over our home network)?
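
    For the nightly-sync half, a one-way rsync over the LAN is a minimal sketch (the hostname and the VM exclusion are examples; --delete makes the laptop mirror the desktop):

        rsync -avz --delete --exclude 'VirtualBox VMs/' \
              /Users/me/ me@macbook.local:/Users/me/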

  • Keyboard intermittently stops working, even after reinstalling Windows 7; possibly a Chrome issue?

    - by neverskipbreakfast
    My keyboard intermittently stops working. Sometimes a couple of keys will work, but usually none. Sometimes if I mash a couple of the Ctrl/Alt/Windows-type keys randomly for a bit, the keyboard will let me type one more letter before stopping again. Sometimes the keys will open a program menu, but usually not. I have even completely wiped my machine and reinstalled Windows 7; the problem continues. Specs: Intel iMac (early 2006, 2.0 GHz, 2 GB RAM, 240 GB HD) running ONLY Windows 7 Professional, 32-bit (NOT through Boot Camp) and using a USB keyboard (Saitek Eclipse II).

    - Unplugging & reconnecting the keyboard does NOT fix it.
    - Connecting a different keyboard does NOT fix it. That one won't work, either.
    - Drivers are up to date. Removing and reinstalling drivers does NOT fix it.
    - Restarting the computer does NOT fix it. In fact, when the Windows logon screen appears, the keyboard won't work, and neither will the icon to pull up the on-screen keyboard. Otherwise my mouse can click around just fine. I can only log onto a non-password-protected account.
    - Generally, logging in as a different Windows user fixes it. I can then log back on to my main user account and continue work for a few hours until it happens again.
    - Clearing my Chrome browsing data stopped the problem from recurring for a week or so.
    - I use Avira free antivirus software, and repeated scans turn up nothing fishy.
    - I have already REINSTALLED Windows 7 (not just a restore). The problem returned after 2 days of use.

    The only thing I suspect is something in Google Chrome - I used my Google account to reload all my previous Chrome extensions, saved data, etc. (Chrome extensions installed: AdBlock, Better Google Tasks, DropBox, FB Photo Zoom, Google Mail Checker, StayFocusd.) Any ideas? Any at all?

  • Need help trying to diagnose Symmetrix SAN performance issues

    - by arcain
    I am helping to benchmark hardware for a new SQL Server instance, and the volume presented to the OS for the data files is carved from a set of spindles on a Symmetrix SAN. The server has yet to have SQL Server installed, so the only activity on the box is our benchmarking. Now, our storage engineers say that this volume and its resources are dedicated to our new server (I don't have access to see the actual SAN config); however, the performance benchmarks are troubling. For example, the numbers look good until suddenly, and randomly, we see wait times of 100 seconds in our I/O benchmarking tool, and disk queue lengths of 255 in perfmon. This SAN has an 8 GB cache, plus there are other applications besides ours that use the SAN. I'm wondering if (even though the spindles for our volumes should be dedicated to us) the cache may be getting hammered during the performance testing, or perhaps the spindles our volumes are on aren't really dedicated to us. We're not getting much traction from our storage engineers in helping us track down the problem, so if anybody has experience with diagnosing a problem like this and would like to share insights and troubleshooting methodologies, I'd appreciate it.
