Search Results

Search found 43978 results on 1760 pages for 'select case'.

  • ZFS/Btrfs/LVM2-like storage with advanced features on Linux?

    - by Easter Sunshine
    I have 3 identical internal 7200 RPM SATA hard disk drives on a Linux machine. I'm looking for a storage set-up that will give me all of this:

    - Different data sets (filesystems or subtrees) can have different RAID levels, so I can choose performance, space overhead, and risk trade-offs differently for different data sets while keeping a small number of physical disks (very important data can be 3xRAID1, important data can be 3xRAID5, unimportant reproducible data can be 3xRAID0).
    - If each data set has an explicit size or size limit, the ability to grow and shrink that limit (offline if need be).
    - Avoid out-of-kernel modules.
    - R/W or read-only COW snapshots. If they are block-level snapshots, the filesystem should be synced and quiesced during a snapshot.
    - Ability to add physical disks and then grow/redistribute RAID1, RAID5, and RAID0 volumes to take advantage of the new spindle and make sure no spindle is hotter than the rest (e.g., in NetApp, growing a RAID-DP raid group by a few disks will not balance the I/O across them without an explicit redistribution).

    Not required but nice-to-have:

    - Transparent compression, per-file or per-subtree. Even better if, like NetApp, it analyzes the data first for compressibility and only compresses compressible data.
    - Deduplication that doesn't have huge performance penalties or require obscene amounts of memory (NetApp does scheduled deduplication on weekends, which is good).
    - Resistance to silent data corruption like ZFS (this is not required because I have never seen ZFS report any data corruption on these specific disks).
    - Storage tiering, either automatic (based on caching rules) or user-defined rules (yes, I have all-identical disks now, but this would let me add a read/write SSD cache in the future). If it's user-defined rules, they should be able to promote to SSD at a file level, not a block level.
    - Space-efficient packing of small files.

    I tried ZFS on Linux, but the limitations were:

    - Upgrading is additional work because the package is in an external repository and is tied to specific kernel versions; it is not integrated with the package manager.
    - Write IOPS do not scale with the number of devices in a raidz vdev.
    - Cannot add disks to raidz vdevs.
    - Cannot have select data on RAID0 to reduce overhead and improve performance without additional physical disks or giving ZFS a single partition per disk.

    ext4 on LVM2 looks like an option, except I can't tell whether I can shrink, extend, and redistribute RAID-type logical volumes onto new spindles (of course, I can experiment with LVM on a bunch of files). As far as I can tell, it doesn't have any of the nice-to-haves, so I was wondering if there is something better out there. I did look at LVM dangers and caveats, but then again, no system is perfect.
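    For the ext4-on-LVM2 branch, the per-data-set RAID levels at least are well supported; a sketch with invented names and sizes (shrinking and restriping RAID5 LVs is the part to verify against your LVM version):

        # one volume group across all three disks
        pvcreate /dev/sda1 /dev/sdb1 /dev/sdc1
        vgcreate vg0 /dev/sda1 /dev/sdb1 /dev/sdc1

        # per-data-set RAID levels on the same spindles
        lvcreate --type raid1 -m 2 -L 50G  -n critical  vg0  # 3-way mirror
        lvcreate --type raid5 -i 2 -L 200G -n important vg0  # RAID5 over 3 disks
        lvcreate -i 3 -L 100G -n scratch vg0                 # plain stripe (RAID0)

        # growing a volume and the ext4 filesystem inside it later
        lvextend -L +20G vg0/important
        resize2fs /dev/vg0/important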

  • Windows 7 x64 "upgrade" repair fails

    - by Polynomial
    I've been running into issues with Windows Update which I can't seem to fix. The hotfixes don't work, nor does the Windows Update Readiness Tool or the manual SP1 upgrade. I get various esoteric errors which nobody seems to have a fix for. It looks like some of the update cache is corrupt and digital signatures seem to be broken on some packages / Windows Update components. Long story short, I have discovered the only option is to do a repair operation on the OS, to repair everything. It's so corrupt that only a complete replacement will fix it.

    According to various sources (including MSKB), one can perform a repair by running an in-place upgrade. I've got the Windows 7 Ultimate retail disc, which I've inserted into my machine. I ran setup.exe and went through in the following order:

    1. Install now
    2. Go online to get the latest updates (I've also tried not getting updates)
    3. Wait for updates to be downloaded
    4. Select Windows 7 Ultimate (x64 architecture) and click next
    5. Accept the T&Cs, click next
    6. Click Upgrade

    At this point it spends a minute on the "checking compatibility" screen, after which I get the following error:

        The following issues are preventing Windows from upgrading. Cancel the upgrade, complete each task, and then restart the upgrade to continue.
        You can't upgrade 64-bit Windows to a 32-bit version of Windows. To upgrade, obtain a 64-bit version of the installation disc, or go online to see how to install Windows 7 and keep your files and settings.
        32-bit Windows cannot be upgraded to a 64-bit version of Windows. To upgrade, obtain a 32-bit version of the Windows installation disc.

    It also mentions a warning about potential conflicts with a storage driver and VS2010, but that doesn't seem to be the blocking issue. My currently installed version of Windows is Ultimate 64-bit (absolutely sure of this) and the disc is definitely an x86 / x64 combined Ultimate retail disc. There seem to be a few people who have run into this (e.g. this question), but I've not seen any answers. I've checked the event viewer, but can't spot anything in there that's related. Any idea how I can get this working?

    P.S: Just to pre-empt the inevitable "are you suuuuuuuuuuuuure it's x64 Ultimate?" questions:
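    For anyone who wants to document the installed architecture before the inevitable questions, either of these reports it from a command prompt (a quick sketch):

        wmic os get osarchitecture
        systeminfo | findstr /C:"System Type"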

  • Block Google requests to 16k using pf firewall

    - by atmosx
    I'd like to block access to Google search using PF after a threshold of 17500 requests (connections established) in 24h, from a host running FreeBSD 9. What I came up with, after reading pf-faq, is this rule:

        pass out on $net proto tcp from any to 'www.google.com' port www flags S/SA keep state (max-src-conn 200, max-src-conn-rate 17500/86400)

    NOTE: 86400 is 24h in seconds. The rule should work, but PF is smart enough to know that www.google.com resolves to 5 different IPs. So my pfctl -sr output gives me this:

        pass out on vte0 inet proto tcp from any to 173.194.44.81 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
        pass out on vte0 inet proto tcp from any to 173.194.44.82 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
        pass out on vte0 inet proto tcp from any to 173.194.44.83 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
        pass out on vte0 inet proto tcp from any to 173.194.44.80 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
        pass out on vte0 inet proto tcp from any to 173.194.44.84 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)

    PF creates 5 different rules, one for each IP that Google resolves to. However, I have the sense - without being 100% sure, since I haven't had the chance to test it - that the limit of 17500/86400 applies to each IP separately. If that's the case - please confirm - then it's not what I want. In pf-faq there's another option called source-track global:

        source-track
            This option enables the tracking of number of states created per source IP address. This option has two formats:
            + source-track rule - The maximum number of states created by this rule is limited by the rule's max-src-nodes and max-src-states options. Only state entries created by this particular rule count toward the rule's limits.
            + source-track global - The number of states created by all rules that use this option is limited. Each rule can specify different max-src-nodes and max-src-states options, however state entries created by any participating rule count towards each individual rule's limits. The total number of source IP addresses tracked globally can be controlled via the src-nodes runtime option.

    I tried to apply source-track global in the above rule without success. How can I use this option in order to achieve my goal? Any thoughts or comments are more than welcome, since I'm an amateur and don't fully understand PF yet. Thanks
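    A sketch of how the global variant might be written (untested; the table name is invented). Collapsing the resolved addresses into one table also keeps it to a single rule, and with source-track global the state counts from every rule carrying the option share one tracking pool:

        table <google> { www.google.com }
        pass out on $net proto tcp from any to <google> port www \
            flags S/SA keep state \
            (source-track global, max-src-conn 200, max-src-conn-rate 17500/86400)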

  • Windows 2008 R2 CA and auto-enrollment: how to get rid of >100,000 issued certificates?

    - by HopelessN00b
    The basic problem I'm having is that I have 100,000 useless machine certificates cluttering up my CA, and I'd like to delete them without deleting all certs, or time-jumping the server ahead and invalidating some of the useful certs on there.

    This came about as a result of accepting a couple of defaults with our Enterprise Root CA (2008 R2) and using a GPO to auto-enroll client machines for certificates to allow 802.1x authentication to our corporate wireless network. Turns out that the default Computer (Machine) Certificate Template will happily allow machines to re-enroll instead of directing them to use the certificate they already have. This is creating a number of problems for the guy (me) who was hoping to use the Certificate Authority as more than a log of every time a workstation's been rebooted. (The scroll bar on the side is lying; if you drag it to the bottom, the screen pauses and loads the next few dozen certs.)

    Does anyone know how to DELETE 100,000 or so time-valid, existing certificates from a Windows Server 2008 R2 CA? When I go to delete a certificate now, I get an error that it cannot be deleted because it's still valid. So, ideally, I need some way to temporarily bypass that error, as Mark Henderson's provided a way to delete the certificates with a script once that hurdle is cleared. (Revoking them is not an option, as that just moves them to Revoked Certificates, which we need to be able to view, and they can't be deleted from the revoked "folder" either.)

    Update: I tried the site @MarkHenderson linked, which is promising and offers much better certificate manageability, but it still doesn't quite get there. The rub in my case seems to be that the certificates are still "time-valid" (not yet expired), so the CA doesn't want to let them be deleted from existence, and this applies to revoked certs as well, so revoking them all and then deleting them won't work either. I've also found this TechNet blog with my Google-Fu, but unfortunately they only had to delete a very large number of certificate requests, not actual certificates.

    Finally, for now, time-jumping the CA forward so the certificates I want to get rid of expire, and therefore can be deleted with the tools at the site Mark linked, is not a great option, as it would expire a number of valid certificates we use that have to be manually issued. So it's a better option than rebuilding the CA, but not a great one.
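    For reference, certutil can operate on the CA database rows directly; a hedged sketch (the RequestId is hypothetical, and the date form of -deleterow is documented against expired rows, so test the behaviour in a lab first):

        :: list issued certs (Disposition 20 = issued) with just the useful columns
        certutil -view -restrict "Disposition=20" -out "RequestId,RequesterName,NotAfter"

        :: delete one row by its RequestId (hypothetical id)
        certutil -deleterow 12345

        :: delete rows for certificates that expired before the given date
        certutil -deleterow 8/1/2012 Cert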

  • Windows Server 2008 - one MAC Address, assign multiple external IP's to VirtualBoxes running as guests on host

    - by Sise
    Couldn't find any help @ google or here.

    The scenario: Windows Server 2008 Std x64 on an i7-975, 12 GB RAM. The server is running in a data centre. One hardware NIC - Realtek PCIe GBE - one MAC address. The data centre provides us 4 static external IPs. The first is assigned to the host by default, of course. I have ordered all 4 IPs; the data centre can assign the available IPs to the physical MAC address of the given NIC only. This means one NIC, one MAC address, 4 IPs. Everything works fine so far.

    Now, what I would like to have: VirtualBox installed with 1-3 guests running, each getting its own external IP assigned. Each of them should be a standalone Win Server 2008. It looks like the easiest way would be to put the guests into a virtual subnet and route all data coming to the 2nd through 4th external IPs through to the guests using their subnet IPs. I have been through the VirtualBox User Manual regarding networking.

    What's not working:

    - I can't use bridged networking on its own, because the IPs are assigned to the one MAC address only.
    - I can't use NAT networking because it does not allow access from outside or from the host to the guest, and I do not want to use port forwarding.
    - Host-only networking by itself would not allow internet access; by sharing the default internet connection of the host, internet access is granted from the guest to the outside, but not from outside or the host to the guest.
    - Internal networking is not really an option here.

    What I have tried is to create an additional MS Loopback adapter for a routed subnet where the VBox guests live; the idea was to NAT the internet connection to the loopback 'subnet'. But I can't ping the gateway from the guests. Using the route command in the command shell or RRAS (static route, NAT) I didn't get there either. Solutions like the following work for the one way, but not for the way back:

        For your situation, it might be best to use the Host-Only adapter for ICS. Go to the preferences of VB itself and select network. There you can change the configuration for the interface. Set the IP address to 192.168.0.1, netmask 255.255.255.0. Disable the DHCP server if it isn't already and that's it. Now the Guest should get an IP from Windows itself and be able to get onto the internet, while you can also access the Host.

    Slowly I'm getting pretty stuck with this topic. There is a possibility I've just overlooked something, or just didn't get it by trying, especially using RRAS, but it's hard to find useful howtos or anything on the web. Thanks in advance! Best regards, Simon
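    For the inbound direction, one sketch worth trying (addresses and adapter names are placeholders, untested): bind the extra public IPs to the host NIC so the host answers ARP for them, give the guests a host-only network, and let RRAS NAT map each public address to one guest:

        :: bind the additional public IPs to the physical NIC
        netsh interface ipv4 add address "Local Area Connection" 203.0.113.2 255.255.255.248
        netsh interface ipv4 add address "Local Area Connection" 203.0.113.3 255.255.255.248

        :: create the host-only network the guests will live on
        VBoxManage hostonlyif create
        VBoxManage hostonlyif ipconfig "VirtualBox Host-Only Ethernet Adapter" --ip 192.168.56.1

    The per-guest mapping itself (public 203.0.113.x to private 192.168.56.x) would then be configured in the RRAS console as address reservations on the NAT interface.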

  • In search of a network file system with extended caching to speed up file access

    - by Brecht Machiels
    I'm running a small home server that stores my documents. The disks in this server are in a RAID 1 configuration (using Linux md) and it's also periodically backed up to an external hard drive to make sure I don't lose them. However, I'm always accessing the files from other computers on the home network using an SMB share, and this results in a considerable speed penalty (especially when connected over WLAN). This is quite annoying when editing large files, such as digital camera RAWs, for example.

    I've been looking for a solution to this problem. It would have to offer some kind of local caching to speed up the file access. The client would preferably not keep a copy of all data on the server, as it consists of a very large collection of photographs, most of which I will not access frequently. Instead, it should only cache the accessed files and sync the changes back in the background. Ideally, it would also do some smart read-ahead (cache the files that are in the same directory as the currently opened file, for example), but I suppose that's asking a bit much. Synchronization should be automatic (on file change). Conflicting file changes (at the same time on different clients) are unlikely to happen in my use case, but I would prefer that they be handled properly (notification to the user).

    I've come across the following options so far:

    - Something similar to Dropbox. iFolder seems to be the only thing that comes close, but its reputation (stability) and requirements put me off.
    - A distributed file system such as OpenAFS. I'm not sure this will speed up file access, and it is probably overkill for what I need.
    - Maybe NFS or even Samba offer these possibilities. I read a bit about Windows' Offline Files, but its operation seems limited (at least on Windows XP).

    As this is just for personal use, I'm not willing to spend a lot of money. A free solution would be preferred. Also, the server needs to run on Linux, and I need a client for at least Windows.
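    On the NFS branch, Linux does have a client-side persistent read cache (FS-Cache); a sketch for a Debian/Ubuntu-style client (export path invented, and note it caches reads only, so it covers repeated RAW browsing but not write-behind):

        # install and enable the cache daemon
        sudo apt-get install cachefilesd
        echo 'RUN=yes' | sudo tee -a /etc/default/cachefilesd
        sudo /etc/init.d/cachefilesd start

        # mount with the fsc option so reads are cached on local disk
        sudo mount -t nfs -o fsc server:/export/photos /mnt/photos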

  • Is dual-booting an OS more or less secure than running a virtual machine?

    - by Mark
    I run two operating systems on two separate disk partitions on the same physical machine (a modern MacBook Pro). In order to isolate them from each other, I've taken the following steps:

    - Configured /etc/fstab with ro,noauto (read-only, no auto-mount)
    - Fully encrypted each partition with a separate encryption key (committed to memory)

    Let's assume that a virus infects my first partition unbeknownst to me. I log out of the first partition (which encrypts the volume), and then turn off the machine to clear the RAM. I then un-encrypt and boot into the second partition. Can I be reasonably confident that the virus has not / cannot infect both partitions, or am I playing with fire here?

    I realize that MBPs don't ship with a TPM, so a boot-loader infection going unnoticed is still a theoretical possibility. However, this risk seems about equal to the risk of the VMware/VirtualBox hypervisor being exploited when running a guest OS, especially since the MBP line uses UEFI instead of BIOS. This leads to my question: is the dual-partitioning approach outlined above more or less secure than using a virtual machine for isolation of services? Would that change if my computer had a TPM installed?

    Background: Note that I am of course taking all the usual additional precautions, such as checking for OS software updates daily, not logging in as an Admin user unless absolutely necessary, running real-time antivirus programs on both partitions, running a host-based firewall, monitoring outgoing network connections, etc. My question is really a public check to see if I'm overlooking anything here, and to try to figure out if my dual-boot scheme actually is more secure than the virtual machine route. Most importantly, I'm just looking to learn more about security issues.

    EDIT #1: As pointed out in the comments, the scenario is a bit on the paranoid side for my particular use-case. But think about people who may be in corporate or government settings and are considering using a virtual machine to run services or applications that are considered "high risk". Are they better off using a VM or a dual-boot scenario as I outlined? An answer that effectively weighs any pros/cons to that trade-off is what I'm really looking for in an answer to this post.

    EDIT #2: This question was partially fueled by debate about whether a virtual machine actually protects a host OS at all. Personally, I think it does, but consider this quote from Theo de Raadt on the OpenBSD mailing list:

        x86 virtualization is about basically placing another nearly full kernel, full of new bugs, on top of a nasty x86 architecture which barely has correct page protection. Then running your operating system on the other side of this brand new pile of shit. You are absolutely deluded, if not stupid, if you think that a worldwide collection of software engineers who can't write operating systems or applications without security holes, can then turn around and suddenly write virtualization layers without security holes.
        - http://kerneltrap.org/OpenBSD/Virtualization_Security

    By quoting Theo's argument, I'm not endorsing it. I'm simply pointing out that there are multiple perspectives here, so I'm trying to find out more about the issue.
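    As an aside, the fstab hardening mentioned in the first bullet looks something like this on OS X (UUID and mount point invented; the file is edited with vifs):

        # /etc/fstab - keep the other OS's partition unmounted, and read-only if mounted
        UUID=00000000-0000-0000-0000-000000000000  /Volumes/other  hfs  ro,noauto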

  • getting Internet connection sharing working in a slightly more complicated configuration

    - by tirichitirca t
    I have the following configuration:

    - Computer A - Mac OS X 10.8.4, wireless & wired adapters
    - Computer B - Windows 7 (64-bit), wireless & wired adapters; has internet connection via the wired adapter (ethernet)
    - A D-Link wired/wireless router

    Problem to solve: connect from computer A to the internet through the wired connection of computer B.

    I tried the following: I set up a local network between A and B using the D-Link router. The configuration is this:

    - D-Link router - 192.168.0.1
    - A - wired connection to the D-Link router, static 192.168.0.101 (I could have used the wireless, but I preferred the wired connection)
    - B - wireless connection to the D-Link router, DHCP 192.168.0.102 (but I made sure it always gets the same address)
    - B - wired connection to the internet using some address that begins with 10.x.y.z

    In this configuration A can see B. I enabled ICS on the wired adapter of B. I set the gateway of A to point to B, and the DNS servers to point to the DNS servers specified for the 10.x.y.z address. It doesn't work; A gets only as far as B. It can ping the 10.x.y.z address of B, though.

    I then found this article: http://terrybritton.com/windows-internet-connection-sharing-ics-not-working-with-linux-bridging-is-the-solution-916/. Terry suggests that a bridge should be defined on B between the two connections. I tried that, but basically computer B is screwed as soon as I create the bridge - it can't connect to the internet anymore. It is as if the network bridge thinks the traffic to the internet should go from the wired connection to the wireless and not the other way around.

    The other thing that puzzles me is the router itself. In general the router needs an internet address. In a normal configuration it is the router that gets the IP address, and the internet traffic goes through the router. In my case I am not interested in that.

    So, any suggestions to get this working? I wouldn't shy away from using commercial software, but I would think Windows 7 should allow me to do it. Thanks
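    One detail that often breaks this exact setup: Windows 7 ICS re-addresses the adapter you share *to* as 192.168.137.1 and runs its own DHCP there, so static 192.168.0.x settings on A stop matching. A sketch of Mac-side settings that follow the ICS addressing (service name may differ on your machine):

        # point the Mac's wired interface at the ICS side of B
        sudo networksetup -setmanual "Ethernet" 192.168.137.10 255.255.255.0 192.168.137.1
        sudo networksetup -setdnsservers "Ethernet" 192.168.137.1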

  • How can Windows XP/7 users cleanly connect to Mac OS X Server 10.9.4 Mavericks with Active Directory integration?

    - by JakeGould
    I’m a Linux/Unix systems admin who also manages a Macintosh server infrastructure, and there is a lone Mac Mini in the mix running 10.9.4 that I would like Windows XP & Windows 7 users to connect to with little or no hassle. The problem? Windows users can’t seem to even get to the point of a password prompt, let alone connect. Mind you, this server replaced a Mac OS X 10.6.8 server that had issues, but never had issues with Windows users connecting.

    The gist of this post is: the tons of different messages out there about Mac OS X 10.9.4 Samba support are mind-numbingly confusing. Can anyone share some solid specifics here?

    I’ve read pieces like this one here that suggest turning off file sharing & then adding a share with AFP/SMB enabled would work. But the suggestion seems to apply to 10.8, and from what I know a lot has changed in Samba support in 10.9, let alone in the iterations up to 10.9.4. Then I found a great tutorial that explains things step-by-step, which seems like it should work, but the problem is the example given applies to a local user created on the Mac, when I would like users in an Active Directory group - which the Mac is bound to - to access the Mac Mini shares. There are also tons of great tips on MacWindows.com, but nothing seems solid for the issue I am facing.

    So from what I am reading, these are my options:

    - Local User Versus Active Directory: set up a common local user on the Mac OS X 10.9.4 server to be used for Samba sharing, since Active Directory won’t work. Is this really the case? Because loss of AD integration is a major pain.
    - Do Extended File Attributes Get Retained from Windows Users: if this were to work, how do extended attributes come into play? Loss of metadata & related info is not an option.
    - How Fragile is Any of This to Updates: how does any of this shake out with Mac OS X updates as well as Windows updates?
    - Installing Official, Open Source Samba: would upgrading the Samba install on the server to the official open-source Samba via a package like SMBUp or via the Homebrew method described here help, or make the issue worse?

    I fully understand there have historically been issues in mixed environments, but nowadays Windows users connecting to a Mac seem to have a truly hellish road ahead of them. Unless I am missing something?
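    One low-tech way to narrow it down from the Windows side is to map the share explicitly with AD credentials and see which step fails (server, share, and account names are placeholders):

        :: map the share with explicit AD credentials (prompts for the password)
        net use Z: \\macmini.example.com\Users /user:EXAMPLE\jsmith *

        :: if it fails immediately, check that the SMB port is reachable at all
        telnet macmini.example.com 445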

  • So I want to separate my Program Files from the hard disk with the other system files. What is the b

    - by grg-n-sox
    So I am running Windows 7 as my only OS. I have two hard drives in my computer. The first one is a 74GB Western Digital 10K RPM Raptor. The second one is a 1TB Seagate Barracuda (couldn't remember if it was a 7200.12 or some other decimal after the 7200). The OS is installed on the Raptor and I am just using the Barracuda for storage.

    With this setup, in case you couldn't guess already, the Raptor fills up quickly and I am constantly having to maintain file locations. And although it is nice to have the quicker boot time and program loading, the time spent maintaining the drive makes me waste more time overall. So I am looking for a way to keep it clear while still keeping up system loading speeds. A performance hit on games and such is easily acceptable, and as long as I can guarantee 5GB of space on the Raptor, I can always just temporarily move a disc image there.

    Since I have games installed like Borderlands and Mass Effect, as well as large files such as Linux distro DVD disc images in My Documents, I figure I should be moving my personal files and Program Files directories to the Barracuda. I currently have folders on the Barracuda for this, but this means routinely copying files over, and I can't really do anything with the Program Files folder that already exists. The best I can do is remember to point the install directory of any program installation at the alternative location, which I can't seem to ever get to work right with Steam.

    With that in mind, is there a way, not too drastic, to let me just change some folders and system settings once and have everything work fine afterwards for my setup? I have considered just reinstalling Windows 7 on the Barracuda, but that would defeat the purpose of the Raptor except for running disc images off of. I have also heard a bit about using symlinks to fix this, but I have also heard that symlinks in Windows are not necessarily the same and not as well supported. An example a friend mentioned was that if you have a symlink in Windows on a small hard drive pointing to a large hard drive, and the contents the symlink points to are larger than the small hard drive's capacity, then Windows will think the smaller hard drive is full. So is there a fix/workaround that will let me use symlinks across hard drives without the issues, or is there a better solution I am not being told about, not mentioning, or not thinking of?
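    On the symlink question: NTFS junctions do span local volumes, and the usual move-then-link recipe is a sketch like the following (paths are examples; the free-space quirk described above shows up in some tools, but the files really live on D:):

        :: move the folder to the big drive, then junction the old path to it
        robocopy "C:\Program Files (x86)\Steam" "D:\Games\Steam" /E /COPYALL /MOVE
        mklink /J "C:\Program Files (x86)\Steam" "D:\Games\Steam"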

  • Loose component cables causing HDMI video problems

    - by jwir3
    I'm not sure this is the correct forum, but I'll ask anyway. I have an A/V setup at home with something like the following five components (actually a few more, like a CD player, but they don't really relate to this question):

    - Older Pioneer receiver
    - Digital set top box
    - Sony BluRay player
    - Samsung plasma TV
    - Speakers

    The reason for the receiver is so that all the sound can go through the external speakers, rather than some going to the TV speakers and some to the external speakers. They are connected as follows: the digital set top box connects via component video directly to the Samsung TV on Component 2 (audio goes to the Pioneer receiver); the Sony BluRay player is connected via HDMI 1 to the TV, but its audio also goes to the receiver.

    Now, the problem I'm having is that when I have the digital set top box connected, there are times when the Netflix or Hulu streams I watch through the Sony BluRay player (it's connected to a router for internet access) will lose video. What I mean by this is that the sound of the episode will keep playing, but the screen will go black. If I jiggle the component cables, it will often come back. If I disconnect the component cables, it will always come back. I've noticed that one of the connections (the red component cable) doesn't like to sit very well in the component socket in the back of the digital set top box. It seems like there is a bad connection here, but it doesn't seem like this should be affecting the HDMI input at all.

    What I've noticed, though, is that when I disconnect the digital set top box completely (i.e. remove the component cable from the back of the TV), the problem seems to resolve itself - by disconnect I mean unplugging it, not physically removing the cable from behind the set. I thought perhaps the cables were mashing against one another and possibly jiggling each other loose. To correct this possible problem, I took the component cable completely out of the cable ties in the back of my entertainment center, and pulled the digital set top box out from the entertainment center altogether. It's now connected directly to the TV, without any other cables touching it to cause some kind of weird interference or physical pulling on the cable. Same problem. If, however, I disconnect the component cable and just leave it sitting behind the TV, the problem goes away.

    So, my question is this: what could be causing it? Is it a case of an improperly shielded component cable interfering with the HDMI input, or something that's wrong with the TV? It's an intermittent problem, so it's difficult to track down. The TV isn't that old, so it's probably still under warranty. I'm just wondering if there is something else I can do that might reduce this problem without having to haul a massive television set out of my house to get repaired/replaced.

  • Share 3G connection over WiFi-LAN network

    - by kush.impetus
    This is how I have established a network between my PC and my laptop at home (being a novice in networking, it took me a few days to achieve the feat), and it is working perfectly - I can easily share files between them.

    - Laptop: IP address 192.168.1.4, subnet mask 255.255.255.0, default gateway 192.168.1.2
    - Desktop: IP address 192.168.1.5, subnet mask 255.255.255.0, default gateway 192.168.1.2
    - ASUS RT-N10+ router: IP address 192.168.1.4, default gateway 192.168.1.2

    I have connected the desktop PC to the router using a LAN cable, and the laptop to the router over WiFi. Both PC and laptop run Windows 7, are in the same HomeGroup, and have the same username / password. Also, I have connected the Ethernet cable to LAN port 1 of the router. (Click here to view a graphical representation of the network - I can't post the image here because I don't have 10 reputation points.)

    Now, what I want is to connect to the Internet using a 3G USB modem on one device and share it over the network to the other. I tried Huawei and Micromax 3G USB modems. Both obtain a new IP address whenever I connect to the Internet (meaning they have dynamic IPs); both have subnet mask 255.255.255.255 and default gateway 0.0.0.0. In that case, I cannot directly share Internet from the modem. Preferred DNS is blank for now on both laptop and PC.

    What I am planning to do is to connect to the Internet on the laptop using the 3G modem and share the Internet connection over the laptop's Wi-Fi (as a hotspot) using Connectify, which I have done already. That, I suppose, will broadcast a static IP to connect to. What I can't figure out is what changes I should make to the network settings of the router and the PC so that the PC connects to the Internet broadcast by Connectify. Is that possible in the first place? Please note that I am trying to implement this network without spending anything extra (on a USB WiFi adapter for the PC, of course, which would have made life a lot easier for me). Thanks in advance
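    Since the desktop has no Wi-Fi adapter, it can never join a Connectify hotspot directly; a simpler sketch is to enable ICS on the laptop (sharing the 3G modem to its wireless adapter) and repoint the desktop at the laptop (untested; note Windows 7 ICS may re-address the shared adapter to 192.168.137.1, which would change the numbers below, and the DNS server here is just an example):

        :: on the desktop, repoint the default gateway and DNS at the laptop
        netsh interface ipv4 set address "Local Area Connection" static 192.168.1.5 255.255.255.0 192.168.1.4
        netsh interface ip set dns "Local Area Connection" static 8.8.8.8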

  • Using smartctl to get vendor specific Attributes from ssd drive behind a SmartArray P410 controller

    - by Lairsdragon
    Hi! Recently I have deployed some HP servers with SSDs behind a SmartArray P410 controller. While not officially supported by HP, the servers work well so far. Now I'd like to get wear-level info, error statistics, etc. from the drives. While the SA P410 supports a pass-through of the SMART command to a single drive in the array, I was not able to get the interesting things from the drive's output. In this case the wear-level indicator (attribute ID 233) is of particular interest to me, but it is only present if the drive is directly attached to a SATA controller.

    smartctl on a directly connected SSD:

        # smartctl -A /dev/sda
        smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen
        Home page is http://smartmontools.sourceforge.net/

        === START OF READ SMART DATA SECTION ===
        SMART Attributes Data Structure revision number: 5
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
          3 Spin_Up_Time            0x0000 100   000   000    Old_age  Offline In_the_past 0
          4 Start_Stop_Count        0x0000 100   000   000    Old_age  Offline In_the_past 0
          5 Reallocated_Sector_Ct   0x0002 100   100   000    Old_age  Always  -           0
          9 Power_On_Hours          0x0002 100   100   000    Old_age  Always  -           8561
         12 Power_Cycle_Count       0x0002 100   100   000    Old_age  Always  -           55
        192 Power-Off_Retract_Count 0x0002 100   100   000    Old_age  Always  -           29
        232 Unknown_Attribute       0x0003 100   100   010    Pre-fail Always  -           0
        233 Unknown_Attribute       0x0002 088   088   000    Old_age  Always  -           0
        225 Load_Cycle_Count        0x0000 198   198   000    Old_age  Offline -           508509
        226 Load-in_Time            0x0002 255   000   000    Old_age  Always  In_the_past 0
        227 Torq-amp_Count          0x0002 000   000   000    Old_age  Always  FAILING_NOW 0
        228 Power-off_Retract_Count 0x0002 000   000   000    Old_age  Always  FAILING_NOW 0

    smartctl on a P410-connected SSD (the output is completely empty):

        # ./smartctl -A -d cciss,0 /dev/cciss/c1d0
        smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

    smartctl on a P410-connected HDD:

        # ./smartctl -A -d cciss,0 /dev/cciss/c0d0
        smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

        Current Drive Temperature:     27 C
        Drive Trip Temperature:        68 C
        Vendor (Seagate) cache information
          Blocks sent to initiator = 1871654030
          Blocks received from initiator = 1360012929
          Blocks read from cache and sent to initiator = 2178203797
          Number of read and write commands whose size <= segment size = 46052239
          Number of read and write commands whose size > segment size = 0
        Vendor (Seagate/Hitachi) factory information
          number of hours powered up = 3363.25
          number of minutes until next internal SMART test = 12

    Am I hunting a bug here, or is this a limitation of the P410's SMART command pass-through?
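    Incidentally, the number after -d cciss, selects the physical drive behind the controller, so all array members can be walked with a loop like this (drive count assumed from this box's layout):

        # query each physical drive behind the logical volume
        for n in 0 1 2 3; do
            echo "=== physical drive $n ==="
            smartctl -A -d cciss,$n /dev/cciss/c1d0
        done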

  • Cannot install passenger with Nginx

    - by Luc
    Hello, I have a Rack application that I want to migrate from Ruby 1.8.7 + Apache + Passenger to Ruby 1.9.1 + Nginx + Passenger. I have put together the following all-in-one install script, and it raises an error. Here is the installation script (a basic one with all the steps I need to install everything on a fresh Ubuntu 10.04 Lucid Lynx box):

        # Nginx sources
        cd /tmp
        wget http://nginx.org/download/nginx-0.7.66.tar.gz
        tar xzf nginx-0.7.66.tar.gz
        cd nginx-0.7.66

        # openssl, required for SSL/TLS
        sudo apt-get install openssl
        sudo apt-get install libssl-dev

        # Compilation stuff
        sudo apt-get install zlib1g-dev

        # Ruby interpreter 1.9.1
        sudo apt-get install ruby1.9.1 ruby1.9.1-dev rubygems1.9.1 irb1.9.1 ri1.9.1 rdoc1.9.1 build-essential nginx libopenssl-ruby1.9.1

        # Make sure default ruby uses version 1.9.1
        sudo update-alternatives --install /usr/bin/ruby ruby /usr/bin/ruby1.9.1 400 --slave /usr/share/man/man1/ruby.1.gz ruby.1.gz /usr/share/man/man1/ruby1.9.1.1.gz --slave /usr/bin/ri ri /usr/bin/ri1.9.1 --slave /usr/bin/irb irb /usr/bin/irb1.9.1 --slave /usr/bin/rdoc rdoc /usr/bin/rdoc1.9.1
        sudo update-alternatives --config ruby

        # Passenger (rake-0.8.7, fastthread-1.0.7, rack-1.1.0, passenger-2.2.14)
        sudo gem install passenger

        # Activate Passenger in nginx, select option 2 to use the nginx sources downloaded above
        cd /var/lib/gems/1.9.1/gems/passenger-2.2.14/bin
        sudo ./passenger-install-nginx-module

    And this is the error message I get:

        /var/lib/gems/1.9.1/gems/passenger-2.2.14/ext/nginx/ContentHandler.c
        gcc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Wunused-function -Wunused-variable -Wunused-value -Werror -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I /tmp/pcre-8.00 -I objs -I src/http -I src/http/modules -I src/mail \
            -o objs/addon/nginx/StaticContentHandler.o \
            /var/lib/gems/1.9.1/gems/passenger-2.2.14/ext/nginx/StaticContentHandler.c
        /var/lib/gems/1.9.1/gems/passenger-2.2.14/ext/nginx/StaticContentHandler.c: In function 'passenger_static_content_handler':
        /var/lib/gems/1.9.1/gems/passenger-2.2.14/ext/nginx/StaticContentHandler.c:71: error: 'ngx_http_request_t' has no member named 'zero_in_uri'
        make[1]: *** [objs/addon/nginx/StaticContentHandler.o] Error 1
        make[1]: Leaving directory `/tmp/nginx-0.7.66'
        make: *** [build] Error 2
        --------------------------------------------
        It looks like something went wrong
        Please read our Users guide for troubleshooting tips:
        /var/lib/gems/1.9.1/gems/passenger-2.2.14/doc/Users guide Nginx.html

    I do not understand the reason for this error. Is this a compatibility problem? Hope you have some clues :) Thanks a lot, Luc
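    The error itself points to a version mismatch: the zero_in_uri field was removed from nginx's request struct around the 0.7.65/0.8.x releases, while Passenger 2.2.14's nginx glue still references it. A sketch of the likely fix, assuming a Passenger newer than 2.2.14 is available in the gem repository:

        # a newer Passenger tracks the nginx API change
        sudo gem install passenger
        cd /var/lib/gems/1.9.1/gems/passenger-*/bin
        sudo ./passenger-install-nginx-module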

  • Using Amazon S3 for multiple remote data site uploads, securely

    - by Aitch
    I've been playing about with Amazon S3 a little for the first time, and I like what I see for various reasons relating to my potential use case.

    We have multiple (online) remote server boxes harvesting sensor data that is regularly uploaded every hour or so (rsync'ed) to a VPS server. The number of remote server boxes is growing regularly and is forecast to keep growing (into the hundreds). The servers are geographically dispersed. They are also automatically built, therefore generic with standard tools and not bespoke per location. The data is many hundreds of files per day.

    I want to avoid a situation where I need to provision more VPS storage, or additional servers, every time we hit the VPS capacity limit, after every N server deployments, whatever N might be. The remote servers can never be considered fully secure, due to us not knowing what might happen to them when we are not looking. Our current solution is a bit naive and simply restricts inbound rsync, only over ssh, to known MAC-address-named directories and a known public key. There are plenty of holes to pick in this, I know.

    Let's say I write or use a script like s3cmd/s3sync to push up the files:

    - Would I need to manage hundreds of access keys and have each server customized to include one (doable, but key management becomes nightmarish)?
    - Could I restrict inbound connections somehow (e.g. by MAC address), or just allow write-only access to any client running the script? (I could deal with a flood of data if someone got into a system.)
    - Having a bucket per remote machine does not seem feasible due to bucket limits?
    - I don't think I want to use a single common key, as if one machine were breached, a malicious hacker could potentially get access to the filestore key and start deleting for all clients, correct?

    I hope my inexperience has not blinded me to some other solution that might be suggested! I've read lots of examples of people using S3 for backup, but can't really find anything about this sort of data collection, unless my Google terminology is wrong... I've written more than I should here, so perhaps it can be summarised thus: in a perfect world I just want to have one of our techs install a new remote server into a location and have it automagically start sending files home with little or no intervention, while minimising risk. Pipedream or feasible? TIA, Aitch
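    On the key-management question, AWS IAM is aimed at exactly this: one user and key pair per device, allowed to write only under its own prefix. A sketch of such a policy document (bucket and prefix names invented):

        {
          "Version": "2012-10-17",
          "Statement": [{
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::sensor-data/device-0042/*"
          }]
        }

    Creating the IAM user, attaching the policy, and dropping the generated key onto the box could then be part of the automated build; a leaked key can write into its own prefix but cannot list, read, or delete anything else.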

  • VPN - force a selective range of ip to run on VPN (linux)

    - by Francesco
    Preface: I know there are similar questions here and there; however, I'm kind of a newbie on net stuff, so I need an answer for this specific scenario, hoping it can help others too, as it is a common problem. Let's say I cannot do anything on the local switch to change the local IP range, and I don't want to use any complicated trick such as a VM to hide the local IP range - I want to use net tools to solve the issue.

    Scenario: my local net assigns me an IP of the class 192.168.1.xxx (e.g. 192.168.1.116) and my VPN (vpnc) assigns me an IP of the same class 192.168.1.xxx (e.g. 192.168.1.247). Obviously I need the VPN to access local addresses (e.g. 192.168.1.100), but when I open any address of the class 192.168.1.xxx, the route points to my local net and not to the VPN one.

    I'm on Linux and I'd like a GUI solution (Network Manager); in case that is not possible, let's play with the route command.

    Here is my routing table once connected to the VPN (route -n):

        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        0.0.0.0         0.0.0.0         0.0.0.0         U     0      0        0 ppp0
        169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 wlan0
        182.71.21.106   192.168.1.1     255.255.255.255 UGH   0      0        0 wlan0
        182.71.21.106   192.168.1.1     255.255.255.255 UGH   0      0        0 wlan0
        192.168.1.0     0.0.0.0         255.255.255.0   U     9      0        0 wlan0
        192.168.1.246   0.0.0.0         255.255.255.255 UH    0      0        0 ppp0

    Here is my ifconfig:

        ppp0      Link encap:Point-to-Point Protocol
                  inet addr:192.168.1.247  P-t-P:192.168.1.246  Mask:255.255.255.255
                  UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1400  Metric:1
                  RX packets:3415 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:2525 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:3
                  RX bytes:3682328 (3.6 MB)  TX bytes:402315 (402.3 KB)

        wlan0     Link encap:Ethernet  HWaddr 4c:eb:42:06:a3:a6
                  inet addr:192.168.1.116  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::4eeb:42ff:fe06:a3a6/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:72598 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:42300 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:76000532 (76.0 MB)  TX bytes:13919400 (13.9 MB)

    The question: basically, I would like to add a rule to force this particular address (192.168.1.100) onto the VPN and not onto my local net.
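    The rule being asked about is a single host route pinned to the tunnel interface (interface names taken from the output above):

        # send traffic for just this one host through the VPN
        sudo route add -host 192.168.1.100 dev ppp0

        # equivalently, with iproute2
        sudo ip route add 192.168.1.100/32 dev ppp0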

  • Apache/Jboss Issue - is this connection timeout?

    - by user115391
    We have an application with the following architecture: one load balancer (Apache) which forwards to two app servers (JBoss). The site is working fine and I can access it fine. But sometimes, randomly, the homepage takes a while (30-40 seconds) to load. I tried checking the logs but could not figure out why. I used an HTTP traffic analyzer (Fiddler) to watch the traffic, but it just says the request/response took 30 seconds or so. I checked the Apache access logs and mod_jk.log. My configurations are below.

    mod-jk.conf:

        LoadModule jk_module modules/mod_jk.so
        JkWorkersFile conf/workers.properties
        JkLogFile logs/mod_jk.log
        #JkLogLevel info
        #JkLogLevel debug
        JkLogLevel error
        # Select the log format
        JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"
        JkOptions +ForwardKeySize +ForwardURICompatUnparsed -ForwardDirectories
        JkRequestLogFormat "%w %V %T %P %{tid}P %D"
        JkMount /__application__/* loadbalancer
        JkUnMount /__application__/images/* loadbalancer
        <VirtualHost *:8080 >
            JkMountFile conf/uriworkermap.properties
        </VirtualHost>
        JkShmFile run/jk.shm
        <Location /jkstatus>
            JkMount status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

    uriworkermap.properties:

        # Simple worker configuration file
        # Mount the Servlet context to the ajp13 worker
        /=loadbalancer
        /*=loadbalancer

    workers.properties:

        worker.list=loadbalancer,status
        worker.template.port=8009
        worker.template.type=ajp13
        worker.template.lbfactor=1
        worker.template.prepost_timeout=10000
        worker.template.connect_timeout=10000
        worker.template.ping_mode=A
        worker.worker1.reference=worker.template
        worker.worker1.host=hostname1
        worker.worker2.reference=worker.template
        worker.worker2.host=hostname2
        worker.loadbalancer.type=lb
        worker.loadbalancer.balance_workers=worker1,worker2
        worker.status.type=status

    My JBoss server.xml is at $JBOSS_HOME/server/default/deploy/jbossweb.sar/server.xml.

    The access log entries are below. The slow request - look at the seconds column:

        [23/Mar/2012:12:10:38 -0400] "GET / HTTP/1.1" 200 138
        x.x.x.x - - [23/Mar/2012:12:10:49 -0400] "GET /index.jsp HTTP/1.1" 302 -
        x.x.x.x - - [23/Mar/2012:12:11:10 -0400] "GET /home.jsp HTTP/1.1" 200 936
        x.x.x.x - - [23/Mar/2012:12:11:31 -0400] "POST /login/ HTTP/1.1" 200 8895
        x.x.x.x - - [23/Mar/2012:12:11:52 -0400] "GET /login/includes/login-style.css HTTP/1.1" 304 -

    The one after the issue:

        x.x.x.x - - [23/Mar/2012:12:12:18 -0400] "GET / HTTP/1.1" 200 138
        x.x.x.x - - [23/Mar/2012:12:12:18 -0400] "GET /index.jsp HTTP/1.1" 302 -
        x.x.x.x - - [23/Mar/2012:12:12:18 -0400] "GET /home.jsp HTTP/1.1" 200 936
        x.x.x.x - - [23/Mar/2012:12:12:18 -0400] "POST /login/ HTTP/1.1" 200 8895
        x.x.x.x - - [23/Mar/2012:12:12:18 -0400] "GET /login/includes/login-style.css HTTP/1.1" 304 -

    Would it be a cache or timeout issue? Any help is appreciated. Thanks.
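    If the stalls are half-dead AJP connections, tightening mod_jk's timeouts usually makes the failover visible instead of silent; a sketch of additions to the template worker (values are illustrative; reply_timeout and socket_connect_timeout are in milliseconds, connection_pool_timeout in seconds):

        worker.template.socket_connect_timeout=5000
        worker.template.reply_timeout=60000
        worker.template.connection_pool_timeout=60
        worker.template.recovery_options=3

    A connection_pool_timeout of 60 is conventionally paired with connectionTimeout="60000" on the AJP connector in JBoss's server.xml, so both sides agree on when idle connections die.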

  • What are some fast methods for navigating to frequently used folders in Windows 7?

    - by fostandy
    (This is a follow-up to my previous question.)

    In Windows XP I used to be able to quickly navigate to frequently used folders by making use of the 'Favorites' menu item and the hotkey behaviour. Under certain conditions it could be set up so that getting to a particular folder was as easy as alt-a x (and without a file explorer window open it was as fast as win-e alt-a x). I am struggling to get anywhere near this speed in Windows 7 and would like to solicit advice from others regarding fast folder navigation, to see if I am missing any methods. My current way to navigate quickly is basically:

    1. Move hand to mouse.
    2. Move cursor to the navigation pane/pain.
    3. Scroll all the way to the top (because normally the panel is focused on whatever deep directory structure I am already in).
    4. Sift through my 50+ favorites to get the one I want, or click a link to a folder that contains further links in some sort of 'pseudo-tree' functionality.
    5. Select it.

    This is slower than my previous method by upwards of an order of magnitude. There are a couple of things I've contemplated:

    - Add expandable folders, not just direct links, to the favorites menu.
    - Add expandable folders, not just direct links, to the start menu.
    - Add links to my favorite folders in a submenu of the start menu so that they come up when I search them. They do, but this is still rather cumbersome.
    - Start using 7stacks (I cannot link the URL directly due to lack of reputation, but http://www.alastria.com/index.php?p=software-7s). This is about the closest I've gotten to a compact, customizable, easy-to-access, tree-based navigation structure.

    How do you power users quickly navigate to your favorite folders? Are there keyboard shortcuts I am missing? Can someone recommend other apps, addons, or extensions that can achieve this sort of functionality?

    The current solution (thanks to the answers below) I am going to use is a combination of AutoHotkey and 7stacks - AutoHotkey to launch 7stacks, and 7stacks with the 'menu' stack type for fast, key-enabled navigation to folders organised in a tree structure. This solves about 90% of the issue; the only remaining niggles (note that these are really minor, I am splitting hairs more than anything here) are:

    - Can't use this for existing-window navigation (i.e. with an explorer window already open, wanting to go to another directory).
    - A bit more cumbersome to add/remove entries to, compared with XP favorites.
    - A little slower than XP favorites.

    Whatever. I'm happy. Thanks guys. I think the answer is a split between John T and Kelbizzle - I've elected to give the answer to John T and +1 to Kelbizzle, as I had already mentioned 7stacks.
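    For reference, the AutoHotkey half of that combination can be as small as one line per folder (keys and paths below are examples):

        ; Win+P jumps straight to a frequently used folder
        #p::Run, explorer.exe "D:\Projects"

        ; Win+M for another
        #m::Run, explorer.exe "D:\Music"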

  • Virtual Machine with Bridged Adapter to Centos not accepting ssh from host machine

    - by javadba
    I have a bridged connection in VirtualBox from an OS X 10.8.5 host to a CentOS 5.8 guest, but I suspect this is more of a general issue than one specific to the host and precise version of Linux. Shown below is the networking info from within the guest.

    sshd is running on port 22:

        [root@oracle-linux ~]# ps -ef | grep sshd | grep -v grep
        root      3103     1  0 20:22 ?        00:00:00 /usr/sbin/sshd
        root     14994  3103  0 21:23 ?        00:00:00 sshd: root@pts/1

    Port 22 is listening:

        [root@oracle-linux ~]# netstat -an | grep 22 | grep tcp | grep LIST
        tcp        0      0 0.0.0.0:22        0.0.0.0:*        LISTEN
        tcp        0      0 127.0.0.1:2207    0.0.0.0:*        LISTEN
        tcp        0      0 127.0.0.1:2208    0.0.0.0:*        LISTEN
        tcp        0      0 :::22             :::*             LISTEN

    Here are the IP addresses, still on the guest OS:

        [root@oracle-linux ~]# ip addr
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever
        2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
            link/ether 08:00:27:b9:e5:79 brd ff:ff:ff:ff:ff:ff
            inet 10.0.15.100/24 brd 10.0.15.255 scope global eth0
            inet6 fe80::a00:27ff:feb9:e579/64 scope link
               valid_lft forever preferred_lft forever
        3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
            link/ether 08:00:27:b4:86:8a brd ff:ff:ff:ff:ff:ff
            inet 10.0.3.15/24 brd 10.0.3.255 scope global eth1
            inet6 fe80::a00:27ff:feb4:868a/64 scope link
               valid_lft forever preferred_lft forever

    I can ssh to the guest from the guest:

        [root@oracle-linux ~]# ssh 10.0.3.15
        The authenticity of host '10.0.3.15 (10.0.3.15)' can't be established.
        RSA key fingerprint is ef:08:19:72:95:4d:e5:28:af:f3:6f:54:07:84:ba:04.
        Are you sure you want to continue connecting (yes/no)? yes
        Warning: Permanently added '10.0.3.15' (RSA) to the list of known hosts.
        root@10.0.3.15's password:
        Last login: Mon Oct 21 21:24:12 2013 from 10.0.15.100

    But I can NOT ssh from the host to the guest:

        18:27:04/shared:11 $ ssh root@10.0.15.100
        ssh: connect to host 10.0.15.100 port 22: Operation timed out
        lost connection

    eth0 is the bridged adapter; adapter 2 is a NAT. BTW, I looked into other answers, and one of them mentioned running "service iptables stop" - that did not help. In case NAT was causing any issues, I shut it down and restarted networking:

        [root@oracle-linux ~]# /etc/init.d/network restart
        Shutting down interface eth0:  [  OK  ]
        Shutting down interface eth1:

    Still no joy:

        18:27:04/shared:11 $ ssh root@10.0.15.100
        ssh: connect to host 10.0.15.100 port 22: Operation timed out
        lost connection
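    One way to narrow down where the SYN dies (interface names from the output above): run a capture on the guest while connecting from the host. If nothing arrives on eth0, the problem is on the bridge or host side rather than in the guest's sshd or firewall:

        # on the guest
        tcpdump -n -i eth0 tcp port 22

        # on the OS X host
        ping -c 3 10.0.15.100
        nc -vz 10.0.15.100 22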

  • Dual booting Windows 7 & 8.1, using the Windows 8 Startup Options Menu, when Windows 8.1 is already installed and you want to add Windows 7

    - by Josh
    There are many excellent guides out there that explain how to dual-boot Windows 7 & 8. However, they are written for people starting with a Windows 7 installation who add a Windows 8 installation to a separate partition. From what I'm reading, following this procedure results in Windows 8 installing and configuring the Startup Options Menu with entries to boot both Windows 7 & 8.

    In my situation, however, I have a Windows 8.1 machine on which I want to install Windows 7 and enable dual-boot, where I can use the Startup Options Menu to select the OS to boot. I haven't been able to determine how to do this. From everything I've been able to find, it looks like if I install Windows 7, it will take over the boot loader process, and I won't have access to the Windows 8 "Startup Options Menu."

    One answer suggests I boot to VHD, but notes a drawback:

        You can't do this if the C: drive is encrypted using ANY encryption scheme, be that BitLocker or 3rd party. The location of the .VHD file you are booting to must reside on an unencrypted volume.

    Well, that's a bummer, because that's exactly what I wanted to do - I wanted my Windows 7 partition to be encrypted, and my Windows 8 partition to also be encrypted, the idea being that each OS, when booted, would be completely locked out from accessing data on the other OS's partition.

    At this point, I'm thinking my only option is to install Windows 7 and then re-install Windows 8, which will give me the dual-boot option... am I right? Or is there a way to make this work? I'm thinking I would need a process like this:

    1. Configure the Windows Startup Options Menu with a "blank" entry for Windows 7, pointing to an empty partition.
    2. Insert the Windows 7 installation media, install Windows 7, and somehow restrict it to that partition (i.e., prevent it from "taking over" the Startup Options Menu).

    Is this possible, and if so, how can I accomplish this?
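    Should Windows 7 take over the boot loader anyway, it is usually recoverable without reinstalling Windows 8: from an elevated prompt inside 8.1 (or its recovery environment), putting the newer boot manager back in charge is a one-liner. A sketch, assuming 8.1 sees itself as C::

        :: restore the Windows 8.1 boot manager and its BCD store
        bcdboot C:\Windows

        :: then inspect (and if needed adjust) the menu entries
        bcdedit /v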

  • 2 Servers setup for redundency, backup

    - by minal
    I presently have one dedicated virtual server running my website/blog/mail, etc. This is on Hyper-V with 512MB RAM, running Windows Web Server 2008. Within the VM, I have these running:

    - SmarterMail - for email
    - MS DNS - I have my own nameservers on this server
    - SQL Express
    - IIS 7
    - 2 IP addresses

    I have now leased 2 physical servers: P4 2.6GHz, 1GB RAM, 80GB HDD each, running Windows 2008 Standard, with 2 IPs per server.

    With the VM, the HDD was obviously on a RAID setup, so I was not worried about hardware issues, as they fell on the provider to manage. However, the new servers' HDDs are not RAID'd, so my concern is that if one fails I need a backup position. What would be the most ideal setup to go for? I am thinking:

    Server 1 (web/primary DNS):
    - DNS - NS1
    - SQL Express - OFF; turn on when required, i.e. when Server 2 is down
    - SmarterMail - OFF; turn on when required, i.e. when Server 2 is down
    - IIS 7

    Server 2 (SQL/backup):
    - DNS - NS2
    - SQL Web Edition
    - SmarterMail
    - IIS 7

    How can I set it up so that if one goes down I can have everything on the other instantly, or by manually switching over? I am confused, because other DNS servers will cache the web server's IP address for requests, and if that server goes down, the backup server will have a different IP. How do I make this work?

    I will be doing routine backups, keeping copies of the backups on both servers. But if I am copying the same stuff onto both servers like a mirror, then I am losing out on their true performance - it's as if one server is always on standby. Ideally I want SQL and web on two different machines for best performance. If Server 1 goes down, I should be able to switch to Server 2 fairly easily. I don't have a problem with manual intervention to start the SQL/mail services, etc.

    In terms of scalability, the VM has coped pretty well to date, but moving forward the SQL and IIS workload is going to double pretty quickly. Some ideas would be great.
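    On the cached-IP worry: the standard workaround is a short TTL on the records you intend to flip, so a manual switchover propagates within minutes rather than days. A sketch using dnscmd against the MS DNS servers (zone name and addresses invented):

        :: repoint www from Server 1 to Server 2 with a 300-second TTL
        dnscmd /recorddelete example.com www A 203.0.113.10 /f
        dnscmd /recordadd example.com www 300 A 203.0.113.20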

    Read the article

  • IKE Phase 1 Aggressive Mode exchange does not complete

    - by Isaac Sutherland
    I've configured a 3G IP gateway of mine to connect using IKE Phase 1 Aggressive Mode with PSK to my openswan installation running on Ubuntu Server 12.04. I've configured openswan as follows:
    /etc/ipsec.conf:
        version 2.0
        config setup
            nat_traversal=yes
            virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12
            oe=off
            protostack=netkey
        conn net-to-net
            authby=secret
            left=192.168.0.11
            [email protected]
            leftsubnet=10.1.0.0/16
            leftsourceip=10.1.0.1
            right=%any
            [email protected]
            rightsubnet=192.168.127.0/24
            rightsourceip=192.168.127.254
            aggrmode=yes
            ike=aes128-md5;modp1536
            auto=add
    /etc/ipsec.secrets:
        @left.paxcoda.com @right.paxcoda.com: PSK "testpassword"
    Note that both left and right are NAT'd, with dynamic public IPs. My left ISP gives my router a public IP, but my right ISP gives me a shared dynamic public IP and a dynamic private IP. I have dynamic DNS for the public IP on the left side. Here is what I see when I sniff the ISAKMP protocol:
        21:17:31.228715 IP (tos 0x0, ttl 235, id 43639, offset 0, flags [none], proto UDP (17), length 437)
            74.198.87.93.49604 > 192.168.0.11.isakmp: [udp sum ok] isakmp 1.0 msgid 00000000 cookie da31a7896e2a1958->0000000000000000: phase 1 I agg:
            (sa: doi=ipsec situation=identity (p: #1 protoid=isakmp transform=1 (t: #1 id=ike (type=enc value=aes)(type=keylen value=0080)(type=hash value=md5)(type=auth value=preshared)(type=group desc value=modp1536)(type=lifetype value=sec)(type=lifeduration len=4 value=00015180))))
            (ke: key len=192)
            (nonce: n len=16 data=(da31a7896e2a19582b33...0000001462b01880674b3739630ca7558cec8a89))
            (id: idtype=FQDN protoid=0 port=0 len=17 right.paxcoda.com)
            (vid: len=16) (vid: len=16) (vid: len=16) (vid: len=16)
        21:17:31.236720 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto UDP (17), length 456)
            192.168.0.11.isakmp > 74.198.87.93.49604: [bad udp cksum 0x649c -> 0xcd2f!] isakmp 1.0 msgid 00000000 cookie da31a7896e2a1958->5b9776d4ea8b61b7: phase 1 R agg:
            (sa: doi=ipsec situation=identity (p: #1 protoid=isakmp transform=1 (t: #1 id=ike (type=enc value=aes)(type=keylen value=0080)(type=hash value=md5)(type=auth value=preshared)(type=group desc value=modp1536)(type=lifetype value=sec)(type=lifeduration len=4 value=00015180))))
            (ke: key len=192)
            (nonce: n len=16 data=(32ccefcb793afb368975...000000144a131c81070358455c5728f20e95452f))
            (id: idtype=FQDN protoid=0 port=0 len=16 left.paxcoda.com)
            (hash: len=16) (vid: len=16) (pay20) (pay20) (vid: len=16)
    However, my 3G gateway (on the right) doesn't respond, and I don't know why. I think left's response is indeed getting through to my gateway, because in another question I was trying to set up a similar scenario with Main Mode IKE, and in that case it looked as though at least one of the three 2-way main mode exchanges succeeded. What other explanation for the failure is there? (The 3G gateway I'm using on the right is a Moxa G3150, by the way.)
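    For debugging a stall like this, it may be worth confirming what pluto itself logged about the exchange before suspecting the gateway. A few standard openswan checks (the plutodebug value is just an assumed useful starting set, and the log path is the Ubuntu default):

        # sanity-check NAT traversal, rp_filter, and kernel support
        ipsec verify

        # show loaded connections and the state each negotiation reached
        ipsec auto --status

        # in /etc/ipsec.conf under "config setup", raise pluto's verbosity:
        #     plutodebug="control parsing"
        # then restart and follow the log
        service ipsec restart
        tail -f /var/log/auth.log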

    Read the article

  • !! 0xc01a00d !! aka Vista won't boot

    - by Chris
    Answer: Parts of the hard drive are corrupted. All of my user's code was checked in, so I'm just going to format the box.
    One of my users has an HP DV5-1235dx laptop running Windows Vista Professional x64. Last night, our WSUS server pushed out a few updates, including "Security Update for Windows Vista for x64-based Systems (KB960859)". When we try to boot the laptop today, a black screen with white text comes up displaying:
        xxx/169894 (something)
    where xxx increments rapidly and something is some DLL or registry key. Eventually that stops and the screen displays:
        !! 0xc01a00d !!
        35566/169894 (\Registry\Machine\COMPONENTS\DerivedDat...)
    No other computers that received this update are displaying the same error. So far I've tried running CHKDSK off of HBCD. It repaired a thing or two, but the computer still doesn't boot. I tried repairing the Windows install from the Vista CD, but I get a black screen with white text displaying something along the lines of:
        0 No Emulation System Type 00
        1 No Emulation System Type 00
        Select one of the above
    Booting in Last Known Good Configuration doesn't work. Booting in Safe Mode freezes at:
        Loading Windows Files [snip]
        Loaded: \windows\system32\drivers\crcdisk.sys
        Please wait...
    My next step is trying to boot Safe Mode with Command Prompt and run rstrui.exe. While I do that, does anybody have any guidance?
    Edit: Booting into Safe Mode with Command Prompt will not work; see "Booting in Safe Mode" above.
    Edit 2: I managed to boot from the Vista DVD. I ran the system repair, and now I get a black screen with white text saying:
        !! 0xc0000034 !!
        290/169894 (_0000000000000000.cdf-ms)
    Edit 3: I ran the system repair again, and it attempted to repair my hard drive. It failed.
        Problem Signature:
        Problem Event Name: Startup Repair V2
        Problem Signature 01: External Media
        Problem Signature 02: 6.0.6001.18000.6.0.6001.18000
        Problem Signature 03: 4
        Problem Signature 04: 196611
        Problem Signature 05: CorruptVolume
        Problem Signature 06: NoBootFailure
        Problem Signature 07: 0
        Problem Signature 08: 0
        Problem Signature 09: unknown
        Problem Signature 10: 1168
        OS Version: 6.0.6002.2.2.0.256.1
        Locale ID: 1033
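    For anyone who lands here before reaching the same conclusion: the usual last-ditch sequence from the Vista DVD's recovery command prompt is roughly the sketch below (the C: drive letter is an assumption -- under the recovery environment the system volume sometimes maps to a different letter):

        rem thorough surface scan with sector recovery
        chkdsk C: /r

        rem rebuild the boot metadata in case the BCD store is part of the damage
        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd

        rem launch System Restore from the command prompt
        rstrui.exe

    Given the CorruptVolume signature above, none of this is guaranteed to help; it mainly establishes whether the volume is recoverable before a reformat.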

    Read the article

  • How to stop spam being sent through my Postfix server on Linux

    - by Dukla
    I am quite new to setting up an e-mail server on Linux - I barely got the whole thing working, connected to my domain and a PHP script which uses PHPMailer 5.2.1. In my setup I am using the SMTP server from my web provider (domain), and all e-mail addresses which are not defined are delivered to one catch-all address: I have the address [email protected], so when somebody sends e-mail to [email protected], it is forwarded to [email protected], even in the case of a delivery failure. I am receiving e-mails like:
        Hi. This is the qmail-send program at comercio.interone.com.br.
        I'm afraid I wasn't able to deliver your message to the following addresses.
        This is a permanent error; I've given up. Sorry it didn't work out.
        <[email protected]>: Sorry, no mailbox here by that name. (#5.1.1)
        --- Below this line is a copy of the message.
        Return-Path: <[email protected]>
        Received: (qmail 49156 invoked from network); 25 Jun 2012 07:34:57 -0300
        Received: from unknown (HELO S0106602ad08df877.no.shawcable.net) (70.66.34.103) by hosting.interone.com.br with SMTP; 25 Jun 2012 07:34:57 -0300
        Message-Id: <20120625034039.B45C12DCC3B13A22F261@GANDERTO-015445>
        From: Ezra Whitehead <[email protected]>
        To: toa <[email protected]>
        Reply-To: Jamey Mcconnell <[email protected]>
        Subject: Welcome toa
        Mime-Version: 1.0
        Content-Type: text/plain; charset=utf-8
        Content-Transfer-Encoding: 7bit
        Visit our shop http://44090.medicneed.ru/ 113B726C73560AA41A68163AA474D5F1476 0225770686522678
    As you can see, there is the line:
        From: Ezra Whitehead <[email protected]>
    I am sure I did not send this e-mail from my domain.com, with some Davis8FB name and some Russian page. And these are just the NOT-delivered e-mails - there could be many more which were sent successfully! What do I have wrong in my settings? How can I make it right? What should I do to prevent these messages from being sent? Where should I look? Thank you all.
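    Two things worth checking first, sketched against /etc/postfix/main.cf (the mynetworks value is a placeholder assumption): that the box only relays for trusted hosts, and that the catch-all isn't turning forged-sender spam into backscatter. These are standard Postfix parameters, shown here only as a starting point, not a complete fix:

        # only local machines may relay; everyone else must be delivering
        # to a domain this server is the final destination for
        mynetworks = 127.0.0.0/8
        smtpd_recipient_restrictions =
            permit_mynetworks,
            reject_unauth_destination

        # reject unknown recipients outright instead of accepting everything,
        # so mail with a forged sender can't bounce back out
        local_recipient_maps = proxy:unix:passwd.byname $alias_maps

    After editing, run postfix reload. Note that bounces like the one quoted can also arise purely from a forged From header at a third-party host, in which case nothing on this server sent anything at all.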

    Read the article
