Search Results

Search found 3463 results on 139 pages for 'physical'.


  • Creating a Jenkins build farm in a hands-off manner?

    - by user183394
    My colleague and I have set up and run Jenkins on a KVM guest running Ubuntu 12.04 with good results for a while now. We are thinking about deploying a cluster of Jenkins CI hosts in a master/slave configuration, with the libvirt slave plugin to keep our hardware count low. Our environment is strictly Linux (CentOS, Scientific Linux, Fedora, and Ubuntu). Both of us are competent in setting up large clusters. We typically use tools like Cobbler plus a configuration management tool (Puppet, Chef, and the like) to set up a large number of machines (physical and/or virtual) hands-off (typically hundreds of nodes in less than an hour). We would like to do the same for nodes running Jenkins, but the step-by-step guide doesn't give us any clues in this regard. I did see a Multi-slave config plugin, but, being used to dealing with hundreds or more machines completely hands-off, clicking through the UI for many machines just doesn't feel right. Can someone point us to a reference that talks about how to set up a large cluster of Jenkins CI hosts in a more hands-off way?
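    For illustration only, slave registration can usually be scripted against the Jenkins CLI rather than the UI, which fits a Cobbler/Puppet-style workflow. A minimal sketch, in which the master URL and the node-template.xml config template are assumptions (hypothetical names, not part of the original question):

      # register many slaves from a template, hands-off
      MASTER=http://jenkins.example.com:8080
      for host in node01 node02 node03; do
        sed "s/@HOST@/$host/g" node-template.xml |
          java -jar jenkins-cli.jar -s "$MASTER" create-node "$host"
      done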

    Read the article

  • Windows 7 Paging file apparently not being used

    - by Daniel F.
    I'm running Windows 7 Home Premium 32-bit on a motherboard with 24GB of RAM. Of those 24GB, 20GB are assigned as a RAMDISK via ASRock XFastRAM, and this RAMDISK has the drive letter X assigned to it. On X:\ I'm storing the temporary files folder as well as pagefile.sys; pagefile.sys is 6GB in size. X:\ usually has around 14GB of free space, so the temporary files are negligible; it's mostly the browsers storing their caches on there. Now my issue is that Firefox is crashing a lot on me. No error message pops up, but I know that this is because it's out of memory. I could kind of live with that, but now that I've switched from Eclipse to Android Studio, I know that I'm in trouble, because Java isn't able to allocate the memory it needs, and Android Studio, together with the Java instances it launches, is quite a memory hog. So I tried to figure out what's wrong, and apparently Windows isn't swapping memory out onto the paging file. While my applications are crashing (Firefox) or not starting (Java VMs), the paging file is constantly only around 15% used (checked with the Performance Monitor); 15% is approximately 1GB. I know that the correct solution would be to switch to 64-bit Windows, but I had to use the 32-bit version because of driver issues I had about two years ago, and I guess that I'll have them again if I reformat and install the 64-bit version. Also, the machine is running quite stably; the only issue is the memory, so I'd like to use it as it is (with the apps installed and configured). Is there a way to make Windows use the paging file more efficiently? None of my processes require more than 1GB; I'd just like it to swap out some seldom-used stuff, like GoogleCrashHandler.exe and the like, in order to have more physical memory available. Is that possible?

    Read the article

  • EC2 AMI won't boot after edit

    - by Eric Lars0n
    I did something stupid: I got a new laptop, copied everything over to it, then wiped the old one clean. Then I realized that I had forgotten to copy the private key out of .ssh that I use to connect to my AWS EBS-backed instance, so I can't log in to my custom AMI. So I created a new volume from the snapshot of the AMI, then started up a public instance, attached the volume to it, and edited sshd_config to allow password login. I unmounted the volume, detached it, made a snapshot of it, then made a new AMI from the snapshot. The new AMI launches, but never passes the status checks and is not reachable. What am I doing wrong? Or, alternatively, how can I fix my problem? Edit: adding some of the console output:

      Linux version 2.6.16-xenU ([email protected]) (gcc version 4.0.1 20050727 (Red Hat 4.0.1-5)) #1 SMP Mon May 28 03:41:49 SAST 2007
      BIOS-provided physical RAM map:
      Xen: 0000000000000000 - 000000006a400000 (usable)
      980MB HIGHMEM available.
      727MB LOWMEM available.
      NX (Execute Disable) protection: active
      IRQ lockup detection disabled
      RAMDISK driver initialized: 16 RAM disks of 4096K size 1024 blocksize
      NET: Registered protocol family 2
      Registering block device major 8
      XENBUS: Timeout connecting to devices!
      Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(8,0)
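    As an aside, one common way to recover from a lost key without touching sshd_config at all is to drop a new public key onto the volume while it is attached to the helper instance. A minimal sketch, in which the device name, mount point and home directory are assumptions:

      # on the helper instance, with the snapshot-derived volume attached (device name assumed)
      mkdir -p /mnt/rescue
      mount /dev/xvdf1 /mnt/rescue || mount /dev/xvdf /mnt/rescue
      cat new-key.pub >> /mnt/rescue/home/ubuntu/.ssh/authorized_keys
      umount /mnt/rescue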

    Read the article

  • Do background processes get a SIGHUP when logging off?

    - by Massimo
    This is a followup to this question. I've run some more tests; it really doesn't matter whether this is done at the physical console or via SSH, nor does this happen only with scp; I also tested it with cat /dev/zero > /dev/null. The behaviour is exactly the same: start a process in the background using & (or put it in the background after it's started using CTRL-Z and bg); this is done without using nohup. Log off. Log on again. The process is still there, running happily, and is now a direct child of init. I can confirm that both scp and cat quit immediately if sent a SIGHUP; I tested this using kill -HUP. So it really looks like SIGHUP is not sent upon logoff, at least to background processes (I can't test with a foreground one for obvious reasons). This happened to me initially with the service console of VMware ESX 3.5 (which is based on Red Hat), but I was able to replicate it exactly on CentOS 5.4. The question is, again: shouldn't a SIGHUP be sent to processes, even if they're running in the background, upon logging off? Why is this not happening? Edit: I checked with strace, as per Kyle's answer. As I was expecting, the process doesn't get any signal when logging off from the shell where it was launched. This happens both when using the server's console and via SSH.
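    A throwaway probe can also confirm this without strace, by trapping SIGHUP in a background job and checking whether the trap ever fires after a logoff/logon cycle; a sketch only, with an arbitrary log path:

      # run a background job that records any SIGHUP it receives
      ( trap 'echo "got SIGHUP at $(date)" >> /tmp/hup-probe.log' HUP
        while true; do sleep 1; done ) &
      # log off, log back in, then check:
      cat /tmp/hup-probe.log 2>/dev/null || echo "no SIGHUP was delivered"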

    Read the article

  • How do I lower the hardware volume? (volume too high)

    - by Zom-B
    I have a four-year-old Dell laptop with Windows XP Pro (modern ones unfortunately don't have a physical volume knob), and lately I'm using my Apple earphones, because they have much better low-frequency response than my $10 earphones. They also have the side effect of being much louder. To give an indication of my agony: for most tasks (movies, music, games) I have my main volume at 3 ticks (drag to 0 with the mouse and press the up key 3 times; the handle does not even rise 1 pixel) and my wave volume at 50%. I notice that when I do this, I get a lot of digital noise, because I'm using just a tiny fraction of the 16-bit space. If I drag the Wave slider down until I barely hear the audio, it becomes really distorted and noisy, indicating that this is digital volume (in the DirectSound driver or something) and not hardware volume. I experimented in Audition. When I make a 1000Hz tone at -50dB (all Windows volumes at max), the volume is just below my pain threshold. When I zoom in to see how high the sample values reach, I see that just 8 of the 16 bits are used (about -100 ~ 100). When I generate such a tone at -80dB (the minimum I can specify), I can still clearly hear the tone, although it is really noisy. When I zoom in, I see that just 3 out of 16 bits are used. I created a square-wave tone that is just 1 bit high, and I can still hear it! For most uses this is not a big problem (audiophiles will disagree!), as I just have more noise than usual (about the same as old 8-bit hardware), but I'm also in the process of programming a hearing-test program, in which case this problem is a death blow, as the test subjects will hear even tones at the bottom of the theoretical range (lowering the Windows volume is futile, see above). (I cannot update drivers, as Dell has discontinued XP support for my model.)
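    For reference, the roughly 8 bits observed at -50dB line up with the usual rule of thumb of about 6.02dB of dynamic range per bit: 16 - 50/6.02 ≈ 7.7 bits remain in use, which is why quantization noise becomes so audible once the digital volume is pulled this far down.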

    Read the article

  • Extending a partition with GParted on Linux, but no more space in the VM

    - by Asken
    I have a test installation of Linux in a VM, running a build server. Unfortunately I just pressed OK when adding the disk and ended up with an 8GB drive to play with. Well into the testing, the builds are consuming more and more space, of course. The VM drive was resized to 21GB, and using GParted I expanded the drive partitions and that all worked fine, but when I go back into the console and run df there is still only 8GB available. How can I claim the other 13GB I added?

      fdisk -l

      Disk /dev/sda: 21.0 GB, 20971520000 bytes
      255 heads, 63 sectors/track, 2549 cylinders, total 40960000 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x0006d284

         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1   *         2048      499711      248832   83  Linux
      /dev/sda2           501758    40959999    20229121    5  Extended
      /dev/sda5           501760    40959999    20229120   8e  Linux LVM

      vgdisplay

        --- Volume group ---
        VG Name               ct
        System ID
        Format                lvm2
        Metadata Areas        1
        Metadata Sequence No  4
        VG Access             read/write
        VG Status             resizable
        MAX LV                0
        Cur LV                2
        Open LV               2
        Max PV                0
        Cur PV                1
        Act PV                1
        VG Size               19.29 GiB
        PE Size               4.00 MiB
        Total PE              4938
        Alloc PE / Size       1977 / 7.72 GiB
        Free  PE / Size       2961 / 11.57 GiB
        VG UUID               MwiMAz-52e1-iGVf-eL4f-P5lq-FvRA-L73Sl3

      lvdisplay

        --- Logical volume ---
        LV Name                /dev/ct/root
        VG Name                ct
        LV UUID                Rfk9fh-kqdM-q7t5-ml6i-EjE8-nMtU-usBF0m
        LV Write Access        read/write
        LV Status              available
        # open                 1
        LV Size                5.73 GiB
        Current LE             1466
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     256
        Block device           252:0

        --- Logical volume ---
        LV Name                /dev/ct/swap_1
        VG Name                ct
        LV UUID                BLFaa6-1f5T-4MM0-5goV-1aur-nzl9-sNLXIs
        LV Write Access        read/write
        LV Status              available
        # open                 2
        LV Size                2.00 GiB
        Current LE             511
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     256
        Block device           252:1
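    For what it's worth, the vgdisplay output above already shows 11.57 GiB of free extents in the volume group, so the remaining step is to hand them to the root logical volume and grow the filesystem. A minimal sketch, assuming the root filesystem is ext3/ext4:

      lvextend -l +100%FREE /dev/ct/root   # give the root LV all free extents in the ct VG
      resize2fs /dev/ct/root               # grow the filesystem online, then re-check with df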

    Read the article

  • virtual web folder served by PHP script

    - by Martin
    I am trying to configure my Apache to be able to serve (virtual) pages like mywebpage.com/something1, mywebpage.com/something2 and mywebpage.com/folder/something3. I would like these "somethingX" and "folder" folders to be only virtual, not physical directories. For a start it would be great to send all requests for mywebpage to one PHP script, which would somehow receive the original path information (there is some SERVER array, as far as I know) and call the necessary PHP functions (so far I use addresses like mywebpage.com/index.php?page=blabla&otherparameters=values...). Is that possible? I am struggling with different combinations; currently I have the following file in /etc/apache2/conf.d/something.conf (not working, of course). What is the correct way to proceed? Thanks.

      <Location /myweb>
        SetHandler my-handler
        Action my-handler /srv/www/htdocs/myweb/product.php virtual
      </Location>

    My pages are in /srv/www/htdocs/myweb. I tried with Location, with Directory, with Action and SetHandler, with AddHandler... ;-) Some configurations were ignored, some caused "object not found" with nothing relevant in the error log.
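    For illustration, one common pattern for this is a mod_rewrite front controller: every request that doesn't match a real file or directory is routed to index.php, which can then read the original path from $_SERVER['REQUEST_URI']. A sketch only; it assumes mod_rewrite is enabled and AllowOverride permits .htaccess in that docroot:

      # write a minimal front-controller .htaccess into the docroot mentioned above
      printf '%s\n' \
        'RewriteEngine On' \
        'RewriteCond %{REQUEST_FILENAME} !-f' \
        'RewriteCond %{REQUEST_FILENAME} !-d' \
        'RewriteRule ^ index.php [QSA,L]' \
        > /srv/www/htdocs/myweb/.htaccess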

    Read the article

  • Recover badly recorded DVDs

    - by CesarGon
    A few years ago (2003-2005) I bought a Sony USB external DVD recorder for my Dell laptop and used it to burn a lot of discs. Much later, when I tried to use one of these discs, I realised that I could not read it; the disc behaved as if it were scratched or dirty. I tried a couple of different DVD drives but got the same result. Sadly, all the discs that I burnt with that recorder suffer from the same problem. Edit: when I read one of these discs with ImgBurn, I get lots of unrecovered read errors in multiple sectors, even at 1x speed. The sectors that cause read errors seem to be quite random; it's not always the same one. I have no idea what could be wrong with the discs. I doubt that they are scratched or dirty; it would be too much of a coincidence for all the discs burnt with that recorder to have been damaged at the same time. Also, they don't show any physical defects. Is there any way to diagnose what the problem is and, hopefully, recover the contents of the discs? Many thanks.
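    As an aside, a tool like GNU ddrescue is often used for exactly this kind of partial recovery, since it keeps a map of which sectors were read and can be re-run on other drives to fill in the gaps. A sketch, with the drive device name as an assumption:

      ddrescue -b 2048 -r3 /dev/sr0 disc.iso disc.map   # -b 2048 = DVD sector size, -r3 = retry bad areas
      # running the same command again on a different drive reuses disc.map and only
      # attempts the sectors that are still missing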

    Read the article

  • Outdoor WiFi Mesh Topology vs. Repeaters

    - by IronJaxor
    Here's the current configuration in our organization (which I believe is incorrect): we have a number of Cisco 1500 series APs (22 in total) that are mounted outdoors to provide seamless WiFi coverage over a large area. Each AP, however, has its own physical Ethernet connection back to the WLC (all the APs are marked as root APs), and they are all broadcasting the same SSID. We have tried to stagger the channel selection, but because there are only three non-overlapping channels to choose from, and in some areas the density of APs is quite high, there are multiple places with channel interference. With this configuration we experience 100-150 client disconnects every day (our clients are mobile, so they move throughout the coverage area constantly). My idea is to switch the APs to the same channel, thereby forming a wireless mesh, use the built-in functionality of the 1500 series to use 802.11a as the backhaul, and designate one or two APs as root APs and wire them back to the WLC, thereby forming a WiFi mesh, which, if I'm not mistaken, is the point of the 1500 series in the first place! I am, however, completely new to WiFi networks and am wondering whether I am simply mistaken about what my proposed changes will enable, or whether there is a better way to tackle the WiFi topology.

    Read the article

  • a disk read error occurred [closed]

    - by kellogs
    "A disk read error occurred" appears on screen after choosing to boot into Windows XP from GRUB.

      [root@localhost linux]# fdisk -lu

      Disk /dev/sda: 160.0 GB, 160041885696 bytes
      255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x48424841

         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1              63   204214271   102107104+   7  HPFS/NTFS
      Partition 1 does not end on cylinder boundary.
      /dev/sda2       204214272   255606783    25696256   af  HFS / HFS+
      Partition 2 does not end on cylinder boundary.
      /dev/sda3       255606784   276488191    10440704    c  W95 FAT32 (LBA)
      Partition 3 does not end on cylinder boundary.
      /dev/sda4       276490179   312576704    18043263    5  Extended
      /dev/sda5   *   276490240   286709759     5109760   83  Linux
      /dev/sda6       286712118   310488254    11888068+   b  W95 FAT32
      /dev/sda7       310488318   312576704     1044193+  82  Linux swap / Solaris

    sda is a 160GB hard disk with quite a few partitions and 3 OSes installed. I am able to boot into Linux and Mac OS fine, but not into Windows anymore. The Windows system is located on /dev/sda1. I cannot recall exactly how I used testdisk, but it once said something like "The harddisk /dev/sda (160GB / 149 GB) seems too small! (< 172GB / 157GB)". So far I have tried "fixboot" and "chkdsk" from a recovery console on the affected Windows partition (/dev/sda1), the pull-the-power-cord-for-15-seconds trick, reinstalling GRUB, and repairing the MFT and boot sector of the affected partition via testdisk. What should I try next? Thank you!

    Read the article

  • File system that allows specifying a different RAID level per directory and changing it afterward

    - by Adam Ryczkowski
    I have 5 hard drives on which I want to keep my data. Some of my files are more important and some are less important, so some of them I wish to put on RAID-6, while for some RAID-5 is sufficient. It is difficult to predict, at the moment the arrays are created, how much space of each type to declare. What I would do if I hadn't heard about ZFS is partition the hard drives into identical 100GB partitions and, as my needs grow, assemble those partitions into md devices using linux-raid. Then I'd combine those devices using LVM into logical volumes where I'd put my data. So when I needed more space of, e.g., RAID-6, I'd take a 100GB partition from each hard drive, assemble them into another RAID-6 md device, and use it as physical storage for the volume group dedicated to RAID-6 data. Then I could grow the file system on this logical volume. On top of the RAID-6 and RAID-5 volume groups (managed by LVM) would sit completely independent file systems, which I'd later merge with multiple mount --bind into a single directory structure reflecting the logical structure of the data rather than that of the storage (a sketch of that md + LVM layering is below). But now that I have heard about ZFS, with all its performance, data-healing and compression capabilities, I cannot stop thinking about whether it can help me. If so, what do you think would be the best setup?
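    A minimal sketch of one such md + LVM "tier" as described above (device names and sizes are assumptions):

      # five 100GB partitions, one per drive, assembled into a RAID-6 set and handed to LVM
      mdadm --create /dev/md0 --level=6 --raid-devices=5 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
      pvcreate /dev/md0
      vgcreate vg_raid6 /dev/md0
      lvcreate -n data -l 100%FREE vg_raid6
      mkfs.ext4 /dev/vg_raid6/data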

    Read the article

  • Why can't a PC with 2 network cards be accessed by hostname?

    - by lewis
    I set up a PC with 2 network cards, both connected to the same LAN. I can connect to this PC (e.g. by Remote Desktop) only via its IP addresses; accessing it by hostname does not work. Why is this the case?

    UPDATE: Full environment:
    1. PC with 2 hardware network adapters.
    2. On this PC, VMware Workstation is installed, with 3 VMs networked via the "bridged" network setting in VMware.
    3. In the LAN, all IP addresses are handed out by DHCP.
    4. Win2k8 on all hosts (both physical and virtual).

    As a result:
    1. The PC has 2 IP addresses (e.g. 192.168.1.71 and 192.168.1.72). The PC is available in the LAN by IP address, but not by hostname.
    2. Each VM has its own IP (e.g. 192.168.1.73, *74, *75 etc). They are available from the LAN by their IPs, BUT not by their hostnames.

    How can I access the PC and the VMs by hostname?

    Read the article

  • Same native and tagged vlan possible on Redhat?

    - by Chris Phillips
    Hi guys and gals, I'm looking at implementing a system using a number of tagged VLANs plus a native VLAN, delivered to a server over an active/passive bonded interface. The untagged VLAN is for physical machine access; the tagged VLANs are connected to bridges and then to QEMU VMs inside the machine. Hopefully this plan is fine, but I'm trying to implement a crippled version of it in a dev environment, due to a lack of underlying network config in this location, where I just have the same single VLAN delivered to the machine tagged AND plain. I'm not clear whether this is going to work (and whether I should just be confident that it will work when using different VLANs), as I'm seeing odd things: for example, a VM ARPs out over the tagged VLAN to the core switch, but the ARP reply comes back on the untagged interface. Now, an ARP reply is unicast, right? So it's a deliberate thing to send the ARP response on the untagged interface, and not a case of a broadcast response not being passed on the tagged side... i.e. there's some underlying logic pushing it that way. Something about the MACs, somehow? This is on a CentOS 5.5 machine, with VLANs from vconfig. (I've seen references to the Linux macvlan work, but that's not available here by default.) So: 1) Should having the SAME VLAN tagged and untagged work? 2) Will different tagged VLANs alongside the untagged interface work nicely?
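    For context, the tagged-VLAN-into-bridge part of the production plan would typically look something like this on CentOS 5 (a sketch only; interface names and the VLAN ID are assumptions):

      vconfig add bond0 100            # create bond0.100 for tagged VLAN 100
      brctl addbr br100
      brctl addif br100 bond0.100      # the QEMU guests' tap devices also attach to br100
      ip link set bond0.100 up
      ip link set br100 up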

    Read the article

  • How to tell Linux to explicitly swap out main memory of a suspended process?

    - by Vi
    I run a memory-hungry process (mkcromfs) which consumes more memory than I have physical memory in my laptop, so it is paging and swapping and thrashing all the time and the load average is about 2 (compcache is already in use, with a usual swap partition as well), but it is slowly moving forward (although I'm afraid it will eventually try to allocate 2GB and crash, wasting 2 days of thrashing). When I want to use the laptop for something else, I stop the process and start the X server, Firefox and other programs. The problem is that when I start Firefox the load average jumps to 10 and the system becomes almost unresponsive (it takes a long time to toggle Caps Lock, mouse cursor updates are slow, switching from the X server to the Linux console is slow, logging in is slow). The stopped mkcromfs still holds a lot of memory (464.8 MiB and slowly falling) and moves it to swap only when more memory is needed for some other program, which results in a great slowdown. How do I tell Linux to swap this process out entirely (e.g. I'm not intending to resume it in the short term), possibly waking other data from swap? It would also be useful to be able to specify the exact swap device the given process gets swapped out to.
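    One hedged sketch of forcing this by hand, on kernels where the cgroup-v1 memory controller is available and mounted (the path, group name and limit value are assumptions): putting the stopped process into a memory cgroup and then lowering the limit makes the kernel reclaim its pages to swap right away rather than lazily.

      mkdir /sys/fs/cgroup/memory/parked
      echo $PID > /sys/fs/cgroup/memory/parked/cgroup.procs            # PID of the stopped mkcromfs
      echo 64M > /sys/fs/cgroup/memory/parked/memory.limit_in_bytes    # reclaim (swap out) most of its pages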

    Read the article

  • Unable to resize ec2 ebs root volume

    - by nathanjosiah
    I have followed many of the tutorials, which pretty much all say the same thing, basically:
    1. Stop the instance
    2. Detach the volume
    3. Create a snapshot of the volume
    4. Create a bigger volume from the snapshot
    5. Attach the new volume to the instance
    6. Start the instance back up
    7. Run resize2fs /dev/xxx

    However, step 7 is where the problems start. Running resize2fs always tells me that the filesystem is already xxxxx blocks long and does nothing, even with -f passed. So I continue with the tutorials, which again all basically say the same thing:
    1. Delete all partitions
    2. Recreate them as they were, except with bigger sizes
    3. Reboot the instance and run resize2fs

    (I have tried these steps both from the live instance and by attaching the volume to another instance and running the commands there.) The main problem is that the instance won't start back up again, and the system error log provided in the AWS console doesn't show any errors (it does, however, stop at the GRUB bootloader, which to me indicates that it doesn't like the partitions; yes, the boot flag was toggled on the partition, with no effect). The other thing that happens, regardless of what changes I make to the partitions, is that the instance the volume is attached to reports that the partition has an invalid magic number and the super-block is corrupt. However, if I make no changes and reattach the volume, the instance runs without a problem. Can anybody shed some light on what I could be doing wrong?

    Edit: on my new volume of 20GB with the 6GB image, df -h says:

      Filesystem            Size  Used Avail Use% Mounted on
      /dev/xvde1            5.8G  877M  4.7G  16% /
      tmpfs                 836M     0  836M   0% /dev/shm

    And fdisk -l /dev/xvde says:

      Disk /dev/xvde: 21.5 GB, 21474836480 bytes
      255 heads, 63 sectors/track, 2610 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x7d833f39

          Device Boot      Start         End      Blocks   Id  System
      /dev/xvde1               1         766     6144000   83  Linux
      Partition 1 does not end on cylinder boundary.
      /dev/xvde2             766         784      146432   82  Linux swap / Solaris
      Partition 2 does not end on cylinder boundary.

    Also, sudo resize2fs /dev/xvde1 says:

      resize2fs 1.41.12 (17-May-2010)
      The filesystem is already 1536000 blocks long.  Nothing to do!
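    As a side note, the df/fdisk output above shows why resize2fs reports nothing to do: the partition itself is still only about 6GB inside the 20GB device, so the partition has to be grown before the filesystem can be. A minimal sketch, assuming the cloud-utils growpart tool is available (otherwise the partition must be deleted and recreated with the same starting sector and a larger end):

      growpart /dev/xvde 1      # grow partition 1 in place, keeping its start sector
      resize2fs /dev/xvde1      # now the filesystem can actually be enlarged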

    Read the article

  • Can't ping localhost or reach locally hosted domain

    - by Ian
    I can't reach a locally hosted domain, and in testing I have discovered that I can't ping localhost or the actual IP either. The OS is Windows 7 64-bit Pro. DNS works, I can ping others on my network, they can ping me, and they can reach the hosted domain. The ONLY problem I have found is that I can't reach the locally hosted domains!

      C:\Users\ian>ipconfig /all

      Windows IP Configuration

         Host Name . . . . . . . . . . . . : leda
         Primary Dns Suffix  . . . . . . . :
         Node Type . . . . . . . . . . . . : Hybrid
         IP Routing Enabled. . . . . . . . : No
         WINS Proxy Enabled. . . . . . . . : No
         DNS Suffix Search List. . . . . . : hcs

      Ethernet adapter Local Area Connection:

         Connection-specific DNS Suffix  . : hcs
         Description . . . . . . . . . . . : Atheros AR8121/AR8113/AR8114 PCI-E Ethernet Controller
         Physical Address. . . . . . . . . : 00-23-54-7C-E2-2A
         DHCP Enabled. . . . . . . . . . . : No
         Autoconfiguration Enabled . . . . : Yes
         IPv4 Address. . . . . . . . . . . : 192.168.0.12(Preferred)
         Subnet Mask . . . . . . . . . . . : 255.255.255.0
         Default Gateway . . . . . . . . . : 192.168.0.1
         DNS Servers . . . . . . . . . . . : 192.168.0.1
         NetBIOS over Tcpip. . . . . . . . : Enabled

      Ethernet adapter VirtualBox Host-Only Network #2:

         Connection-specific DNS Suffix  . :
         Description . . . . . . . . . . . : VirtualBox Host-Only Ethernet Adapter #2
         Physical Address. . . . . . . . . : 08-00-27-00-88-4A
         DHCP Enabled. . . . . . . . . . . : No
         Autoconfiguration Enabled . . . . : Yes
         Autoconfiguration IPv4 Address. . : 169.254.205.215(Preferred)
         Subnet Mask . . . . . . . . . . . : 255.255.0.0
         Default Gateway . . . . . . . . . :
         NetBIOS over Tcpip. . . . . . . . : Enabled

      C:\Users\ian>ping localhost

      Pinging leda [127.0.0.1] with 32 bytes of data:
      Request timed out.
      Request timed out.
      Request timed out.
      Request timed out.

      Ping statistics for 127.0.0.1:
          Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),

      C:\Users\ian>ping coachmaster.leda.hcs

      Pinging coachmaster.leda.hcs [192.168.0.12] with 32 bytes of data:
      Request timed out.
      Request timed out.
      Request timed out.
      Request timed out.

      Ping statistics for 192.168.0.12:
          Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),

    I can reach a hosted VM in VirtualBox, and the VM can browse the hosted sites. I've removed ZoneAlarm and disabled Windows Firewall; same results. So how can I browse my locally hosted sites? What could be blocking it? Thanks, Ian

    Read the article

  • How to improve Samba performance on VirtualBox machine?

    - by ColinM
    I am running a Windows 7 64-bit host and an Ubuntu 9.04 32-bit guest inside VirtualBox 4.0.0 on a laptop which has internet connectivity via WiFi. The main use is writing code, for which I use NetBeans. My dev environment is the virtual machine, and I use Samba on the VM to share the code directory so that I can use NetBeans on the host as my IDE. Unfortunately NetBeans does a lot of disk access, and due to the poor Samba performance it makes the IDE hardly usable. How can I improve the performance of the Samba share? On my desktop it isn't so bad, but I don't know what the difference would be, since they are similar setups (Win 7 hosts, cloned guests, SSDs, VirtualBox guests using SATA in AHCI mode, etc.). With bridged networking, is the performance between the host and guest limited by the physical hardware (Intel 6200 AGN in the laptop)? I switched to host-only and it didn't seem to improve performance at all. To quantify the bad performance: I used 7-Zip to zip a project directory and got 19kB/s to 500kB/s depending on the size of the files being zipped; on my desktop it was in the ~10MB/s range. Any tips for VirtualBox/Samba configuration to improve the performance? I am using Samba 3.3.2. Hopefully Samba with SMB2 support will be released soon...
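    For what it's worth, small-file workloads over Samba 3.x are sometimes helped by socket and sendfile tuning; a hedged sketch (the exact values are assumptions to experiment with, and the lines belong under [global] in the guest's smb.conf):

      printf '%s\n' '[global]' \
        '   socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=65536 SO_SNDBUF=65536' \
        '   use sendfile = yes' | sudo tee -a /etc/samba/smb.conf
      sudo /etc/init.d/samba restart    # init script name on Ubuntu 9.04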

    Read the article

  • Rescue a system running TFS that BSODs, into VMware ESXi

    - by 3molo
    After moving to new facilities, one of our old Dell servers, running Windows Server 2003 R2 on PowerEdge 2650 hardware, BSODs with 0x8e. The server runs Team Foundation Server, so we have a few guys dependent on it. No one here knows TFS, so we have no idea how difficult it would be to set up from scratch. We have the MSSQL database(s) backed up, a recent and fresh copy. I tried removing/refitting the memory modules, but with no success. The system boots into safe mode but hangs occasionally. I booted a Linux live CD and did a dd of both C: and D:, so I have all the data in compressed images on a VMware machine. For the guest, I created a 38GB (actually it became 40GB) partition to act as C:, and booted a live CD. I then uncompressed the disk image of C: and dd'd it to the new C: using 'gunzip -dc c.img.gz | dd of=/dev/sda1 bs=1M'. The operation ran for about 1000 seconds and completed successfully. I assumed it would at least try to boot Windows (but most likely BSOD due to not having the correct drivers), but the VMware ESXi guest does not seem to recognize it as a bootable disk. We don't have the VMware enterprise license, so VMware Converter cold cloning is not an option. Did I do something wrong in my dd's etc. with the images, or why would it not (try to) boot? Am I wasting my time? What other approach is there? I will continue to try to remove services and drivers to make the physical machine at least work reasonably well in safe mode. What do you suggest?
    1. Continue trying to get the dd'd images onto the virtual disk and get it to boot.
    2. Install a new Windows server, get Team Foundation Server running and restore from backup.
    3. Focus on the old problematic hardware.
    Any help appreciated.

    Read the article

  • Debian Wheezy (testing) df reported volume size

    - by TheRoadrunner
    I am a bit confused about the /dev/sda* references since I installed Wheezy instead of Squeeze on a testing box. fdisk -l returns:

      Disk /dev/sda: 250.1 GB, 250059350016 bytes
      255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x000e9623

         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1   *         2048   480278527   240138240   83  Linux
      /dev/sda2        480280574   488396799     4058113    5  Extended
      /dev/sda5        480280576   488396799     4058112   82  Linux swap / Solaris

    This seems correct. But df -h /dev/sda (and /dev/sda1, /dev/sda2 and /dev/sda5) returns:

      Filesystem      Size  Used Avail Use% Mounted on
      udev             10M     0   10M   0% /dev

    The same happens with every entry under /dev/disk/by-id and /dev/disk/by-path. Only one of the two entries under /dev/disk/by-uuid returns the correct volume size:

      df -h /dev/disk/by-uuid/cacdbad6-7e6b-4e80-84ba-e3c77ef48796
      Filesystem                                               Size  Used Avail Use% Mounted on
      /dev/disk/by-uuid/cacdbad6-7e6b-4e80-84ba-e3c77ef48796   229G   22G  196G  11% /

    Contents of /etc/fstab:

      # /etc/fstab: static file system information.
      #
      # Use 'blkid' to print the universally unique identifier for a
      # device; this may be used with UUID= as a more robust way to name devices
      # that works even if disks are added and removed. See fstab(5).
      #
      # <file system> <mount point>   <type>  <options>          <dump>  <pass>
      # / was on /dev/sda1 during installation
      UUID=cacdbad6-7e6b-4e80-84ba-e3c77ef48796  /               ext4    errors=remount-ro  0  1
      # swap was on /dev/sda5 during installation
      UUID=45840d13-ee36-4e77-8e73-16cbdff25eb1  none            swap    sw                 0  0
      /dev/sr0                                   /media/cdrom0   udf,iso9660 user,noauto    0  0
      /dev/fd0                                   /media/floppy0  auto    rw,user,noauto     0  0

    It seems all references other than the UUID point to the swap partition. Is this because Wheezy is in testing, and should it be reported as an error?

    Read the article

  • Setup ejabberd with SQL Server 2008

    - by wonster
    Here's what I have got so far: Windows 2008 Server, 64-bit. I installed the latest version of ejabberd, ejabberd-2.1.8-windows-installer.exe. The Windows service starts up fine but seems ineffective; however, using the start and stop scripts works. I am able to log in to the admin page, which so far doesn't seem that versatile. I opened up ports 5222, 5226 and 5280 for my workstation to talk to the server. I've got Spark and Jabbear Windows clients to register, log in and instant-message with multiple accounts using the server. After confirming that I've got the very basics working, I decided to make use of SQL Server 2008 as the database. Reason? Mainly, I am very comfortable with SQL Server: I can deal with redundancy, failover and data analysis easily. I'm not sure ejabberd's built-in DB provides all that. Following the instructions from ejabberd's documentation, I set up a system DSN that points to another physical database; the DSN checks out fine (I tried both Named Pipes and TCP/IP). I modified ejabberd.cfg: commented the line %%{auth_method, internal}, uncommented the line {auth_method, odbc}, and uncommented and modified {odbc_server, "DSN=ejabberd;UID=somelogin;PWD=somepassword"}. After making these changes, I restarted. No errors are found in the log files, but the Jabber clients are no longer able to register new accounts. I'm not sure where to look for errors besides the /logs/ folder, as I'm new to all this. I am basically stuck here on step 5. Has anyone got this setup to work recently? Some of the posts I've found around are years old and of no help. I can't be the only one setting up ejabberd with MS SQL. Any help would be appreciated!

    Read the article

  • Fixing damaged partition table

    - by dr4cul4
    This is a continuation of Recover Extended Partition, but this time I have a different problem, related to the partition table itself. I managed to restore the partition that I needed and backed up the files that were crucial to me (at least those that I had space to store somewhere). OK, now to the problem. My partition table is corrupted. Booting RIP Linux I can mount it in TrueCrypt (and the other ones that were recovered), but that's basically it. When I launch GParted I have an unallocated drive. GParted device info:

      Device Information
      Model: ATA ST2000DL003-9VT1
      Size: 1.82TiB
      Path: /dev/sda
      Partition table: unrecognized
      Heads: 255
      Sectors/track: 63
      Cylinders: 243201
      Total Sectors: 3907029168
      Sector size: 512

    When I check information on the unallocated space I get:

      File system: unallocated
      Size: 1.82TiB
      First sector: 0
      Last sector: 3907029167
      Total sectors: 3907029168
      Warning: Can't have a partition outside the disk!

    Now the output of TestDisk (Analyze):

      TestDisk 6.13, Data Recovery Utility, November 2011
      Christophe GRENIER <[email protected]>
      http://www.cgsecurity.org

      Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63
      Current partition structure:
           Partition                  Start        End    Size in sectors

      > 1 P Linux                    13132 242 39 16353 233  8   51744768
        2 E extended LBA             16807 223  1 243201 254 63 3637021626
      No partition is bootable
        5 L Linux                    16807 223 57 20430  39 25   58191872
        X extended                   20430  70  1 243201  78 13 3578816632
      Invalid NTFS or EXFAT boot
        6 L HPFS - NTFS              20430  71 58 243201  78 13 3578816512
        6 LNext

    Now fdisk:

      # fdisk -l /dev/sda

      Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
      255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00039cd0

         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1       210980864   262725631    25872384   83  Linux
      /dev/sda2       270018504  3907040129  1818510813    f  W95 Ext'd (LBA)
      /dev/sda5       270018560   328210431    29095936   83  Linux
      /dev/sda6       328212480  3907028991  1789408256    7  HPFS/NTFS/exFAT

    Now I would like to fix that and arrange the partitions correctly, but I have no idea which tool is capable of fixing it (I tried a few; some of them offered fixes, but it was too risky at the moment, as I am still backing up data).
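    Before experimenting with any repair tool, the table that fdisk can still see is worth saving, so that every change stays reversible; a small sketch:

      sfdisk -d /dev/sda > sda-partition-table.backup   # dump the current partition table to a file
      # it can later be written back with: sfdisk /dev/sda < sda-partition-table.backup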

    Read the article

  • Using Truecrypt to secure mySQL database, any pitfalls?

    - by Saul
    The objective is to secure my database data against server theft, i.e. the server is at a business office location with a normal premises lock and burglar alarm, but because the data is personal healthcare data I want to ensure that if the server were stolen, the data would be unavailable because it is encrypted. I'm exploring installing MySQL on a mounted TrueCrypt encrypted volume. It all works fine, and when I power off, or just cruelly pull the plug, the encrypted drive disappears. This seems a load easier than encrypting data inside the database, and I understand that if there is a security hole in the web app, or a user gets physical access to a plugged-in server, the data is compromised; but as a sanity check, is there any good reason not to do this? @James: I'm thinking that in a theft scenario it's not going to be powered down nicely, and so it is likely to crash any running DB transactions. But then, if someone steals the server, I'm going to need to rely on my off-site backup anyway. @tomjedrz: it's kind of all sensitive; individual personal and address details linked to medical referrals/records. It would be as bad in our field as losing credit card data, but it means that almost everything in the database would need encryption... so I figured it was better to run the whole DB in an encrypted partition. If I encrypt data in the tables, there's got to be a key somewhere on the server, I'm presuming, which seems more of a risk if the box walks. At the moment the app is configured to drop a dump of data (weekly full and then deltas only, hourly, using rdiff) into a directory also on the TrueCrypt disk. I have an off-site box running WS_FTP Pro scheduled to connect by FTPS and sync down the backup, again onto a TrueCrypt-mounted partition.
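    For illustration, the moving parts of such a setup are small; a sketch only, where the device, mount point and data directory are assumptions and the TrueCrypt volume password is prompted for interactively:

      truecrypt -t /dev/sdb1 /mnt/secure            # mount the encrypted volume (text mode)
      mysqld_safe --datadir=/mnt/secure/mysql &     # run MySQL with its datadir on the encrypted volume

    In practice the datadir would normally be set in my.cnf instead of on the command line, so the usual init scripts keep working.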

    Read the article

  • Why does my Windows 8 Pro Hyper-V guest have no internet?

    - by Perplexed
    Trying to get this working on my Windows 8 Pro machine. I created an External Switch and assigned the newly available adapter to a guest machine with a Win 2008 OS. My host has an internet connection. The host can ping the guest; the guest cannot ping the host. The guest has no internet connection. Pasting the IP configuration of both host and guest:

      HOST
      ==========================
      Ethernet adapter vEthernet (EXTSW01):

         Connection-specific DNS Suffix  . :
         Description . . . . . . . . . . . : Hyper-V Virtual Ethernet Adapter #2
         Physical Address. . . . . . . . . : 9C-B7-0F-0F-D7-D0
         DHCP Enabled. . . . . . . . . . . : Yes
         Autoconfiguration Enabled . . . . : Yes
         Link-local IPv6 Address . . . . . : fe80::5434:a9fd:8611:d207%54(Preferred)
         IPv4 Address. . . . . . . . . . . : 192.168.0.15(Preferred)
         Subnet Mask . . . . . . . . . . . : 255.255.255.0
         Lease Obtained. . . . . . . . . . : Saturday, September 8, 2012 12:34:44 PM
         Lease Expires . . . . . . . . . . : Saturday, September 15, 2012 12:34:44 PM
         Default Gateway . . . . . . . . . : 192.168.0.1
         DHCP Server . . . . . . . . . . . : 192.168.0.1
         DHCPv6 IAID . . . . . . . . . . . : 916240141
         DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-17-DC-C9-2C-9C-B7-0D-0D-D7-D0
         DNS Servers . . . . . . . . . . . : 64.71.255.999
         NetBIOS over Tcpip. . . . . . . . : Enabled

      GUEST
      ==========================
      Ethernet adapter Local Area Connection:

         Connection-specific DNS Suffix  . :
         Description . . . . . . . . . . . : Microsoft Virtual Machine Bus Network Adapter
         Physical Address. . . . . . . . . : 00-15-5D-3F-0F-00
         DHCP Enabled. . . . . . . . . . . : No
         Autoconfiguration Enabled . . . . : Yes
         Link-local IPv6 Address . . . . . : fe80::953f:ec5c:5d84:1b50%11(Preferred)
         IPv4 Address. . . . . . . . . . . : 192.168.0.20(Preferred)
         Subnet Mask . . . . . . . . . . . : 255.255.255.0
         Default Gateway . . . . . . . . . : 0.0.0.0
         DHCPv6 IAID . . . . . . . . . . . : 234886493
         DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-17-DD-2F-29-0F-15-5E-00-0F-00
         DNS Servers . . . . . . . . . . . : ::1
                                             127.0.0.1
         NetBIOS over Tcpip. . . . . . . . : Enabled

    Read the article

  • Hyper-v and sql server connections for web apps

    - by Rick Ratayczak
    I have a physical machine running Windows 8, and two VMs in the Hyper-V client: one web server, one SQL server. The web server works fantastically. The SQL server is the one giving me the problem. I can connect to it with Server Explorer in Visual Studio or with Management Studio just fine, and it's blazing fast. The problem happens when I use the same connection string I am using in Visual Studio's Server Explorer in the web.config for an app:

      data source=VMSQL1;initial catalog=OtherShell;persist security info=True;user id=OtherShell;password=****;network library=dbmssocn;MultipleActiveResultSets=True;App=EntityFramework

    I made sure it was also using TCP/IP, but it doesn't connect with or without the network library part of the connection string:

      A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)

    This has been driving me batty for the last two days; any ideas? It fails from the web VM too, but works in Management Studio with the same connection string.

    Read the article

  • Kickstarting an Ubuntu Server 10.04 installation (DHCP fails)

    - by William
    I'm trying to automate the network installation of Ubuntu 10.04 LTS with an anaconda kickstart, and everything seems to be running except for the initial DHCP autoconfiguration. The installer attempts to configure the install via DHCP but fails on its first attempt. This brings me to a prompt where I can retry DHCP, and it seems to always work on the second attempt. My issue is that this is not really automated if I have to hit retry for DHCP. Is there something I can add to the kickstart file so that it will automatically retry, or better yet not fail the first time? Thanks. Kickstart:

      # System language
      lang en_US
      # Language modules to install
      langsupport en_US
      # System keyboard
      keyboard us
      # System mouse
      mouse
      # System timezone
      timezone America/New_York
      # Root password
      rootpw --iscrypted $1$unrsWyF2$B0W.k2h1roBSSFmUDsW0r/
      # Initial user
      user --disabled
      # Reboot after installation
      reboot
      # Use text mode install
      text
      # Install OS instead of upgrade
      install
      # Use Web installation
      url --url=http://10.16.0.1/cobbler/ks_mirror/ubuntu-10.04-x86_64/
      # System bootloader configuration
      bootloader --location=mbr
      # Clear the Master Boot Record
      zerombr yes
      # Partition clearing information
      clearpart --all --initlabel
      # Disk partitioning information
      part swap --size 512
      part / --fstype ext3 --size 1 --grow
      # System authorization information
      auth --useshadow --enablemd5
      %include /tmp/pre_install_ubuntu_network_config
      # Always install the server kernel.
      preseed --owner d-i base-installer/kernel/override-image string linux-server
      # Install the Ubuntu Server seed.
      preseed --owner tasksel tasksel/force-tasks string server
      # Firewall configuration
      firewall --disabled
      # Do not configure the X Window System
      skipx

      %pre
      wget "http://10.16.0.1/cblr/svc/op/trig/mode/pre/system/Test-D" -O /dev/null
      # Network information
      # Start pre_install_network_config generated code
      # Start of code to match cobbler system interfaces to physical interfaces by their mac addresses
      # Start eth0
      # Configuring eth0 (00:1A:64:36:B1:C8)
      if ip -o link show | grep -i 00:1A:64:36:B1:C8
      then
        IFNAME=$(ip -o link show | grep -i 00:1A:64:36:B1:C8 | cut -d" " -f2 | tr -d :)
        echo "network --device=$IFNAME --bootproto=dhcp" >> /tmp/pre_install_ubuntu_network_config
      fi
      # End pre_install_network_config generated code

      %packages
      openssh-server
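    One hedged idea (an assumption, not something verified on this Cobbler setup): since the file already passes d-i preseed values through, the debian-installer DHCP timeout can be raised the same way, which may be enough for the first attempt to succeed on a slow switch port:

      # added alongside the other preseed lines in the kickstart above
      preseed --owner d-i netcfg/dhcp_timeout string 60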

    Read the article
