Search Results

Search found 48823 results on 1953 pages for 'run loop'.

Page 610/1953 | < Previous Page | 606 607 608 609 610 611 612 613 614 615 616 617  | Next Page >

  • /usr/bin/install hangs, apparently due to SELinux

    - by Cooper
    I'm trying to use the GNU coreutils install utility, but it hangs:

        /usr/bin/install -v test_file test_dir/
        `test_file' -> `test_dir/test_file'

    I see the same behavior whether I run as a normal user or as root/sudo. I ran strace -f, and this is the end of the output:

        ...
        read(4, "<username>\t-d\tsystem_u:object_r:ho"..., 4096) = 2197 <0.000012>
        brk(0x6e3b1000) = 0x6e3b1000 <0.000009>
        mmap(NULL, 29138944, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2abd831ae000 <0.000014>
        munmap(0x2abd815dd000, 29138944) = 0 <0.003466>

    The read() is reading from /etc/selinux/targeted/contexts/files/file_contexts.homedirs, apparently successfully. The process appears to hang right after the munmap, but continues to eat 100% CPU. My two questions are:

    1) Is there a good way to see what is going on with the process? I'm currently too lazy to compile a debug version of install that I can run gdb on, but a strong suggestion in an answer here may motivate me to do so if needed.
    2) Any idea what the SELinux issue could be? I'm not too familiar with SELinux.

    Additional info of possible relevance:

        # ls -Z
        drwxr-xr-x  my_user 7001 user_u:object_r:user_home_t test_dir
        -rw-r--r--  my_user 7001 user_u:object_r:user_home_t test_file
        # id
        ... context=user_u:system_r:unconfined_t
        # uname -a
        Linux hostname 2.6.18-238.1.1.el5 #1 SMP Tue Jan 4 13:32:19 EST 2011 x86_64 x86_64 x86_64 GNU/Linux

    I suspect that SELinux plus Quest Authentication Services (QAS) is causing the issue. QAS is generally well behaved, but it did cause /etc/selinux/targeted/contexts/files/file_contexts.homedirs to get quite large (~18k users, ~23 lines per user). Update: install -v -Z user_u:object_r:user_home_t file dir/ seems to work. Can anyone suggest why, given that SELinux is in permissive mode (see comments)?
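
    A minimal sketch of ways to inspect the hung process without building a debug binary (the PID lookup and file names below are placeholders based on the question):

        getenforce                        # confirm SELinux really is in permissive mode
        pid=$(pgrep -n install)           # PID of the hung install
        strace -f -tt -p "$pid"           # attach; no further syscalls suggests a userspace spin
        gdb -p "$pid"                     # even without symbols, `bt` hints at where it is looping
        matchpathcon test_dir/test_file   # ask libselinux which context it would assign

    The matchpathcon call should exercise a similar file_contexts.homedirs lookup to the one install performs, so if it is noticeably slow against an ~18k-user contexts file, that points at the context matching rather than the copy itself.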

    Read the article

  • Windows 7 DVD doesn't boot up, neither does USB. :'(

    - by Manan Shah
    My problem is that I'm not able to install Windows 7. I've been trying to install it for the past week. The methods I've tried are: I have a Windows 7 bootable DVD which doesn't boot up (I've set the BIOS to boot from the DVD-ROM first, but it just won't boot from the DVD). I tried to install Windows 7 from the same DVD on a friend's PC and it worked, so the DVD has no issues. I tried to run 'Setup.exe' from within the DVD. Two options pop up: 'Check compatibility' and 'Install now'. On clicking Install now, after some time, an error is encountered with the message 'Windows was unable to create a required installation folder', error code 0x8007000D. I am running Windows XP Professional and there's only one user on the PC, which is the Admin, so I do not know why the setup is not getting permissions. I've also uninstalled my antivirus and CD-burning software, disabled the firewall and disconnected all other devices, but it's still the same. I tried to install it from a USB device by making it bootable, but that doesn't work either. (Yes, the mobo supports booting from USB.) The problem is that XP does not recognize a 'USB' device on boot; rather, it shows this USB stick as a removable 'Hard Drive'. Furthermore, when I changed the hard drive boot order to boot from this removable hard drive first, it still booted my existing OS. Is there anything else that can be done? Any help would be greatly appreciated. :) Please ask if any other information is required; this post is becoming increasingly long as it is. PS: I want to dual boot Windows 7 with my existing XP, but that comes after I manage to run the Windows 7 setup in the first place. PPS: Please bear with any 'not-so-technical' terms, I am a beginner at this. Again, thank you for taking the time and trying to help, really appreciate it. :)

    Read the article

  • Proxmox drbd configuration split brain [on hold]

    - by AudioDan
    I am planning a Proxmox HA configuration with two Dell R710 machines (dual six-core processors in each) with enterprise-level RAID arrays. I would be using DRBD with a quorum disk on a third machine, and would dedicate two gigabit NICs on each server to the DRBD communication. We would have approximately 12 to 14 virtual machines running on this pair of servers. The Proxmox manual recommends creating two DRBD resources - one for the virtual machines that normally run on server A and one for the virtual machines that normally run on server B. This is because of the Primary/Primary state in which this configuration runs: if both servers have VMs talking to the same DRBD resource and a split-brain situation occurs, there is potential for data corruption that must be resolved. While I understand it would take more effort to create new virtual machines, can anybody foresee any potential problems with running a separate DRBD resource for each VM instead? Does anyone have experience running a setup that way, and has it worked well? It seems to me that would allow more flexibility in moving machines back and forth.
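
    A rough sketch of bringing up one per-VM resource on both nodes, assuming a resource named vm-101 has already been defined under /etc/drbd.d/ (the resource name and backing device are placeholders, not part of the question):

        drbdadm create-md vm-101     # initialise DRBD metadata for the resource on each node
        drbdadm up vm-101            # attach the backing device and connect to the peer
        drbdadm primary vm-101       # promote on whichever node will run that VM
        cat /proc/drbd               # watch the per-resource connection and sync state

    One resource per VM also means a split brain only has to be resolved for the single affected VM, at the cost of defining a new resource (and a backing LVM volume) every time a VM is created.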

    Read the article

  • Running Flash on a headless Solaris box

    - by Marty Pitt
    Our build server is a Solaris box, and I'm trying to run a suite of FlexUnit tests as part of the automated build process. This works by compiling a swf movie with a suite of automated unit tests. The build script launches this movie, which automatically begins running the tests. The results of each test are sent back to the launching script over a port and written out to a local XML file. Once the tests are completed, the movie closes down, and the build script interrogates the results to see if all the tests passed. The FlexUnit wiki provides information about how to achieve this on a Unix server, by using Xvnc to provide a virtual display for the Flash movie to run its tests in. I've passed this information on to our sysadmin team (along with the link to the article), and I've been told that because this is a Solaris box, we can't use that approach - Xvnc isn't supported on Solaris. Unfortunately, I know very little about servers, *nix vs Solaris, or Xvnc. Can someone please provide some advice about how we can achieve the same outcome on a Solaris box?
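
    If Xvnc is out, a sketch of the same idea using Xvfb instead - assuming an Xvfb binary is available on the Solaris box and that the standalone Flash Player is invoked as flashplayer (both are assumptions worth checking with the admins):

        Xvfb :99 -screen 0 1280x1024x24 &     # virtual framebuffer, no physical display needed
        export DISPLAY=:99
        flashplayer TestRunner.swf &          # the FlexUnit test movie renders into the virtual display

    Any X server the movie can draw into should do; the build script then polls the results port and XML file exactly as before.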

    Read the article

  • Ubuntu 10.04->10.10 in failed state - how to recover?

    - by Harvey
    I was running Ubuntu 10.04 and attempted to upgrade to 10.10. I have a really slow connection (DSL, 128 kbit/s) and copying the upgrade files took about 26 hours, so I of course let it run unattended. When I came back, I noticed the following three dialogs:

    (1) "Could not install the upgrades. The upgrade has aborted. Your system could be in an unusable state. A recovery will run now (dpkg --configure -a)."

    (2) "gpk-update-icon: Distribution upgrades available. maverick 10.10 (stable) [more information] [Do not show this again] [Cancel] [OK]"

    (3) "gpk-update-icon: Security updates available. The following important updates are available for your computer: libwebkit-1.0-2-dbg - Web content engine library for Gtk+ - Debugging symbols; libcupsimage2 - Common UNIX Printing System(tm) - Raster image library ..."

    What is the best response to all of this? I went through something similar in an attempted network upgrade from 8.04 to 10.04 and had to reload the unbootable machine fresh from distribution media (all data was lost). I'd like to avoid that here. I have not yet responded to the dialogs, and want to make sure the system is still bootable and that I don't lose my data this time.
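
    A possible recovery sequence from a terminal, assuming the machine still boots to a desktop or console - essentially what dialog (1) offers to do, plus finishing the interrupted upgrade:

        sudo dpkg --configure -a        # finish configuring any half-installed packages
        sudo apt-get -f install         # resolve broken dependencies
        sudo apt-get update
        sudo apt-get dist-upgrade       # pull in whatever the 10.10 upgrade did not finish

    Copying /home and /etc to external media before touching anything would guard against a repeat of the 8.04 experience.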

    Read the article

  • Why is writing to my external hard drive slow, while benchmarks show fast writing?

    - by matix2267
    I have an Iomega eGo 320 GB portable drive connected through USB 2.0 to my laptop running Windows Vista. It worked fine for quite some time, but recently it became very slow when writing. For example, when copying a ~300 MB movie to the drive, at first it is extremely fast, but it isn't actually writing - it only puts the data in the cache - and then it hangs on the last 10-20 MB for about a minute. When copying larger files it's the same story: it starts fast but then slows down to ~5 MB/s (sometimes even down to 2 MB/s). The strange thing is that I have always had caching disabled for this drive (it was disabled by default and I never bothered changing it). At first I thought the disk was dying, so I checked the S.M.A.R.T. values and everything is fine there. I also ran chkdsk and it seemed to fix the problem - it worked fast for a few minutes but then slowed down again. I also tried plugging it into another USB port - no difference. Additionally, I noticed that reading under certain circumstances is sometimes slower, e.g. loading times for some games are ~10 times longer, whereas simply copying files from this drive to my internal HDD is fast. I ran a speed benchmark using CrystalDiskMark with a 5x100MB run and strangely got these results:

                   read      write (MB/s)
        Seq       33.05      28.25
        512k      17.30      15.27
        4k         0.267      0.372
        4kQD32     0.510      0.260

    This is different from what most other people report (I've found many threads about slow disk writes while googling, but all of them were slow in benchmarks too), which is why I decided to post this problem here. BTW, most of the time when writing (or sometimes reading) the activity LED is mostly idle (it blinks a while and then stops for longer, sometimes has slower blinks of ~1 sec, sometimes goes off for a few seconds - an extremely long blink :) ), but when benchmarking, defragmenting or just reading (copying from this drive, installing apps from installers stored there, watching HD videos) it blinks really fast (like it should) and there are no slowdowns. It shouldn't be a driver issue unless the stock Windows drivers have some issues I'm not aware of.
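
    A couple of quick checks from an elevated command prompt, assuming the external drive is E: (adjust the letter to match):

        rem is the volume still flagged dirty after the earlier chkdsk run?
        fsutil dirty query E:
        rem /r also scans for bad sectors, unlike a plain chkdsk /f
        chkdsk /r E:

    A dirty flag that keeps coming back, or new bad sectors found by /r, would point back at the drive or its enclosure rather than at Windows-side caching settings.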

    Read the article

  • Dell XPS 1530 DVD-RW firmware problem

    - by josecortesp
    Hello everyone. I have had this XPS laptop for a year and a half. About two months ago, with the warranty expired, I tried to run the Optiarc slot-load DVD-RW firmware update, and it said everything was okay. Then I restarted and the problem started: now, every single time I turn on my computer, it gets stuck at the BIOS POST until the drive sounds "ready", and then the computer starts normally. Exactly the same thing happens when it comes back from sleep. I'm pretty sure it is not a software issue, because I've tried it with Vista Home Premium 32-bit, Ubuntu (from 8.10+) 32- and 64-bit, and with Windows 7 64-bit. I've already tried to run the firmware installer again a million times, in case it was a failed install, with no luck. I also googled to see if someone else has the same problem, and again, no. The drive performs pretty well once the system is on, but waiting for the drive to be ready every time is really annoying. The firmware I updated was this one, and the drive is: K937C Assembly, DVD+/-RW, 8, SLOT, 1530, Sony NEC Optiarc Inc. I'll appreciate any help you can give me.

    Read the article

  • How should I deploy my JVM-based web application on ubuntu?

    - by Pieter Breed
    I've developed a web application using Clojure/Compojure (JVM based) and while developing I tested it using embedded Jetty running on 0.0.0.0:8080. I would now like to deploy it to run on port 80 on Ubuntu. I do dynamic virtual hosting, so any request for any host that arrives on port 80 should be handled by my application. The issues that worry me are: I can still run it embedded, but I'm worried about running my app as root (needed for binding to port 80), and I'm not sure if I can 'give up root' once inside the JVM. Do I need to be concerned by this? Besides, serving web applications is a known problem and I should be using known solutions for this (Jetty or Tomcat), but Tomcat in particular seems very heavyweight. Also, I only have one application that listens on /* and does routing internally (with Compojure/Ring). What I'm trying to say is that Tomcat by default assigns WARs to subfolders, which I don't want. So basically what I need is some very safe way of binding to port 80 on Ubuntu that can, with minimal interference, send all requests to my app. Any ideas?
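
    One low-risk way to keep the embedded Jetty on an unprivileged port and still answer on port 80 is a NAT redirect - a sketch, assuming eth0 is the public interface and the app keeps listening on 8080:

        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 8080

    The JVM then never needs root at all; the rule can be persisted with whatever firewall tooling the Ubuntu release uses (e.g. iptables-save from an init script, or ufw). authbind is another commonly used option for letting a non-root JVM bind to port 80 directly.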

    Read the article

  • Sudden problems with iptables not running

    - by Fourjays
    I've got a sudden issue with iptables not running on my CentOS 5.8/DirectAdmin Xen VPS. All I have done today is install PHP APC and run an update (although I admittedly didn't pay much attention today - I usually do). iptables had been running fairly smoothly since I installed it over six months ago. Basically, when I try to run iptables -L it tells me:

        iptables v1.3.5: can't initialize iptables table `filter': iptables who? (do you need to insmod?)
        Perhaps iptables or your kernel needs to be upgraded.

    I've looked around and tried a few things, and it appears that maybe my kernel doesn't have the modules loaded? I've been reading this and tried the two commands they suggest, to no avail. Except there does appear to be a mismatch in one bit of output:

        -bash-3.2# cd /lib/modules
        -bash-3.2# ls
        2.6.18-194.32.1.el5xen   2.6.18-238.5.1.el5xen   2.6.18-274.7.1.el5xen   2.6.39.1-cs-domU
        2.6.18-238.12.1.el5xen   2.6.18-238.9.1.el5xen   2.6.37.2-cs-domU        3.0.1-cs-domU
        -bash-3.2# depmod -a
        WARNING: Couldn't open directory /lib/modules/2.6.18-274.18.1.el5xen: No such file or directory
        FATAL: Could not open /lib/modules/2.6.18-274.18.1.el5xen/modules.dep.temp for writing: No such file or directory

    Does this mean the versions are out of sync? If so, what are my next steps to getting this fixed? As you can probably tell, I am still learning how to manage my server, so please be very clear in all advice. Many thanks :)
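
    A sketch of how to compare the running kernel with the module trees on disk (the first two commands change nothing):

        uname -r                           # the kernel this VPS actually booted
        ls /lib/modules/$(uname -r)        # does a matching module directory exist at all?
        modprobe ip_tables                 # if it exists, try loading the netfilter modules
        modprobe iptable_filter

    If the running kernel (here apparently 2.6.18-274.18.1.el5xen, going by the depmod error) has no directory under /lib/modules, nothing can be insmod'ed and iptables will keep failing; on a Xen VPS the kernel is often supplied by the provider, so the usual fix is either installing the matching kernel modules package or asking the host to boot a kernel you do have modules for.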

    Read the article

  • WOL doesn't work if set to anything other than `a` but this setting makes it boot all the time

    - by Elton Carvalho
    I manage a small "cluster" of 4 Xeon machines with Intel boards in my lab. They are all plugged into a 5-port 3Com switch with static IP addresses like 10.0.0.x. They are all running openSUSE 11.4 and their /home/ is served by one of the machines (node00) via NFS. They are plugged into a UPS that can keep them on for ca. 15 minutes, but there are lots of power cuts due to "unscheduled maintenance" that last longer than this, so they end up being powered down without notice. If I set the BIOS to turn them on after a power failure, the issue is that they all boot at the same time and, if node00 decides to run fsck on the /home/ partition, it does not finish booting before the others try to NFS-mount their /home/. I am trying to make wake-on-LAN work, so I can choose to boot the NFS clients only after the server has successfully booted. The problem is that when I run ethtool I get output like this:

        Supports Wake-on: pumbag
        Wake-on: g

    Theoretically, it is set to wake on MagicPacket(tm), according to the manual. But sending the WOL packet using wol -i 10.0.0.255 $MACADDR does not wake up the box after I shut it down with halt. The ethernet link LED blinks after I send the packet, so it appears to be reaching the machine. However, if I set it up with ethtool -s eth1 wol bag, the machine always wakes up right after halting, even if I don't send the magic packet. This means that the device can wake up on LAN activity, but seems to be ignoring the magic packet. Setting wol ag does not wake the box with the MagicPacket either. Does setting wol a mean that it should boot on any broadcast message? How can I diagnose why the machine does not wake up with the MagicPacket even though I am sending it and the NIC is set to wake on it? Thanks in advance!
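
    A few diagnostic steps worth trying, sketched below (eth1 and the broadcast address follow the question; whether halt leaves the NIC powered is an assumption to verify):

        ethtool eth1 | grep Wake-on          # confirm the `g` setting survives until the moment of shutdown
        ethtool -s eth1 wol g                # re-apply magic-packet-only just before halting
        wol -i 10.0.0.255 -p 9 $MACADDR      # some BIOSes only listen for magic packets on UDP port 9

    Two common culprits with this symptom are the driver clearing the WOL flags during shutdown (a late shutdown script re-running ethtool -s ... wol g helps) and halt not actually powering the system off the way the BIOS expects; comparing the behaviour of poweroff / shutdown -h -P and the BIOS "power on by PCI/PCIe device" options may narrow it down.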

    Read the article

  • Reliable backup software for Windows network/Samba shares

    - by Eli
    Hi All, I have a Win2003 server that works as a PDC for a number of XP boxes and a couple of related FreeBSD boxes. I need to back up roaming profiles, non-roaming profiles via network shares, local hard drive data, and files on the FreeBSD boxes via Samba shares. I have tried Genie Backup Manager and Backup4all Pro, and both have excellent features, but both also begin to fail disastrously after more than a few days of use. Mostly, the errors seem to come from the backup catalog getting out of sync with itself. Whatever it is, there is no excuse for backup software that says it backed up files when it really didn't, or a log saying it backed up exactly the same file 10,000 times in a single run, or flat-out crashing, or any of the other myriad problems I've run into with these. Really sad for products that fill such an important need. Anyway, does anyone know of backup software that works reliably and can do the following?

    - Scheduled backups for multiple jobs, without a user logged in.
    - Backup from local hard drives or network shares.
    - Incremental backups.

    Thanks! Edit: Selected solution: I've added my (hopefully final) solution as an answer.

    Read the article

  • Redirection of outbound UDP NTP port

    - by pboin
    For my residential service, I changed ISPs to Zoom/Armstrong. Just after that, my NTP daemons stopped working. I dug in and diagnosed the problem: traffic from unprivileged source ports gets out, traffic from privileged ones doesn't. When I run 'ntpdate', for example, I go out on a high, unprivileged port and get a response on UDP 123. That's fine. The 'ntpd' daemon, though, expects to go out on port 123 and get its reply there as well. This must be a common problem, because it's directly addressed in the NTP troubleshooting guide. Just to see what would happen, I wrote a detailed email to the general support address at Armstrong. They replied almost immediately with a complete technical answer! They have everything below 1024 blocked, except for a few ports to support outbound VPN. So, the question: can I use iptables to essentially rewrite my outbound UDP 123 up to 2123 or something like that? If I do, does there need to be a corresponding 2123-to-123 rule to translate the reply? This seems like NAT, but with ports, not addresses. True, I could run ntpdate from cron, but that loses all of the adjustment smarts of NTP.
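
    This is exactly what source NAT can do. A sketch with iptables, assuming the internet-facing interface is eth0 and picking 2123 as the rewritten source port (both assumptions):

        iptables -t nat -A POSTROUTING -o eth0 -p udp --sport 123 -j MASQUERADE --to-ports 2123

    No second rule is needed for the replies: connection tracking remembers the translation and maps packets arriving for port 2123 back to ntpd's port 123 automatically, which is the "NAT with ports" described in the question.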

    Read the article

  • How to effectively have fewer php-cgi processes running?

    - by João Pinto Jerónimo
    My server is a Linode 512, and on it I run a WordPress MU install with 3 websites (they don't get a lot of visitors) and a couple of Node.js apps. I needed to switch to lighttpd because Apache 2 was using about 59% of the server's RAM, and now I have the php-cgi processes taking up about 43.6% of the server's RAM: most often 2 processes use 16.5% of the RAM each, 4 processes use 1.8% each, and 4 more processes use 0.8% each. How can I have fewer of these processes? I'm almost sure they're not all needed for the traffic this server gets... I tried allowing only 2 children, but I still have those 10... This is my fastcgi.server section in lighttpd.conf:

        fastcgi.server = ( ".php" =>
            ( "localhost" =>
                (
                    "socket" => "/var/run/lighttpd/php-fastcgi.socket",
                    "bin-path" => "/usr/bin/php-cgi",
                    "bin-environment" => (
                        "PHP_FCGI_CHILDREN" => "2",
                        "PHP_FCGI_MAX_REQUESTS" => "4000"
                    )
                )
            )
        )

    What else can I do to tune lighttpd to use less RAM?
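
    A quick way to see how the ten processes are related (which are spawner masters and which are PHP_FCGI_CHILDREN forks) - just an inspection sketch:

        ps -C php-cgi -o pid,ppid,rss,args --sort=ppid

    The total is typically the number of spawned php-cgi masters times (PHP_FCGI_CHILDREN + 1), and lighttpd's fastcgi.server block also has a "max-procs" setting (default 4, if memory serves) controlling how many masters it spawns, so lowering max-procs alongside PHP_FCGI_CHILDREN is usually what actually shrinks the count.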

    Read the article

  • Multi monitor setup with a tv as well?

    - by jasondavis
    I have had a dual-monitor setup on my PCs for years now. I am now in the market to build a new PC and get some new gear, and I need some help or advice on how to run 3 monitors WITH a 4th display, which will be an LCD/plasma TV. I am thinking I can get 2 video cards, which will give me all the hookups to run 3 monitors instead of 2. I will mount all 3 monitors in a row, side by side, and above them on the wall I would like to mount a larger 30-40+ inch LCD or plasma TV. I would then like to hook up my satellite/cable to this TV just as I would normally hook up a TV, but I would also like the option to view my PC on this TV. I know that is possible, but would it be possible to view my PC on that TV and still view my 3 other monitors, with the TV acting as a 4th display where I could dock a different app/window in each of the 4 displays (3 monitors + TV)? Please give me any tips/advice on how to do this, including what cables/software (if any)/converters/you name it. Thanks for any help.

    Read the article

  • Deploy our own software using Puppet?

    - by Ken
    (Apologies in advance for the stupidity in this question. I'm normally a programmer, not a sysadmin, but I've taken it upon myself to automate some things, and clean up some other things which are automated but not in the prettiest way. :-) I've been looking around at various tools for automation of software deployment to a bunch of servers, like cfengine, Puppet, and Chef. So far, Puppet looks the most appealing, but I've certainly not committed to anything yet. These tools all look like they can do a great job of keeping a bunch of servers up-to-date with prepackaged software. What I don't get is: how does one use a tool (like Puppet) to manage deployments of our own internal software? I think I'm at a loss because I've seen a thousand tutorials showing how to keep Apache ensure => latest (which is pretty cool), but nothing that quite corresponds to my use-case today, which is something more like:

    1. a human being pushes The Button
    2. pull branch A from the version-control repository B
    3. run command C to compile it
    4. copy the binaries D to servers E1 through E10
    5. on each server, run command F to make all changes take effect

    Puppet sounds great, and I totally see the advantage of declarative, idempotent configuration over some shell scripts, but I've not seen any tutorials for "you want to update your shell scripts to Puppet (or Chef, or cfengine), so here's what you should do". Is there such a thing? Is it obvious to other people how to take the things provided in the Puppet docs and replicate the behavior I want? Am I just not getting it? What it's sounding like to me, so far, is that the human being (#1) would manually package the software (#2 and #3) external to Puppet, manually update the Puppet config, which would trigger Puppet to update the servers ... maybe? (I'm a little confused here, as I'm sure you can tell.) Thanks!

    Read the article

  • Nginx + Ubuntu 9.10, gzip not functioning

    - by Matt
    Hey there. I installed and configured Nginx 0.7.62 on a new Slicehost Ubuntu 9.10 slice. All seems to work fine with the server, except that gzip isn't working for one reason or another. I made sure that its settings were correct in /etc/nginx/nginx.conf:

        user www-data;
        worker_processes 3;
        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
            # multi_accept on;
        }

        http {
            include /etc/nginx/mime.types;
            access_log /var/log/nginx/access.log;
            sendfile on;
            #tcp_nopush on;
            keepalive_timeout 2;
            tcp_nodelay on;
            gzip on;
            gzip_comp_level 2;
            gzip_proxied any;
            gzip_types text/plain text/css application/x-javascript;
            gzip_disable "MSIE [1-6]\.";
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    This normally wouldn't be a big deal, but gzip support could save considerable bandwidth for my site. Does anyone have any ideas of what to check, or has anyone else run into this problem?
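
    A quick way to check what the server is actually sending, sketched with curl against a path that should match one of the gzip_types (the URL and file name are placeholders):

        nginx -t && /etc/init.d/nginx reload    # make sure the running config is the one above
        curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' http://127.0.0.1/style.css

    If the headers come back without "Content-Encoding: gzip", the usual suspects are the response's Content-Type not being covered by gzip_types (text/html is compressed by default, but e.g. application/javascript is not in the list above) or the request being made without an Accept-Encoding header in the first place.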

    Read the article

  • On Windows machines, what is the typical toolchain for remote maintenance?

    - by Hanno Fietz
    I need to deploy PHP and Python code and the appropriate environment (web server, db server) to remote Windows systems, and I don't know what toolchain would be the equivalent of ssh, scp, bash and the like. So, basically, what I need to be able to do is the following:

    - access remote Windows with the appropriate privileges in a secure manner, like I routinely do with ssh (I don't even know whether that would be a text or graphical interface on Windows)
    - remotely install software: Apache or IIS, MySQL or Postgres, Python or PHP
    - copy files from remote (the application we're deploying)
    - remotely configure the machine to run regular tasks (e.g. checking for updates to the application)
    - automate tasks like downloading files from a designated place

    The main question is probably how I get onto the machine securely in the first place; the rest is general Windows admin knowledge, which is probably too broad a scope to fit into one question. I have years of experience maintaining Linux boxes and have used tools of varying sophistication on them, ranging from plain scp'ing of PHP files to deployment of Java application containers and even full VMs with Vagrant. On Windows, I'm a complete noob, and I don't even know where to start. I have installed Apache, MySQL and PHP on a desktop machine maybe twice in my life, and that's about it. Bonus points for things that work from a Linux machine at my end, but I could run a VM and do everything from there.

    Read the article

  • Adding Windows 7 32 bit as dual boot option

    - by djerry
    A relative of mine bought a new laptop this year on which Windows 7 (64-bit) is installed. Aside from some standard programs he uses on that laptop, he also has some software for his bike that needs to run. The developers of that program still don't support 64-bit systems, and therefore I thought about making the laptop dual boot, so he can still use the power of the 64-bit system and, just for the bike program, boot the 32-bit version. My questions now are:

    1. What are the risks involved in this operation?
    2. What steps need to be taken to make this dual boot successful?
    3. Any other ideas besides dual booting?

    Thanks in advance.

    Edit: I might have forgotten/misphrased something. The software does run on 64-bit, but it cannot find the bike connected to the computer. So I think it's a matter of drivers that aren't compatible with the 64-bit system. That's why I wanted to install the 32-bit Windows, so the drivers would work.

    Read the article

  • Creating a bootable USB drive from a distro split over two DVD ISOs

    - by Kev
    I am searching and not finding the right way to do this. Please note, I don't think I'm trying for anything strange here. I just want to make a bootable USB stick for a single OS that happens to be larger than one DVD, and whose first image happens to be larger than FAT32 allows for a single file. On our slow connection I spent a long time downloading CentOS 5.9's two DVD ISOs:

    - CentOS-5.9-x86_64-bin-DVD-1of2.iso (4.4 GB)
    - CentOS-5.9-x86_64-bin-DVD-2of2.iso (718 MB)

    I have a USB stick that I want to somehow get these two ISOs onto. Since the first one is 4.4 GB, I can't use ISO2USB because it insists on FAT32. I cannot find an alternative that lets you specify more than one ISO image - of the same distro; I'm not trying for some fancy multi-boot thing - to put on the same stick. I guess I should have downloaded the CD ISOs, but I thought I was "saving time" because then I wouldn't have as many files to run through the md5 checker. There's no IMG file of the whole thing (only a net-install version, which I don't want - I want to pre-download everything), otherwise I would have gone for that. So, given that I have these two DVD ISOs, how can I get them onto a stick that will boot and make use of both of them properly to install CentOS somewhere? Again, I don't think this is anything out of the ordinary, yet I can't find software/docs that seem to support this. Am I stuck re-downloading everything in CD-sized ISOs just to do this? I found this, but it doesn't run on Windows, and I am using Windows to prepare the stick.

    Read the article

  • Best way to build / implement a corporate developer Linux distro with multiple kernels?

    - by Garen
    At work we have Linux users who understandably prefer using Ubuntu. The problem is, we also have developer tools that only work with 'officially' supported Linux distributions that use much older 2.6.18-based kernels. (And even if they worked with newer ones, the vendors could always say they won't "support" the software unless it's on one of their 'officially' supported platforms.) We could of course just tell them to use CentOS or something else 2.6.18-based, and I'm sure their response would be something like: "you can take Ubuntu from our cold, dead hands." :) Which brings me to some questions - is there any good/easy/recommended way to run something like Ubuntu as the host OS and CentOS 5.x as a guest (with which system - Xen, KVM, VMware, ...?), and then roll that into our own custom internal distribution that could be easily installed? KVM looks like a good high-performance option, and was just recently included in RHEL 5.4, but if hardware support for virtualization like Intel VT or AMD-V is necessary, then I'd guess only those folks with fairly new PCs will be able to do it. I would be very interested to hear how anyone else has addressed this kind of issue. EDIT: The target audience / users of this kind of system would be developers; each one needs to run locally licensed commercial software, so building out some separate beefy central machines isn't an option, unfortunately, due to license restrictions. Even if that weren't the case, a couple of developers could quickly eat up the resources with parallel builds. :) Ideally, I was hoping there was some step-by-step guide out there for building your own pre-built distribution that had e.g. CentOS 5.x as a guest on an Ubuntu Desktop host.
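
    A quick check for whether a given developer machine can use KVM at all (hardware virtualization exposed and enabled in the BIOS) - just an inspection command:

        egrep -c '(vmx|svm)' /proc/cpuinfo    # 0 means no Intel VT / AMD-V visible to the kernel

    KVM does require one of those CPU flags, whereas Xen paravirtualized guests and VMware can generally run a 2.6.18-era 32-bit guest without them, which may matter for the older desktops in the mix.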

    Read the article

  • Corrupted NTFS Drive showing multiple unallocated partitions

    - by volting
    My external HDD with a single NTFS partition was accidentally unplugged (kids!)... and is now corrupted. I've tried running ntfsfix - with no luck - output below. When I look at the disk under Disk Management in Windows 7, it shows up as having 5 partitions, 2 of which are unallocated - none have drive letters and it is not possible to assign any (that option and most others are greyed out) - so I can't run chkdsk /f. I've tried using MiniTool Partition Wizard, which was mentioned as a solution to another similar question here. It showed the whole drive as one partition, but as unallocated, and the option "Check File System" was greyed out. Is there anything else I could try?

    Output of fdisk -l:

        Disk /dev/sdb: 1500.3 GB, 1500299395072 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930272256 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x69205244

        This doesn't look like a partition table
        Probably you selected the wrong device.

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   ?    218129509  1920119918   850995205   72  Unknown
        /dev/sdb2   ?    729050177  1273024900   271987362   74  Unknown
        /dev/sdb3   ?    168653938   168653938           0   65  Novell Netware 386
        /dev/sdb4       2692939776  2692991410       25817+   0  Empty

        Partition table entries are not in disk order

    Output of ntfsfix:

        me@vaio:/dev$ sudo ntfsfix /dev/sdb
        Mounting volume... ntfs_mst_post_read_fixup_warn: magic: 0xffffffff size: 1024 usa_ofs: 65535 usa_count: 65534: Invalid argument
        Record 0 has no FILE magic (0xffffffff)
        Failed to load $MFT: Input/output error
        FAILED
        Attempting to correct errors... ntfs_mst_post_read_fixup_warn: magic: 0xffffffff size: 1024 usa_ofs: 65535 usa_count: 65534: Invalid argument
        Record 0 has no FILE magic (0xffffffff)
        Failed to load $MFT: Input/output error
        FAILED
        Failed to startup volume: Input/output error
        Checking for self-located MFT segment... ntfs_mst_post_read_fixup_warn: magic: 0xffffffff size: 1024 usa_ofs: 65535 usa_count: 65534: Invalid argument
        OK
        ntfs_mst_post_read_fixup_warn: magic: 0xffffffff size: 1024 usa_ofs: 65535 usa_count: 65534: Invalid argument
        Record 0 has no FILE magic (0xffffffff)
        Failed to load $MFT: Input/output error
        Volume is corrupt. You should run chkdsk.

    Options available with MiniTool:

    Related questions: How to fix a damaged/corrupted NTFS filesystem/partition without losing the data on it? Repair corrupted NTFS File System
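
    Since ntfsfix cannot even read the $MFT, a sketch of one possible next step centred on TestDisk, which can scan for the lost NTFS partition boundaries and is read-only until you explicitly write anything (the image path is a placeholder and imaging needs ~1.5 TB of free space):

        sudo ddrescue /dev/sdb /mnt/spare/sdb.img /mnt/spare/sdb.log   # optional but wise: image the disk first
        sudo testdisk /dev/sdb                                         # Analyse -> Quick/Deeper Search, then inspect any NTFS partition found

    TestDisk can also attempt to restore the NTFS boot sector from its backup copy (under its Advanced menu); working against the ddrescue image instead of the raw disk keeps the original untouched if anything goes wrong.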

    Read the article

  • Is a wildcard SSL the only option in this multiple VHOST/1 IP setup?

    - by solsol
    I have a web app set up that needs the following SSL encryption:

    - secure.myapp.com -> SSL
    - www.myapp.com/login -> SSL
    - www.myapp.com/signup -> SSL

    If I'm correct, I could run one SSL certificate for all of my www.myapp.com/* pages. The problem is that I have a subdomain called secure.myapp.com that needs to be on a separate IP address for SSL to work. Right now I have one server, one public IP and a number of virtual hosts in Apache to make this work. I'd rather not buy an expensive wildcard SSL certificate to secure just one subdomain. What is your advice on this? If it IS the only solution, any tips on getting a reasonably priced wildcard SSL cert are appreciated. I have read about SNI, which allows the use of multiple SSL certs on one IP, but not all browsers (IE6!) support it. Since we are building a web app for the public, we cannot have IE6 users on unencrypted connections. Thanks for your help.
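
    Worth noting that a certificate with subjectAltName entries (a multi-domain/UCC cert) covering both www.myapp.com and secure.myapp.com would also fit this layout on a single IP without relying on SNI. A quick way to see what different clients would be served, sketched with openssl (hostnames taken from the question):

        openssl s_client -connect www.myapp.com:443 -servername secure.myapp.com   # SNI request; compare the certificate returned
        openssl s_client -connect www.myapp.com:443                                # what a non-SNI client such as IE6 on XP would see

    Whichever certificate the second command returns is the one legacy clients get for every name on that IP.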

    Read the article

  • Hadoop init script asks for a password

    - by Ramesh
    I have installed Hadoop on my Ubuntu 12.04 single node. I am trying to execute an init script to make Hadoop run on startup, but it asks for a password every time I execute it.

        #!/bin/sh
        ### BEGIN INIT INFO
        # Provides:          hadoop services
        # Required-Start:    $network
        # Required-Stop:     $network
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Description:       Hadoop services
        # Short-Description: Enable Hadoop services including hdfs
        ### END INIT INFO

        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        HADOOP_BIN=/home/naveen/softwares/hadoop-1.0.3/bin
        NAME=hadoop
        DESC=hadoop
        USER=naveen
        ROTATE_SUFFIX=
        test -x $HADOOP_BIN || exit 0
        RETVAL=0
        set -e
        cd /

        start_hadoop () {
            set +e
            su $USER -s /bin/sh -c $HADOOP_BIN/start-all.sh > /var/log/hadoop/startup_log
            case "$?" in
              0)
                echo SUCCESS
                RETVAL=0
                ;;
              1)
                echo TIMEOUT - check /var/log/hadoop/startup_log
                RETVAL=1
                ;;
              *)
                echo FAILED - check /var/log/hadoop/startup_log
                RETVAL=1
                ;;
            esac
            set -e
        }

        stop_hadoop () {
            set +e
            if [ $RETVAL = 0 ] ; then
                su $USER -s /bin/sh -c $HADOOP_BIN/stop-all.sh > /var/log/hadoop/shutdown_log
                RETVAL=$?
                if [ $RETVAL != 0 ] ; then
                    echo FAILED - check /var/log/hadoop/shutdown_log
                fi
            else
                echo No nodes running
                RETVAL=0
            fi
            set -e
        }

        restart_hadoop() {
            stop_hadoop
            start_hadoop
        }

        case "$1" in
            start)
                echo -n "Starting $DESC: "
                start_hadoop
                echo "$NAME."
                ;;
            stop)
                echo -n "Stopping $DESC: "
                stop_hadoop
                echo "$NAME."
                ;;
            force-reload|restart)
                echo -n "Restarting $DESC: "
                restart_hadoop
                echo "$NAME."
                ;;
            *)
                echo "Usage: $0 {start|stop|restart|force-reload}" >&2
                RETVAL=1
                ;;
        esac
        exit $RETVAL

    Please tell me how to run Hadoop without entering a password.
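
    Two things commonly cause prompts with a script like this, sketched below: su naveen only skips the password when the script itself runs as root, and start-all.sh ssh'es to localhost, which needs key-based login for the naveen user. (The init script path and key locations are assumptions, not from the question.)

        sudo /etc/init.d/hadoop start               # run the script as root so su does not prompt
        # as the naveen user, set up passwordless ssh to localhost for start-all.sh/stop-all.sh:
        ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
        cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
        chmod 600 ~/.ssh/authorized_keys
        ssh localhost true                          # should now log in without asking for a password

    With the keys in place, registering the script via update-rc.d makes it run as root at boot, so neither su nor ssh should ask for anything.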

    Read the article

  • Can Internet data be used by malware when PC off?

    - by Val
    I have noticed over the last month that my off-peak data has been used at a rate of approx. 350 MB per hour - this has meant that I have gone over my quota and been slowed down by my ISP to 256k. There is no one in the house using the connection at that time (2am-8am are my ISP's off-peak hours). My PC and other wireless devices (iPad and iPhone) are turned off. I have changed the wireless password on my modem 3 times and it is now 30 digits long, so I don't think someone else is using my wireless access between 2 and 8am. It has been suggested by my ISP that I may have malware/spyware on my computer. Sorry for my ignorance, but can malware still run if the PC is off? I did look at my modem's log and traced an IP address to a service called Amazon Simple Storage Service. Could this company possibly be the culprit? I am not too tech savvy, so any assistance is appreciated. I have run a barrage of spyware-cleaning software, e.g. Malwarebytes, Spybot, etc. Cheers, Val

    Read the article

  • Windows 7 ignores F6/F8 and will not boot

    - by P.Brian.Mackey
    I have a work PC with Sophos SafeGuard encryption on it. Windows failed to start. When I boot up I receive an error saying a recent hardware or software change might be the cause:

        File:   \Boot\BCD
        Status: 0xc0000098
        Info:   The Windows Boot Configuration Data file does not contain a valid OS entry.

    This began after the PC forced me to run a system recovery. My machine had powered down improperly (power outage?) and simply would not respond to my keyboard input to cancel the option to scan my system. After the scan "repaired" a boot file, my system crashed. Now it tells me I can insert my Windows 7 disc and run recovery. I can't simply do this because of SafeGuard: the system recovery can't see my encrypted drive. I tried hitting F2 to manually log in to SafeGuard and then selected the option to boot from media. The computer prompts me to hit any key to boot from disc... which I do, but once again it is not reading my keyboard input. I can't get F8/F6 to bypass startup files and get me to a command prompt like in the old days. If I could get to a command prompt I might be able to recover the file Windows jacked up from its backup location... though I may need to use the Windows recovery disc UI to do this..??? In the past I've been able to plug in a PS/2 keyboard when USB keyboards stop responding like this, but I have no PS/2 keyboard available. Does anyone have any idea how I can undo the damage Windows system recovery has done with SafeGuard installed?
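
    For reference, the stock repair for this particular BCD error once a Windows Recovery Environment command prompt is reachable and the SafeGuard-encrypted volume is visible to it - a big "if" here, and these commands do nothing for the SafeGuard pre-boot side:

        rem list the Windows installations the recovery environment can see
        bootrec /scanos
        rem rebuild \Boot\BCD from what was found
        bootrec /rebuildbcd

    bootrec /fixmbr is often suggested alongside these, but on a SafeGuard-encrypted disk it would likely overwrite the SafeGuard pre-boot loader, so it is best avoided here. Getting that far probably also means enabling "legacy USB support" in the BIOS so the keyboard works at the "press any key" prompt, and checking Sophos's own recovery tooling/documentation for mounting a SafeGuard drive from recovery media.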

    Read the article
