Search Results

Search found 42786 results on 1712 pages for 'install from source'.


  • Hyper-V virtual machine can't be migrated to a specific host in the cluster

    - by Massimo
    I have a three-node Hyper-V cluster running on Windows Server 2008 R2 which is working quite flawlessly: there are no errors, live migration works, all hosts can and will happily run all virtual machines, and so on. But one specific virtual machine is trying to make me go mad: it works on two nodes of the cluster, but not on the third one. Whenever I try to move the VM to that node, be it in a live migration or with the VM powered off, it always fails. In the event log of the host these events are logged: Source: Hyper-V-VMMS Event ID: 16300 Cannot load a virtual machine configuration: General access denied error (0x80070005) (Virtual machine ID <GUID>) Source: Hyper-V-VMMS Event ID: 20100 The Virtual Machine Management Service failed to register the configuration for the virtual machine '<GUID>' at 'C:\ClusterStorage\<PATH>\<VM>': General access denied error (0x80070005) Source: Hyper-V-High-Availability Event ID: 21102 'Virtual Machine Configuration <VM>' failed to register the virtual machine with the virtual machine management service. All other VMs can be moved to/from the offending host, and the offending VM can be moved between the other two hosts. Also, this is not a storage problem, because there are other VMs in the same cluster volume, and the host has no trouble running them. What's going on here?
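
    The access denied error (0x80070005) when registering the configuration points at file permissions on the failing node rather than at the cluster or the storage. A hedged check, not a confirmed fix: on Hyper-V 2008 R2 each VM runs under a per-VM "NT VIRTUAL MACHINE\<GUID>" account, and that account needs full control on the VM's configuration and disk files; the path and <GUID> below are placeholders copied from the event text.

        rem run on the failing node from an elevated prompt; inspect the ACL first, then grant
        icacls "C:\ClusterStorage\<PATH>\<VM>\Virtual Machines\<GUID>.xml"
        icacls "C:\ClusterStorage\<PATH>\<VM>\Virtual Machines\<GUID>.xml" /grant "NT VIRTUAL MACHINE\<GUID>":(F)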

    Read the article

  • Installing CDT on top of JDT: Conflicting Dependency

    - by someguy
    I am trying to install the CDT plugin on top of my existing version of Eclipse, which was for Java. The problem is that I got this error message when I tried doing so via "Install New Software...": Cannot complete the install because of a conflicting dependency. Software being installed: Eclipse C/C++ Development Tools 4.0.3.200802251018 (org.eclipse.cdt.feature.group 4.0.3.200802251018) Software currently installed: Eclipse IDE for Java Developers 1.3.1.20100916-1202 (epp.package.java 1.3.1.20100916-1202) Only one of the following can be installed at once: International Components for Unicode for Java (ICU4J) 4.2.1.v20100412 (com.ibm.icu 4.2.1.v20100412) com.ibm.icu 3.6.1.v20070906 Cannot satisfy dependency: From: Eclipse IDE for Java Developers 1.3.1.20100916-1202 (epp.package.java 1.3.1.20100916-1202) To: org.eclipse.epp.package.java.feature.feature.group [1.3.1.20100916-1202] Cannot satisfy dependency: From: Eclipse C/C++ Development Tools 4.0.3.200802251018 (org.eclipse.cdt.feature.group 4.0.3.200802251018) To: com.ibm.icu [3.4.0,4.0.0) Cannot satisfy dependency: From: EPP Java Package 1.3.1.20100916-1202 (org.eclipse.epp.package.java.feature.feature.group 1.3.1.20100916-1202) To: org.eclipse.rcp.feature.group 3.6.0 Cannot satisfy dependency: From: Eclipse RCP 3.6.0.v20100519-9OArFKvFtsd7WLUKh-DcYTS (org.eclipse.rcp.feature.group 3.6.0.v20100519-9OArFKvFtsd7WLUKh-DcYTS) To: com.ibm.icu [4.2.1.v20100412] Cannot satisfy dependency: From: Eclipse RCP 3.6.1.r361_v20100827-9OArFLdFjY-ThSQXmKvKz0_T (org.eclipse.rcp.feature.group 3.6.1.r361_v20100827-9OArFLdFjY-ThSQXmKvKz0_T) To: com.ibm.icu [4.2.1.v20100412] What can I do to solve this?

    Read the article

  • Fixing partitions and Installing BackTrack

    - by Josh
    My whole problem started when I tried to install BackTrack (3 or 4). BackTrack was trying to install itself over my entire Windows partition (which I had combined into one when I installed Windows 7). So I booted back into Windows 7 on my netbook (Eee PC 1000HE, btw) and went into Disk Management with the aim of making a partition to install BackTrack on, but came out with a really screwed-up drive. I had two partitions when I started: the Windows system partition, and then my main partition, and they were blue in Disk Management (I think that has something to do with formatting). After I went through the steps to make a 10 GB FAT32 partition for BackTrack, I had about five partitions: one called PE: that I have no idea what it is, the Windows system partition, my main partition, 10 GB of unallocated space, and two other partitions under 50 MB each that are both unused space. They were all converted to simple volumes (green instead of blue). And BackTrack still wants to erase my entire drive. Question 1: How do I get it back to the way it was? Question 2: How do I get BackTrack to dual boot on my netbook?

    Read the article

  • Ubuntu 12.04 installs but does not boot after it asks me to remove the CD

    - by Randnum
    I'm trying to install Ubuntu 12.04 on my computer. It had an old copy of Windows 7 on it; I tried to reformat the hard drive for a fresh install of Ubuntu, but I think I messed up the partitions in some way that prevents it from fully loading. I'm able to complete the install fine and use guided partitioning, so it should be happy, but when it gets about 90% through, at the part that ejects the CD and restarts the system, it fails. After ejecting the CD and restarting, it just loads up the Lenovo BIOS splash screen, then purple, then black. I can hear a sound from my speakers like some notification sound, but there is no text on my screen. I've since gone back in under Rescue System to try and reconfigure the partitions, hoping that it will fix it, and I've tried several combinations. Currently it's SCSI1 (0,0,0) (sda) - 500.1 GB ATA WDC WD5000AAKX-0 #1 100.0 MB K biosgrub #2 494.1 GB B K ext4 / #3 5.9 GB F swap swap 8.2 kB FREE SPACE. I'm not sure if I need to set the ext4 partition to contain the boot flag, but if I don't include at least one partition with the boot flag enabled it complains, saying that "The partition table format in use on your disks normally requires you to create a separate partition for boot loader code. This partition should be marked for use as an "EFI boot partition" and should be at least 35 MB in size. Note that this is not the same as a partition mounted on /boot". Like I said, it seems to have installed all of the actual data from the CD; it's just not properly booting for some reason.
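
    Since the layout above already has a 100 MB biosgrub partition, a hedged thing to try from the Rescue System or a live CD (not a guaranteed fix) is confirming that partition really carries the bios_grub flag and that GRUB has been written to the disk; the device and partition numbers below come from the layout quoted above and should be verified first.

        sudo parted /dev/sda print                # confirm the table is GPT and which partition is the 100 MB one
        sudo parted /dev/sda set 1 bios_grub on   # mark it as the BIOS-GRUB (boot loader code) partition
        sudo mount /dev/sda2 /mnt                 # the 494.1 GB ext4 root from the layout above
        sudo grub-install --boot-directory=/mnt/boot /dev/sda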

    Read the article

  • Secure iptables config for Samba

    - by Eric
    I'm trying to set up an iptables config such that outbound connections from my CentOS 6.2 server are allowed ONLY if they are of state ESTABLISHED. Currently, the following setup is working great for sshd, but all the Samba rules get totally ignored for a reason I cannot figure out. iptables Bash script to set up ALL rules: # Remove all existing rules iptables -F # Set default chain policies iptables -P INPUT DROP iptables -P FORWARD DROP iptables -P OUTPUT DROP # Allow incoming SSH iptables -A INPUT -i eth0 -p tcp --dport 22222 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A OUTPUT -o eth0 -p tcp --sport 22222 -m state --state ESTABLISHED -j ACCEPT # Allow incoming Samba iptables -A INPUT -i eth0 -s 10.1.1.0/24 -p udp --dport 137:138 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A OUTPUT -o eth0 -d 10.1.1.0/24 -p udp --sport 137:138 -m state --state ESTABLISHED -j ACCEPT iptables -A INPUT -i eth0 -s 10.1.1.0/24 -p tcp --dport 139 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A OUTPUT -o eth0 -d 10.1.1.0/24 -p tcp --sport 139 -m state --state ESTABLISHED -j ACCEPT # Enable these rules service iptables restart iptables rule list after running the above script: [root@repoman ~]# iptables -L Chain INPUT (policy DROP) target prot opt source destination ACCEPT tcp -- anywhere anywhere tcp dpt:22222 state NEW,ESTABLISHED Chain FORWARD (policy DROP) target prot opt source destination Chain OUTPUT (policy DROP) target prot opt source destination ACCEPT tcp -- anywhere anywhere tcp spt:22222 state ESTABLISHED Ultimately, I'm trying to restrict Samba the same way I have done for sshd. In addition, I'm trying to restrict connections to the following IP address range: 10.1.1.12 - 10.1.1.19 Can you guys offer some pointers or possibly even a full-blown solution? I've read man iptables quite extensively, so I'm not sure why the Samba rules are getting thrown out. Additionally, removing the -s 10.1.1.0/24 flags doesn't change the fact that the rules get ignored.
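
    One likely culprit, offered as a hedged guess: on CentOS, service iptables restart flushes the running rules and reloads /etc/sysconfig/iptables, so rules added at runtime by the script are thrown away unless they are saved first. A sketch of the save step plus one way to narrow Samba to 10.1.1.12-10.1.1.19 with the iprange match (assuming that match is available in this kernel/iptables build):

        # persist the freshly added rules instead of restarting the service
        service iptables save
        # restrict the Samba ports to the 10.1.1.12 - 10.1.1.19 range
        iptables -A INPUT -i eth0 -p udp --dport 137:138 -m iprange --src-range 10.1.1.12-10.1.1.19 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p udp --sport 137:138 -m iprange --dst-range 10.1.1.12-10.1.1.19 -m state --state ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth0 -p tcp --dport 139 -m iprange --src-range 10.1.1.12-10.1.1.19 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 139 -m iprange --dst-range 10.1.1.12-10.1.1.19 -m state --state ESTABLISHED -j ACCEPT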

    Read the article

  • Digital Asset Management, iPhoto / Aperture server... alternative

    - by Sisyphus
    Afternoon. Clients, 10: all Apples running either Leopard or Snow Leopard. Server: Snow Leopard Server (and I have an old Dell PowerEdge 650 at home running Gentoo 2.6, if anybody has a Linux solution). The situation: I work in a small design company with 8 people; at present we are looking to consolidate all our image files in one location. At present we each use our preferred single-user DAM solution, be it Adobe Bridge or iPhoto/Aperture (some don't bother at all). The filetypes commonly used are .psd, .pdf, .eps, .tiff, .jpg and RAW image files. Ideally what is needed: centralised on one server, but allows us to search via Spotlight (not essential, but would be nice); includes searchable metadata information such as date, location, title; open-source or as low cost as possible; allows simultaneous users to import files. So far, I have looked at a few open-source DAM systems, such as Razuna, Gallery (not strictly DAM), ResourceSpace and Notre-DAM; while these are brilliant and open-source, they don't integrate as smoothly with the desktop as iPhoto and Aperture. For iPhoto and Aperture, I have tried creating a shared library on the server (a tad laggy), and also using a drive with no permissions, putting a library on it and letting each client read from it; however, if they want to put images into the library, it only supports one user at a time writing to the library... Any ideas what could fulfill our needs? Or is it time to bite the bullet for Final Cut Server? Thanks in advance.

    Read the article

  • BSOD trying to migrate Windows XP from a physical to a virtual machine

    - by pauldoo
    I am attempting to migrate a Windows XP Home installation from a physical machine to a virtual machine. The physical machine has two hard disks; the first is 250GB containing the "C:", the second is 1TB containing "D:". I'd like to create a new virtual machine stored on the D:, which is a copy of the Windows XP Home installation that is currently on the C:. (This will leave the 250GB drive clear for me to install a fresh copy of Windows 7, and still be able to access the old XP installation if necessary.) The first method I tried was to follow the instructions here: http://www.virtualbox.org/wiki/Migrate_Windows I booted up from an Ubuntu Live CD in order to execute the Linux commands whilst the Windows system wasn't running. With this method the virtual machine would always blue screen on startup with a "STOP 0x0000007B" message. The instructions above say to try a "repair install" using the Windows XP disc. Unfortunately for me my XP disc is scratched and will not boot so I was unable to try a repair install. The second method I tried was to use "VMWare Converter Standalone Client". This tool executed without any errors, but again produced a virtual machine that blue screens on startup with the same "STOP" message. Are there any other methods to move the Windows XP installation into a virtual machine? I think next I will try some more manual process to create the cloned virtual machine. I think I will try installing a fresh copy of Windows XP to a virtual machine, then once that is booting OK I will ntfsclone the source C: partition over the top. Perhaps this will fix the booting problems if the issue is related to the MBR or partition table in some way.
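
    A STOP 0x0000007B on first boot after a physical-to-virtual move is classically the boot-time storage driver not matching the new (virtual) disk controller, which is also why a repair install usually clears it. The manual route described above can work; a hedged sketch of the ntfsclone step, with device names as placeholders to be checked against fdisk -l first:

        # on the physical machine, booted from a live CD, saving onto the 1 TB data disk
        ntfsclone --save-image -o - /dev/sda1 | gzip > /mnt/data/xp-c.img.gz
        # inside the VM, booted from the same live CD, after the fresh XP install so the
        # partition layout, MBR and controller driver already exist
        gunzip -c /mnt/data/xp-c.img.gz | ntfsclone --restore-image --overwrite /dev/sda1 -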

    Read the article

  • yum error when installing memcached

    - by Jack
    Hi, I'm trying to install memcached with "yum install memcached" and I'm getting all these errors which I have no idea how to solve. Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package memcached.x86_64 0:1.4.5-1.el5.rf set to be updated --> Processing Dependency: perl(AnyEvent) for package: memcached --> Processing Dependency: perl(AnyEvent::Socket) for package: memcached --> Processing Dependency: perl(AnyEvent::Handle) for package: memcached --> Processing Dependency: perl(YAML) for package: memcached --> Processing Dependency: perl(Term::ReadKey) for package: memcached --> Processing Dependency: libevent-1.1a.so.1()(64bit) for package: memcached --> Running transaction check ---> Package compat-libevent-11a.x86_64 0:3.2.1-1.el5.rf set to be updated ---> Package memcached.x86_64 0:1.4.5-1.el5.rf set to be updated --> Processing Dependency: perl(AnyEvent) for package: memcached --> Processing Dependency: perl(AnyEvent::Socket) for package: memcached --> Processing Dependency: perl(AnyEvent::Handle) for package: memcached --> Processing Dependency: perl(YAML) for package: memcached --> Processing Dependency: perl(Term::ReadKey) for package: memcached --> Finished Dependency Resolution memcached-1.4.5-1.el5.rf.x86_64 from rpmforge has depsolving problems --> Missing Dependency: perl(AnyEvent::Socket) is needed by package memcached-1.4.5-1.el5.rf.x86_64 (rpmforge) memcached-1.4.5-1.el5.rf.x86_64 from rpmforge has depsolving problems --> Missing Dependency: perl(AnyEvent) is needed by package memcached-1.4.5-1.el5.rf.x86_64 (rpmforge) memcached-1.4.5-1.el5.rf.x86_64 from rpmforge has depsolving problems --> Missing Dependency: perl(AnyEvent::Handle) is needed by package memcached-1.4.5-1.el5.rf.x86_64 (rpmforge) memcached-1.4.5-1.el5.rf.x86_64 from rpmforge has depsolving problems --> Missing Dependency: perl(YAML) is needed by package memcached-1.4.5-1.el5.rf.x86_64 (rpmforge) memcached-1.4.5-1.el5.rf.x86_64 from rpmforge has depsolving problems --> Missing Dependency: perl(Term::ReadKey) is needed by package memcached-1.4.5-1.el5.rf.x86_64 (rpmforge) Packages skipped because of dependency problems: compat-libevent-11a-3.2.1-1.el5.rf.x86_64 from rpmforge memcached-1.4.5-1.el5.rf.x86_64 from rpmforge The Perl modules that it's complaining about are already installed. Any ideas?
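
    Yum only sees modules that were installed as RPM packages, so anything added through CPAN will not satisfy these perl(...) dependencies. A hedged sketch, not a guaranteed fix: install the packaged equivalents first (the package names below are assumptions; on EL5 they are usually carried by EPEL or rpmforge, and perl-AnyEvent provides AnyEvent::Socket and AnyEvent::Handle).

        yum install perl-AnyEvent perl-YAML perl-TermReadKey
        yum install memcached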

    Read the article

  • Virtual Machines and Automatic Software Updates

    - by Zian Choy
    It's obvious that one's main computer should always have all the latest security patches, and most people don't blink an eye when Microsoft Update installs non-security updates. In the land of virtual machines, I've run into 2 problems with automatic updates: The virtual machines are only run when needed. Only Windows virtual machines seem to patch themselves. To elaborate on #1, I generally make a virtual machine with a purpose in mind. For example, when I needed an old copy of Internet Explorer to reproduce a bug in RSS Bandit, I had a Virtual PC named RSS Bandit. The machine only stayed running for a few minutes at a time. Consequently, there is no downtime for the machine to download updates at 3 AM. To elaborate on #2, I've noticed that if I haven't run a Windows virtual machine in a while, then the moment I log in, the computer frantically downloads updates and within seconds, if I click the Start button, there is a little orange shield next to the "Shutdown" button. However, I ran a freshly created Ubuntu VM for several hours today with hundreds of updates pending and it seemed to never download any of them or install any of them. Is there any reason to be concerned about running VMs with dozens of security holes? If I should be concerned, then is there any way to get Ubuntu to download and install updates rather than just advertising a long list of updates to download next century? I've already tried telling Ubuntu to automatically download and install updates.
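
    For the Ubuntu guest specifically, a hedged sketch of the two usual approaches: pull everything in one shot whenever the VM is booted for a session, or let the unattended-upgrades package apply security updates in the background while it runs.

        # one-shot, run at the start of a session
        sudo apt-get update && sudo apt-get -y upgrade
        # or background security updates
        sudo apt-get install unattended-upgrades
        sudo dpkg-reconfigure -plow unattended-upgrades   # writes /etc/apt/apt.conf.d/20auto-upgrades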

    Read the article

  • Dependency issue installing PostGIS on CentOs 6.3

    - by Nyxynyx
    I am new to Linux and am trying to install PostGIS2 after successfully installing PostgreSQL 9.1. The machine is running CentOS 6.3 and has cPanel installed. Problem: When I tried installing PostGIS using yum: yum install postgis2_91 postgis2_91-utils, I get the dependency error below. How should I solve this dependency problem and install PostGIS? Thank you so much! --> Finished Dependency Resolution Error: Package: postgis2_91-utils-2.0.1-1.rhel6.i686 (pgdg91) Requires: perl-DBD-Pg Error: Package: gdal-1.7.2-1.el6.i686 (pgdg91) Requires: libdapserver.so.7 Error: Package: gdal-1.7.2-1.el6.i686 (pgdg91) Requires: libdap.so.11 Error: Package: gdal-1.7.2-1.el6.i686 (pgdg91) Requires: libgeotiff.so.1.2 Error: Package: gdal-1.7.2-1.el6.i686 (pgdg91) Requires: libnetcdf.so.6 Error: Package: gdal-1.7.2-1.el6.i686 (pgdg91) Requires: libdapclient.so.3 Error: Package: gdal-1.7.2-1.el6.i686 (pgdg91) Requires: libhdf5.so.6 Error: Package: gdal-1.7.2-1.el6.i686 (pgdg91) Requires: librx.so.0 Error: Package: gdal-1.7.2-1.el6.i686 (pgdg91) Requires: libogdi.so.3 Error: Package: gdal-1.7.2-1.el6.i686 (pgdg91) Requires: libcfitsio.so.0 You could try using --skip-broken to work around the problem ** Found 6 pre-existing rpmdb problem(s), 'yum check' output follows: bandmin-1.6.1-5.noarch has missing requires of perl(bandmin.conf) bandmin-1.6.1-5.noarch has missing requires of perl(bmversion.pl) bandmin-1.6.1-5.noarch has missing requires of perl(services.conf) exim-4.77-1.i386 has missing requires of perl(SafeFile) frontpage-2002-SR1.2.i386 has missing requires of libexpat.so.0 sendmail-cf-8.14.4-8.el6.noarch has missing requires of sendmail = ('0', '8.14.4', '8.el6') Update An error still remains: Error: Package: postgis2_91-utils-2.0.1-1.rhel6.i686 (pgdg91) Requires: perl-DBD-Pg You could try using --skip-broken to work around the problem ** Found 6 pre-existing rpmdb problem(s), 'yum check' output follows: bandmin-1.6.1-5.noarch has missing requires of perl(bandmin.conf) bandmin-1.6.1-5.noarch has missing requires of perl(bmversion.pl) bandmin-1.6.1-5.noarch has missing requires of perl(services.conf) exim-4.77-1.i386 has missing requires of perl(SafeFile) frontpage-2002-SR1.2.i386 has missing requires of libexpat.so.0 sendmail-cf-8.14.4-8.el6.noarch has missing requires of sendmail = ('0', '8.14.4', '8.el6')
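
    One hedged thing to check before chasing the package itself: cPanel machines often ship /etc/yum.conf with an exclude list that covers perl* packages, which can make an otherwise available perl-DBD-Pg (from the base or EPEL repositories) unresolvable.

        grep -i '^exclude' /etc/yum.conf
        # if perl* is listed there, install the one package while overriding the exclusion
        yum install --disableexcludes=main perl-DBD-Pg
        yum install postgis2_91 postgis2_91-utils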

    Read the article

  • Upgrading Visio 2000 to Visio 2007

    - by dirtside
    I have Microsoft Visio 2000 SR 1, and recently purchased Microsoft Office Visio Standard 2007 with the understanding (supported by the product info and some other research) that I'd be able to upgrade. However, when I install 2007, it tells me it can't find a previous install of Visio, but... it's right there! Here's the exact message: "Setup can't find a version of Microsoft Office on your computer. If Office is installed on a disk or network share, click the browse button to select the appropriate disk or share... (etc.)" No matter which directory or drive I pick (various Office installs, the old Visio install, various subdirectories) it gives the following message: "The path you have chosen does not point at a qualifying upgradeable product. Click 'Retry' to try again or 'Cancel' to quit setup." Any ideas? This is a legit copy of Visio 2007 (purchased from Amazon) and the copy of Visio 2000 is legit as well. I'm not sure what exactly the installer is looking for that it would consider a "qualifying upgradeable product". A specific file?

    Read the article

  • syslog-ng and nginx logs to MySQL

    - by Katafalkas
    So a couple of days ago I asked how to log php and nginx logs to a centralized MySQL database, and m0ntassar gave a perfect answer :) cheers! The problem I am facing now is that I can not seem to get it working. syslog-ng version: # syslog-ng --version syslog-ng 3.2.5 This is my nginx log format: log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; syslog-ng source: source nginx { file( "/var/log/nginx/tg-test-3.access.log" follow_freq(1) flags(no-parse) ); }; syslog-ng destination: destination d_sql { sql(type(mysql) host("127.0.0.1") username("syslog") password("superpasswd") database("syslog") table("nginx") columns("remote_addr","remote_user","time_local","request","status","body_bytes_sent","http_referer","http_user_agent","http_x_forwarded_for") values("$REMOTE_ADDR", "$REMOTE_USER", "$TIME_LOCAL", "$REQUEST", "$STATUS","$BODY_BYTES_SENT", "$HTTP_REFERER", "$HTTP_USER_AGENT", "$HTTP_X_FORWARDED_FOR")); }; MySQL table for testing purposes: CREATE TABLE `nginx` ( `remote_addr` varchar(100) DEFAULT NULL, `remote_user` varchar(100) DEFAULT NULL, `time` varchar(100) DEFAULT NULL, `request` varchar(100) DEFAULT NULL, `status` varchar(100) DEFAULT NULL, `body_bytes_sent` varchar(100) DEFAULT NULL, `http_referer` varchar(100) DEFAULT NULL, `http_user_agent` varchar(100) DEFAULT NULL, `http_x_forwarded_for` varchar(100) DEFAULT NULL, `time_local` text, `datetime` text, `host` text, `program` text, `pid` text, `message` text ) ENGINE=InnoDB DEFAULT CHARSET=latin1 Now the first thing that goes wrong is when I restart syslog-ng: # /etc/init.d/syslog-ng restart Stopping syslog-ng: [ OK ] Starting syslog-ng: WARNING: You are using the default values for columns(), indexes() or values(), please specify these explicitly as the default will be dropped in the future; [ OK ] I have tried creating a file destination and it all works fine, and then I have tried replacing my destination with: destination d_sql { sql(type(mysql) host("127.0.0.1") username("syslog") password("kosmodromas") database("syslog") table("nginx") columns("datetime", "host", "program", "pid", "message") values("$R_DATE", "$HOST", "$PROGRAM", "$PID", "$MSGONLY") indexes("datetime", "host", "program", "pid", "message")); }; Which did work, and it was writing stuff to MySQL. The problem is that I want to write stuff in exactly the same format as the nginx log format. I assume that I am missing something really simple or I need to do some parsing between source and destination. Any help will be much appreciated :)
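
    A hedged sketch of the missing piece, not a tested config: with flags(no-parse) the whole access-log line sits in $MSG, so a parser has to split it before the destination's values() can reference individual fields. syslog-ng 3.2 ships csv-parser() for exactly this; the NGINX.* column names below are arbitrary, and the parser must also be referenced in the log path.

        parser p_nginx {
            csv-parser(columns("NGINX.REMOTE_ADDR", "NGINX.DASH", "NGINX.REMOTE_USER",
                               "NGINX.TIME_LOCAL", "NGINX.REQUEST", "NGINX.STATUS",
                               "NGINX.BODY_BYTES_SENT", "NGINX.HTTP_REFERER",
                               "NGINX.HTTP_USER_AGENT", "NGINX.HTTP_X_FORWARDED_FOR")
                       delimiters(" ")
                       quote-pairs('""[]'));
        };
        log { source(nginx); parser(p_nginx); destination(d_sql); };
        # and in d_sql: values("${NGINX.REMOTE_ADDR}", "${NGINX.REMOTE_USER}", "${NGINX.TIME_LOCAL}", ...)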

    Read the article

  • How to tell if Microsoft Works is 32 or 64 bit? Please Help!

    - by Bill Campbell
    Hi, I am trying to convert one of our apps to run on Win7 64 bit from XP 32 bit. One of the things that it uses is Excel to import files. It's a little complicated since it was using Microsoft.Jet.OLEDB.4.0 (Excel). I found Office 14 (2010) has a 64-bit version I can download. I downloaded Office 2010 Beta but it didn't seem to install Microsoft.ACE.OLEDB.14.0. I found that I could download 2010 Office System Driver Beta: Data Connectivity Components, which has the ACE.OLEDB.14 in it, but when I try to install it, the installer tells me "You cannot install the 64-bit version of Access Database engine for Microsoft Office 2010 because you currently have 32-bit Office products installed". How do I determine what 32-bit Office products this is referring to? My Dell came with Microsoft Works installed. I don't know if this is 32 or 64 bit. Is there any way to tell? I don't want to uninstall this if it's not the problem and I'm not sure what else might be the problem. Any help would be appreciated! thanks, Bill

    Read the article

  • KVM Guest with NAT + Bridged networking

    - by Daniel
    I currently have a few KVM guests on a dedicated server with bridged networking (this works) and I can successfully ping the outside IPs I assign via ifconfig (in the guest). However, due to the fact that I only have 5 public IPv4 addresses, I would like to port forward services like so: hostip:port - kvm_guest:port UPDATE I found out KVM comes with a "default" NAT interface, so I added the virtual NIC to the guest virsh configuration and then configured it in the guest; it has the IP address 192.168.122.112. I can successfully ping 192.168.122.112 and access all ports on 192.168.122.112 from the KVM host, so I tried to port forward like so: iptables -t nat -I PREROUTING -p tcp --dport 5222 -j DNAT --to-destination 192.168.122.112:2521 iptables -I FORWARD -m state -d 192.168.122.0/24 --state NEW,RELATED,ESTABLISHED -j ACCEPT telnet KVM_HOST_IP 5222 just hangs on "trying" telnet 192.168.122.112 2521 works [root@node1 ~]# tcpdump port 5222 tcpdump: WARNING: eth0: no IPv4 address assigned tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes 23:43:47.216181 IP 1.152.245.247.51183 > null.xmpp-client: Flags [S], seq 1183303931, win 65535, options [mss 1400,nop,wscale 3,nop,nop,TS val 445777813 ecr 0,sackOK,eol], length 0 23:43:48.315747 IP 1.152.245.247.51183 > null.xmpp-client: Flags [S], seq 1183303931, win 65535, options [mss 1400,nop,wscale 3,nop,nop,TS val 445778912 ecr 0,sackOK,eol], length 0 23:43:49.415606 IP 1.152.245.247.51183 > null.xmpp-client: Flags [S], seq 1183303931, win 65535, options [mss 1400,nop,wscale 3,nop,nop,TS val 445780010 ecr 0,sackOK,eol], length 0 7 packets received by filter 0 packets dropped by kernel [root@node1 ~]# iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere 192.168.122.0/24 state NEW,RELATED,ESTABLISHED Chain OUTPUT (policy ACCEPT) target prot opt source destination All help is appreciated. Thanks.
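
    A hedged sketch of the pieces that commonly need checking for this kind of DNAT setup (addresses and ports are taken from the question; <HOST_PUBLIC_IP> is a placeholder):

        sysctl -w net.ipv4.ip_forward=1   # forwarding must be enabled on the host
        iptables -t nat -I PREROUTING -p tcp --dport 5222 -j DNAT --to-destination 192.168.122.112:2521
        iptables -I FORWARD -p tcp -d 192.168.122.112 --dport 2521 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
        # PREROUTING is only traversed by packets arriving from outside; a telnet test run
        # on the KVM host itself needs a matching rule in the nat OUTPUT chain instead:
        iptables -t nat -I OUTPUT -p tcp -d <HOST_PUBLIC_IP> --dport 5222 -j DNAT --to-destination 192.168.122.112:2521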

    Read the article

  • Configuring vsftpd with nginx on Ubuntu 12.04 LTS

    - by arby
    I've attempted to configure an nginx / vsftpd server on Ubuntu 12.04 LTS (via Amazon EC2) a couple of times now, but I seem to keep making a mistake along the way. Currently, when I try to connect to my FTP server it takes a minute or so before it connects. Then when I issue a command, they all time out with an "operation failed" error. Aside from these issues, I'm not completely confident with the file ownership & permissions or the configuration / settings. So, I think it's best if I just re-install and re-configure correctly. I believe the nginx installation comes with a default user of www-data:www-data and web root directory ownership by root:root. Vsftpd, however, needs to have a user created with the same group as the nginx user (www-data), and the same home directory as the nginx server (/usr/share/nginx/www), with g+w chmod permissions granted on that directory. The vsftpd.conf file should disable anonymous logins and enable local logins, file writing, and chroot local users. In my previous config, I had /bin/false set for the ftp user's shell and pam_shells.so disabled. I also had local_umask set to 0027. So, starting with a fresh EC2 instance, I've got: sudo apt-get install vsftpd sudo apt-get install nginx For the firewall I issued the command (not sure if necessary): sudo ufw allow ftp Which commands / config is recommended from here? I only need 1 ftp user that I can use to login with my ftp client to modify the single nginx web domain, which will need PHP & SQL for WordPress.
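
    A hedged sketch of the layout the question describes, not a verified recipe; the user name "deploy" and the passive-port range are arbitrary choices, and on EC2 the passive ports also have to be opened in the instance's security group.

        sudo useradd -d /usr/share/nginx/www -s /bin/false -G www-data deploy
        sudo passwd deploy
        sudo chown -R root:www-data /usr/share/nginx/www
        sudo chmod -R g+w /usr/share/nginx/www
        echo /bin/false | sudo tee -a /etc/shells      # or disable pam_shells as noted above
        # /etc/vsftpd.conf essentials for this setup:
        #   anonymous_enable=NO
        #   local_enable=YES
        #   write_enable=YES
        #   chroot_local_user=YES
        #   local_umask=0027
        #   pasv_enable=YES
        #   pasv_min_port=40000
        #   pasv_max_port=40100
        #   pasv_address=<the instance's public IP>
        # note: vsftpd 2.3.5 refuses to chroot into a root directory the user can write to,
        # so either keep the top-level directory non-writable for the ftp user (write only in
        # subdirectories) or use a build that offers allow_writeable_chroot
        sudo service vsftpd restart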

    Read the article

  • GTX 280 purple snow on bootup, card works without drivers

    - by Brokar
    I have owned an ASUS GTX280 for 3 years now. The card has been great all along, but I started having problems 10 days ago. I was playing Diablo 3 for 1 week on max settings with no problems, then suddenly my display kept getting some weird purple colours as soon as I booted and logged into Windows. Went into safe mode, updated drivers and it kept crashing. Formatted the PC, fresh Windows install with new WHQL drivers, again the same problem. Uninstalled the nvidia drivers and the PC has been running great for 4 days now; of course I cannot run games, but everything works at 1680x1050 resolution and I can browse the internet, watch movies and use my PC for everything but gaming. As soon as I install nvidia drivers the PC won't boot. I only want to game a few hours a week (very busy program with school this month, so it might be a blessing that I cannot game) and I would love it if I could keep the card. I am looking to upgrade later on when I will have time for gaming, but I wonder if I could still use the card somehow with different/new drivers (tried older drivers that came with the card on a CD as well). tldr: PC works fine with no nvidia drivers (apart from gaming, of course). Once I install WHQL drivers or older ones, I cannot even log into Windows. Fix?

    Read the article

  • SOGo installation on Mail Server

    - by i.h4d35
    We run a normal mail server on cPanel for web-based email. We've just got a request to add calendar, address book and tasks functions; mobile capabilities (I'm guessing access via a mobile client/app); public folders, etc. On the client side, we have some people using webmail, some use MS Outlook and some others use Mozilla Thunderbird. Having looked around, I zeroed in on SOGo, Citadel and Kolab as options for this. I read through SOGo's official install guide and also checked here and here. However, I see most of the HowTos ask for installation of MySQL/PgSQL, LDAP, Samba etc. While I can manage installation of Samba (if required), I have no idea if installing LDAP, MySQL etc. is really required. Also, any guidance as to how to install on a regular mail server would be appreciated. Sorry if this sounds vague. If any more information is required, I'll be happy to give it. Thanks in advance. Edit: This server in question has always been governed via cPanel (to install PHP, MySQL, configure DNS etc). So I am confused if I really need LDAP.

    Read the article

  • Puppet and Vim fighting over Ruby version

    - by devians
    I have installed puppet from the .dmg from puppetlabs. If I remove ruby 1.9.3, puppet works, but other things like my vim install (dependant plugins) do not. According to http://docs.puppetlabs.com/guides/platforms.html#ruby-versions 1.9.3 is supported. So whats going wrong with puppet? % uname -a Darwin Kusanagi.local 11.4.2 Darwin Kernel Version 11.4.2: Thu Aug 23 16:25:48 PDT 2012; root:xnu-1699.32.7~1/RELEASE_X86_64 x86_64 % which ruby /usr/local/bin/ruby % ruby --version ruby 1.9.3p327 (2012-11-10 revision 37606) [x86_64-darwin11.4.2] % /usr/bin/ruby --version ruby 1.8.7 (2012-02-08 patchlevel 358) [universal-darwin11.0] % brew info ruby 1 ? ruby: stable 1.9.3-p327, HEAD http://www.ruby-lang.org/en/ Depends on: pkg-config, readline, gdbm, libyaml /usr/local/Cellar/ruby/1.9.3-p327 (796 files, 17M) * https://github.com/mxcl/homebrew/commits/master/Library/Formula/ruby.rb ==> Options --with-tcltk Install with Tcl/Tk support --with-suffix Suffix commands with "19" --universal Build a universal binary --with-doc Install documentation ==> Caveats NOTE: By default, gem installed binaries will be placed into: /usr/local/Cellar/ruby/1.9.3-p327/bin You may want to add this to your PATH. % puppet /usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require': cannot load such file -- puppet/util/command_line (LoadError) from /usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require' from /usr/bin/puppet:3:in `<main>'
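
    The trace above shows /usr/bin/puppet being executed by the Homebrew 1.9.3 interpreter, while the .dmg presumably laid its libraries down for Apple's system Ruby 1.8. A hedged workaround that leaves vim's Ruby alone: run puppet under the system interpreter explicitly, or repoint the wrapper's shebang (assuming it currently reads #!/usr/bin/env ruby).

        /usr/bin/ruby /usr/bin/puppet --version
        # to make it permanent, point the wrapper at the system ruby
        sudo sed -i '' '1s|^#!.*$|#!/usr/bin/ruby|' /usr/bin/puppet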

    Read the article

  • Debian 5.0 (lenny) apt sources fail?

    - by Tronic
    For the past few days, I couldn't update our apt-sources on Debian 5.0 (lenny). I get the following errors. W: Failed to fetch http://ftp.debian.org/debian/dists/lenny/main/binary-amd64/Packages 404 Not Found [IP: 130.89.148.12 80] W: Failed to fetch http://ftp.debian.org/debian/dists/lenny/contrib/binary-amd64/Packages 404 Not Found [IP: 130.89.148.12 80] W: Failed to fetch http://ftp.debian.org/debian/dists/lenny/non-free/binary-amd64/Packages 404 Not Found [IP: 130.89.148.12 80] W: Failed to fetch http://ftp.debian.org/debian/dists/lenny/main/source/Sources 404 Not Found [IP: 130.89.148.12 80] W: Failed to fetch http://ftp.debian.org/debian/dists/lenny/contrib/source/Sources 404 Not Found [IP: 130.89.148.12 80] W: Failed to fetch http://ftp.debian.org/debian/dists/lenny/non-free/source/Sources 404 Not Found [IP: 130.89.148.12 80] How do I fix this problem? Edit: My current sources are: # Debian Lenny deb http://ftp.de.debian.org/debian/ lenny main non-free contrib deb-src http://ftp.de.debian.org/debian/ lenny main non-free contrib # Debian Lenny Non-US deb http://non-us.debian.org/debian-non-US lenny/non-US main contrib non-free deb-src http://non-us.debian.org/debian-non-US lenny/non-US main contrib non-free # Debian Lenny Security deb http://security.debian.org/ lenny/updates main contrib non-free
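
    Lenny was removed from the primary mirrors when it was archived, which is why the dists/lenny paths above now return 404; the release still lives on archive.debian.org. A hedged sketch of repointed sources (the old non-US archive is long gone and can simply be dropped):

        # /etc/apt/sources.list
        deb http://archive.debian.org/debian/ lenny main contrib non-free
        deb-src http://archive.debian.org/debian/ lenny main contrib non-free
        deb http://archive.debian.org/debian-security/ lenny/updates main contrib non-free
        # then run apt-get update; warnings about the archive's old Release signatures are expected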

    Read the article

  • XP CD doesn't offer repair option

    - by SLaks
    I'm fixing an IBM Thinkpad laptop running XP Pro which doesn't boot all the way (it gets past the XP logo boot screen, a movable mouse cursor appears, and it doesn't get any further, even in safe mode) after being bumped a bit. I'd like to do a repair install. I booted it to an XP Pro CD, but the repair install option (not the Recovery Console) doesn't appear. After pressing F8 to accept the EULA, it says "Loading setupp.ini", then immediately goes to a partition list (it never says "Searching for previous installations of Microsoft Windows"). If I select the partition, it warns me that there is already a Windows installation in that partition, and that it will be completely obliterated if I continue. (So I know that it does see the contents of the hard disk.) I booted the same CD in an XP virtual machine, and it offered to repair the XP installation in the virtual machine, so the problem isn't with the CD. Does anyone know how to make it do a repair install (or have any other ideas to solve the problem)? It might not show up because it's an OEM installation (but not an OEM CD), but that's just a guess.

    Read the article

  • MacPorts error installing gsoap

    - by Kevin
    Hi All, I have installed Mac Ports V1.8.1 no worries. I ran sudo port -v selfupdate no worries. I ran sudo port install gsoap And get the following error message. --- Computing dependencies for gsoap --- Fetching gsoap --- Attempting to fetch gsoap_2.7.13.tar.gz from http://optusnet.dl.sourceforge.net/gsoap2 --- Verifying checksum(s) for gsoap --- Extracting gsoap --- Applying patches to gsoap --- Configuring gsoap Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_gsoap/work/gsoap-2.7" && ./configure --prefix=/opt/local --enable-samples " returned error 77 Command output: checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for gawk... no checking for mawk... no checking for nawk... no checking for awk... awk checking whether make sets $(MAKE)... no checking build system type... i386-apple-darwin10.2.0 checking host system type... i386-apple-darwin10.2.0 checking whether make sets $(MAKE)... (cached) no checking for C++ compiler default output file name... configure: error: C++ compiler cannot create executables See `config.log' for more details. Error: Status 1 encountered during processing. Any ideas as to why it is failing. Regards Kevin

    Read the article

  • Generic RPM package for Python 2.x

    - by RaphDG
    I have a Python application; it can run on Python >= 2.6 and it's architecture-independent. I need the RPM package of this application to be installed on Fedora 14 (Python 2.7) and CentOS 6.2 (Python 2.6). I currently use mock to build one RPM package for each "flavour" and it works well. I apparently can't install the CentOS-compiled RPM on Fedora. It gives me this error message: error: Failed dependencies: python(abi) = 2.6 is needed by myapp-0.9.el6.noarch Here is the relevant part of my .spec file: %{!?python_sitelib: %global python_sitelib %(%{__python} -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())")} %{!?python_sitearch: %global python_sitearch %(%{__python} -c "from distutils.sysconfig import get_python_lib; print(get_python_lib(1))")} Name: myapp Version: #VERSION# Release: #RELEASE#%{dist} Summary: myapp Group: Development/Languages License: Apache v2 Source0: %{name}-%{version}-#RELEASE#.tar.gz BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n) BuildArch: noarch BuildRequires: python-devel BuildRequires: python-setuptools %description myapp %prep %setup -c %build %{__python} setup.py build %install %{__rm} -rf %{buildroot} %{__python} setup.py install -O1 --skip-build --root %{buildroot} Do I really have to use mock and build 2 RPMs, or is there another way to create a single generic 2.x RPM package?
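
    The python(abi) = 2.6 requirement is added automatically by RPM's dependency generator when the package is built against the EL6 Python, which is what stops the same binary RPM from installing against Fedora's 2.7. One hedged way around it (a trade-off, not the only option) is to switch off automatic Requires generation in the spec and state the real constraint by hand; note this drops all auto-detected dependencies, so anything else the package needs must be listed explicitly.

        # added to the spec preamble
        AutoReq: no
        Requires: python >= 2.6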

    Read the article

  • Display stretches 4:3 ratios; Adds scrolling to other ratios

    - by Matt
    I have a dual monitor setup. Normally, they both display at 1680x1050. They have been setup this way for about a year. I'm using Windows XP Professional 2003 x64 SP2. Today, out of nowhere, one of the monitors kicked back to a lower resolution. I was not playing with any configuration at the time.. in fact all I had done was close a window (maybe a browser). But the thing is that the resolution is still preserved partially by the fact that the screen will scroll when you move the mouse. So it's like looking through a 1024x768 window into a 1680x1050 world. The monitor itself does not appear to be damaged, because I also have it connected to my netbook (via KVM) and higher resolutions work fine. I tried uninstalling/reinstalling the drivers to no avail. System restore doesn't help either. I'm unsure of the exact ATI card I'm using.. Device Manager lists it as "Radeon X300/X550/X1050". There is no Catalyst Control Center software installed. I tried to install it, but there doesn't seem to be a way to install it by itself ... it forces you to install another driver, which breaks both of my displays, forcing me to go into safe mode and run system restore again. Any ideas? Thanks EDIT: After playing around more, I discovered that the "scrolling" behavior is only present for aspect ratios that are not 4:3. For 4:3 ratios, it just stretches out to fit the wide screen. My monitor's native ratio is 16:9 .. what could be causing it to think it needs to scroll?

    Read the article

  • How do I use a self encrypting drive?

    - by Unique_Key
    I recently purchased a Micron RealSSD c400 self encrypting drive, and I am having a few issues when trying to get it recognized by my laptop (HP Elitebook 8440p running Windows 7 x64; also tried on a custom-built desktop). When I try to initialize the drive from disk management, I get a CRC error; also, when attempting to partition it from Windows setup, the program can't create the partitions. I also tried with UBCD, nothing. I assume this is due to drive security, but I haven't been able to find much information about this online; do I need a management software or something? I'm completely stumped here. EDIT As requested, when I try partitioning the device from Windows setup I get a 0x80300024 error; when I try initializing it from disk management, I get a "Data error (cyclic redundancy check)" message, and the event log shows the following under System: Source: VDS Basic Provider, message: unexpected failure. error code 490@01010004 (2x) Source: Virtual Disk Service, message: VDS fails to write boot code on a disk during clean operation. Error code: 80070001@02070008 (1x) Source: Disk, message: The device \Device\Harddisk2\DR2 has a bad block (2x) The security logs show nothing related. Also, when attempting to configure it from UBCD (utility: HDAT2), I get an error along the lines of "can't edit partition information" or something to that tune.

    Read the article

  • how does svn work with apache?

    - by ajsie
    I've got Ubuntu installed with LAMP. I'm using WebDAV to upload/download files to/from the Ubuntu web server after I have edited the PHP source files in NetBeans. However, I wonder what the best practice is for editing source files and committing these changes to the new website. Because if we are 2-3 developers, I guess we have to use SVN, but I have never used it before, so I wonder how it works. Should I install it and then select /var/www (Apache's web root) as the repository folder? Then when I check in, will all the changes apply immediately? Could someone please explain the following steps: how to download and edit the source files, upload the files, and see the new changes on the website? Because I have only worked with a local Apache before, and it was only me. Now there will be some more programmers, so I have to set up a decent, central environment for this, and I have to know how NetBeans, SVN, WebDAV and Apache work together. Thanks!
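
    Committing straight into a repository placed in /var/www is not how Subversion is normally laid out; the repository is a database, not a browsable tree. A hedged sketch of the usual arrangement (all paths are arbitrary): keep the repository outside the web root, serve a working copy as the site, and let a post-commit hook update that working copy so checked-in changes appear on the site right after each commit. Developers then check out their own working copies (in NetBeans or on the command line) instead of editing over WebDAV.

        svnadmin create /srv/svn/mysite
        svn import /var/www file:///srv/svn/mysite/trunk -m "initial import"
        mv /var/www /var/www.orig && svn checkout file:///srv/svn/mysite/trunk /var/www
        printf '#!/bin/sh\n/usr/bin/svn update /var/www --non-interactive\n' > /srv/svn/mysite/hooks/post-commit
        chmod +x /srv/svn/mysite/hooks/post-commit
        # the hook runs as the committing user (or as Apache with mod_dav_svn), so that
        # account needs write access to the /var/www working copy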

    Read the article
