Search Results

Search found 14443 results on 578 pages for 'virtual keyboard'.

Page 507 of 578

  • JVM disappeared on Mac OS X Snow Leopard 10.6.8

    - by weisjohn
    I'm working in Eclipse one night (also using Android's DDMS from the command line). The next morning, I open the lid, attempt to run Eclipse, and get an error:

        me$ sudo /Applications/eclipse/eclipse
        JavaVM: requested Java version ((null)) not available. Using Java at "" instead.
        JavaVM: Failed to load JVM: /bundle/Libraries/libserver.dylib

    So I then attempt to find out where my JDKs are pointed:

        me$ ls -la /System/Library/Frameworks/JavaVM.framework/Versions/
        total 64
        drwxr-xr-x  12 root  wheel  408 Nov 16 10:44 .
        drwxr-xr-x  12 root  wheel  408 Sep  7 09:39 ..
        lrwxr-xr-x   1 root  wheel    5 Sep  7 17:07 1.3 -> 1.3.1
        drwxr-xr-x   3 root  wheel  102 Dec  2  2009 1.3.1
        lrwxr-xr-x   1 root  wheel   10 Sep  7 17:07 1.4 -> CurrentJDK
        lrwxr-xr-x   1 root  wheel   10 Sep  7 17:07 1.4.2 -> CurrentJDK
        lrwxr-xr-x   1 root  wheel   10 Sep  7 17:07 1.5 -> CurrentJDK
        lrwxr-xr-x   1 root  wheel   10 Sep  7 17:07 1.5.0 -> CurrentJDK
        lrwxr-xr-x   1 root  wheel   10 Sep  7 17:07 1.6 -> CurrentJDK
        drwxr-xr-x   9 root  wheel  306 Nov 16 10:44 A
        lrwxr-xr-x   1 root  wheel    1 Sep  7 17:07 Current -> A
        lrwxr-xr-x   1 root  wheel   59 Sep  7 17:07 CurrentJDK -> /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents

    Everything looks normal so far...

        me$ ls -la /System/Library/Java/JavaVirtualMachines/
        total 0
        drwxr-xr-x  2 root  wheel   68 Nov 16 10:44 .
        drwxr-xr-x  5 root  wheel  170 Nov 16 10:44 ..

    Apparently, my virtual machines have been deleted or moved? I'll probably be able to just re-install Java, but does anyone have any insight into why this may have happened, or how to prevent it in the future?
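
    A minimal recovery sketch, assuming reinstalling through Apple's updater is acceptable (the update label below is hypothetical; take the real one from the --list output):

        # Confirm the JVM directory really is empty, then pull Java
        # back down via Software Update.
        ls /System/Library/Java/JavaVirtualMachines/
        softwareupdate --list
        sudo softwareupdate --install "Java for Mac OS X 10.6 Update 6"   # label is an assumption
        java -version   # should report 1.6.0_xx again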

    Read the article

  • New Static Website with Hosted DNS alternating 502, 503 and Page Does Not Exist Errors

    - by Dave
    This has become an increasingly frustrating ordeal. I'm mostly a web developer, so forgive me if I am using improper terminology here. I have a client who purchased a domain at JustHost. We built him a website and have it on our own server space. Now, I'm mostly used to dealing with GoDaddy, where it is simple enough to manage DNS records and point the A record to our server IP, where Apache on our end deals with the domains via name-based virtual hosts. But for some reason, in setting this up with JustHost, when attempting to go to the domain name, I either get a 502 or 503 error or "webpage does not exist". Now, I know that the basic functionality of the webpage must be working, because I can access the index etc. straight through my server's www data (i.e. [server-ip]/website_folder). I was on the phone with JustHost technical support for over three hours yesterday, and the best I could get was "That's really weird..." I've checked my logs and there doesn't seem to be anything coming through to my end. Does anybody have an idea of what's going on here? I would love for it to be a problem on my end, because JustHost doesn't seem capable of helping further. Any help is greatly appreciated, thanks. I forgot to mention that we have several other sites up and running and completely accessible.
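
    A quick way to see which side answers, sketched with dig and curl (example.com and 203.0.113.10 are placeholders for the client's domain and the server IP):

        # Where does the A record actually point right now?
        dig +short example.com A

        # Ask Apache for the site by name while bypassing DNS entirely;
        # if this returns the page, the vhost works and the problem is
        # upstream at the DNS host.
        curl -v -H "Host: example.com" http://203.0.113.10/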

    Read the article

  • Win 2003 SBS - secure enough by default?

    - by Pekka
    I have to set up a Windows 2003 Small Business Server to work as a Subversion repository and possibly as an e-mail server later. The machine is a virtual one, hosted with a hosting company, and freshly initialized. I used the Security Configuration Wizard to deactivate all server roles. After I install Subversion, I will open the necessary ports for the service; in addition, obviously, RDP will stay open so I can remote-control the machine. Automatic updates are activated, and I will set up e-mail notification every time somebody logs on to the server. I'm a programmer and not a professional systems administrator, so I would like to know whether you would regard this as a sane and secure setup for a (publicly available) box to host sensitive code and/or e-mail. Is there anything in addition I should do to make the machine secure? Is there anything I can do on a long-term basis to keep the machine secure, apart from monitoring the event log (as far as I can make sense of it) and seeing that any hotfixes are installed properly?
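
    One cheap, repeatable check to add to that routine, sketched with nmap from a machine outside the host's network (3389/RDP and 3690/svnserve are the assumed exposed ports):

        # Confirm that only the intended ports answer from the outside.
        nmap -Pn -p 1-65535 your-server.example.com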

    Read the article

  • PHP displays blank white page even with all error reporting enabled

    - by Andy Shinn
    I am trying to debug a broken page in a Drupal application and am having a hard time getting PHP to spit anything useful out. I have the following set:

        error_reporting = E_ALL
        display_errors = On
        display_startup_errors = On
        log_errors = On
        error_log = /var/log/php/php_error.log

    I have a file showing me phpinfo() which confirms these variables are set correctly for the environment. I have increased memory_limit to 256M (which should be more than enough). Yet, the only indication I get is a status 500 code in the apache access log and a blank white page from PHP. The Apache virtual host has LogLevel set to debug and the error log only outputs:

        [Sat Jun 16 20:03:03 2012] [debug] mod_deflate.c(615): [client 173.8.175.217] Zlib: Compressed 0 to 2 : URL /index.php, referer: http://ec2-174-129-192-237.compute-1.amazonaws.com/admin/reports/updates
        [Sat Jun 16 20:03:03 2012] [error] [client 173.8.175.217] File does not exist: /var/www/favicon.ico
        [Sat Jun 16 20:03:03 2012] [debug] mod_deflate.c(615): [client 173.8.175.217] Zlib: Compressed 42 to 44 : URL /favicon.ico

    The PHP error log outputs nothing at all. kernel and syslog show nothing related to Apache or PHP. I have also tried installing suphp and checking its log just confirms the user is executing correctly:

        [Sat Jun 16 20:02:59 2012] [info] Executing "/var/www/index.php" as UID 1000, GID 1000
        [Sat Jun 16 20:05:03 2012] [info] Executing "/var/www/index.php" as UID 1000, GID 1000

    This is on Ubuntu 12.04 x86_64 with the following PHP modules:

        ii  php5         5.3.10-1ubuntu3.1  server-side, HTML-embedded scripting language (metapackage)
        ii  php5-cgi     5.3.10-1ubuntu3.1  server-side, HTML-embedded scripting language (CGI binary)
        ii  php5-cli     5.3.10-1ubuntu3.1  command-line interpreter for the php5 scripting language
        ii  php5-common  5.3.10-1ubuntu3.1  Common files for packages built from the php5 source
        ii  php5-curl    5.3.10-1ubuntu3.1  CURL module for php5
        ii  php5-gd      5.3.10-1ubuntu3.1  GD module for php5
        ii  php5-mysql   5.3.10-1ubuntu3.1  MySQL module for php5

    So, what am I missing here? Why no error reporting?
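
    A blank 500 with silent logs often means the error never reaches PHP's reporting at all (a fatal before any output, or an ini override later in the chain). A minimal sketch to prove the pipeline, using a hypothetical /var/www/errtest.php:

        # Force reporting on at runtime and trigger a fatal error on
        # purpose; if even this page comes back blank, the problem is
        # outside php.ini (handler config, per-vhost ini, suphp, etc.).
        printf '%s\n' '<?php' 'ini_set("display_errors", "1");' \
            'error_reporting(-1);' 'this_function_does_not_exist();' \
            | sudo tee /var/www/errtest.php
        curl -i http://localhost/errtest.php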

    Read the article

  • Home Server: storage virtualisation, what to choose?

    - by Huygens
    I'm looking for virtualisation solutions for storage and OS for a home server. A sort of private cloud where I manage the storage space independently of the VM one. This question focuses on storage management. (I have another question related to the VM/compute instance management.) Here are my environment and wishes:

        Server: HP ProLiant MicroServer with 8 GB RAM (AMD Turion dual core with AMD-V technology), with one 250 GB system disk and up to 4 HDDs (2 TB) for "data".
        OS types: only Linux (perhaps a *BSD VM in the future). Linux distributions do not matter; I'm familiar with RHEL, Fedora, SUSE and Ubuntu, but any other recommendation will be fine.
        The 4 HDDs are going to be a software RAID array, probably RAID 5.
        Storage should be "virtualised/cloudified" and easy to extend: if I add a NAS on the network, I can include the NAS capacity within this storage space as one virtual disk. This can be a NAS, an external HDD or another server.
        Cluster FS, S3-style space or OpenStack block storage? Whatever is easier to manage/maintain and easy to integrate/plug into a VM/compute instance.
        I would prefer free (libre, as in free speech) and open source tools, but it does not have to be free as in free beer.

    Note: the VMs I intend to run on top of this server are one dedicated to backup, one for an "ownCloud/Dropbox"-like service, and perhaps one for a media server (hosting video and photos). I'm not sure if traditional VMs or compute instances are the most suitable for this.
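
    For the RAID 5 piece, a minimal sketch using mdadm with LVM on top, which keeps the pool growable later (device names and sizes are assumptions):

        # 4-disk software RAID 5, with an LVM volume group on top so
        # the pool can later absorb more physical volumes (e.g. an
        # iSCSI LUN exported by a NAS).
        sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 \
            /dev/sdb /dev/sdc /dev/sdd /dev/sde
        sudo pvcreate /dev/md0
        sudo vgcreate storagepool /dev/md0
        sudo lvcreate -L 500G -n vmdata storagepool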

    Read the article

  • Apache2 server running Ruby on Rails application has GoDaddy cert that works in Chrome/Firefox and IE 9 but not IE 8

    - by ryan
    I have a Rails application up on a Linode Ubuntu 11 server, running Apache2. I have a cert purchased from GoDaddy (where we also bought our domain) and the cert is installed on my server. Part of my virtual host file:

        ServerName my_site.com
        ServerAlias www.my_site.com
        SSLEngine On
        SSLCertificateFile /path/my_site.com.crt
        SSLCertificateKeyFile /path/my_site.com.key
        SSLCertificateChainFile /path/gd_bundle.crt

    The cert works fine in Chrome, Firefox and IE 9+, but in IE 8 and below I get this error:

        There is a problem with this website's security certificate. The security certificate presented by this website was issued for a different website's address.

    I'm hosting multiple Rails apps on this same server (4 right now, plus some old PHP sites that don't need SSL). I have tried googling every possible combination of the error/situation that I could think of, but at this point I'm shooting in the dark. The closest I could come up with is that some versions of IE don't support SNI. But that doesn't seem to apply here, because I am getting the warning on Windows 7 machines running IE 8, and the SNI limitation only seemed to apply to IE 8 if the operating system was Windows XP. So why is this cert being accepted by all browsers but giving me a warning in IE 8?

    Edit: Doing a little more digging, I figured out some more. It turns out this is affecting IE 9 as well. The problem seems to be that IE is not traversing the SSL chain to get to the right cert. Firefox and Chrome, when I go to view the certificate, show the correct one, but IE is showing one of our other sites' certificates. REAL QUESTION HERE: that being the case, why is IE not getting the right certificate when others are, and how do I fix it?
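
    A way to see exactly which certificate the server hands to clients with and without SNI, sketched with openssl:

        # With SNI: what modern browsers receive.
        openssl s_client -connect my_site.com:443 -servername my_site.com \
            </dev/null 2>/dev/null | openssl x509 -noout -subject

        # Without SNI: what a non-SNI client receives, usually the
        # default (first-loaded) SSL vhost, which would match IE
        # showing another site's certificate.
        openssl s_client -connect my_site.com:443 \
            </dev/null 2>/dev/null | openssl x509 -noout -subject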

    Read the article

  • Is this "cache administrator" error my server's problem?

    - by Eoin
    Hey, I have a CentOS VPS running Apache with a phpBB installation. One specific user has received errors when posting a message or logging in to the forum. The following issue has arisen in parallel to installing nginx, which serves only the static files of my site. Not sure if this is only coincidence. Furthermore, my setup uses redirects (in some cases, double-redirects) to point the user to a different virtual folder. So, the forum is seen to be at /translation/ but the actual files are found in /phpbb/. I'm at a loss as to what may be the underlying issue. My server? The person's ISP? She has tested both at home and at work, with similar issues.

        While trying to process the request:

        GET /phpbb/index.php?sid=f62c927e7eb8f1d60a92dcc6fd918112 HTTP/1.1
        Host: www.irishgaelictranslator.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-za
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://www.irishgaelictranslator.com/phpbb/ucp.php?mode=login
        Cookie: phpbb3_cipi4_u=96645; phpbb3_cipi4_k=; phpbb3_cipi4_sid=f62c927e7eb8f1d60a92dcc6fd918112; __utma=153470688.1232378553.1294664234.1294664234.1294664234.1; __utmb=153470688.9.10.1294664234; __utmc=153470688; __utmz=153470688.1294664235.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); style_cookie=null

        The following error was encountered:

        Invalid Response

        The HTTP Response message received from the contacted server could not be understood or was otherwise malformed. Please contact the site operator. Your cache administrator may be able to provide you with more details about the exact nature of the problem if needed.
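
    That "Invalid Response" page is proxy-style wording, so isolating which hop emits it may help; a sketch, assuming Apache listens on port 8080 behind nginx (adjust to the real backend port):

        # Fetch the same URL straight from Apache, skipping nginx. If
        # the raw response looks sane here, whatever mangles it sits
        # in front of Apache.
        curl -v -H "Host: www.irishgaelictranslator.com" \
            "http://127.0.0.1:8080/phpbb/index.php"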

    Read the article

  • How to organize deployment process in Chef-controlled environment?

    - by Alex
    I have a web Linux-based infrastructure which consists of 15 virtual machines and over 50 various services. It is fully controlled by Chef. Most of the services are developed internally. Basically, the current deployment process is triggered by a shell script. A build system (a mix of Python and shell scripts) packages the services as .deb files and puts these packages into a repo. It then runs apt-get update on all 15 nodes, because the standard Chef apt cookbook only runs apt-get update once per day and we definitely do not want to run it unconditionally on each chef-client wake. Finally, the build system restarts the chef-client daemons on all 15 nodes (we need this step because of Chef's pull nature). The current process has a number of drawbacks we want to address. First off, it is asynchronous: the deployment script does not check chef-client logs after the restart, so we don't even know if the deployment was successful, and it does not even wait for the Chef clients to complete the cycle. Second, we definitely do not want to force chef-client restarts on all nodes, because we usually deploy only a small number of packages. And third, I am not quite sure using chef-client for deployment is legitimate; probably we are just doing it wrong from the start. Please share your thoughts/experience.
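
    A common middle ground is a synchronous, targeted run via knife ssh instead of bouncing every daemon; a sketch (the search query and SSH user are assumptions):

        # Run chef-client in the foreground on just the nodes hosting
        # the deployed services, and capture success/failure directly.
        knife ssh "role:webapp" "sudo chef-client" --ssh-user deploy
        echo "deployment exit status: $?"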

    Read the article

  • trouble running multiple domains on tomcat behind apache via mod_jk

    - by mkoryak
    I am having trouble setting up tomcat6 with 2 virtual hosts, behind apache2. If I have just one host defined in Tomcat, and one jk worker, everything works fine. As soon as I define another jk worker and a corresponding Tomcat host, I get this error in jk.log:

        9:3075328656] [info] ajp_connect_to_endpoint::jk_ajp_common.c (922): Failed opening socket to (69.164.218.75:8009) (errno=111)
        [Tue Feb 08 03:08:13 2011] [17159:3075328656] [error] ajp_send_request::jk_ajp_common.c (1507): (dogself) connecting to backend failed. Tomcat is probably not started or is listening on the wrong port (errno=111)
        [Tue Feb 08 03:08:13 2011] [17159:3075328656] [info] ajp_service::jk_ajp_common.c (2447): (dogself) sending request to tomcat failed (recoverable), because of error during request sending (attempt=2)
        [Tue Feb 08 03:08:13 2011] [17159:3075328656] [error] ajp_service::jk_ajp_common.c (2466): (dogself) connecting to tomcat failed.
        [Tue Feb 08 03:08:13 2011] [17159:3075328656] [info] jk_handler::mod_jk.c (2615): Service error=-3 for worker=dogself

    My Tomcat server.xml looks like this:

        <Service name="Catalina">
          <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" URIEncoding="UTF-8" redirectPort="8443" />
          <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
          <Engine name="Catalina" defaultHost="dogself.com">
            <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/>
            <Host name="dogself.com" appBase="webapps-dogself" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false">
            </Host>
            <Host name="nousophia.com" appBase="webapps-test" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false">
            </Host>
          </Engine>
        </Service>

    My workers.properties looks like this:

        # workers.properties - ajp13
        #
        # List workers
        worker.list=dogself,nousophia

        # Define dogself
        worker.dogself.port=8009
        worker.dogself.host=dogself.com
        worker.dogself.type=ajp13

        worker.nousophia.port=8009
        worker.nousophia.host=nousophia.com
        worker.nousophia.type=ajp13

    Tomcat is started/restarted. I followed these directions for setting it up: http://stackoverflow.com/questions/1765399/linking-apache-to-tomcat-with-multiple-domains. Can someone confirm that it would work as above?
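
    errno=111 is a plain connection-refused, so it is worth confirming first that the AJP connector is actually listening where both workers resolve; a sketch:

        # Is anything bound on 8009, and where do the worker hostnames
        # point? Both workers must reach the same single AJP connector.
        sudo netstat -tlnp | grep 8009
        getent hosts dogself.com nousophia.com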

    Read the article

  • How does Tunlr work?

    - by gravyface
    For those of you not in the US, Tunlr uses DNS witchcraft to allow you to access US-only (and UK-only stuff like BBC radio online) services and websites like Hulu.com, etc., without using traditional methods like a VPN or web proxy. From their FAQ:

        Tunlr does not provide a virtual private network (VPN). Tunlr is a DNS (domain name system) unblocking service. We're using sophisticated technologies (a.k.a. the Tunlr Secret Sauce ©) to re-adress certain data envelopes, tricking the receiver into thinking the envelope originated from within the U.S. For these data envelopes, Tunlr is transparently creating a network tunnel from your location to our U.S.-based servers. Any data that's not directly related to the video or music content providers which Tunlr supports is not only left untouched, it's also not even routed through Tunlr. In order to use Tunlr, you will have to change the DNS address. See Get started for more information.

    I can't really wrap my head around how this works; I have always assumed that these services performed a geolocation lookup via your client IP. Just really curious as to how this works.

    EDIT 2: I believe they're only proxying the initial geo check and then modifying the data stream request to include your real IP address so that the streaming is direct, not proxied.
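
    The observable half of a DNS unblocker is easy to poke at; a sketch comparing resolver answers (198.51.100.1 is a placeholder, not Tunlr's real resolver):

        # Same hostname, two resolvers. An unblocking DNS answers with
        # its own proxy IP, but only for the geo-checked hostnames.
        dig +short www.hulu.com
        dig +short www.hulu.com @198.51.100.1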

    Read the article

  • site to listen on port 88

    - by JohnMerlino
    I want to get one of my sites to listen on port 88. In ports.conf in /etc/apache2 on Ubuntu Server, I add this so the web app can listen on port 88:

        NameVirtualHost *:80
        Listen 80
        NameVirtualHost *:88
        Listen 88

    In my /etc/apache2/apache2.conf, I have this:

        # Include the virtual host configurations:
        Include sites-enabled/

    Under sites-enabled, I have a file that looks like this:

        Listen *:88
        NameVirtualHost *:88
        <VirtualHost *:88>
            ServerName dogtracking.com
            DocumentRoot /home/doggps/public_html/eaglegps.com/current/public
            <Directory /home/doggps/public_html/eaglegps.com/current/public>
                AllowOverride all
                Options -MultiViews
            </Directory>
            <LocationMatch "^/assets/.*$">
                Header unset ETag
                FileETag None
                # RFC says only cache for 1 year
                ExpiresActive On
                ExpiresDefault "access plus 1 year"
            </LocationMatch>
        </VirtualHost>

    Then I try to restart Apache:

        /etc/init.d/apache2 restart

    And I get:

        * Restarting web server apache2
        /usr/sbin/apache2ctl: line 87: ulimit: open files: cannot modify limit: Operation not permitted
        Warning: DocumentRoot [/home/xtreme/Sites/DogGPS-CMS] does not exist
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName
        [Thu Oct 18 18:04:21 2012] [warn] NameVirtualHost *:88 has no VirtualHosts
        /usr/sbin/apache2ctl: line 87: ulimit: open files: cannot modify limit: Operation not permitted
        Warning: DocumentRoot [/home/xtreme/Sites/DogGPS-CMS] does not exist
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName
        [Thu Oct 18 18:04:22 2012] [warn] NameVirtualHost *:88 has no VirtualHosts
        (13)Permission denied: make_sock: could not bind to address 0.0.0.0:80
        no listening sockets available, shutting down
        Unable to open logs
        Action 'start' failed.
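
    Two things in that output can be checked immediately: the failed bind on port 80 is the classic symptom of restarting as a non-root user, and the "has no VirtualHosts" warning often comes from duplicated Listen/NameVirtualHost lines. A sketch:

        # Each Listen/NameVirtualHost should appear exactly once across
        # the whole config tree (port 88 is currently declared both in
        # ports.conf and in the vhost file).
        grep -RnE "^[[:space:]]*(Listen|NameVirtualHost)" /etc/apache2/

        # Restart with root privileges so Apache can bind ports < 1024.
        sudo /etc/init.d/apache2 restart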

    Read the article

  • How do you initialize networking on a new Xen guest VM?

    - by Marten Veldthuis
    We have a Citrix XenServer setup, and while I personally lean more towards Dev than Ops, I've got an issue that's been bugging me. When you provision a new (Linux/Ubuntu) guest, how do you get it to have the correct IP address? I'd want my application servers to exist in the range of 10.20.0.0/24, preferably being .1, .2, etc., so I can keep my sanity. I guess that the actual IP address is something set in Linux itself, and Xen can't touch that, but then what's the best practice for getting it done? If you set up DHCP, don't you just move the problem to getting the adapters the "correct" MAC addresses? Do you just have to hardcode a large table of MAC addresses to IP addresses, and then always provision new guests with the correct MAC address on the virtual ethernet adapter? What we currently do is have an image of an "app server" that we boot up a new instance of, and then finalize with a script that (among other things) modifies the /etc/network/interfaces file to give it the correct IP. But that feels dirty to me, and I feel like surely there must be a better way. Please enlighten me?
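
    If DHCP is the route, the MAC-to-IP table can live in ordinary host reservations that the provisioning script appends; a sketch for ISC dhcpd (host name, MAC and service name are hypothetical):

        # One reservation per provisioned guest.
        printf '%s\n' \
            'host appserver1 {' \
            '  hardware ethernet 00:16:3e:aa:bb:01;' \
            '  fixed-address 10.20.0.1;' \
            '}' | sudo tee -a /etc/dhcp/dhcpd.conf
        sudo service isc-dhcp-server restart   # service name varies by release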

    Read the article

  • Why does my Ldirectord check multiple times on each real server every interval?

    - by garconcn
    I have an Ldirectord server and two real servers. Ldirectord used to check the request page on each real server once every interval, but now I've found that it checks four times. I have monitored the log on both real servers; they have the same problem. Here is my ldirectord configuration:

        checktimeout=10
        checkinterval=5
        autoreload=yes
        logfile="/var/log/ldirectord.log"
        quiescent=no

        virtual=192.168.1.100:80
                fallback=127.0.0.1:80
                real=192.168.1.10:80 gate
                real=192.168.1.20:80 gate
                service=http
                request="lb.html"
                receive="still alive"
                scheduler=sh
                persistent=60
                protocol=tcp
                checktype=negotiate

    Ldirectord should connect to each real server once every 5 seconds (checkinterval) and request 192.168.1.10:80/lb.html (real/request). The access log on the real server:

        192.168.1.100 - - [13/Jun/2012:10:36:44 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:44 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:44 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:44 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:49 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:49 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:49 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:49 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:54 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:54 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:54 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:54 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
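
    One mundane cause worth ruling out first: several ldirectord instances running at once (e.g. a heartbeat-managed one plus a manually started one), each doing its own checks. A sketch (the config path is an assumption):

        # There should normally be exactly one ldirectord per config.
        ps aux | grep '[l]directord'
        sudo ldirectord /etc/ha.d/ldirectord.cf status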

    Read the article

  • KVM and libvirt: How to configure a new disk device for an existing VM?

    - by initall
    I've got an Ubuntu 9.04 server running two VMs. In /etc/libvirt/qemu/machine1.xml two disk devices are defined like this:

        <devices>
          <emulator>/usr/bin/kvm</emulator>
          <disk type='file' device='disk'>
            <source file='/vserver/machine1/disk0.qcow2'/>
            <target dev='hda' bus='ide'/>
          </disk>
          <disk type='file' device='disk'>
            <source file='/vserver/machine1/disk1.qcow2'/>
            <target dev='hdb' bus='ide'/>
          </disk>

    I need more storage space in at least one of the devices and thought about adding a third hdc device by simply adding one in the same style as above and re-organising my mount structure (the virtual sizes of the current qcow2 files are unfortunately limited). My problem is that reloading libvirtd and restarting the VM do not result in a new visible device (checked with fdisk). I'm aware of extending an existing qcow2 file (converting to raw format, cat-ing/adding the new one, using something like gparted), but only as a last resort. Hopefully it's something very simple I'm missing?
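
    For what it's worth, libvirt does not re-read hand-edited XML under /etc/libvirt/qemu on a plain daemon reload, which would explain the device never showing up; going through virsh avoids that. A sketch (path and size are assumptions):

        # Create the backing file, then attach it as hdc.
        qemu-img create -f qcow2 /vserver/machine1/disk2.qcow2 100G
        virsh attach-disk machine1 /vserver/machine1/disk2.qcow2 hdc \
            --driver qemu --subdriver qcow2 --persistent
        # --persistent needs a reasonably recent libvirt; on older
        # versions, "virsh edit machine1" plus a guest restart does it.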

    Read the article

  • CLOSE_WAIT sockets burst - perhaps because of iptables settings?

    - by Fabrizio Giudici
    I have an Ubuntu 12.04 virtual server where basically the installed software and configuration are the defaults, plus the installation of a Jetty 6 server which serves a few websites. To keep things simple I didn't install Apache httpd, and used iptables to expose Jetty (which runs on port 8080) on port 80. These are the results of /sbin/iptables -t nat -L:

        Chain PREROUTING (policy ACCEPT)
        target     prot opt source    destination
        REDIRECT   tcp  --  anywhere  localhost                      tcp dpt:http redir ports 8080
        REDIRECT   tcp  --  anywhere  Ubuntu-1104-natty-64-minimal   tcp dpt:http redir ports 8080

        Chain INPUT (policy ACCEPT)
        target     prot opt source    destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source    destination
        REDIRECT   tcp  --  anywhere  localhost                      tcp dpt:http redir ports 8080
        REDIRECT   tcp  --  anywhere  Ubuntu-1104-natty-64-minimal   tcp dpt:http redir ports 8080

        Chain POSTROUTING (policy ACCEPT)
        target     prot opt source    destination

    I must confess I have a shallow comprehension of how iptables works, in particular of the different kinds of chains. This thing works, but sometimes I have an explosion of sockets that stay permanently in the CLOSE_WAIT state. I know what this state means, but since I didn't write the code that manages the servlets (they are handled by Jetty), I can't fix the problem by patching my code. Eventually the number of CLOSE_WAIT sockets builds up and makes the server unresponsive, so I have to restart Jetty. I've looked around for similar problems with CLOSE_WAIT, and only found cases related to the programmer's code, or problems with Tomcat, not Jetty. I was wondering whether they could be related to a partially broken iptables configuration (the alternative is a bug in Jetty 6, but I first want to exclude other possible causes). Thanks.
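
    Whatever the root cause, the buildup itself is easy to watch; a sketch for seeing which local port the half-closed sockets sit on:

        # CLOSE_WAIT sockets grouped by local address. Piling up on
        # :8080 means Jetty never closes its end after the peer hangs
        # up, which points at the app/server rather than the REDIRECT
        # rules.
        ss -tan | grep CLOSE-WAIT | awk '{print $4}' | sort | uniq -c | sort -rn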

    Read the article

  • Which AMI to use for Java/Tomcat/MySQL in Amazon EC2?

    - by Justin
    I originally posted this on stackoverflow.com and it was suggested serverfault.com might be a better place to ask this question. So here goes: I'm trying to determine which Amazon Machine Image (AMI) to use as my virtual server in Amazon's EC2. For now, I'll need to choose an AMI that complies with the AWS Free Usage Tier. I want to deploy a Java app that I've been developing using Eclipse on Windows XP, Tomcat 7 and MySQL 5.5. I'm aware that I can choose the Basic 32-bit Amazon Linux AMI and then manually install Tomcat and MySQL (does MySQL get installed on the image, or separately on an Elastic Block Store (EBS) volume?). Here's the rub: I'm a bit of a Linux noob. I can start Tomcat and tail the logs and such on Linux, but I'm not familiar with the install process for Tomcat and MySQL on Linux, or with commands like sudo and chmod. I'm happy to get more hands-on with Linux, but I'm short on time right now. Are there AMIs that already have Tomcat and MySQL bundled? The Request Instance Wizard shows 805 community AMIs that are Free Tier eligible; 51 of them have "Tomcat" in their name. I'm willing to consider using Elastic Beanstalk, but my research thus far hasn't found any discussion of using MySQL with Beanstalk. The discussions all seem to use Amazon's SimpleDB. Any advice is greatly appreciated.
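
    If the stock Amazon Linux AMI wins, the manual install is shorter than it sounds; a sketch (package names are assumptions and vary slightly by repo generation):

        # Tomcat + MySQL on a fresh Amazon Linux instance.
        sudo yum install -y tomcat7 mysql-server
        sudo service mysqld start
        sudo service tomcat7 start
        sudo mysql_secure_installation   # interactive hardening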

    Read the article

  • master-slave datastore replication, automatic failover, and wackamole

    - by z8000
    I have 2 dedicated servers provisioned for my next project's datastores. The datastores are configured for master-slave replication. There's no inherent automatic failover, but I of course want this. That is, I'd love for access to the master datastore to always just work, without having to configure a client library to detect when a master is down and fail over to the slave. I've seen Wackamole, which is based on the Spread Toolkit. You provide Wackamole with a set of IPs and a bunch of nodes, and regardless of the up/down state of any of the nodes, those IPs will stay available/up. Wackamole detects when a node goes down and ARPs the IP(s) that were up on the now-down node. It's pretty neat, actually. So, my thought was to use Wackamole to keep the 2 virtual private IPs available/up. Clients would then always use the same private IP to access the master datastore, and likewise a fixed (but distinct) IP for the slave datastore, even if those IPs were hosted on the same node. My datastore servers are accessed over a private network. I am unsure if this messes with Wackamole, though. Is this lunacy? How do you generally handle automatic failover of private services like a datastore? FWIW, it shouldn't matter, but the datastore is Redis. I don't want to hear "use MySQL", please :) Thanks.
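
    For comparison, the same floating-IP idea is often built with keepalived/VRRP, which can be simpler than Spread for two boxes; a minimal sketch (interface, IP and priority are assumptions; the backup node gets the same block with state BACKUP and a lower priority):

        sudo tee -a /etc/keepalived/keepalived.conf >/dev/null <<'EOF'
        vrrp_instance redis_master {
            state MASTER
            interface eth1
            virtual_router_id 51
            priority 100
            virtual_ipaddress {
                10.0.0.100
            }
        }
        EOF
        sudo service keepalived restart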

    Read the article

  • Does the OSS Backup Solution amanda.org support sparse files?

    - by user97961
    I want to (or rather, have to) do backups of my KVM virtual machine images. I have searched for days for a good backup solution. I know Amanda is a very good solution. It would be kind if someone could tell me whether the following is supported:

        Trigger the creation of an LVM snapshot (by invoking a shell script that I will write for that purpose).
        Do a differential/delta backup on my KVM LVM qcow2 sparse file, i.e. only copy the actually changed bits/bytes (= delta backup). It also has to support that the file being backed up is a sparse file. (Rsync seems to have some kind of problem in this regard: if the file does not exist yet on the other side, it will create a full file, not a sparse file.)
        Release the LVM snapshot (by invoking a script that I will write for that purpose).

    It's strange; I have found no documentation about this anywhere when searching the internet. Zmanda (the commercial edition) has support for Xen VM backup (but not for KVM, as far as I can tell)...
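
    On the rsync aside: sparse handling does exist, though the flags interact awkwardly; a sketch (paths are placeholders):

        # --sparse punches holes on the receiver when creating new
        # files (note it cannot be combined with --inplace on older
        # rsync versions).
        rsync -av --sparse /var/lib/libvirt/images/vm1.qcow2 backuphost:/backups/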

    Read the article

  • Install VirtualBox on Ubuntu 12.04.1 (on [Samsung] Chromebook)

    - by iphonedev7
    I have dual-booted Ubuntu Linux 12.04.1 LTS on my Samsung Series 5 Chromebook, and am trying to run/install Oracle VirtualBox (from the generic .run file downloaded from their website). However, every time I try to run it (as root from the command line), the following error occurs:

        Please install the build and header files for your current Linux kernel.
        The current kernel version is 3.4.0
        Problems were found which would prevent VirtualBox from installing.

    I have tried the version from the Software Center, as well as the command-line installation, both of which gave me errors based on my linux-headers/linux-kernel/linux-[kernel]-image. Here's an error I keep getting (on the command line):

        First Installation: checking all kernels...
        It is likely that 3.4.0 belongs to a chroot's host
        Building only for 3.5.0-18-generic
        Building initial module for 3.5.0-18-generic
        ERROR (dkms apport): kernel package linux-headers-3.5.0-18-generic is not supported
        Error! Bad return status for module build on kernel: 3.5.0-18-generic (x86_64)
        Consult /var/lib/dkms/virtualbox/4.1.12/build/make.log for more information.
        Setting up virtualbox-qt (4.1.12-dfsg-2ubuntu0.2) ...
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place

    ...And one of the more cryptic errors I get when trying to start any virtual machine:

        Result Code: NS_ERROR_FAILURE (0x80004005)
        Component: Machine
        Interface: IMachine {5eaa9319-62fc-4b0a-843c-0cb1940f8a91}
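
    The kernel mismatch (running 3.4.0, building for 3.5.0-18) is the crux: DKMS needs headers matching the running kernel. A sketch of the usual sequence, with the caveat that matching headers for a ChromeOS-derived 3.4.0 kernel may simply not exist in the Ubuntu archive:

        # What is running, and do matching headers exist?
        uname -r
        sudo apt-get install linux-headers-$(uname -r)

        # If the headers install, rebuild the VirtualBox kernel module.
        sudo /etc/init.d/vboxdrv setup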

    Read the article

  • Postfix message ID originating process?

    - by Anders Braüner Nielsen
    Last night my Postfix mail server (Debian Squeeze with Dovecot, Roundcube, OpenDKIM and SpamAssassin enabled) started sending out spam from a single domain of mine, like this:

        $ cat mail.log | grep D6930B76EA9
        Jul 31 23:50:09 myserver postfix/pickup[28675]: D6930B76EA9: uid=65534 from=<[email protected]>
        Jul 31 23:50:09 myserver postfix/cleanup[27889]: D6930B76EA9: message-id=<[email protected]>
        Jul 31 23:50:09 myserver postfix/qmgr[7018]: D6930B76EA9: from=<[email protected]>, size=957, nrcpt=1 (queue active)
        Jul 31 23:50:09 myserver postfix/error[7819]: D6930B76EA9: to=<[email protected]>, relay=none, delay=0.03, delays=0.02/0/0/0, dsn=4.4.2, status=deferred (delivery temporarily suspended: lost connection with mta5.am0.yahoodns.net[66.196.118.33] while sending RCPT TO)

    The domain in question did not have any accounts enabled, only a catchall alias set through postfixadmin; most emails were sent from a specific address I use frequently, but some were also sent from bogus addresses. None of the other virtual domains handled by Postfix were affected. How can I find out what process was feeding postfix/sendmail, or get more info on where the messages originated? As far as I can tell, PHP mail() wasn't used, and I've run several open relay tests. I did a little tinkering (removed winbind from the server and IPv6 addresses from main.cf) after the attack and it seems to have subsided, but I still have no idea how my server was suddenly sending out spam. Maybe I fixed it, maybe I didn't. Can anyone help figure out how I was compromised? Anywhere else I should look? I've run Linux Malware Detect on recently changed files, but nothing was found.
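
    Since pickup with uid=65534 means the mail entered through the local sendmail binary as user nobody, one way to catch the next injection in the act is to audit executions of that binary; a sketch using auditd:

        # Log every exec of sendmail, then trace hits back to the
        # calling process the next time spam is queued.
        sudo apt-get install auditd
        sudo auditctl -w /usr/sbin/sendmail -p x -k spam-probe
        sudo ausearch -k spam-probe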

    Read the article

  • Controlling clone access to multiple mercurial repos served via hgwebdir.cgi

    - by chrislawlor
    I'm trying to host multiple hg repositories to use for my clients. I need to control access to each repository individually: not just push access, but clone as well. I've got an .htaccess set which requires authentication globally:

        AuthUserFile /path/to/hgweb.passwd
        AuthGroupFile /dev/null
        AuthName "Chris Lawlor Client Mercurial Repositories"
        AuthType Basic

        <Limit GET POST PUT>
        Require valid-user
        </Limit>

        <FilesMatch "\.(htaccess|passwd|config|bak)$">
        Order Allow,Deny
        Deny from all
        </FilesMatch>

    Then in each repository, I've got a .hg/hgrc file requiring a valid user:

        [web]
        allow_push = <comma separated user list>

    This almost does what I need. The problem is that I need to add ALL my clients to hgweb.passwd, which gives them clone access to ALL of the repositories. The only solution I can think of is to have another .htaccess and .passwd file in EACH repository. I don't really want to do that, though; it seems a little convoluted. I can already specify a list of authorized users for each repository in that repo's hgrc file with the allow_push setting. If only there were an allow_clone setting as well... All the documentation I've found for hgwebdir.cgi is incomplete. I've read:

        http://mercurial.selenic.com/wiki/HgWebDirStepByStep
        http://hgbook.red-bean.com/read/collaborating-with-other-people.html#sec:collab:cgi
        http://hgbook.red-bean.com/read/collaborating-with-other-people.html

    And others. I've yet to find a comprehensive list of hgrc settings. I guess this is as much an Apache question as a mercurial question. Unless I can find a better approach, I'll be going with a separate .htaccess and .passwd file for each repo. This is a virtual host on WebFaction if it matters; it's set up roughly like this: http://docs.webfaction.com/software/mercurial.html
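
    For what it's worth, hgweb appears to have the read-side counterpart wished for here; worth verifying against the hgrc man page for your Mercurial version, but a sketch (user names hypothetical, added to the existing [web] section):

        # Per-repository clone/pull control, analogous to allow_push.
        printf '%s\n' 'allow_read = alice, bob' \
            >> /path/to/repo/.hg/hgrc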

    Read the article

  • Applications randomly alt-tab? (especially full screen games)

    - by Henry Scotts
    I'm not sure when this began, how it happens, or why it happens, but it is quite bothersome and apparently random. Randomly throughout the day my computer will just go to the desktop. I could be in a full-screen game and it will immediately alt-tab and present the desktop. Or I could be watching a movie and this happens. Sometimes it happens once every three hours, and other times (just today, actually) it did it twice in the span of 30 seconds. I am positive I am not pressing a hotkey, because I launched a game, sat idle, and noticed it alt-tab while cleaning up around my room after about 20 minutes. Sometimes it goes days without this happening. Specs: Windows 7 Ultimate 64-bit, 10 GB of RAM, GeForce GTX 260, Intel Xeon CPU. I also have basically nothing running when it happens other than the game and Firefox. My Firefox add-ons: Adblock Plus, Download Statusbar, Firebug, FirePHP, Lazarus form recovery, Tree Style Tabs, YSlow. I doubt Firefox is causing the issue, but I figured I'd include it anyway because it is the only application I have running when it happens. As for user processes, I have running: VCDDaemon (context menu for Virtual CloneDrive), razerhid (mouse), OSD, taskhost, dwm (Desktop Window Manager), anyfullscreengame, audiorepeater, netsession_win, explorer, razerofa, TSVNCache, firefox, plugin-container, and EKIJ5000MUI (printer). Whew. Okay. That was a lot of information. If someone could diagnose this I would be most grateful, for this has been around with me for years. Thanks for reading! PS: I doubt it's a virus, because I never download illegal software and pretty much only browse Reddit and Stack Exchange and play games. If it was a virus it would be a pretty lame one... Hah...

    Read the article

  • Running Flash on a headless Solaris box

    - by Marty Pitt
    Our build server is a Solaris box, and I'm trying to run a suite of FlexUnit tests as part of the automated build process. This works by compiling a swf movie with a suite of automated unit tests. The build script launches this movie, which automatically begins running the tests. Results of each test are sent back to the launching script across a port, and written out to a local xml file. Once the tests are completed, the movie closes down, and the build script interrogates the results to see if all the tests passed. The FlexUnit wiki provides information about how to achieve this on a Unix server, by using Xvnc to provide a virtual space for the flash movie to run its tests in. I've provided this information to our sysadmin team (along with the link to the article), and I've been told that because this is a Solaris box, we can't use that approach: Xvnc isn't supported on Solaris. Unfortunately, I know very little about servers, *nix vs Solaris, or Xvnc. Can someone please provide some advice about how we can achieve the same outcome on a Solaris box?
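
    If Xvnc specifically is the blocker, the same trick usually works with Xvfb, which is commonly available in Solaris' X distribution (binary path, display number and player name are assumptions):

        # Give the Flash player an off-screen X server to render into.
        /usr/X11/bin/Xvfb :1 -screen 0 1024x768x24 &
        export DISPLAY=:1
        flashplayer TestRunner.swf   # standalone player + test movie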

    Read the article

  • How to replicate a Windows server (IIS, files, configuration state)?

    - by Geo
    Maybe a better question is: what is the closest competitor to DoubleTake? I am looking to replicate a Windows production server so that, in case it fails, I have an immediate backup. Any ideas?

    NOTE 1: I forgot to add that this server is on the Amazon EC2 cloud.

    NOTE 2: The main situation we have is recreating the configuration settings, like IIS, FTP Server, SQL Server, SVN Server.

    NOTE 3: So far I have been given three options as answers to my original question:

        AppAssure: after talking to their sales team, they do not support Amazon as a cloud provider. Basically there is a technical need to be able to reboot from a disk or similar media, so an ESX virtual machine environment will work, but not EC2.
        Acronis: works as a backup in Ghost style. This will work for other types of scenarios.
        Use the Amazon EC2 API: this option is ideal, but only works if you are developing a cloud application rather than hosting a regular application in a cloud scenario.

    This means that I am still looking for the answer. Any other ideas?
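
    On the EC2 API option: even for a hosted (non-cloud-native) app, a scheduled AMI snapshot captures volume state, IIS configuration included; a sketch with the classic EC2 API tools (the instance ID is a placeholder):

        # Bundle the running Windows instance into a fresh AMI;
        # relaunching the newest AMI is the "immediate backup".
        ec2-create-image i-0123abcd \
            --name "prod-web-$(date +%Y%m%d)" --no-reboot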

    Read the article

  • EngineX ignores Auth Basic?

    - by Miko
    I have configured nginx to password protect a directory using auth_basic. The password prompt comes up and the login works fine. However... if I refuse to type in my credentials, and instead hit escape multiple times in a row, the page will eventually load without CSS and images. In other words, continuously telling the login prompt to go away will at some point allow the page to load anyway. Is this an issue with nginx, or my configuration? Here is my virtual host:

        server {
            server_name sub.domain.com;
            root /www/sub.domain.com/;

            location / {
                index index.php index.html;
                root /www/sub.domain.com;
                auth_basic "Restricted";
                auth_basic_user_file /www/auth/sub.domain.com;
                error_page 404 = /www/404.php;
            }

            location ~ \.php$ {
                include /usr/local/nginx/conf/fastcgi_params;
            }
        }

    My server runs CentOS + nginx + php-fpm + xcache + mysql.
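
    One detail stands out in that config: the PHP location block carries no auth_basic of its own, so requests matching \.php$ never hit the password check, which matches the page (HTML from PHP) loading while the CSS and images served by location / stay blocked. A sketch of the usual fix, hoisting auth to server scope so every location inherits it (the config file path is hypothetical):

        sudo tee /usr/local/nginx/conf/vhosts/sub.domain.com.conf >/dev/null <<'EOF'
        server {
            server_name sub.domain.com;
            root /www/sub.domain.com;

            auth_basic "Restricted";
            auth_basic_user_file /www/auth/sub.domain.com;

            location / { index index.php index.html; }
            location ~ \.php$ { include /usr/local/nginx/conf/fastcgi_params; }
        }
        EOF
        sudo /usr/local/nginx/sbin/nginx -t && sudo /usr/local/nginx/sbin/nginx -s reload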

    Read the article
