Search Results

Search found 20099 results on 804 pages for 'virtual host'.


  • Set up Glassfish connection pool to talk to a database on a Ubuntu VPS

    - by Harry Pham
    On my Ubuntu VPS, I have a MySQL server and a GlassFish 3.0.1 application server running, and I am having a hard time getting GlassFish to successfully ping the database. Here is my GlassFish connection pool setup (assume x.y.z.t is the IP of my VPS):

        Resource Type: javax.sql.ConnectionPoolDataSource
        User: root
        DatabaseName: scholar
        Url: jdbc:mysql://x.y.z.t:3306/scholar
        URL: jdbc:mysql://x.y.z.t:3306/scholar
        Password: xxxx
        PortNumber: 3306
        ServerName: x.y.z.t

    Inside glassfish3/glassfish/lib I have mysql-connector-java-5.1.13-bin.jar. Inside the mysql database, here is the result of the query select User, Host from user;

        +------------------+-----------+
        | User             | Host      |
        +------------------+-----------+
        | root             | 127.0.0.1 |
        | debian-sys-maint | localhost |
        | root             | localhost |
        | root             | yunaeyes  |
        +------------------+-----------+

    Now from my machine, if I try to connect to this database with a MySQL client, I can't. From the table above, it seems that only local connections are allowed. Keep in mind that both my database and my GlassFish server are on the same VPS. Please help.
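
    The user table above suggests MySQL only accepts the root account from local addresses. Below is a minimal sketch of how remote access could be granted, assuming the pool really should connect via the VPS's public IP; the GRANT target host and password are placeholders taken from the question:

        # Run on the VPS; 'x.y.z.t' and the password are placeholders.
        mysql -u root -p <<'SQL'
        GRANT ALL PRIVILEGES ON scholar.* TO 'root'@'x.y.z.t' IDENTIFIED BY 'xxxx';
        FLUSH PRIVILEGES;
        SQL

        # Also check whether mysqld is bound to 127.0.0.1 only:
        grep bind-address /etc/mysql/my.cnf

    Since both services share the VPS, an alternative that avoids exposing the database to the network at all would be to set ServerName to localhost in the pool and leave the grants as they are.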

    Read the article

  • LM Sensors always returning same (invalid) value for one temp sensor

    - by pkaeding
    I am trying to monitor the temperature sensors on a server and plot them using Cacti. I have lm-sensors installed and working correctly. For example, here is the output from sensors:

        % sensors
        acpitz-virtual-0
        Adapter: Virtual device
        temp1:       +26.8 C  (crit = +100.0 C)
        temp2:       +32.0 C  (crit = +60.0 C)

        coretemp-isa-0000
        Adapter: ISA adapter
        Core 0:      +36.0 C  (high = +105.0 C, crit = +105.0 C)

        coretemp-isa-0001
        Adapter: ISA adapter
        Core 1:      +42.0 C  (high = +105.0 C, crit = +105.0 C)

    However, when I try to get this data via SNMP, only one sensor's temperature is reported correctly, and another one always returns 100.000 C:

        % snmpwalk -Os -c public -v 1 10.8.0.18 -m ALL lmTempSensors
        lmTempSensorsIndex.1 = INTEGER: 0
        lmTempSensorsIndex.2 = INTEGER: 1
        lmTempSensorsDevice.1 = STRING: temp1
        lmTempSensorsDevice.2 = STRING: temp1
        lmTempSensorsValue.1 = Gauge32: 26800
        lmTempSensorsValue.2 = Gauge32: 100000

    So my question is two-fold: why does the second sensor returned by SNMP give a value of 100 C (when it should be 32 C), and why are my CPU core sensors not being returned by SNMP at all?
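
    One common workaround when the LM-SENSORS-MIB table misbehaves is to bypass it and publish the raw sensors output through Net-SNMP's extend mechanism instead; a minimal sketch, assuming a stock Net-SNMP snmpd (the extension name "temps" is arbitrary):

        # /etc/snmp/snmpd.conf -- expose the sensors output verbatim
        extend temps /usr/bin/sensors

        # After restarting snmpd, the text is readable under
        # NET-SNMP-EXTEND-MIB, e.g.:
        #   snmpwalk -v1 -c public 10.8.0.18 NET-SNMP-EXTEND-MIB::nsExtendOutputFull

    Cacti can then parse the temperatures out of that text with a script data source, which sidesteps whatever the lmTempSensors table is doing wrong.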

    Read the article

  • 403 Forbidden

    - by demas
    Here is my Nginx config:

        user pass users;
        worker_processes 1;

        events {
            worker_connections 1024;
        }

        http {
            passenger_root /usr/lib64/ruby/gems/1.8/gems/passenger-3.0.7;
            passenger_ruby /usr/bin/ruby;
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            keepalive_timeout 65;

            server {
                listen 80;
                server_name some.another.ru;
                root /www/public/redmine;
                passenger_enabled on;
                rails_env development;
            }
        }

    Here is the Nginx log:

        2011/06/02 12:53:57 [error] 45986#0: *1 directory index of "/www/public/redmine/" is forbidden, client: **.*.**.***, server: some.another.ru, request: "GET / HTTP/1.1", host: "some.another.ru"
        2011/06/02 12:53:59 [error] 45986#0: *1 open() "/www/public/redmine/favicon.ico" failed (2: No such file or directory), client: **.*.**.***, server: some.another.ru, request: "GET /favicon.ico HTTP/1.1", host: "some.another.ru"

    What is the reason for this error and how can I fix it?
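
    The "directory index ... is forbidden" error usually means Passenger never picked up the request, which commonly happens when root points at the application directory rather than its public/ subdirectory. A minimal sketch of the corrected server block, assuming /www/public/redmine is the Redmine application root:

        server {
            listen 80;
            server_name some.another.ru;
            # Passenger expects the Rails app's public/ directory here,
            # not the application root itself.
            root /www/public/redmine/public;
            passenger_enabled on;
            rails_env development;
        }

    The favicon.ico 404 in the second log line should also disappear once the public/ directory, which holds Redmine's static assets, is the one being served.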

    Read the article

  • Resolving CloudFlare DNS related mail delivery problems

    - by Andy Castles
    I recently started using CloudFlare and am having a few teething problems. Our domain is netlanguages.com and while we have a lot of sub-domains listed, we are currently only trialling a few of the servers through the CloudFlare CDN (for example, www.netlanguages.com is enabled for CDN, netlanguages.com is not). The actual CDN service seems to be reliable, but the problem that we are having is with DNS, and specifically with mail delivery. The background is that we have contact forms on our web site which use PHP mail() to send the details to end-users' email addresses, with the "from" address of the messages being [email protected], which is a valid address on our mail server. Most of the mails are arriving correctly, but a few specific people are not receiving them. The web server uses qmail to deliver the messages, and the qmail log files show us some of the errors that the receiving mail servers return when they reject the delivery attempt. Two examples:

        Connected to 94.100.176.20 but sender was rejected.
        Remote host said: 421 DNS problem (interdominios.netlanguages.com). Try again later

        Connected to 213.186.33.29 but sender was rejected.
        Remote host said: 451 DNS temporary failure (#4.3.0)

    From what I can tell, the receiving SMTP server is doing a DNS lookup of some description on either the host of the "from" email address (netlanguages.com) or the server name given in the EHLO command of the SMTP conversation (in the first example above, interdominios.netlanguages.com), both of which should resolve to non-CloudFlare IP addresses. I've read that the CloudFlare DNS service is very reliable and fast, but both of the problems above seem to point to remote servers being unable to do DNS lookups. I should also point out that we changed our DNS to CloudFlare on 6th Feb, and since then started experiencing these mail delivery problems. On 22nd Feb we moved our DNS away from CloudFlare to see if the issues were related to CloudFlare, and after a few hours delivery began to work. Then on 26th Feb I moved the DNS back to CloudFlare and the delivery problems started again. The issue definitely seems to be related to DNS, but I don't know if it's a configuration issue or something else. Finally, I should say that our two DNS MX records point to non-CDN A record IP addresses, and interdominios.netlanguages.com (the web and qmail server) also points to a non-CDN A record IP address. Does anyone know what the problem could be here? Any light you can shed on this will be most appreciated. Many thanks, Andy
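
    A useful first step is to confirm from an outside resolver what the records the receiving servers check actually resolve to; a minimal diagnostic sketch with dig, querying only the names mentioned in the question:

        # A record for the EHLO hostname the remote servers complained about
        dig +short A interdominios.netlanguages.com

        # MX and TXT (SPF) records for the envelope-sender domain
        dig +short MX netlanguages.com
        dig +short TXT netlanguages.com

        # Repeating the queries directly against the zone's authoritative
        # servers (listed by: dig +short NS netlanguages.com) shows whether
        # they answer consistently.

    If the authoritative servers intermittently fail to answer these queries, that would line up with the 421/451 temporary-failure responses the remote MTAs are returning.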

    Read the article

  • Accessing a website's directory in IIS from File Zilla

    - by Cdeez
    I have my ASP.NET website deployed in a virtual directory in IIS. Usually an FTP client like FileZilla is used to upload files to a website's directory from a remote system; FileZilla asks for a host name, username and password to connect to the remote server. Now all I want is for my users on the LAN to be able to access this directory from their systems using an FTP client like FileZilla. So how can I provide the host name, username and password for my website's directory? I tried to find this on Google but with no help. Detailed steps, please. It is IIS version 5.1.

    Read the article

  • Unmounting a zfs pool while it is shared with sharenfs

    - by Ted W.
    I have a Solaris (OpenIndiana) system which is getting poor disk write performance. In order to enable the ZIL in this version of ZFS I need to add a line to /etc/system. This will not take effect until I've unmounted and remounted the zpool. The trick is that this pool is shared via NFS to about 200 other servers to host users' home directories. I can guarantee that no users will be accessing the disks during this period of maintenance, but I would like to avoid having to issue an unmount on 200 systems just to unmount the disk on the Solaris box. My question is: with sharenfs, is it necessary to have all systems disconnected before unmounting the filesystem on the host? If it's possible, how do you go about it? I've tried unmounting already, the normal way, and it reports the disk is busy. There is no lsof on Solaris, and pfiles (I think that's what it was) does not show anything obviously using the mounts.
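
    One approach that avoids touching the 200 clients is to withdraw the NFS share on the server side first and then unmount; a minimal sketch, assuming a dataset named tank/home (the name is a placeholder):

        # Dataset name is hypothetical; substitute the real one
        zfs unshare tank/home
        zfs umount tank/home

        # If the unmount still reports the dataset busy, Solaris fuser
        # can list local processes holding the mount point open:
        fuser -c /tank/home

    Clients with the share mounted will simply see NFS requests hang until the share comes back, which is usually acceptable for a short maintenance window.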

    Read the article

  • Varnish does not recognize req.hash

    - by Yogesh
    I have Varnish 3.0.2 on Red Hat, and service varnish start fails after I added a vcl_hash section. I started varnishd and then loaded the VCL using vcl.load:

        vcl.load default default.vcl
        Message from VCC-compiler:
        Unknown variable 'req.hash'
        At: ('input' Line 24 Pos 9)
                set req.hash += req.url;
        --------########------------
        Running VCC-compiler failed, exit 1

    Here is the VCL (cat default.vcl):

        backend default {
            .host = "127.0.0.1";
            .port = "8080";
        }

        sub vcl_recv {
            if( req.url ~ "\.(css|js|jpg|jpeg|png|swf|ico|gif|jsp)$" ) {
                unset req.http.cookie;
            }
        }

        sub vcl_hash {
            set req.hash += req.url;
            set req.hash += req.http.host;
            if( req.httpCookie == "JSESSIONID" ) {
                set req.http.X-Varnish-Hashed-On = regsub( req.http.Cookie, "^.*?JSESSIONID=([a-zA-z0-9]{32}\.[a-zA-Z0-9]+)([\s$\n])*.*?$", "\1" );
                set req.hash += req.http.X-Varnish-Hashed-On;
            }
            return(hash);
        }

    What could be wrong?
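
    The set req.hash += ... idiom is VCL 2.x syntax; Varnish 3.x removed the req.hash variable in favour of the hash_data() function, which is why the compiler rejects the file. A minimal sketch of the same vcl_hash rewritten for Varnish 3, with the apparent req.httpCookie typo replaced by a regex match on req.http.Cookie (the JSESSIONID regsub is carried over unchanged):

        sub vcl_hash {
            hash_data(req.url);
            hash_data(req.http.host);
            if (req.http.Cookie ~ "JSESSIONID") {
                set req.http.X-Varnish-Hashed-On = regsub( req.http.Cookie, "^.*?JSESSIONID=([a-zA-z0-9]{32}\.[a-zA-Z0-9]+)([\s$\n])*.*?$", "\1" );
                hash_data(req.http.X-Varnish-Hashed-On);
            }
            return (hash);
        }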

    Read the article

  • DNS resolution not working in browsers but working in the shell

    - by Shyam Sunder Verma
    Running dig or ping on any domain gives me the correct IP. But when I try to browse google.com in a browser, it does not work. When I take the IP (via ping) and use it in the browser, the website opens fine via IP, but further browsing fails because of the name resolution problem. The problem occurs on:

        Ubuntu 9.10 installed in VirtualBox over Windows
        Ubuntu 10.10 installed in VirtualBox over Windows
        Ubuntu 9.10 installed on a laptop

    But the Internet works fine on Windows Vista installed on the same laptop.
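
    Since the command-line tools resolve names fine, one line of investigation is whether the browser resolves through a different path, such as a stale proxy setting or IPv6 lookups that hang; a minimal diagnostic sketch:

        # What the system resolver (the same NSS path browsers use) returns
        getent hosts google.com

        # A proxy variable inherited by the browser would bypass DNS entirely
        env | grep -i proxy

    In Firefox, temporarily setting network.dns.disableIPv6 to true in about:config is a quick test of whether AAAA lookups are the culprit.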

    Read the article

  • Getting MSExchange transport Error on Server 2003 SP2

    - by Scott
    I am getting the following error message and do not know how to fix it:

        Event Type: Error
        Event Source: MSExchangeTransport
        Event Category: (8)
        Event ID: 3017
        Date: 4/29/2010
        Time: 1:21:12 PM
        User: N/A
        Computer: NETSRV
        Description:
        A non-delivery report with a status code of 5.3.5 was generated for recipient rfc822;[email protected] (Message-ID <19104335.51321272561635734.JavaMail.SYSTEM@PARROT).
        Causes: A looping condition was detected. (The server is configured to route mail back to itself). If you have multiple SMTP Virtual Servers configured on your Exchange server, make sure they are defined by a unique incoming port and that the outgoing SMTP port configuration is valid to avoid looping between local virtual servers.

    Thanks for any help you can provide.

    Read the article

  • VMWare-Tools Installation fails on Ubuntu 11.04

    - by Ajay
    I am trying to install VMwareTools-8.4.6-385536.tar.gz (VMware Tools) on the following operating system:

        Ubuntu 11.04 - Linux ubuntu 2.6.38-8-generic #42-Ubuntu SMP Mon Apr 11 03:31:50 UTC 2011 i686 i686 i386 GNU/Linux

    I am using VMware Player version 3.1.4, build 385536. After starting the installation I get the following errors:

        What is the directory that contains the init scripts? [/etc/init.d]
        Error opening  No such file or directory

        Distribution provided drivers for Xorg X server are used.
        Skipping X configuration because X drivers are not included.
        Creating a new initrd boot image for the kernel.
        update-initramfs: Generating /boot/initrd.img-2.6.38-8-generic
        Starting VMware Tools services in the virtual machine:
            Switching to guest configuration:   done
            Blocking file system:               done
            Guest operating system daemon:      failed
            Virtual Printing daemon:            done
        Unable to start services for VMware Tools

    Can somebody help with this?
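
    When the bundled tools fail against a newer kernel, a commonly suggested alternative is the distribution's own open-vm-tools package, assuming it is available in the release's repositories:

        # Hedged alternative: Ubuntu's open-source VMware tools,
        # if packaged for 11.04
        sudo apt-get update
        sudo apt-get install open-vm-tools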

    Read the article

  • Snapshot/rollback for libvirt+KVM?

    - by jtimberman
    I've recently begun using KVM for my development/test environment on a Linux host system with 8 GB of memory. Previously I was using VMware Fusion for my virtual environment, but my MacBook only has 2 GB of memory. I tried VMware Server and ESX on the host instead of KVM, but the web UI doesn't run in Firefox on Mac OS X, and we're going to be doing more with KVM anyway. The main VMware feature I miss is robust snapshot/rollback, and I'm missing this in KVM. I understand the snapshot command, but it shuts down the guest OS when complete, and then copying the disk image to preserve its state is cumbersome. Is this really the best way to manage snapshots on KVM?
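
    If the guest disks are qcow2 images, qemu-img can keep named internal snapshots inside the image, which avoids copying disk files around; a minimal sketch, run with the guest shut down (the image and snapshot names are placeholders):

        # Create, list, apply (revert to), and delete a named snapshot
        qemu-img snapshot -c clean-install guest.qcow2
        qemu-img snapshot -l guest.qcow2
        qemu-img snapshot -a clean-install guest.qcow2
        qemu-img snapshot -d clean-install guest.qcow2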

    Read the article

  • Hylafax and "No response to MPS"

    - by Joril
    We have a Hylafax 5.2.5 installation on CentOS 5, hosted inside a Xen virtual machine. It works quite well, but now I'm in the process of upgrading/migrating it to a KVM virtual machine running Ubuntu 10.04 and Hylafax 5.5.1 (compiled from source using http://sourceforge.net/projects/hylafax/files/hylafax%20debian%20build%20files/ ). The problem I'm having is that, while receiving works fine, sending faxes is extremely unreliable: I get lots of "No response to MPS repeated 3 tries" and "Failure to transmit clean ECM image data." errors. The line, modem and configuration files I'm using are the same as before, so I thought it could be a KVM scheduling issue, but even setting cpu_shares to 10240 instead of 1024 doesn't change a thing. What else could I try? Here is an example log file: http://pastebin.com/cN01cpEs

    Read the article

  • How to make TimeMachine back up contents of any path or mounted volume

    - by Olfan
    I keep different types of data in different encrypted sparsebundle images (say, one for each client) which automatically mount upon login but can't be opened by anybody other than myself. So, after login I have a number of virtual volumes in /Volumes/, which keeps my client data both secure and organized. How do I include the data inside these virtual volumes in Time Machine's backups, or data residing in any path on any partition/volume? I found a promising solution description at blog.eurocomp.info involving editing com.apple.TimeMachine.plist, but all I can get Time Machine to do is back up the sparsebundle files themselves. I want it to back up the files inside the mounted image, though - something like adding /Volumes/Client_abc/ to Time Machine's search path. Please do not redirect me to this previous question as it doesn't solve the problem at all. Please also refrain from telling me why you think I should not want this answer, as that will not solve anything either. Please lastly don't say "it can't be done" unless you can technically prove that claim.

    Read the article

  • Windows Server 2008 hangs up while booting

    - by Jim R
    Windows Server 2008 hangs while booting after Windows Update applied several updates. The server is a virtual instance on a Server 2008 Hyper-V host. Other virtual servers are fine, but have not been updated. A normal boot shows the horizontal barber pole forever. When I do a safe boot it also hangs, with a "Please Wait..." after loading many .sys files. The last successfully loaded file listed is:

        \Windows\system32\drivers\crcdisk.sys

    That is the extent of what I have been able to determine.

    Read the article

  • Connection from Apache to Tomcat via mod_jk not working

    - by Tobias Schittkowski
    I would like to connect Apache to Tomcat via mod_jk (same machine). The AJP connector in Tomcat is listening on port 8009, and the worker settings are:

        worker.worker1.port=8009
        worker.worker1.host=localhost

    However, the connection fails. Here is the mod_jk debug log:

        [debug] wc_get_name_for_type::jk_worker.c (292): Found worker type 'ajp13'
        [debug] init_ws_service::mod_jk.c (1097): Service protocol=HTTP/1.1 method=GET ssl=false host=(null) addr=127.0.0.1 name=localhost port=80 auth=(null) user=(null) laddr=127.0.0.1 raddr=127.0.0.1 uri=/share
        [debug] ajp_get_endpoint::jk_ajp_common.c (3154): acquired connection pool slot=0 after 0 retries
        [debug] ajp_marshal_into_msgb::jk_ajp_common.c (626): ajp marshaling done
        [debug] ajp_service::jk_ajp_common.c (2449): processing worker1 with 2 retries
        [debug] ajp_send_request::jk_ajp_common.c (1623): (worker1) all endpoints are disconnected.
        [debug] jk_open_socket::jk_connect.c (485): socket TCP_NODELAY set to On
        [debug] jk_open_socket::jk_connect.c (609): trying to connect socket 560 to 0.0.0.0:0
        [info] jk_open_socket::jk_connect.c (627): connect to 0.0.0.0:0 failed (errno=47)
        [info] ajp_connect_to_endpoint::jk_ajp_common.c (995): Failed opening socket to (0.0.0.0:0) (errno=47)

    Why does mod_jk try to connect to 0.0.0.0:0 and not to 127.0.0.1:8009? Thank you for your help! Tobias
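
    Connecting to 0.0.0.0:0 suggests the worker's host and port were never resolved, which typically happens when the entries in workers.properties are not tied together by a worker.list declaration and a worker type. A minimal sketch of a complete workers.properties for this setup (the file's location varies by distribution):

        # workers.properties -- minimal AJP13 worker definition
        worker.list=worker1
        worker.worker1.type=ajp13
        worker.worker1.host=localhost
        worker.worker1.port=8009

    It is also worth confirming that the JkWorkersFile directive in the Apache configuration actually points at the file being edited.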

    Read the article

  • Can SATA be used to connect computers?

    - by André
    Can SATA be used to connect two computers together, just as a crossover Ethernet cable would? I know SATA has no "networking" features, and even though a controller may have multiple ports, the drives don't "see" each other; in SATA, one device acts as the host (the computer) and the other device is a kind of "client" (the storage drive). But still, has anyone attempted to write a kernel module that would make one computer appear as a "client" (so that the host's SATA controller detects it as a standard hard drive) and then set up a pseudo-Ethernet link or a very high-speed serial link over it (and then run pppd on it and do networking)? Note: I know this is an unprofessional and totally stupid idea; I'm just asking out of curiosity.

    Read the article

  • Ubuntu web site hosting & free .tk domain

    - by user5819
    Hello, I am somewhat new to web hosting, so sorry if I ask bad questions. I have a PC that runs Ubuntu. I installed Apache and now I host a web site, but I need a domain name, so I found out that .tk domains are free. The site works when typing 192.168.1.x in the browser (x = a number), but when I register the IP at dot.tk, it wants one that looks like 79.117.x.x, and that's where I get stuck. I think I managed to make my IP address static, but it still looks like 192.168.1.x, and I can't enter that because it says: "This IP address is not valid". Why must the address look like 79.117.x.x rather than the internal static one, and what can I do to host my site with a .tk domain name? PS: my computer is connected to a Cisco router via a cable.
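
    The 192.168.1.x address is private to the LAN, so dot.tk refuses it; the registration needs the router's public (WAN) address, and the router must forward incoming traffic to the PC. A minimal sketch for finding the public address from the Ubuntu box (the lookup service named here is just one example):

        # Public address as seen from the Internet
        curl ifconfig.me

        # The Cisco router must then forward TCP port 80 from that
        # public address to the internal static 192.168.1.x address.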

    Read the article

  • Ghost Image - Windows asks for activation when deployed to VM

    - by Chris Sobolewski
    I have several images created with Ghost Solution Suite (v11, I believe). The images have been in use for a few years now, but I am finally at the point where I have enough time to attempt to virtualize them for easier updates. I am running VMware and attempting to image the virtual machines with my Ghost image files. For my images I run sysprep with minisetup and use reseal. The image deploys successfully; however, when I start the VM for the first time, it demands Windows activation. This doesn't happen when I image a physical computer, even a different model with different hardware. The idea of virtualizing my images becomes rather worthless if I am unable to deploy them without having to activate every time (especially as Microsoft keeps declaring our volume licence key invalid for activations). Does anyone know why it is asking for activation on a virtual machine but not on a physical PC? How can I prevent this?

    Read the article

  • Rebooting Guest OS on a Hyper-V 2008 R2 Cluster results in a Shutdown

    - by S_Kuwahara
    Hi folks, I have an interesting issue here. Sometimes when I manually reboot some of my guest OSes (W2K3 / W2K8) on my Hyper-V 2008 R2 cluster, the guest does not reboot; it just shuts down. By a manual reboot, I mean connecting to the virtual server with RDP and using the restart function in the OS itself. I then have to start the virtual machine again via SCVMM / Hyper-V Manager, and it works just fine. There is nothing special in the event log of the host or guest OS, and nothing special logged in SCVMM. The guest OSes all have the integration tools installed. Any hints? Thanks in advance.

    Read the article

  • How can I mitigate DNS Server outages?

    - by Eric Belair
    Let's say I have a root domain of "mysite.com". That domain and its sub-domains have DNS served by an external service - let's call them Setwork Nolutions. If this external company is hit with a DDoS attack, my internally-hosted websites under this domain are no longer accessible at "mysite.com" or "*.mysite.com", even though the websites are fully up and operational. How can I mitigate such a problem so as to keep end users happy? The only solution others at my company have come up with is to create a second domain - i.e. "mysite2.com" - host its DNS at another company, and then tell all end users that this is the website they should use. I think this is ridiculous and just leads to a bunch of other problems. I'd like to find a solution where we can point to the same website with the same URL without the original DNS host being operational. Any thoughts?
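
    The standard mitigation is to delegate the zone to nameservers at two independent providers, so resolvers fail over automatically when one provider is unreachable. A minimal sketch of what the delegation could look like at the registrar (the nameserver hostnames are hypothetical):

        ; NS records spread across two unrelated DNS providers
        ; (nameserver names below are placeholders)
        mysite.com.    86400    IN    NS    ns1.provider-a.example.
        mysite.com.    86400    IN    NS    ns2.provider-a.example.
        mysite.com.    86400    IN    NS    ns1.provider-b.example.

    Both providers must then serve identical zone data, typically by acting as standard secondaries (AXFR) of a common master, so there is nothing for end users to change.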

    Read the article

  • Nagios3 gives a warning on HTTP service monitoring

    - by Dez
    I have already set up my local network to be monitored by Nagios3. I found a problem: Nagios3 reports a warning for the HTTP monitoring service of a Debian server at IP 192.168.1.52, which has an individual virtual host and a mass virtual host for application development. I get this status message:

        HTTP WARNING: HTTP/1.1 404 Not Found

    I used the Nagios plugins to check manually (servername is the vhost server name I used in the Apache configuration):

        /usr/lib/nagios/plugins/check_http -H servername -I 192.168.1.52

    receiving this status message:

        HTTP OK HTTP/1.1 200 OK - 37900 bytes in 0.504 seconds |time=0.503946s;;;0.000000 size=37900B;;;0

    But when I check like this:

        /usr/lib/nagios/plugins/check_http -I 192.168.1.52

    I get the same status message as the warning, so I assume Nagios is not completely set up correctly, because it doesn't pass the vhost name for that server the way the successful check_http invocation does. Where should I look to fix this warning?
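
    The fix is to make the Nagios service pass the vhost name with -H, exactly as the working manual invocation does; a minimal sketch of a command/service pair, assuming the standard Debian plugin path (the object names are placeholders):

        # commands.cfg -- pass the vhost name as an argument
        define command{
            command_name    check_http_vhost
            command_line    /usr/lib/nagios/plugins/check_http -I $HOSTADDRESS$ -H $ARG1$
        }

        # services.cfg -- one service per vhost to be checked
        define service{
            use                     generic-service
            host_name               debian-server
            service_description     HTTP servername vhost
            check_command           check_http_vhost!servername
        }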

    Read the article

  • How to copy directories using debugfs?

    - by STM
    The debugfs manpage gives the impression that the command 'rdump . .' will recursively copy all files found on the specified filesystem from the debugfs current working directory to the native filesystem's current working directory. Instead I seem to receive a syntax error, and no copy is initiated. These are the commands I run:

        cd /path/to/transfer/destination
        debugfs /dev/sda1 -R rdump . .

    My task is to copy the entire contents of a clean yet unmountable USB storage device to its host machine's hard disk. The host machine does not support the inode size used by the USB device's filesystem (256) and its software is not upgradeable, so my intention was to use debugfs to transfer the files. If anyone has any other suggestions for this task I'd be grateful.
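
    One likely culprit is argument order and quoting: debugfs expects -R before the device name, and the whole request (the command plus its arguments) has to be passed as a single quoted word. A minimal sketch of the corrected invocation (the source directory / and destination . are examples):

        cd /path/to/transfer/destination
        # Options precede the device; the request is one quoted argument
        debugfs -R 'rdump / .' /dev/sda1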

    Read the article

  • How can I secure Postgres for remote access when not in a private network?

    - by orokusaki
    I have a database server on a VMware VM (Ubuntu 12.04.1 LTS server), and it just occurred to me that the server is accessible via the web, since the same physical server contains a VM that hosts public websites. My iptables rules on the database server are such that only SSH traffic, loopback traffic, and TCP on port 5432 are allowed. I will only allow host access to the Postgres server from the IP of the other VM on the same physical machine. Does this seem sufficient for security, assuming there aren't gaping holes in my general OS configuration, or is Postgres one of those services that should never be web-facing (assuming there are some of "those")? Will I need to use hostssl instead of host in my pg_hba.conf, even though the data will travel only on my own network, presumably?
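
    For reference, the difference is a single keyword in pg_hba.conf; a minimal sketch that restricts access to the web VM and requires TLS (the client address and authentication method are placeholders):

        # pg_hba.conf -- only the web VM may connect, over TLS only
        # (192.168.122.10 is a placeholder for the other VM's address)
        hostssl    all    all    192.168.122.10/32    md5

    hostssl additionally requires ssl = on in postgresql.conf plus a server certificate; a plain host line is the same rule without the TLS requirement.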

    Read the article

  • setting up rhel 5.x RPM build server for mortal users

    - by Chen Levy
    My task is to set up a RHEL 5.x build host that can build RPMs for mortal users. On RHEL 6.x with RPM version 4.8, /usr/lib/rpm/macros contains:

        # Path to top of build area.
        %_topdir %{getenv:HOME}/rpmbuild

    On RHEL 5.x with RPM version 4.4, %{getenv:HOME} is not available. I know that I can use /home/someuser/.rpmmacros:

        %_topdir /home/someuser/rpmbuild

    and this will work for that user, but I don't want to do this for every user separately. Moreover, since .rpmmacros will not expand $HOME or ~, I suspect it is unsafe to use those. This in turn makes /etc/skel unsuitable for this task (or so I suspect). So, in short, my question is: how do I set up a RHEL 5.x host that allows all users to build RPM packages in their home directories?
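
    One approach that works with RPM 4.4 is a system-wide macro that shells out for the home directory, since %(...) command expansion is supported even though %{getenv:...} is not; a minimal sketch, assuming the site-wide macros file is read on that release (the path may differ):

        # /etc/rpm/macros -- evaluated per invoking user;
        # %(...) runs a shell command, which RPM 4.4 supports
        %_topdir %(echo $HOME)/rpmbuild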

    Read the article
