Search Results

Search found 19788 results on 792 pages for 'remote host'.


  • Virtualized CPU cores vs. threads

    - by nedm
    We've got a KVM host system on Ubuntu 9.10 with a newer quad-core Xeon CPU with hyperthreading. As detailed on Intel's product page, the processor has 4 cores but 8 threads. /proc/cpuinfo and htop both list 8 processors, though each one states 4 cores in cpuinfo. KVM/QEMU also reports 8 VCPUs available to assign to guests. My question is: when I'm allocating VCPUs to VM guests, should I allocate per core or per thread? Since KVM/QEMU reports the server has 8 VCPUs to allocate, should I go ahead and set a guest to use 4 CPUs where I previously would have set it to use 2 (assuming 4 total VCPUs available)? I'd like to get the most out of the host hardware without over-allocating.
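
    A quick way to see how cores and threads line up on the host before allocating; a minimal sketch assuming a stock /proc layout, offered as a check rather than a sizing rule:

        # "cpu cores" = physical cores per socket, "siblings" = hardware threads per socket.
        # The 8 VCPUs KVM offers here correspond to the 8 hardware threads, not 8 physical cores.
        egrep 'physical id|cpu cores|siblings' /proc/cpuinfo | sort -u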

    Read the article

  • Can't create a valid symlink under VMWare HGFS

    - by Alexander Gladysh
    Host: OS X 10.6.5
    VMware Fusion: 3.1.2
    Guest: Ubuntu x86 10.10

        $ uname -a
        Linux ubuntu 2.6.35-24-generic #42-Ubuntu SMP Thu Dec 2 01:41:57 UTC 2010 i686 GNU/Linux

    I cannot create a symlink that is readable from the guest OS anywhere in a directory mounted via hgfs:

        /mnt/hgfs/projects/tmp$ touch aaa
        /mnt/hgfs/projects/tmp$ ln -s aaa bbb
        /mnt/hgfs/projects/tmp$ less bbb
        bbb: No such file or directory
        /mnt/hgfs/projects/tmp$ ls -la
        total 6
        drwxr-xr-x 1 501 users  136 2010-12-28 18:12 .
        drwxr-xr-x 1 501 users 8602 2010-12-28 18:12 ..
        -rw-r--r-- 1 501 users    0 2010-12-28 18:12 aaa
        lrwxr-xr-x 1 501 users    3 2010-12-28 18:12 bbb -> aaa
        /mnt/hgfs/projects/tmp$ readlink bbb
        aaa

    The same symlink is perfectly accessible on the OS X host. Is there a workaround for this?

    Read the article

  • Set up Glassfish connection pool to talk to a database on a Ubuntu VPS

    - by Harry Pham
    On my Ubuntu VPS I have a MySQL server and a GlassFish 3.0.1 application server running, and I am having a hard time getting GlassFish to successfully ping the database. Here is my GlassFish setup (assume x.y.z.t is the IP of my VPS):

        Resource Type: javax.sql.ConnectionPoolDataSource
        User: root
        DatabaseName: scholar
        Url: jdbc:mysql://x.y.z.t:3306/scholar
        URL: jdbc:mysql://x.y.z.t:3306/scholar
        Password: xxxx
        PortNumber: 3306
        ServerName: x.y.z.t

    Inside glassfish3/glassfish/lib I have mysql-connector-java-5.1.13-bin.jar. Inside the mysql database, here is the result of the query select User, Host from user;

        +------------------+-----------+
        | User             | Host      |
        +------------------+-----------+
        | root             | 127.0.0.1 |
        | debian-sys-maint | localhost |
        | root             | localhost |
        | root             | yunaeyes  |
        +------------------+-----------+

    From my own machine I also cannot connect to this database with a MySQL client. From the table above it seems that only local connections are allowed. Keep in mind that both the database and GlassFish are on the same VPS. Please help.
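
    Not a confirmed fix, but since the database and GlassFish share the VPS, two things commonly tried here are pointing the pool's ServerName at localhost, or granting the MySQL user access from the VPS's own address while also checking that bind-address in /etc/mysql/my.cnf is not limited to 127.0.0.1. A sketch (x.y.z.t stays a placeholder, the password is the asker's own):

        # run on the VPS; adjust user/password before use
        mysql -u root -p -e "GRANT ALL PRIVILEGES ON scholar.* TO 'root'@'x.y.z.t' IDENTIFIED BY 'xxxx'; FLUSH PRIVILEGES;"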

    Read the article

  • Trying to install wordpress inside rails app with nginx and fastcgi

    - by pinouchon
    I have a Rails app (let's call it myapp) running at www.myapp.com. I want to add a WordPress blog at www.myapp.com/blog. The web server for the Rails app is thin (see the upstream block); WordPress runs with php-fastcgi. The Rails app works fine. My problem is the following: in /home/myapp/myapp/log/error.log I get:

        2013/06/24 10:19:40 [error] 26066#0: *4 connect() failed (111: Connection refused) while connecting to upstream,
        client: xx.xx.138.20, server: www.myapp.com, request: "GET /blog/ HTTP/1.1",
        upstream: "fastcgi://127.0.0.1:9000", host: "www.myapp.com"

    Here is the nginx conf file:

        upstream myapp {
            server unix:/tmp/thin_myapp.0.sock;
            server unix:/tmp/thin_myapp.1.sock;
            server unix:/tmp/thin_myapp2.sock;
        }

        server {
            listen 80;
            server_name www.myapp.com;
            client_max_body_size 20M;
            access_log /home/myapp/myapp/log/access.log;
            error_log /home/myapp/myapp/log/error.log error;
            root /home/myapp/myapp/public;
            index index.html;

            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                # Index HTML Files
                if (-f $document_root/cache/$uri/index.html) {
                    rewrite (.*) /cache/$1/index.html break;
                }
                if (!-f $request_filename) {
                    proxy_pass http://myapp;
                    break;
                }
                # try_files /system/maintenance.html $uri $uri/index.html $uri.html @ruby;
            }

            location /blog/ {
                root /var/www/wordpress;
                fastcgi_index index.php;
                if (!-e $request_filename) {
                    rewrite ^(.*)$ /blog/index.php?q=$1 last;
                }
                include /etc/nginx/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME /var/www/wordpress$fastcgi_script_name;
                fastcgi_pass localhost:9000;  # port to FastCGI
            }
        }

    Any ideas why that doesn't work? How do I make sure that php-fastcgi is configured properly?

    Edit: I can test whether FastCGI is running with telnet, and it's not:

        $> telnet 127.0.0.1 9000
        Trying 127.0.0.1...
        telnet: Unable to connect to remote host: Connection refused
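
    The error log suggests nothing is listening on 127.0.0.1:9000 at all. A rough sketch of how one might check and then start a FastCGI listener there, assuming spawn-fcgi and php5-cgi are installed (package names and paths are guesses for an Ubuntu-era setup):

        sudo netstat -tlnp | grep ':9000'     # confirm nothing is bound yet
        sudo spawn-fcgi -a 127.0.0.1 -p 9000 -u www-data -g www-data -f /usr/bin/php5-cgi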

    Read the article

  • create print server port via command line error Win 8

    - by Benjamin Jones
    I need to create a print server port via the command line in Windows 8. Per a Google search I should be using the prnport.vbs script to do so:

        cscript c:\Windows\System32\Printing_Admin_Scripts\en-US\prnport.vbs -a -s \\192.168.113.253 -r Xerox_192.168.113.253

    However I get this error:

        Unable to connect to WMI service
        Error 0x800706BA The RPC Server is unavailable.

    I looked at the local services and both the RPC and WMI services are started. I also made sure to add the remote admin rules to Windows Firewall via the command line, without success:

        netsh advfirewall firewall set rule group="windows management instrumentation (wmi)" new enable=yes
        netsh advfirewall firewall set rule group="remote administration" new enable=yes

    NOTE: If I use the GUI to create the print server port and then add the printer via the command line, the printer is successfully added:

        rundll32 printui.dll,PrintUIEntry /if /b "Xerox WorkCenter 7535" /F C:\Windows\Inf\WC7545-7556_PCL6_x64_Driver\x2DNORX.inf /r "Xerox_192.168.113.253" /m "Xerox WorkCentre 7535 PCL6"

    So it's NOT the printer itself. How can I successfully add a print server port via the command line? Thanks
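
    On Windows 8 there is also a PowerShell route that might be worth trying; whether it sidesteps the WMI/RPC problem is uncertain, since the print cmdlets use WMI/CIM under the hood as well:

        # PowerShell (Windows 8 PrintManagement module); names mirror the port created in the GUI above
        Add-PrinterPort -Name "Xerox_192.168.113.253" -PrinterHostAddress "192.168.113.253"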

    Read the article

  • Audio doesn't work on Windows XP guest (WS 7.0)

    - by Mads
    Hi, I can't get audio to work on a Windows XP guest running on VMware Workstation 7.0 with an Ubuntu 9.10 host. Windows fails to produce any audio output and the Windows Device Manager says the Multimedia Audio Controller is not working properly. Audio works fine in the host OS. When I open the Multimedia Audio Controller properties it says:

        Device status: The drivers for this device are not installed (Code 28)

    If I try to reinstall the driver I get the following error message:

        Cannot Install this Hardware
        There was a problem installing this hardware: Multimedia Audio Controller
        An error occurred during the installation of the device
        Driver is not intended for this platform

    Has anyone else experienced this problem?
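
    One thing worth checking on the host, purely as a sketch (the .vmx path is illustrative): which virtual sound device the guest is given, since the driver XP tries to install has to match the emulated hardware:

        grep -i '^sound' ~/vmware/WindowsXP/WindowsXP.vmx
        # typical entries look like:
        # sound.present = "TRUE"
        # sound.autodetect = "TRUE"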

    Read the article

  • 403 Forbidden

    - by demas
    Here is my nginx config:

        user pass users;
        worker_processes 1;

        events {
            worker_connections 1024;
        }

        http {
            passenger_root /usr/lib64/ruby/gems/1.8/gems/passenger-3.0.7;
            passenger_ruby /usr/bin/ruby;
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            keepalive_timeout 65;

            server {
                listen 80;
                server_name some.another.ru;
                root /www/public/redmine;
                passenger_enabled on;
                rails_env development;
            }
        }

    Here is the nginx log:

        2011/06/02 12:53:57 [error] 45986#0: *1 directory index of "/www/public/redmine/" is forbidden, client: **.*.**.***, server: some.another.ru, request: "GET / HTTP/1.1", host: "some.another.ru"
        2011/06/02 12:53:59 [error] 45986#0: *1 open() "/www/public/redmine/favicon.ico" failed (2: No such file or directory), client: **.*.**.***, server: some.another.ru, request: "GET /favicon.ico HTTP/1.1", host: "some.another.ru"

    What is the reason for this error and how can I fix it?
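
    A quick check that often explains "directory index ... is forbidden" with Passenger (a sketch, paths follow the config above): Passenger only takes over when root points at the application's public/ directory with the app files one level up; otherwise nginx falls back to plain directory serving and refuses the index:

        ls /www/public/redmine/config/environment.rb   # exists if /www/public/redmine is the Rails app root
        # if it does exist, root should instead point at /www/public/redmine/public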

    Read the article

  • Email notification and mail server

    - by Jerr Wu
    I am building a web application with email notifications, much like Facebook, which will be hosted at http://www.linode.com/. When user A comments on a post, the poster will get an email notification from '[email protected]' containing the comment written by user A (not spam). I really like Google Apps, but it has a sending limit of 2000 messages per day (http://support.google.com/a/bin/answer.py?hl=en&answer=166852), which does not suit my case because I cannot live with sending limits; there will be many email notifications. I also need company email accounts for team members, for which I prefer Google Apps. Since the application will be hosted on Linode, I am considering "Amazon Simple Notification Service" for the email notifications. My questions are: is there another email service provider you would recommend that suits my case? And can I keep the company email accounts (e.g. [email protected]) on Google Apps while binding [email protected] to a different email service provider?

    Read the article

  • How to display programs started by TSWA RemoteApp inside a browser instead of directly on the desktop

    - by richardboon
    For those not familiar with Terminal Services Web Access and Resulting Internet Communication in Windows Server 2008, here's a brief overview: technet.microsoft.com/en-us/library/cc754502(WS.10).aspx. The problem I am trying to solve can be seen in the picture at step 16 of the following walkthrough, where the application is displayed directly on the desktop: http://blogs.technet.com/askcore/archive/2008/07/22/publishing-the-hyper-v-management-interface-using-terminal-services.aspx. I am in the process of setting up Terminal Services Web Access RemoteApp for our company. Users only want RemoteApp and need to see the remote program running contained inside the browser. They don't want to see or access the whole desktop (as is the case with Remote Desktop, which can be displayed inside a browser).

    Read the article

  • 403 forbidden while submitting a POST request with image data via iPhone application

    - by binnyb
    I am creating an iOS application which allows users to send image/text data to my web server via a POST request. I can successfully POST to the server when image data is not included in the request; any time I POST with image data the server spits back a 403 Forbidden. I have tried adding the following to the .htaccess file in the directory of the script, with no luck:

        Options +Indexes FollowSymLinks +ExecCGI
        Order allow,deny
        Allow from all

    Web browsers and Android devices can successfully POST with image data to the script; the only device which cannot is the iPhone. POSTing with data to other hosting providers works as expected, it is just this host (ipowerweb.com). I noticed that when I try to POST with data to ANY script on the server it returns a 403 Forbidden. Another note: I can successfully POST to another server that is hosted by ipowerweb, but mine can't seem to handle it. My host has tried to resolve the issue but cannot, and they have marked it on their end as "resolved", so no more help from them. I wish to keep this host as moving would be a pain; I will change hosts as a last resort, so please help me! Why am I getting this 403 Forbidden error only when I submit data via my iPhone application, and how can I resolve it? Any advice on what I can do would be greatly appreciated.

    Edit: as requested, here are the response headers:

        {
            Connection = close;
            "Content-Length" = 217;
            "Content-Type" = "text/html; charset=iso-8859-1";
            Date = "Wed, 12 Jan 2011 19:11:19 GMT";
            Server = "Apache/2";
        }

    Edit: as requested, here are the request headers (oops):

        {
            "Accept-Encoding" = gzip;
            "Content-Length" = 5781;
            "Content-Type" = "multipart/form-data; charset=utf-8; boundary=0xKhTmLbOuNdArY";
            "User-Agent" = "YeahIAteThat 1.0 (iPhone; iPhone OS 4.2.1; en_US)";
        }
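
    One way to narrow this down from a desktop (a sketch; the URL and form field name are placeholders): replay a multipart POST with curl while sending the app's User-Agent string, to see whether the host is filtering on the agent rather than on anything the app itself does:

        curl -v -X POST "http://www.example.com/upload.php" \
             -H "User-Agent: YeahIAteThat 1.0 (iPhone; iPhone OS 4.2.1; en_US)" \
             -F "photo=@test.jpg"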

    Read the article

  • Can't reconnect to my RDP session

    - by Jeremy Stein
    I use a VM through RDP. When I'm done for the day, I just disconnect the session and reconnect in the morning. That allows me to pick up what I was doing without closing all the applications. As I'm the only user, this generally works well. Today, I can't reconnect to my session from yesterday: when I RDP, I get a new session. When I run query user, I can see my other session:

        USERNAME    SESSIONNAME    ID    STATE     IDLE TIME    LOGON TIME
        me          rdp-tcp#82     1     Active    15:00        4/22/2010 9:00 AM
        me          rdp-tcp#91     2     Active    .            4/30/2010 9:00 AM

    If I try to use Terminal Services Manager to remote control the other session, I get this error:

        Session (ID 1) remote control failed (Error 7044 - The request to control another session remotely was denied.)

    Is there any way to reconnect to this session, or do I need to just kill it?
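
    If remote control is refused, a session can sometimes be taken over directly from a command prompt inside the new session; a hedged sketch, assuming admin rights and using session ID 1 from the query user output above:

        REM attach the old session (ID 1 from "query user") to the current one:
        tscon 1
        REM last resort: forcibly tear the old session down instead:
        reset session 1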

    Read the article

  • Resolving CloudFlare DNS related mail delivery problems

    - by Andy Castles
    I recently started using CloudFlare and am having a few teething problems. Our domain is netlanguages.com and while we have a lot of sub-domains listed, we are currently only trialling a few of the servers through the CloudFlare CDN (for example, www.netlanguages.com is enabled for the CDN, netlanguages.com is not). The actual CDN service seems to be reliable, but the problem we are having is with DNS, and specifically with mail delivery. The background is that we have contact forms on our web site which use PHP mail() to send the details to end-users' email addresses, with the "from" address of the messages being [email protected], which is a valid address on our mail server. Most of the mails arrive correctly, but a few specific people are not receiving them. The web server uses qmail to deliver the messages, and the qmail log files show us some of the errors that the receiving mail servers return when they reject the delivery attempt. Two examples:

        Connected to 94.100.176.20 but sender was rejected./Remote host said: 421 DNS problem (interdominios.netlanguages.com). Try again later
        Connected to 213.186.33.29 but sender was rejected./Remote host said: 451 DNS temporary failure (#4.3.0)

    From what I can tell, the receiving SMTP server is doing a DNS lookup of some description on either the host of the "from" email address (netlanguages.com) or the server name given in the EHLO command of the SMTP conversation (in the first example above, interdominios.netlanguages.com), both of which should resolve to non-CloudFlare IP addresses. I've read that the CloudFlare DNS service is very reliable and fast, but both of the problems above seem to point to remote servers being unable to do DNS lookups. I should also point out that we changed our DNS to CloudFlare on 6th Feb, and since then started experiencing these mail delivery problems. On 22nd Feb we moved our DNS away from CloudFlare to see if the issues were related to CloudFlare, and after a few hours delivery began to work. Then on 26th Feb I moved the DNS back to CloudFlare and the delivery problems started again. The issue definitely seems to be related to DNS, but I don't know if it's a configuration issue or something else. Finally, I should say that our two DNS MX records point to non-CDN A record IP addresses, and interdominios.netlanguages.com (the web and qmail server) also points to a non-CDN A record IP address. Does anyone know what the problem could be here? Any light you can shed on this will be most appreciated. Many thanks, Andy
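
    A few lookups from an outside resolver can show what the rejecting servers are likely seeing; a sketch, where 8.8.8.8 is just an example resolver and the reverse lookup needs the real sending IP:

        dig +short interdominios.netlanguages.com @8.8.8.8   # the EHLO name
        dig +short netlanguages.com @8.8.8.8                 # the "from" domain
        dig +short -x <mail-server-ip> @8.8.8.8              # reverse DNS for the sending IP (placeholder)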

    Read the article

  • Unmounting a zfs pool while it is shared with sharenfs

    - by Ted W.
    I have a Solaris (OpenIndiana) system which is getting poor disk write performance. In order to enable the ZIL in this version of ZFS I need to add a line to /etc/system, and this will not take effect until I've unmounted and remounted the zpool. The trick is that this pool is shared via NFS to about 200 other servers to host users' home directories. I can guarantee that no users will be accessing the disks during this maintenance window, but I would like to avoid having to issue an unmount on 200 systems just to unmount the disk on the Solaris box. My question is: with sharenfs, is it necessary to have all client systems disconnected before unmounting the filesystem on the host? If it's possible, how do you go about it? I've already tried unmounting the normal way, and it reports the disk is busy. There is no lsof on Solaris, and pfiles (I think that's what it was) does not show anything obviously using the mounts.
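
    Purely as a sketch of one sequence people use in this situation (the dataset name tank/home is illustrative): stop the NFS share first so the share itself no longer pins the filesystem, then unmount and remount locally:

        zfs set sharenfs=off tank/home
        zfs unmount tank/home       # may stop reporting "busy" once the share is gone
        zfs mount tank/home
        zfs set sharenfs=on tank/home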

    Read the article

  • Accessing a website's directory in IIS from File Zilla

    - by Cdeez
    I have my Asp.net website deployed in my IIS's Virtual directory. Usually a FTP software like File Zilla is used to upload files to a website's directory from a remote system. File Zilla asks for a Host name, Username, password to connect to the remote server. Now all I want is my users in LAN should be able to access this directory from their system using FTP software like FileZilla. So how can I provide the Host name, username and password to my website's directory. I tried to find it on google but no help. Detailed steps please. Its IIS 5.1 version.

    Read the article

  • Varnish does not recognize req.hash

    - by Yogesh
    I have Varnish 3.0.2 on Red Hat and service varnish start fails after I added a vcl_hash section. I started varnishd and then loaded the VCL using vcl.load default default.vcl:

        Message from VCC-compiler:
        Unknown variable 'req.hash'
        At: ('input' Line 24 Pos 9)
                set req.hash += req.url;
        --------########------------
        Running VCC-compiler failed, exit 1

    cat default.vcl:

        backend default {
            .host = "127.0.0.1";
            .port = "8080";
        }

        sub vcl_recv {
            if( req.url ~ "\.(css|js|jpg|jpeg|png|swf|ico|gif|jsp)$" ) {
                unset req.http.cookie;
            }
        }

        sub vcl_hash {
            set req.hash += req.url;
            set req.hash += req.http.host;
            if( req.http.Cookie == "JSESSIONID" ) {
                set req.http.X-Varnish-Hashed-On = regsub( req.http.Cookie, "^.*?JSESSIONID=([a-zA-z0-9]{32}\.[a-zA-Z0-9]+)([\s$\n])*.*?$", "\1" );
                set req.hash += req.http.X-Varnish-Hashed-On;
            }
            return(hash);
        }

    What could be wrong?
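
    For reference, req.hash was removed in Varnish 3.x in favour of the hash_data() function, so a 3.x-flavoured vcl_hash would look roughly like this (the JSESSIONID handling below is a loose adaptation of the intent above, not a tested drop-in):

        sub vcl_hash {
            hash_data(req.url);
            hash_data(req.http.host);
            if (req.http.Cookie ~ "JSESSIONID") {
                hash_data(regsub(req.http.Cookie, "^.*?JSESSIONID=([a-zA-Z0-9]+\.[a-zA-Z0-9]+).*$", "\1"));
            }
            return (hash);
        }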

    Read the article

  • 401 - Unauthorized On Server 2008 R2 IIS 7.5

    - by mxmissile
    I have a web application deployed to a Server 2008 IIS 7.5 box. From remote it gives this error: 401 - Unauthorized: Access is denied due to invalid credentials. (Remote here means desktops on the same LAN.) I have tried several remote clients using different browsers (IE, FF, and Chrome), all with the same result. Hitting the application from the desktop of the server itself works flawlessly, though I have not tried Firebug on the server desktop; I would assume it's still issuing a 401 status code yet returning the content anyway (see Update #2). The application uses Anonymous Authentication and is written in .NET 4.0 ASP.NET using the MVC framework. Static content works fine, for example: http://server.com/content/image.jpg. Sysinternals procmon returns these two results for each request: FAST IO DISALLOWED and PATH NOT FOUND. I have two other MVC apps running fine on the same server, and I have checked the security on the folders and they all match. The app runs fine on a Server 2008 IIS 7.0 box. Nothing related to this shows up in the event log on the server. Pulling my hair out here, any troubleshooting tips?

    UPDATE #1: This just gets more WTF as I dig. If I click on the application in IIS Manager - Error Pages - Edit Feature Settings and select Detailed Errors, the app works remotely. I'm not leaving this on, so the problem is not solved yet, it's just more confusing.

    UPDATE #2: Using Firebug, I see that the status is still 401 Unauthorized, but the response is returning the application's correct HTML.

    UPDATE #3: Playing around with Failed Request Tracing, here is the WARNING request trace that is causing the 401:

        ModuleName           ManagedPipelineHandler
        Notification         128
        HttpStatus           401
        HttpReason           Unauthorized
        HttpSubStatus        0
        ErrorCode            0
        ConfigExceptionInfo
        Notification         EXECUTE_REQUEST_HANDLER
        ErrorCode            The operation completed successfully. (0x0)

    UPDATE #4: The regular IIS log is showing this:

        #Software: Microsoft Internet Information Services 7.5
        #Version: 1.0
        #Date: 2010-07-20 19:17:22
        #Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status time-taken
        2010-07-20 19:17:22 10.10.1.10 GET /Purchasing/Home - 80 - 10.10.1.12 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US;+rv:1.9.2.6)+Gecko/20100625+Firefox/3.6.6 401 0 0 4414
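
    Given that switching to Detailed Errors makes the app usable, one guess is that IIS's custom error handling is replacing the application's own 401 response body. A hedged web.config fragment that tells IIS to pass the application's response through untouched (worth trying, not a confirmed fix for this case):

        <system.webServer>
          <httpErrors existingResponse="PassThrough" />
        </system.webServer>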

    Read the article

  • StrongSwan + xl2tpd client timeout between 2-5 minutes

    - by Howard Guo
    I run CentOS 6.4 on Amazon EC2, using xl2tpd-1.3.1 from the EPEL repository together with StrongSwan 5.0.4. I set up a simple IPsec connection:

        conn l2tp
            type=transport
            keyexchange=ikev1
            rekey=no
            authby=psk
            leftsubnet=0.0.0.0/0
            rightsubnet=0.0.0.0/0
            compress=yes
            auto=add

    And here is xl2tpd.conf:

        [global]
        ipsec saref = yes

        [lns default]
        ip range = 192.168.0.2-192.168.0.250
        local ip = 192.168.0.1
        ppp debug = yes
        pppoptfile = /etc/ppp/options.xl2tpd
        length bit = yes

    Here is options.xl2tpd:

        ms-dns 8.8.4.4
        auth
        lock
        debug
        proxyarp

    There is only one client, running Android 4.2. Android connects successfully:

        Oct 27 19:45:02 ip-172-31-17-30 xl2tpd[2706]: Connection established to x.x.x.x, 59578. Local: 18934, Remote: 29291 (ref=0/0). LNS session is 'default'
        Oct 27 19:45:02 ip-172-31-17-30 xl2tpd[2706]: Call established with x.x.x.x, Local: 36452, Remote: 29845, Serial: -1369754322
        Oct 27 19:45:02 ip-172-31-17-30 pppd[2709]: pppd 2.4.5 started by howard, uid 0
        Oct 27 19:45:02 ip-172-31-17-30 pppd[2709]: Using interface ppp0
        Oct 27 19:45:02 ip-172-31-17-30 pppd[2709]: Connect: ppp0 <--> /dev/pts/0
        Oct 27 19:45:02 ip-172-31-17-30 pppd[2709]: peer from calling number x.x.x.x authorized
        Oct 27 19:45:02 ip-172-31-17-30 pppd[2709]: Deflate (15) compression enabled
        Oct 27 19:45:03 ip-172-31-17-30 pppd[2709]: Cannot determine ethernet address for proxy ARP
        Oct 27 19:45:03 ip-172-31-17-30 pppd[2709]: local IP address 192.168.0.1
        Oct 27 19:45:03 ip-172-31-17-30 pppd[2709]: remote IP address 192.168.0.2
        Oct 27 19:45:03 ip-172-31-17-30 charon: 06[KNL] 192.168.0.1 appeared on ppp0
        Oct 27 19:45:03 ip-172-31-17-30 charon: 06[KNL] 192.168.0.1 disappeared from ppp0
        Oct 27 19:45:03 ip-172-31-17-30 charon: 06[KNL] 192.168.0.1 appeared on ppp0
        Oct 27 19:45:03 ip-172-31-17-30 charon: 06[KNL] interface ppp0 activated

    In the meanwhile, Internet access works perfectly on the Android client, and the VPN connection is stable and fast. However, it always happens that within 2-5 minutes after the connection is established:

        Oct 27 19:47:07 ip-172-31-17-30 xl2tpd[2706]: Maximum retries exceeded for tunnel 18934. Closing.
        Oct 27 19:47:07 ip-172-31-17-30 xl2tpd[2706]: Connection 29291 closed to 95.91.227.224, port 59578 (Timeout)
        Oct 27 19:47:07 ip-172-31-17-30 charon: 06[KNL] interface ppp0 deactivated
        Oct 27 19:47:07 ip-172-31-17-30 charon: 06[KNL] interface ppp0 deleted

    Then the VPN connection is broken. So what might have gone wrong? The same L2TP service works flawlessly on iOS 7, Mac OS 10.8, and Windows 7; there is no disconnection issue on those OSes. Thank you!
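
    One diagnostic sketch (the interface name is illustrative for EC2): capture the L2TP and NAT-T traffic right up to the "Maximum retries exceeded" line, to see whether anything is still arriving from the Android client when xl2tpd gives up on its control-channel retries:

        tcpdump -ni eth0 'udp port 1701 or udp port 4500'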

    Read the article

  • How to get Virtual PC to recognize MIDI devices?

    - by bparker
    Hey all. I have an XP Pro virtual machine running inside Virtual PC 2007. My host machine is x64 Windows 7. I have a MIDI keyboard hooked up to my machine via a Turtle Beach USB to MIDI 1x1 cable. I have installed the driver and software on my host machine and ran a soundcheck, and everything appears to be working fine: playback is sent to the MIDI device with no problems. However, when I attempt to install the driver and run a soundcheck in my XP virtual machine, the device is not found. Other USB devices (mouse, keyboard, flash drives) work fine in the virtual machine, but not the MIDI keyboard. I'm not sure what steps to take to troubleshoot this and get the VM to recognize the MIDI keyboard. Any help or suggestions would be greatly appreciated. Thanks.

    Read the article

  • Snapshot/rollback for libvirt+KVM?

    - by jtimberman
    I've recently begun using KVM for my development/test environment on a Linux host system with 8 GB of memory. Previously I was using VMware Fusion for my virtual environment, but my MacBook only has 2 GB of memory. I tried VMware Server and ESX on the host instead of KVM, but the web UI doesn't run in Firefox on Mac OS X, and we're going to be doing more with KVM anyway. The main VMware feature I miss in KVM is robust snapshot/rollback. I understand the snapshot command, but it shuts down the guest OS when complete, and then copying the disk image to preserve its state is cumbersome. Is this really the best way to manage snapshots on KVM?
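
    For what it's worth, qcow2-backed guests can carry internal snapshots managed with qemu-img; this is an offline sketch (guest shut down, path and snapshot name are illustrative), not a claim that it matches VMware's live snapshots:

        qemu-img snapshot -c before-upgrade /var/lib/libvirt/images/devbox.qcow2   # create
        qemu-img snapshot -l /var/lib/libvirt/images/devbox.qcow2                  # list
        qemu-img snapshot -a before-upgrade /var/lib/libvirt/images/devbox.qcow2   # roll back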

    Read the article

  • Forward one RDP port on one machine to multiple external users at the same time

    - by matnagel
    We have a Windows Server 2003 machine with the RDP service listening on the standard port 3389. For security reasons this port is not opened on the router, but we have the freeSSHd service running: a remote admin can log in via SSH, and port 3389 is forwarded to external port 33001 for the first external user. This works great. Now we have another admin who wants to work remotely (he uses a different Windows account, but needs to work on the same machine), so this is basically an SSH port forwarding question. Will the other user be able to log in at the same time using the same port 33001? Please keep in mind that there will be a second tunnel, and this second tunnel will also target local port 3389 on the Windows server.
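
    If the 33001 in the current setup is the local end of an SSH tunnel, then each admin's tunnel is independent on the client side; a sketch of what the second admin might run (hostname, account and the 33002 local port are placeholders, the server-side target stays 3389):

        ssh -L 33002:127.0.0.1:3389 admin2@server.example.com
        # then point the RDP client at localhost:33002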

    Read the article

  • Connection from Apache to Tomcat via mod_jk not working

    - by Tobias Schittkowski
    I would like to connect Apache to Tomcat via mod_jk (same machine). The AJP connector in Tomcat is listening on port 8009, and the worker settings are:

        worker.worker1.port=8009
        worker.worker1.host=localhost

    However, the connection fails. Here is the mod_jk debug log:

        [debug] wc_get_name_for_type::jk_worker.c (292): Found worker type 'ajp13'
        [debug] init_ws_service::mod_jk.c (1097): Service protocol=HTTP/1.1 method=GET ssl=false host=(null) addr=127.0.0.1 name=localhost port=80 auth=(null) user=(null) laddr=127.0.0.1 raddr=127.0.0.1 uri=/share
        [debug] ajp_get_endpoint::jk_ajp_common.c (3154): acquired connection pool slot=0 after 0 retries
        [debug] ajp_marshal_into_msgb::jk_ajp_common.c (626): ajp marshaling done
        [debug] ajp_service::jk_ajp_common.c (2449): processing worker1 with 2 retries
        [debug] ajp_send_request::jk_ajp_common.c (1623): (worker1) all endpoints are disconnected.
        [debug] jk_open_socket::jk_connect.c (485): socket TCP_NODELAY set to On
        [debug] jk_open_socket::jk_connect.c (609): trying to connect socket 560 to 0.0.0.0:0
        [info] jk_open_socket::jk_connect.c (627): connect to 0.0.0.0:0 failed (errno=47)
        [info] ajp_connect_to_endpoint::jk_ajp_common.c (995): Failed opening socket to (0.0.0.0:0) (errno=47)

    Why does mod_jk try to connect to 0.0.0.0:0 and not to 127.0.0.1:8009? Thank you for your help! Tobias
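
    Connecting to 0.0.0.0:0 often points to the host/port lines not being picked up for the named worker (for example a typo in the worker name, or the worker missing from worker.list). A minimal workers.properties sketch for comparison; the file location depends on the JkWorkersFile directive:

        worker.list=worker1
        worker.worker1.type=ajp13
        worker.worker1.host=localhost
        worker.worker1.port=8009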

    Read the article

  • Install Nod32 antivirus silently?

    - by IT Tech
    Hi guys/gals, I was hoping I could find someone on here who may know whether it is possible to install NOD32 on a client PC silently. We have some software that will copy the MSI to the remote PC and run it with any specified parameters. The installer needs you to import the license key file and tell it where the server is. Is there any way these details can be pre-specified so the user doesn't need any interaction on the remote computer? Many thanks in advance.
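
    The generic silent-install pattern is shown below; whether the NOD32 MSI exposes public properties for the license file and server address is not something this sketch can confirm, so those would have to come from ESET's documentation or from inspecting the MSI (e.g. with Orca):

        REM /qn = no UI, /l*v = verbose log; the MSI filename is a placeholder
        msiexec /i eavbe_nt_enu.msi /qn /l*v "%TEMP%\nod32_install.log"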

    Read the article

  • Can SATA be used to connect computers?

    - by André
    Can SATA be used to connect two computers together, just like a crossover Ethernet cable would? I know SATA has no "networking" features: even though a controller may have multiple ports, the drives don't "see" each other, and in SATA one device acts as the host (the computer) while the other acts as a kind of "client" (the storage drive). But still, has anyone attempted to write a kernel module that would make one computer appear as a "client" (so that the host's SATA controller detects it as a standard hard drive) and then set up something like a pseudo-Ethernet link or a very high speed serial link (and then run pppd on it and do networking)? Note: I know this is an unprofessional and totally stupid idea, I'm just asking out of curiosity.

    Read the article

  • Ubuntu web site hosting & free ,tk domain

    - by user5819
    Hello, I am fairly new to web hosting, so sorry if I ask bad questions. I have a PC that runs Ubuntu, on which I installed Apache, and I now host a web site on it, but I need a domain name, so I found out that .tk is free. The site works when typing 192.168.1.x in the browser (x = a number), but when I register an IP with dot.tk it wants one that looks like 79.117.x.x, and that's where I get stuck. I think I managed to make my IP address static, but it still looks like 192.168.1.x, and I can't enter that because the site says "This IP address is not valid". Why must it be the IP address that looks like 79.117.x.x rather than the internal static one, and what can I do to host my site with a .tk domain name? PS: I'm using a Cisco router that's connected to the computer via a cable.
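
    For context, 192.168.1.x is the private address the router hands out on the LAN, while dot.tk needs the router's public address. A sketch of how to find it from the Ubuntu box and what the next step looks like (the port-forwarding part happens in the Cisco router's own admin page):

        curl ifconfig.me        # prints the public IP the outside world sees (e.g. 79.117.x.x)
        # then forward external port 80 on the router to 192.168.1.x port 80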

    Read the article
