Search Results

Search found 21090 results on 844 pages for 'webservices client'.


  • jdbc4 CommunicationsException

    - by letronje
    I have a machine running a Java app talking to a MySQL instance running on the same machine. The app uses the JDBC4 drivers from MySQL. I keep getting com.mysql.jdbc.exceptions.jdbc4.CommunicationsException at random times. Here is the whole message. Could not open JDBC Connection for transaction; nested exception is com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 25899 milliseconds ago. The last packet sent successfully to the server was 25899 milliseconds ago, which is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem. For MySQL, the value of the global 'wait_timeout' and 'interactive_timeout' is set to 3600 seconds and 'connect_timeout' is set to 60 secs. The wait_timeout value is much higher than the 26 secs (25899 msecs) mentioned in the exception trace. I use DBCP for connection pooling and here is the Spring bean config for the datasource. <bean id="dataSource" destroy-method="close" class="org.apache.commons.dbcp.BasicDataSource" > <property name="driverClassName" value="com.mysql.jdbc.Driver"/> <property name="url" value="jdbc:mysql://localhost:3306/db"/> <property name="username" value="xxx"/> <property name="password" value="xxx" /> <property name="poolPreparedStatements" value="false" /> <property name="maxActive" value="3" /> <property name="maxIdle" value="3" /> </bean> Any idea why this could be happening? Will using c3p0 solve the problem?
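
    One way to address the stale-connection symptom described above is to have DBCP validate connections before handing them out. Below is a minimal sketch of the same bean with validation enabled; the property names are standard commons-dbcp BasicDataSource settings, but the specific timing values are illustrative assumptions, not values taken from the original setup:

        <bean id="dataSource" destroy-method="close"
              class="org.apache.commons.dbcp.BasicDataSource">
          <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
          <property name="url" value="jdbc:mysql://localhost:3306/db"/>
          <property name="username" value="xxx"/>
          <property name="password" value="xxx"/>
          <!-- Validate connections on checkout so dead ones are discarded -->
          <property name="validationQuery" value="SELECT 1"/>
          <property name="testOnBorrow" value="true"/>
          <!-- Periodically test and evict connections that have sat idle too long -->
          <property name="testWhileIdle" value="true"/>
          <property name="timeBetweenEvictionRunsMillis" value="300000"/>
          <property name="minEvictableIdleTimeMillis" value="600000"/>
          <property name="maxActive" value="3"/>
          <property name="maxIdle" value="3"/>
        </bean>

    Switching to c3p0 alone runs into the same issue unless its equivalent idle-testing options are enabled as well.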

    Read the article

  • One Active Directory, Multiple Remote Desktop Services (Server 2012 solution)

    - by Trinitrotoluene
    What I am trying to do is quite complex, so I figured I'd throw it out to a wider audience to see if anyone can find a flaw. What I am trying to do (as an MSP/VAR) is design a solution that will give multiple companies a session-based remote desktop (companies that need to be kept completely separate), using only a handful of servers. This is how I imagine it at the moment: CORE SERVER - Server 2012 Datacentre (All below are Hyper-V servers) Server1: Cloud-DC01 (Active Directory Domain Services for mycloud.local) Server2: Cloud-EX01 (Exchange Server 2010 running multi-tenant mode) Server3: Cloud-SG01 (Remote Desktop Gateway) CORE SERVER 2 - Server 2012 Datacentre (All below are Hyper-V servers) Server1: Cloud-DC02 (Active Directory Domain Services for mycloud.local) Server2: Cloud-TS01 (Remote Desktop Session Host for Company A) Server3: Cloud-TS02 (Remote Desktop Session Host for Company B) Server4: Cloud-TS03 (Remote Desktop Session Host for Company C) What I thought about doing was setting up each organisation in their own OU (perhaps creating their OU structure based on the Exchange 2010 tenant OU structure so the accounts are linked). Each company would get a Remote Desktop Session Host server that would also serve as a file server. This server would be separated from the rest on its own range. The server Cloud-SG01 would have access to all these networks and route the traffic to the appropriate network when a client connects and is authenticated, so they are pushed onto the correct server (based on session collections in 2012). I won't lie, this is something I have come up with quite quickly so there may well be something glaringly obvious that I am missing. Any feedback would be appreciated.

    Read the article

  • Migrating to CF9: trouble getting JRun working with SSL

    - by DaveBurns
    I have a client on MX7 who wants to migrate to CF9. I have a dev environment for them on my WinXP machine where I've configured MX7 to run with JRun's built-in web server. I've had that working for a long time with both regular and SSL connections. I installed CF9 yesterday side-by-side with the existing MX7 install to start testing. The install was smooth and detected MX7, adjusted CF9's port numbers for no conflict, etc. Testing started well: MX7 over regular and SSL still worked and CF9 worked over regular HTTP. But I can't get CF9 to work with SSL. I installed a new certificate with keytool, Firefox (v3.6) complained about it being unsigned, I added it to the exception list, and now I get this: Secure Connection Failed An error occurred during a connection to localhost:9101. Peer reports it experienced an internal error. (Error code: ssl_error_internal_error_alert) I've been Googling that in all variations but can't find much help to get past this. I don't see any info in any log files either. FWIW, here's my SSL config from SERVER-INF/jrun.xml: <service class="jrun.servlet.http.SSLService" name="SSLService"> <attribute name="enabled">true</attribute> <attribute name="interface">*</attribute> <attribute name="port">9101</attribute> <attribute name="keyStore">{jrun.rootdir}/lib/mykey</attribute> <attribute name="keyStorePassword">*deleted*</attribute> <attribute name="trustStore">{jrun.rootdir}/lib/trustStore</attribute> <attribute name="socketFactoryName">jrun.servlet.http.JRunSSLServerSocketFactory</attribute> <attribute name="deactivated">false</attribute> <attribute name="bindAddress">*</attribute> <attribute name="clientAuth">false</attribute> </service> Does anyone here know of any issues with setting up SSL on CF9? Has anyone had success with it? Dave
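
    If the keystore itself is suspect, one thing worth trying is regenerating it and re-pointing the SSLService at it. A rough sketch with the JDK's keytool follows; the alias, password and paths here are placeholders rather than values from the original setup, and the keyStorePassword in jrun.xml has to match whatever -storepass is used:

        # Generate a new self-signed RSA key pair in a fresh keystore
        keytool -genkey -alias cf9ssl -keyalg RSA -validity 365 \
                -keystore /path/to/jrun4/lib/mykey -storepass changeit

        # Verify what is actually inside the keystore the CF9 JRun instance points at
        keytool -list -v -keystore /path/to/jrun4/lib/mykey -storepass changeit

    An internal-error alert from the server side often means JRun could not load or use the configured key, so checking that the keyStore path in the CF9 instance's jrun.xml resolves to a real, readable file is a sensible first step.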

    Read the article

  • postfix smtpd rejecting mail from outside network match_list_match: no match

    - by Loopo
    My Postfix (v2.5.5-1.1), running on Ubuntu Server (9.04), started rejecting mail arriving from outside about 2 weeks ago. Doing a "manual" session via telnet shows that the connection is always closed after the MAIL FROM: [email protected] line is input, with the message "Connection closed by foreign host." Doing the same from another client inside the LAN works fine. In the log files I get the line "lost connection after MAIL from xxxxx.tld[xxx.xxx.xxx.xxx]". This is after some lines like: match_hostaddr: XXX.XXX.XXX.XXX ~? [::1]/128 match_hostname: XXXX.tld ~? 192.168.1.0/24 ... match_list_match: xxx.xxx.xxx.xxx: no match which seem to suggest some kind of filter that checks for allowed addresses. I have been unable to locate where this filter lives, or how to turn it off. I'm not even sure if that's what's causing my problem. Connections from inside the LAN don't get disconnected even though they also show a "match_list_match: ... no match" line. I didn't change any configuration files recently; below is my main.cf as it currently stands. I don't really know what all the parameters do and how they interact. I just set it up initially and it worked fine (up to recently). smtpd_banner = $myhostname ESMTP $mail_name (GNU) biff = no readme_directory = no # TLS parameters smtpd_tls_cert_file=/etc/ssl/certs/server.crt smtpd_tls_key_file=/etc/ssl/private/server.key #smtpd_use_tls=yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtp_sasl_auth_enable = no smtp_use_tls=no smtp_sasl_password_maps = hash:/etc/postfix/smtp_auth myhostname = XXXXXXX.com alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = XXXX.XXXX.com, XXXX.com, localhost.XXXXX.com, localhost relayhost = XXX.XXX.XXX.XXX mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 192.168.1.0/24 mailbox_command = procmail -a "$EXTENSION" mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all smtpd_sasl_local_domain = #smtpd_sasl_auth_enable = yes smtpd_sasl_security_options = noanonymous smtpd_sasl_authenticated_header = yes broken_sasl_auth_clients = yes smtpd_recipient_restrictions = permit_mynetworks,permit_sasl_authenticated,reject_unauth_ When checking the process list, postfix/smtpd runs as smtpd -n smtp -t inet -u -c -o stress -v -v Any clues?
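
    The smtpd_recipient_restrictions line above appears cut off, so it is worth checking what Postfix is actually running with before digging into the match_list_match lines. A quick way to inspect the effective value, plus a typical complete restriction list for comparison (the list shown is a common default, not necessarily what this particular server should use):

        # Show the effective value Postfix has loaded
        postconf smtpd_recipient_restrictions

        # Show all explicitly-set parameters and flag syntax problems
        postconf -n
        postfix check

        # A typical complete form in main.cf, for comparison only
        smtpd_recipient_restrictions =
            permit_mynetworks,
            permit_sasl_authenticated,
            reject_unauth_destination

    If the value really is truncated in main.cf, Postfix may be rejecting external recipients simply because the restriction list never reaches a permit/reject decision it can honour.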

    Read the article

  • How to find the real IP to which IPVS is routing a virtual IP

    - by Wayne Conrad
    I'm trying to find a problem server hiding behind a virtual IP (using LVS/ipvs). I've got a test program that sends requests to the virtual IP until it gets the bad response, but how can I tell to which real IP a request to the virtual IP got routed? On the box doing the virtual IP magic, here's the virtual IP configuration (for the service I care about): IP Virtual Server version 1.2.1 (size=4096) Prot LocalAddress:Port Scheduler Flags -> RemoteAddress:Port Forward Weight ActiveConn InActConn ... TCP 10.1.0.254:5025 nq -> 10.1.0.5:5025 Route 1 0 1 -> 10.1.0.6:5025 Route 1 0 5 -> 10.1.0.7:5025 Route 1 0 2 -> 10.1.0.9:5025 Local 1 0 3 -> 10.1.0.11:5025 Route 1 0 3 ... My client program is sending TCP requests to 10.1.0.254:5025, usually getting a good response but sometimes a bad response. With this few servers, I could send my request to each server in turn until I discover the culprit, but I wonder if that technique will scale as we add servers. What means exist for me to find out where requests got routed? Kernel: Linux 2.6.32 OS: Debian testing (whatever that's called these days). ipvsadm is version 1.25, compiled with ipvs v1.2.1
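
    One way to see which real server a given request landed on, without instrumenting the backends, is to dump the IPVS connection table on the director right after the bad response. The flags below are standard ipvsadm options; the grep pattern is just an illustration using the virtual address above:

        # -L list, -c connection entries, -n numeric addresses
        ipvsadm -L -c -n | grep 10.1.0.254:5025

    Each entry shows the client address, the virtual address and the destination (real) address for that connection, so matching on the test client's source port identifies the backend that served the bad reply. This keeps working as real servers are added, unlike probing each backend in turn.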

    Read the article

  • Puppet Decentralized Setup

    - by paul.tw
    I want to migrate my existing Puppet setup from master/client to a decentralized solution. I know other solutions, such as Ansible, are easier to set up for that purpose, but I really want to stay with Puppet. I found "supply_drop" (https://github.com/pitluga/supply_drop) on GitHub, so I followed the instructions and did the following: rvm gemset create testing rvm use 1.9.3@testing gem install supply_drop The output is the following: [m@ms-MacBook-Pro:~ $ irb 1.9.3-p547 :001 require 'supply_drop' NameError: uninitialized constant Capistrano from /Users/m/.rvm/gems/ruby-1.9.3-p547@testing/gems/supply_drop-0.17.0/lib/supply_drop/tasks.rb:1:in `' from /Users/m/.rvm/rubies/ruby-1.9.3-p547/lib/ruby/site_ruby/1.9.1/rubygems/core_ext/kernel_require.rb:55:in `require' from /Users/m/.rvm/rubies/ruby-1.9.3-p547/lib/ruby/site_ruby/1.9.1/rubygems/core_ext/kernel_require.rb:55:in `require' from /Users/m/.rvm/gems/ruby-1.9.3-p547@testing/gems/supply_drop-0.17.0/lib/supply_drop.rb:10:in `' from /Users/m/.rvm/rubies/ruby-1.9.3-p547/lib/ruby/site_ruby/1.9.1/rubygems/core_ext/kernel_require.rb:135:in `require' from /Users/m/.rvm/rubies/ruby-1.9.3-p547/lib/ruby/site_ruby/1.9.1/rubygems/core_ext/kernel_require.rb:135:in `rescue in require' from /Users/m/.rvm/rubies/ruby-1.9.3-p547/lib/ruby/site_ruby/1.9.1/rubygems/core_ext/kernel_require.rb:144:in `require' from (irb):1 from /Users/m/.rvm/rubies/ruby-1.9.3-p547/bin/irb:12:in `' Since that doesn't work without problems, I was wondering which alternatives are available to do the same. Do you have any suggestions?
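
    The NameError suggests supply_drop is being loaded outside of Capistrano: it is a Capistrano plugin, so the Capistrano constant has to exist before the require, which it never will in a bare irb session. A minimal sketch of using it the intended way (gem versions and the Capfile layout are assumptions; supply_drop from that period targeted Capistrano 2.x):

        gem install capistrano -v '~> 2.15'
        gem install supply_drop

        # Capfile (Capistrano 2.x layout -- an assumed, not verified, setup)
        load 'deploy'
        require 'supply_drop'
        load 'config/deploy'

    Running cap -T from the project directory should then list the tasks supply_drop adds. An alternative that avoids the extra tooling entirely is masterless Puppet: push the manifests to each node (rsync, git, etc.) and run puppet apply --modulepath=modules manifests/site.pp locally.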

    Read the article

  • Websockets Server with Fault-Tolerance and Durable Message Store

    - by smitchell360
    I am starting to experiment with websockets. Does anyone know of a websockets server (open source or paid) that provides a durable store of the websocket "channel"? All of the examples that I have found do not address durability -- if a websockets server goes down, all "channel" data is lost. Services such as Pusher do not really discuss whether they address the durability issue (and I have not received a response from tech support yet). Happy to roll my own, but would rather not reinvent the wheel. EDIT: I'm not looking for websockets 101 information. That is readily available and understood. I'm looking for a server (open source or paid) that supports websockets and has a durable store for the websocket data so that, in the event that a server fails, a new server can take over where the original one left off. Two main purposes: 1. support failover scenarios contemplated by the websockets Network Working Group http://tools.ietf.org/html/draft-ibc-websocket-dns-srv-02#section-5.1 (most importantly so that missed messages are sent when a client connects to a failover server) 2. support scenarios where new subscribers must receive all past messages that were published. Of course this can be handled at the application layer... but that is not what I am looking for. EDIT: So, after some research, the following installable options seem to be the most robust: Kaazing and Migratory (http://migratory.ro). Hosted services that seem "real": Pusher (great API but no history feature yet) and PubNub (has history). All of the above services have graceful fallback to other communication methods if websockets are not available. I was not able to find any open source that provided "out of the box" clustering, fail-over, and a durable message store to play back history. There are some projects that may serve as good starting points, but not exactly what I am looking for.

    Read the article

  • Serving and caching content from Amazon S3 with Tomcat

    - by Rob
    Hi all, We're looking to serve a range of content using Amazon S3 as a store for the content and Tomcat to host the web application. The content is divided into free and paid-for content. We intend to authenticate the users when they access the web application running in Tomcat. Based on their authentication we are able to tell if the user has access to the paid-for content or simply the free stuff. So I envision the flow of a request being something like this: Authenticated request to Tomcat. If the user is a "paid" user, display links to premium content. Direct requests for paid content back through Tomcat to prevent direct access to it by non-paying users. Tomcat makes a request to S3 through a web cache to keep our costs down. Content is returned to the user. As we have to pay for each request to S3, I'd ideally like to cache content locally to the Tomcat instance after it has been requested for the first time, to keep costs to a minimum and to speed things up. I would also like to be able to invalidate this cache if we publish fresh content to S3. So to confirm my proposal: Client Request - Tomcat - Web Cache - S3. To invalidate the cache, I was thinking of using something like PubSubHubbub, with the cache waiting for updates to the feed for content that it should invalidate. I'd appreciate some general feedback on this approach as I've no real experience of caching and I'm sure I've made some invalid assumptions. I'd also appreciate any recommendations for caching technologies. Thanks.

    Read the article

  • Merge changes in Microsoft Word documents

    - by Álvaro G. Vicario
    I'm using Microsoft Word 2002 to maintain some documentation. The documents are stored in a version control repository (Subversion) together with the source code it documents. My Subversion client (TortoiseSVN) comes with a little VBA script that allows me to leverage the built-in revisions feature when merging different branches. In other words, when I want to copy changes from one document to another, Word compares both documents (source and target) and builds a third document that has the contents of the source doc tagged as revisions, so I can then review differences one by one and confirm or discard changes. While this is handy, it also means that making a single change to the source document forces me to review all the differences between both documents and discard all of them except the only actual change. My question is... Do you know about an application or plug-in that's able to find the differences between two Word documents and apply those differences to a third document? (I know 2002 is very old but that's what my company gives me; I'm open to solutions that use newer versions though.)

    Read the article

  • Forcing a particular SSL protocol for an nginx proxying server

    - by vitch
    I am developing an application against a remote https web service. While developing I need to proxy requests from my local development server (running nginx on ubuntu) to the remote https web server. Here is the relevant nginx config: server { server_name project.dev; listen 443; ssl on; ssl_certificate /etc/nginx/ssl/server.crt; ssl_certificate_key /etc/nginx/ssl/server.key; location / { proxy_pass https://remote.server.com; proxy_set_header Host remote.server.com; proxy_redirect off; } } The problem is that the remote HTTPS server can only accept connections over SSLv3 as can be seen from the following openssl calls. Not working: $ openssl s_client -connect remote.server.com:443 CONNECTED(00000003) 139849073899168:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:177: --- no peer certificate available --- No client certificate CA names sent --- SSL handshake has read 0 bytes and written 226 bytes --- New, (NONE), Cipher is (NONE) Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE --- Working: $ openssl s_client -connect remote.server.com:443 -ssl3 CONNECTED(00000003) <snip> --- SSL handshake has read 1562 bytes and written 359 bytes --- New, TLSv1/SSLv3, Cipher is RC4-SHA Server public key is 1024 bit Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE SSL-Session: Protocol : SSLv3 Cipher : RC4-SHA <snip> With the current setup my nginx proxy gives a 502 Bad Gateway when I connect to it in a browser. Enabling debug in the error log I can see the message: [info] 1451#0: *16 peer closed connection in SSL handshake while SSL handshaking to upstream. I tried adding ssl_protocols SSLv3; to the nginx configuration but that didn't help. Does anyone know how I can set this up to work correctly?
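
    For reference, the ssl_protocols directive only controls the protocols nginx offers to browsers connecting to it; it does not affect the handshake nginx makes as a client to the upstream. Newer nginx versions expose that side separately via proxy_ssl_protocols (added in 1.5.6, so this depends on the installed version); a hedged sketch of the location block with it added:

        location / {
            proxy_pass https://remote.server.com;
            proxy_set_header Host remote.server.com;
            proxy_redirect off;
            # Protocols nginx is willing to use when talking to the upstream
            proxy_ssl_protocols SSLv3;
        }

    On builds without proxy_ssl_protocols, the usual workaround is to terminate the upstream leg with something that can be pinned to SSLv3 (for example a local stunnel instance) and proxy_pass to that instead.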

    Read the article

  • Setting up Windows network on Xen

    - by samyboy
    I'm trying to install a Windows XP server in a Xen environment. The OS is booting fine. Unfortunately I can't figure out how to set up the network settings. Dom0 is a Debian Lenny currently hosting around 10 Linux virtual servers. Windows tells me I have a "limited connection". It can't get any DHCP response, nor can it access other hosts on the network. Here is the Xen guest's config file: kernel = '/usr/lib/xen-3.2-1/boot/hvmloader' builder = 'hvm' memory = '1024' device_model='/usr/lib/xen-3.2-1/bin/qemu-dm' acpi=1 apic=1 pae=1 vcpus=1 name = 'winexchange' # Disks disk = [ 'phy:/dev/wnghosts/exchange-disk,ioemu:hda,w', 'file:/mnt/freespace/ISO/DVD1_Installation.iso,ioemu:hdc:cdrom,r' ] # Networking vif = [ 'mac=00:16:3E:0A:D0:1B, type=ioemu, bridge=xenbr0'] # video stdvga=0 serial='pty' ne2000=0 # Behaviour boot='c' sdl=0 # VNC vfb = [ 'type=vnc' ] vnc=1 vncdisplay=1 vncunused=1 usbdevice='tablet' Server config (/etc/xen/xend-config.sxp): (network-script network-bridge) (network-script network-dummy) (vif-script vif-bridge) (dom0-min-mem 512) (dom0-cpus 0) (vnc-listen '0.0.0.0') Since I use Debian I had to create a link like this: /etc/xen/qemu-ifup -> /etc/xen/scripts/qemu-ifup What did I do wrong? Please tell me if you want some more info (logs, etc.).

    Read the article

  • Can I autoregister my clients/servers in local DNS?

    - by Christian Wattengård
    Right now I have a W2k12 server at home that I run as a domain controller. This has the extra benefit of registering every "subordinate" computer's name in its DNS, so that I don't have to go around remembering IPs all the time. (And it lets me easily run DHCP on my servers as well.) I need to rework my home network for several odd reasons, and in this new scenario there is no place for a big honking W2k12 server box. I have a RasPi, and I have other smallish Linux boxen I can use. (In a worst-case scenario I'll use my NUC, but then I'll be forced to use my home cinema's UPnP client for media... The HORROR!!) Is it possible to set up a DNS-server "appliance" where clients somehow autoregister their own hostnames? Scenario: Router (N66u) on 172.20.20.1. Runs DHCP on the 172.20.20.100-200 range. Server [verdant] of a *nix flavor on 172.20.20.2 Laptop [speedy] of W8 flavor on DHCP assigned Laptop [canary] of W8 flavor on DHCP assigned Desktop [lianyu] of Ubuntu flavor on DHCP assigned What I would like is that all of the above servers (except possibly the router) would be available as verdant.starling.lan and canary.starling.lan and so on. This is how it works right now (except the Ubuntu box... I haven't cracked that one yet) because Windows just does this for you. I would also like to be able to do this without any manual labor on the server. When I tell my box its name is smoak, it should "immediately" be available as smoak.starling.lan without any extra configuration on my part. How can I do this in a Linux (Ubuntu) environment? (Bonus comment upvote for naming the naming scheme :P )
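
    One lightweight way to get this on a Linux box (it would fit on the RasPi) is dnsmasq, which serves both DHCP and DNS and automatically publishes the hostname each DHCP client sends in its request. A rough sketch of /etc/dnsmasq.conf, assuming dnsmasq takes over DHCP from the router and the addressing shown above:

        # Local DNS domain; names clients register become <hostname>.starling.lan
        domain=starling.lan
        local=/starling.lan/
        expand-hosts
        # Hand out leases in the same range the router used
        dhcp-range=172.20.20.100,172.20.20.200,12h
        dhcp-authoritative
        # Statically addressed machines can be listed in /etc/hosts on this box,
        # e.g. "172.20.20.2 verdant"; expand-hosts appends the domain to them

    Clients then need to use this box as their DNS server (dnsmasq advertises itself via DHCP by default), and DHCP on the N66u would be switched off so two servers aren't answering at once.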

    Read the article

  • WEIRD netstat behavior on Windows XP re: www.partypoker.com

    - by tbone
    I really don't know if this is the right place to ask this, but I would really appreciate it if someone who is more savvy with Windows XP (Professional) could help me out. For background, I am a 10+ year programmer, so I'm not a total idiot, but I am far from an expert on TCP/IP, etc., and this has me totally confused. When I do a netstat (on Windows XP) I always seem to get a huge number of www.partypoker.com connections and I can't figure out where they are coming from. A netstat -o shows me that some are coming from PID xxx, which is Firefox, but if I kill it, the connections still remain. Some are coming from PID 0, which makes no sense to me. SECOND PROBLEM: One would think you could edit the C:\WINDOWS\system32\drivers\etc\hosts file to block this, but it seems like my machine is ignoring the hosts file! (I have tried with the DNS Client service both enabled and disabled, same result.) So I just rebooted, killed all my normal programs, and I can't seem to reproduce the problem. If I was a paranoid person, I would think there was some sort of an intelligent trojan running. I am running Windows XP Pro, Kaspersky Antivirus, CCleaner, and am fully up to date on Windows Update. What gives? So, I guess my questions are: 1. Is anyone else seeing these weird connections to partypoker.com? 2. Why isn't my hosts file working? 3. Is there some utility I can run to find out what's happening? I've tried autoruns.exe from Sysinternals but don't see anything interesting. Am I the only one with this problem? If there are any additional things you need me to run, let me know.
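
    To tie each connection to the program that owns it, XP's netstat can report the owning executable directly (run from an Administrator command prompt; the -b switch was added in XP SP2):

        netstat -b -n -o
        rem -b shows the executable (and DLL chain) that created each connection
        rem -o shows the owning PID, -n skips reverse DNS lookups

        rem Map PIDs to processes and the services hosted inside svchost.exe
        tasklist /svc

    Connections attributed to PID 0 are usually sockets left in TIME_WAIT by a process that has already exited, rather than live traffic from a hidden process, which would also explain why they linger after Firefox is killed.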

    Read the article

  • dhclient requests filling memory?

    - by shanethehat
    Dammit Jim, I'm a web developer, not a sysadmin. With that out of the way, my client has a CentOS server (6.2) that is only serving a single Magento site (and the associated MySQL server), and it is frequently running out of memory despite the site currently only being open to 5 users. I'm investigating the logs to try to figure out why the memory usage is so high, but I don't really know what I'm looking at. It seems that there are a lot of entries in /var/log/messages concerning DHCP requests, approximately one every 15 seconds, that look like this: Apr 7 14:23:06 s15940039 dhclient[815]: DHCPREQUEST on eth0 to 172.30.102.85 port 67 (xid=0x6b5cd2a7) Is this normal? I don't see anything else in here that I don't recognise, but then I'm not sure I'd know the problem if I did see it. 4 days ago the server ran out of memory completely and locked up, requiring a restart. The DHCP messages did not start up again for 23 hours afterwards, but then carried on as before. I have read this question which describes the same issue, but in my case a fresh DHCP lease does not ever seem to be issued. Is this something I should push back to the hosting provider, or have I not yet found the source of the memory problem?
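
    A DHCPREQUEST repeating every 15 seconds suggests the renewal is never being acknowledged, but dhclient itself is a tiny process and is unlikely to be what is eating the memory, so it is worth looking at what actually holds it. A couple of generic commands for that (nothing here is specific to this server):

        # Overall picture: how much is real usage vs. buffers/cache
        free -m

        # Top memory consumers, largest resident set size first
        ps aux --sort=-rss | head -n 15

    On a single-site Magento box the usual suspects are MySQL buffer settings and the number and size of Apache/PHP worker processes, so those figures are worth comparing against the machine's total RAM; the unanswered DHCP renewals are a separate issue to raise with the hosting provider.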

    Read the article

  • 403 Forbidden error on Mac OSX - Apache and nginx

    - by tlianza
    Hi all, There are a million questions like this on Google, but I haven't found a solution to my problem. The default Apache install on my Mac is giving 403 Forbidden errors for everything (default directory, user home directory, virtual server, etc.). After sifting through the config files, I figured I'd give nginx a try. Nginx serves files fine from its home directory, but it won't serve files from a subfolder of my user directory. I've configured a simple virtual host, and requesting index.html returns a 403 Forbidden. The error message in nginx's log file is pretty clear - it can't read the file: 2011/01/04 16:13:54 [error] 96440#0: *11 open() "/Users/me/Documents/workspace/mobile/index.html" failed (13: Permission denied), client: 127.0.0.1, server: local.test.com, request: "GET /index.html HTTP/1.1", host: "local.test.com" I've opened up this directory to everyone: drwxrwxrwx 6 me admin 204B Dec 31 20:49 mobile And all the files in it: $ ls -lah mobile/ total 24 drwxrwxrwx 6 me admin 204B Dec 31 20:49 . drwxr-xr-x 71 me me 2.4K Dec 31 20:41 .. -rw-r--r--@ 1 me me 6.0K Jan 2 18:58 .DS_Store -rwxrwxrwx 1 me admin 2.1K Jan 4 14:22 index.html drwxrwxrwx 5 me admin 170B Dec 31 20:45 nbproject drwxrwxrwx 5 me admin 170B Jan 2 18:58 script And yet, I cannot figure out why the nginx process cannot read index.html. It's running as the "nobody" user, but the permissions are set such that anyone can read them.
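
    A "Permission denied" on open() usually means the worker user can read the file itself but cannot traverse one of the parent directories: every directory in the path needs the execute (search) bit for the nginx user, and on Mac OS X ~/Documents is typically mode 700. A quick way to check, and (if acceptable) open the path up; the paths below assume the layout shown in the error log:

        # Show the permissions (and ACLs) of each directory along the path
        ls -led /Users/me /Users/me/Documents /Users/me/Documents/workspace \
                /Users/me/Documents/workspace/mobile

        # Grant traverse permission on whichever of them lacks it
        chmod o+x /Users/me/Documents /Users/me/Documents/workspace

    Moving the site out of ~/Documents, or running the workers as the owning user via the "user" directive in nginx.conf, avoids loosening permissions on personal folders. The same parent-directory rule would apply to the original Apache 403s as well.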

    Read the article

  • IIS8 Asp.net State service remote connection failure

    - by maxisam
    Recently we upgraded our web server to Windows Server 2012 with IIS8. We have this issue when users try to connect to the ASP.NET State Service on this web server remotely. It always pops up: Unable to make the session state request to the session state server. Please ensure that the ASP.NET State service is started and that the client and server ports are the same. If the server is on a remote machine, please ensure that it accepts remote requests by checking the value of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters\AllowRemoteConnection. If the server is on the local machine, and if the before mentioned registry value does not exist or is set to 0, then the state server connection string must use either 'localhost' or '127.0.0.1' as the server name. In IIS7/7.5 we used the same approach and it worked fine. As long as the state service is running and the firewall is set properly, we don't have any problem. However, in IIS8 it doesn't work. (We even turned off the firewall to test it.) Thanks for helping.
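
    For reference, the registry value named in the error can be set from an elevated prompt on the 2012 box, after which the service needs a restart and the state-service port has to be reachable through the firewall. The port 42424 below is the service's default; adjust it if the connection string uses a different one:

        reg add HKLM\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters ^
            /v AllowRemoteConnection /t REG_DWORD /d 1 /f

        rem Make sure the service starts automatically and restart it so the new value is read
        sc config aspnet_state start= auto
        net stop aspnet_state
        net start aspnet_state

        netsh advfirewall firewall add rule name="ASP.NET State Service" ^
            dir=in action=allow protocol=TCP localport=42424

    It is also worth double-checking which .NET version's state service got registered after the upgrade, since the connection string's port and the running service must match.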

    Read the article

  • systemd: enabling cherokee service as a `unit file`

    - by Calvin Cheng
    So I am learning how to use systemd to initialize my services automatically on server reboot. So of course, I first make sure I have systemd and some optional systemd related packages installed. pacman -S systemd initscripts-systemd Installation seems to go well and checking, I can see that systemd and its dependency libsystemd are installed. And the optional package initscripts-systemd is also installed:- [root@li280-195 ~]# pacman -Ss systemd extra/libsystemd 44-5 [installed] systemd client libraries extra/systemd 44-5 [installed] system and service manager extra/systemd-sysvcompat 2-2 sysvinit compat symlinks for systemd community/initscripts-systemd 20120412-1 [installed] Arch specific systemd initialization/bootup scripts for systemd community/systemd-arch-units 20120412-2 Arch specific Systemd unit files Next, I ensure that systemd is loaded up when my server reboots, via grub in grub's /boot/grub/menu.lst file like this:- kernel /boot/vmlinuz root=/dev/xvda ro init=/bin/systemd Rebooting my server to check, all loads up well and I can check that systemd is operational via:- systemctl list-unit-files However, I don't see my cherokee initialization script (which is simply created at /etc/rc.d/cherokee when I installed cherokee earlier via pacman -S cherokee) being listed as one of my unit files. So the question is, how do I do that? How do I put my cherokee initialization script under systemd's control?
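
    Rather than pointing systemd at the rc.d script, the usual route is to write a native unit file and enable it. A minimal sketch for /etc/systemd/system/cherokee.service follows; the binary path is an assumption (confirm it with `which cherokee`), and the community systemd-arch-units package listed above may already ship a ready-made unit:

        [Unit]
        Description=Cherokee web server
        After=network.target

        [Service]
        # Run cherokee in the foreground so systemd can supervise it
        ExecStart=/usr/sbin/cherokee
        Restart=on-failure

        [Install]
        WantedBy=multi-user.target

    After creating the file, run systemctl daemon-reload, then systemctl enable cherokee.service and systemctl start cherokee.service; the old /etc/rc.d/cherokee script is then simply unused.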

    Read the article

  • Wireless disconnects at random after upgrade to Ubuntu 10.04

    - by Daniel Elessedil Kjeserud
    After upgrading my home server from Ubuntu 8.10 to 10.04, my wireless seemingly drops out, even though my IRC client keeps its connection to the servers, so it looks like the machine just stops taking wireless requests. A ping will give me this: Request timeout for icmp_seq 27 ping: sendto: Host is down After a while the machine just starts responding again, without any interaction from me. When the machine comes back, this is what dmesg gives me: [ 18.296288] wlan0: direct probe to AP 00:1b:63:22:a4:5f (try 1) [ 18.296350] wlan0: deauthenticating from 00:1b:63:22:a4:5f by local choice (reason=3) [ 18.296440] wlan0: direct probe to AP 00:1b:63:22:a4:5f (try 1) [ 18.298697] wlan0: direct probe responded [ 18.298706] wlan0: authenticate with AP 00:1b:63:22:a4:5f (try 1) [ 18.306836] wlan0: authenticated [ 18.306886] wlan0: associate with AP 00:1b:63:22:a4:5f (try 1) [ 18.309396] wlan0: RX AssocResp from 00:1b:63:22:a4:5f (capab=0x411 status=0 aid=2) [ 18.309402] wlan0: associated [ 18.310187] ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready [ 18.447742] apm: BIOS version 1.2 Flags 0x03 (Driver version 1.16ac) [ 18.447748] apm: overridden by ACPI. [ 19.163282] padlock: VIA PadLock not detected. [ 28.352022] wlan0: no IPv6 routers present kjes@brin:~$ lspci 02:07.0 Network controller: RaLink RT2561/RT61 rev B 802.11g It's on a wireless network with WPA2; the machine worked without any problems on the same wireless network since Ubuntu 8.10 was the most recent version, and there have been no changes to my network recently. Even though the server drops out, everything else on the network keeps working as normal.
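
    The dmesg output shows a clean re-association rather than a driver crash, which sometimes points to the card being put to sleep between bursts of traffic. One cheap thing to try (purely a guess, not a confirmed fix for the rt61 driver) is disabling wireless power management and watching whether the drop-outs stop:

        # Check the current power management state
        iwconfig wlan0

        # Turn power saving off for this session
        sudo iwconfig wlan0 power off

    If that helps, the setting can be made persistent through /etc/network/interfaces or a small boot-time script.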

    Read the article

  • How do I correctly SSH port forward using LiveReload on Redhat?

    - by program247365
    Referencing this page: http://feedback.livereload.com/knowledgebase/articles/86280-if-you-edit-files-directly-on-your-server It says you can remotely port forward the LiveReload-specific port 35729 using this command: ssh -L 35729:127.0.0.1:35729 mylogin@myremoteserverIP When I run the -v option, I get: debug1: Local connections to LOCALHOST:35729 forwarded to remote address 127.0.0.1:35729 debug1: Local forwarding listening on ::1 port 35729. debug1: channel 0: new [port listener] debug1: Local forwarding listening on 127.0.0.1 port 35729. debug1: channel 1: new [port listener] debug1: channel 2: new [client-session] debug1: Entering interactive session. debug1: Sending environment. debug1: client_input_channel_req: channel 2 rtype [email protected] reply 1 debug1: Connection to port 35729 forwarding to 127.0.0.1 port 35729 requested. debug1: channel 3: new [direct-tcpip] channel 3: open failed: connect failed: Connection refused debug1: channel 3: free: direct-tcpip: listening port 35729 for 127.0.0.1 port 35729, connect from 127.0.0.1 port 63673, nchannels 4 I thought editing my /etc/services with this line would work, but it doesn't: livereload 35729/tcp # livereload usage with guard-livereload Every time I attempt to connect with the browser extension, I believe it's getting blocked by my server. What am I missing here? Do I need to edit /etc/services for this to work?
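
    The "connect failed: Connection refused" on channel 3 means the tunnel itself is working, but sshd on the remote machine found nothing listening on 127.0.0.1:35729 when it tried to complete the forward; /etc/services is only a name-to-number lookup table and plays no part here. A quick check on the remote server (the commands assume a fairly standard Red Hat userland and that guard-livereload is the watcher in use, as the /etc/services comment suggests):

        # Is anything listening on the LiveReload port, and which process owns it?
        netstat -tlnp | grep 35729

        # If nothing shows up, start the watcher in the project directory,
        # e.g. guard (with guard-livereload) or whatever LiveReload daemon is used
        guard

    Once a listener is up on the server, the same ssh -L command should connect cleanly and the browser extension can talk to localhost:35729 on the local machine.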

    Read the article

  • PHP-FPM issue on LEMP Stack and WordPress

    - by jw60660
    I'm very much an NGINX and server admin beginner. I used this tutorial to install NGINX / PHP / MySQL / WordPress: C3M Digital Tutorial In this tutorial the backend php-cgi setup is configured using FastCGI. php5-fpm was installed during this tutorial: apt-get install nginx-full php5-fpm php5 php5-mysql php5-apc php5-mysql php5-xsl php5-xmlrpc php5-sqlite php5-snmp php5-curl After reading that the NGINX configuration in the WordPress Codex was more secure than most tutorials, I decided to use the Codex configuration: WordPress NGINX configuration in Codex The Codex configuration uses php-fpm for backend php-cgi. When opening the browser I got a 502 Bad Gateway error. The error log was: "2012/06/10 21:18:27 [crit] 14009#0: *4 connect() to unix:/tmp/php-fpm.sock failed (2: No such file or directory) while connecting to upstream, client: 12.3.456.789, server: mywebsite.com, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/tmp/php-fpm.sock:", host: "mywebsite.com"" In the main NGINX configuration file supplied by the Codex I noticed the line starting "server unix:" in the upstream php block, which points to the empty directory: # Upstream to abstract backend connection(s) for PHP. upstream php { server unix:/tmp/php-fpm.sock; # server 127.0.0.1:9000; } I checked the folder at /tmp and it was empty. Seems I missed configuring php-fpm to play with NGINX. Can someone point me in the right direction? Much appreciated!
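
    The 502 comes from the upstream block pointing at a socket php5-fpm never creates: the two sides simply have to agree on one listen address. Either edit the php-fpm pool so it creates that socket, or point nginx at wherever php-fpm already listens. A sketch of the first option; the pool file path is the usual Debian/Ubuntu location (check /etc/php5/fpm/pool.d/) and the owner/group assume nginx runs as www-data:

        ; /etc/php5/fpm/pool.d/www.conf
        listen = /tmp/php-fpm.sock
        listen.owner = www-data
        listen.group = www-data
        listen.mode = 0660

    followed by restarting php5-fpm (service php5-fpm restart). The alternative is to leave php-fpm alone and use its default TCP address in the upstream block instead, i.e. the commented-out server 127.0.0.1:9000; line already present in the Codex config.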

    Read the article

  • Problems installing MySQL-python via yum / missing dependency / incompatibility problem?

    - by bs0
    I have come up against problems installing MySQL-python via yum. Our server is running Centos 5.5 and MySQL Version 5.1.45, Python-dev is installed. Yum complains about the missing dependency libmysqlclient_r.so.15: Missing Dependency: libmysqlclient_r.so.15()(64bit) is needed by package MySQL-python-1.2.1-1.x86_64 (base) The server is up to date and the packages mysql mysql-devel python-devel are installed. The missing dependency is nowhere on the system: locate libmysqlclient /usr/lib64/libmysqlclient.so /usr/lib64/libmysqlclient.so.15 /usr/lib64/libmysqlclient.so.16 /usr/lib64/libmysqlclient.so.16.0.0 /usr/lib64/libmysqlclient_r.so /usr/lib64/libmysqlclient_r.so.16 /usr/lib64/libmysqlclient_r.so.16.0.0 /usr/lib64/mysql/libmysqlclient.a /usr/lib64/mysql/libmysqlclient.la /usr/lib64/mysql/libmysqlclient.so /usr/lib64/mysql/libmysqlclient_r.a /usr/lib64/mysql/libmysqlclient_r.la /usr/lib64/mysql/libmysqlclient_r.so /usr/local/cpanel/lib64/libmysqlclient.so.14 rpm -qa | grep -i mysql MySQL-devel-5.1.45-0.glibc23 MySQL-bench-5.0.89-0.glibc23 MySQL-shared-5.1.45-0.glibc23 MySQL-server-5.1.45-0.glibc23 MySQL-test-5.1.45-0.glibc23 MySQL-client-5.1.45-0.glibc23 The Python version is python-2.4.3-27.el5.x86_64: Python 2.4.3 (#1, Sep 3 2009, 15:37:37) [GCC 4.1.2 20080704 (Red Hat 4.1.2-46)] on linux2 Any suggestions would be greatly appreciated.
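
    The MySQL-python RPM in the base repo was built against the MySQL 5.0 client library (soname .so.15), while this server has the 5.1 libraries (.so.16) from the MySQL-shared package, which is exactly the gap yum is complaining about. Two common ways out, both sketches rather than tested recipes: install the vendor's compatibility package, which carries the old sonames alongside the new ones, or build the Python module against the installed 5.1 libraries instead of using the prebuilt RPM:

        # Option 1: compatibility shared libraries matching the installed 5.1.45 series
        # (download the MySQL-shared-compat RPM for 5.1.45 / x86_64 from dev.mysql.com;
        #  exact filename/release number may differ)
        rpm -Uvh MySQL-shared-compat-5.1.45-0.glibc23.x86_64.rpm
        yum install MySQL-python

        # Option 2: compile against what is already installed
        yum install gcc python-setuptools
        easy_install MySQL-python

    Option 1 keeps everything in RPMs; option 2 builds via mysql_config against /usr/lib64/mysql, which needs MySQL-devel (already present) and a compiler.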

    Read the article

  • Tracking down source of duplicate email messages in Outlook / Exchange environment

    - by Ken Pespisa
    I have a few users, who are also Blackberry users, that occasionally have duplicate emails generated from their "mailbox". I put mailbox in quotes because I'm not exactly sure where the duplicates are created. One of these users is in non-cached mode, and the other is in cached mode, and both experience the problem. In fact, the non-cached mode user was originally experiencing the problem while in cached mode, and I made the switch a few weeks ago to attempt to solve the problem. Today I discovered the issue still exists. I'm not sure if the fact that they are blackberry users could be causing the problem at all. I don't see how, but felt I should mention it anyway. Does anyone have ideas on how I might begin to troubleshoot this? I can see in the non-cached user's mailbox "Sent Items" that the message was sent only once. I confirmed the message does not state that there was a conflict and in fact that makes sense because they are in non-cached mode. On the server, we have a mail journaling feature turned on for our third-party mail archiving system, and I can see that that system sees two sent messages. And likewise, the recipient does in fact have two messages in their inbox with consecutive message IDs ([email protected]) and ([email protected]). It would seem to me that the duplicates are generated on the client, but is there a way to tell for sure?

    Read the article

  • Nginx: Serve static files out of a given directory - one level too deep

    - by Joe J
    I'm pretty new to nginx configs. I'm having some difficulty with a pretty basic problem. I'd like to host some static files at /doc (index.html, some images, etc). The files are located in a directory called /sites/mysite/proj/doc/. The problem is that with the nginx config below, nginx tries to look for a directory called "/sites/mysite/proj/doc/doc". Perhaps this can be fixed by setting the root to /sites/mysite/proj/, but I don't want to potentially expose other (non-static) assets in the proj/ directory. And for various reasons, I can't really move the doc/ directory from where it is. I think there is a way to use a rewrite rule to solve this situation, but I don't really understand all the parts, so I'm having some difficulty formulating the rule. rewrite ^/doc/(.*)$ /$1 permanent; I've also included a working example of hosting files out of a /sites/mysite/htdocs/static/ directory. > vim locations.conf location /static { root /sites/mysite/htdocs/; access_log off; autoindex on; } location /doc { root /sites/mysite/proj/doc/; access_log on; autoindex on; } 2011/11/19 23:49:00 [error] 2314#0: *42 open() "/sites/mysite/proj/doc/doc" failed (2: No such file or directory), client: 100.100.100.100, server: , request: "GET /doc HTTP/1.1", host: "myhost.com" Does anyone have any ideas how I might go about serving this static content? Any help is much appreciated. Thanks, Joe
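
    The behaviour described is exactly how root works: the location prefix is appended to the root path, hence the doubled /doc/doc. The directive intended for mapping a prefix onto a different directory is alias, which replaces the matched prefix instead of appending to it, so no rewrite rule is needed. A sketch of the /doc block using alias, with the same paths as above:

        location /doc/ {
            # /doc/index.html is served from /sites/mysite/proj/doc/index.html
            alias /sites/mysite/proj/doc/;
            access_log on;
            autoindex on;
        }

    With the trailing slash on both the location and the alias path, only files under doc/ are exposed, and the rest of /sites/mysite/proj/ stays unreachable.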

    Read the article

  • Fixed and dynamic IPs in ISC DHCPD lead to double lease

    - by GorillaPatch
    I would like to have a small dynamic address range, while most clients are assigned a fixed IP address. My dhcpd.conf looks like this: use-host-decl-names on; authoritative; allow client-updates; ddns-updates on; # Settings for DHCP leases default-lease-time 3600; max-lease-time 86400; lease-file-name "/var/lib/dhcpd/dhcpd.leases"; subnet 192.168.11.0 netmask 255.255.255.0 { ddns-updates on; pool { # IP range which will be assigned statically range 192.168.11.1 192.168.11.240; deny all clients; } pool { # small dynamic range range 192.168.11.241 192.168.11.254; # used for temporary devices } } group { host pc1 { hardware ethernet xx:xx:xx:xx:xx:xx; fixed-address 192.168.11.11; } } The motivation for the pool declaration with "deny all clients" comes from the ISC DHCPD homepage http://www.isc.org/files/auth.html This will allow hosts to be first added to the network, where they will receive a temporary IP from the .241-.254 address range, and then later I write an explicit host declaration. Upon the next connect they will receive the right configuration. The problem is that I am getting error messages that 192.168.11.13 has a dynamic and a static lease. I am a bit confused, as I expected that the pool declaration with "deny all clients" would not count as dynamic. Dynamic and static leases present for 192.168.11.13. Remove host declaration pc1 or remove 192.168.11.13 from the dynamic address pool for 192.168.11.0/24 Is there a way to have the DHCP server send a DHCPNAK to clients if they have a host statement, and retain this dynamic range?
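
    The warning stems from the fixed addresses sitting inside a declared range: any address covered by a range statement is treated as dynamic, so combining it with a fixed-address declaration produces exactly this "dynamic and static" complaint, regardless of the deny line. Below is a sketch of the layout usually suggested for this intent (known machines get their fixed address, unknown ones land in the small temporary pool); it is an illustration, not a drop-in replacement for the config above:

        subnet 192.168.11.0 netmask 255.255.255.0 {
            pool {
                # Only the temporary range is dynamic; hosts that have a host
                # declaration are refused here and fall back to their fixed IP
                range 192.168.11.241 192.168.11.254;
                deny known-clients;
            }
            # No range covering .1-.240: those addresses are handed out only
            # via fixed-address in host declarations
        }

        host pc1 {
            hardware ethernet xx:xx:xx:xx:xx:xx;
            fixed-address 192.168.11.11;
        }

    Whether an existing temporary lease is NAKed at renewal or simply allowed to expire after the host declaration is added depends on how the pool's permit list is re-checked, so keeping the lease time short on the temporary range is the usual way to make the switch-over happen quickly either way.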

    Read the article

  • RDP or SSH connection through Windows 2008 server VPN hangs after a while

    - by xt4fs
    I have been experiencing a very strange issue with our VPN setup on Windows Server 2008. That server is running as a Xen virtual machine. We use it for two purposes: to permit our mobile workers to connect to another server hosted somewhere else that only allows that IP, and to RDP or SSH to many other virtual machines on the same server. The server has no performance issues and still has plenty of memory free. All the other virtual machines have no problems whatsoever. Many of those virtual machines have public IPs (web servers) and all their firewalls are set to allow only SSH or RDP connections from their local interface. When I connect directly with either SSH or RDP to one of the other virtual machines, everything runs without any issues. However, when I do so through the VPN, after some time the connection just hangs; it usually continues after some time (5 or 10 minutes). It seems that the more network usage there is, the more often it happens, to the point where it is completely unusable. The worst thing I can do to make it hang faster is to actually ping the VPN client IP from the local network; after some time the latency increases until it hangs. This happens even if I RDP to the local IP of the VPN server through the VPN. The server reports no problems, and if I disconnect from the VPN and reconnect right away everything is alright. There is nothing wrong in the VPN server log. I thought at the beginning that it could have been an issue with the host server, so I tried to RDP/SSH directly to the guest and experienced no issues while doing this, so it really seems to be a problem with the VPN server on Windows Server 2008. Another very weird thing is that it does not seem to be an issue if you only use the Internet (NAT) without trying to connect to any local IPs.
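
    The pattern of working briefly and then hanging harder as traffic increases is often, though not always, an MTU/fragmentation problem, which VPN encapsulation makes worse and which virtualised NICs can aggravate. A way to test that theory without changing anything permanently; the interface name, target address and sizes below are placeholders:

        rem From a VPN client, find the largest payload that passes unfragmented
        ping -f -l 1400 10.x.x.x

        rem If large pings fail while small ones work, cap the MTU on the
        rem server's VPN-facing interface (value chosen from the ping test)
        netsh interface ipv4 set subinterface "Local Area Connection" mtu=1400 store=persistent

    If MTU turns out not to be the issue, disabling TCP offload features (checksum offload, large send offload) on the virtual NIC inside the 2008 guest is the other usual suspect for this kind of under-load stall in Xen VMs.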

    Read the article
