Search Results

Search found 9923 results on 397 pages for 'node mongodb native'.

Page 319/397 | < Previous Page | 315 316 317 318 319 320 321 322 323 324 325 326  | Next Page >

  • Intermittent extremely long response times when downloading documents

    - by pap
    I have a Java web application running on Tomcat 7 with an Apache httpd 2.2 front end connected via mod_jk/AJP. One part of the application serves files (up to 4 MB in size). Normally this all runs very smoothly with stable, low response times. However, in rare instances (<0.1% of downloads), the download time will go beyond 1 minute. After activating the StuckThreadDetectionValve in Tomcat, I can see that the long responses seem to be stuck at org.apache.tomcat.jni.Socket.sendbb(Native method), i.e. network I/O. At most, these long-running downloads take 5 minutes, which I strongly suspect is because of the default 300-second timeout in Apache 2.2 (http://httpd.apache.org/docs/2.2/mod/core.html, "TimeOut directive"). To me, this looks like a network problem. The Apache timeout (if that is what is kicking in at the 5-minute mark) indicates that ACK packets are not being transmitted correctly. My questions: what could be causing this? A browser closed at the receiving end without the socket being signalled as closed properly? Packet loss or some other network failure in transit? Where would I start troubleshooting this? We're running Tomcat and Apache on Windows Server 2008 R2 on a VMware virtualized server.

    Read the article

  • UPS power requirements for server

    - by captainentropy
    Greetings! So, I just placed an order for a new server. The company recommended that I get a 3000W UPS (!). As best as I could, I calculated the following wattage consumption based on benchmarked data or datasheets provided by the manufacturers of each component:

      Component            #   Watts   Total watts
      MoBo                 1     240       240
      CPUs (E5540)         2      80       160
      RAID cards (3ware)   2      18        36
      RAM (6x4GB)          6       3        18
      DVD drive            1       7         7
      Floppy               1       2         2
      RE4 drives           8       7        56
      WD20 drives          8       6        48
      Intel X25 SSD        2    0.15       0.3
      Total                                 567

    So that is for the PSU requirements only. The PSUs in the machine are a 720W for the master node and 800W each for two subsystems. That's a total of 2320W that can be delivered by these PSUs. But that is 4x the amount being consumed, at most, by the components. I didn't count case fans or the eSATA card (3W maybe?) or what the PSUs themselves require, but even assuming I double or triple my calculations, I'm not even remotely close to the 3000W UPS I was advised to get. Those run at least $1100. I could get a 2000W for about $750, or a 1500W for $450, and still be well over my estimated power need. I don't think I need a whole lot of run time in the case of a power outage — maybe 20 minutes max, enough time to shut down if the power doesn't come back on within 5-10 minutes. Any thoughts? Am I off on my calculations? Did I overlook something major? If so, what are your suggestions for a UPS? Thanks!
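    A quick sanity check of that arithmetic (a rough sketch; the 0.9 power factor and the 1.5x headroom figure are assumptions, not vendor numbers):

      # sum of the component estimates above
      echo "240+160+36+18+7+2+56+48+0.3" | bc      # -> 567.3 W
      # add ~50% headroom and convert to VA with an assumed 0.9 power factor
      echo "567.3 * 1.5 / 0.9" | bc -l             # -> roughly 945 VA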

    Read the article

  • iptables & IP-routed network solution for host net and VMs' subnet

    - by Daniel
    I've got a ProxmoxVE 2.1 KVM node on Debian and a bunch of guest VMs. This is what my networking looks like:

      # network interface settings
      auto lo
      iface lo inet loopback

      # device: eth0
      auto eth0
      iface eth0 inet static
          address 175.219.59.209
          gateway 175.219.59.193
          netmask 255.255.255.224
          post-up echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp

    And I've got two working subnet solutions. The first:

      auto vmbr0
      iface vmbr0 inet static
          address 10.10.0.1
          netmask 255.255.0.0
          bridge_ports none
          bridge_stp off
          bridge_fd 0
          post-up ip route add 10.10.0.1/24 dev vmbr0

    This way I can reach the internet, resolve outside hosts, and update and download everything I need, but I can't reach one guest VM from any other VM inside my network. The second solution lets the VMs communicate with each other:

      auto vmbr1
      iface vmbr1 inet static
          address 10.10.0.1
          netmask 255.255.255.0
          bridge_ports none
          bridge_stp off
          bridge_fd 0
          post-up echo 1 > /proc/sys/net/ipv4/ip_forward
          post-up iptables -t nat -A POSTROUTING -s '10.10.0.0/24' -o vmbr1 -j MASQUERADE
          post-down iptables -t nat -D POSTROUTING -s '10.10.0.0/24' -o vmbr1 -j MASQUERADE

    I can even NAT internal addresses:

      iptables -t nat -I PREROUTING -p tcp --dport 789 -j DNAT --to-destination 10.10.0.220:345

    My inexperienced mind is ready to give each VM two network adapters — one for the first solution and another for the second (with slightly different addresses) — but I'm pretty sure that's a dumb way to solve the problem and everything can be done via iptables/ip route rules that I haven't managed to create. I've tried a dozen "wizard manuals" and "howtos" to mix both solutions, without success. Looking for advice (and good reading links for networking beginners).
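    One avenue that might cover both needs at once (a minimal sketch, untested on Proxmox; it assumes all guests sit on a single bridge carrying 10.10.0.0/24 and that eth0 is the uplink): keep the guests on one bridge so they can reach each other, and masquerade their subnet out through the physical interface rather than out through the bridge itself:

      echo 1 > /proc/sys/net/ipv4/ip_forward
      iptables -t nat -A POSTROUTING -s 10.10.0.0/24 -o eth0 -j MASQUERADE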

    Read the article

  • Hadoop init script asks for a password

    - by Ramesh
    I have installed Hadoop on my Ubuntu 12.04 single node. I am trying to use an init script so that Hadoop runs on start-up, but it asks for a password every time I execute it.

      #!/bin/sh
      ### BEGIN INIT INFO
      # Provides:          hadoop services
      # Required-Start:    $network
      # Required-Stop:     $network
      # Default-Start:     2 3 4 5
      # Default-Stop:      0 1 6
      # Description:       Hadoop services
      # Short-Description: Enable Hadoop services including hdfs
      ### END INIT INFO

      PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
      HADOOP_BIN=/home/naveen/softwares/hadoop-1.0.3/bin
      NAME=hadoop
      DESC=hadoop
      USER=naveen
      ROTATE_SUFFIX=
      test -x $HADOOP_BIN || exit 0
      RETVAL=0
      set -e
      cd /

      start_hadoop () {
          set +e
          su $USER -s /bin/sh -c $HADOOP_BIN/start-all.sh > /var/log/hadoop/startup_log
          case "$?" in
              0) echo SUCCESS
                 RETVAL=0 ;;
              1) echo TIMEOUT - check /var/log/hadoop/startup_log
                 RETVAL=1 ;;
              *) echo FAILED - check /var/log/hadoop/startup_log
                 RETVAL=1 ;;
          esac
          set -e
      }

      stop_hadoop () {
          set +e
          if [ $RETVAL = 0 ] ; then
              su $USER -s /bin/sh -c $HADOOP_BIN/stop-all.sh > /var/log/hadoop/shutdown_log
              RETVAL=$?
              if [ $RETVAL != 0 ] ; then
                  echo FAILED - check /var/log/hadoop/shutdown_log
              fi
          else
              echo No nodes running
              RETVAL=0
          fi
          set -e
      }

      restart_hadoop() {
          stop_hadoop
          start_hadoop
      }

      case "$1" in
          start)
              echo -n "Starting $DESC: "
              start_hadoop
              echo "$NAME." ;;
          stop)
              echo -n "Stopping $DESC: "
              stop_hadoop
              echo "$NAME." ;;
          force-reload|restart)
              echo -n "Restarting $DESC: "
              restart_hadoop
              echo "$NAME." ;;
          *)
              echo "Usage: $0 {start|stop|restart|force-reload}" >&2
              RETVAL=1 ;;
      esac
      exit $RETVAL

    Please tell me how to run Hadoop without entering a password.
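    One likely factor (a suggestion, not a confirmed diagnosis): "su naveen" only prompts for a password when the caller is not root; at boot the script runs as root, so no prompt appears. For manual testing, a sketch along these lines should behave the same way:

      sudo /etc/init.d/hadoop start
      # or, inside the script, replace su with sudo so a root caller never prompts:
      # sudo -u "$USER" /bin/sh -c "$HADOOP_BIN/start-all.sh" > /var/log/hadoop/startup_log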

    Read the article

  • Under FreeBSD, can a VLAN interface have a smaller MTU than the primary interface?

    - by larsks
    I have a system with two physical interfaces, combined into a LACP aggregation group. That LACP channel has two VLANs, one untagged (the "native VLAN") and one using VLAN tagging. This gives us:

      lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
             options=19b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4>
             ether 00:25:90:1d:fe:8e
             inet 10.243.24.23 netmask 0xffffff00 broadcast 10.243.24.255
             media: Ethernet autoselect
             status: active
             laggproto lacp
             laggport: em1 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>
             laggport: em0 flags=1c<ACTIVE,COLLECTING,DISTRIBUTING>

      vlan0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
             options=3<RXCSUM,TXCSUM>
             ether 00:25:90:1d:fe:8e
             inet 10.243.16.23 netmask 0xffffff80 broadcast 10.243.16.127
             media: Ethernet autoselect
             status: active
             vlan: 610 parent interface: lagg0

    Is it possible to set a 9K MTU on lagg0 while preserving the 1500-byte MTU on vlan0? Normally I would simply try this out, but this is actually a vendor-supported platform and I am loath to make changes "behind the back" of their administration interface. This system is roughly FreeBSD 7.3.
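    For reference, if it ever becomes possible to test outside the vendor's tooling, the commands themselves would just be the following (a sketch only; the interface names come from the output above and the jumbo value is an assumption):

      ifconfig lagg0 mtu 9000
      ifconfig vlan0 mtu 1500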

    Read the article

  • Percona Server times out on /etc/init.d/mysql start

    - by geekmenot
    Every time I start MySQL using /etc/init.d/mysql start or service mysql start, it times out:

      * Starting MySQL (Percona Server) database server mysqld    [fail]

    However, I can get into MySQL afterwards. I just want to know whether there is a problem with the install, because it happens every time, not as a one-off error. mysql-error.log shows:

      121214 11:25:56 mysqld_safe Starting mysqld daemon with databases from /data/mysql/
      121214 11:25:56 [Note] Plugin 'FEDERATED' is disabled.
      121214 11:25:56 InnoDB: The InnoDB memory heap is disabled
      121214 11:25:56 InnoDB: Mutexes and rw_locks use GCC atomic builtins
      121214 11:25:56 InnoDB: Compressed tables use zlib 1.2.3
      121214 11:25:56 InnoDB: Using Linux native AIO
      121214 11:25:56 InnoDB: Initializing buffer pool, size = 14.0G
      121214 11:25:58 InnoDB: Completed initialization of buffer pool
      121214 11:26:01 InnoDB: Waiting for the background threads to start
      121214 11:26:02 Percona XtraDB (http://www.percona.com) 1.1.8-rel29.2 started; log sequence number 9333955393950
      121214 11:26:02 [Note] Server hostname (bind-address): '0.0.0.0'; port: 3306
      121214 11:26:02 [Note]   - '0.0.0.0' resolves to '0.0.0.0';
      121214 11:26:02 [Note] Server socket created on IP: '0.0.0.0'.
      121214 11:26:02 [Note] Slave SQL thread initialized, starting replication in log 'mysql-bin.005163' at position 624540946, relay log '/data/mysql/mysql-relay-bin.000043' position: 624541092
      121214 11:26:02 [Note] Slave I/O thread: connected to master '[email protected]:3306', replication started in log 'mysql-bin.005180' at position 823447620
      121214 11:26:02 [Note] Event Scheduler: Loaded 0 events
      121214 11:26:02 [Note] /usr/sbin/mysqld: ready for connections.
      Version: '5.5.28-29.2-log'  socket: '/data/mysql/mysql.sock'  port: 3306  Percona Server (GPL), Release 29.2
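    One detail worth checking (an assumption, not a confirmed diagnosis): the Debian/Ubuntu init script decides success by pinging the server through the socket named in /etc/mysql/debian.cnf, and the log above shows mysqld creating its socket under /data/mysql/. If the two paths disagree, that ping never succeeds and the script reports [fail] even though the server is up. A quick check:

      grep -i socket /etc/mysql/my.cnf /etc/mysql/debian.cnf
      mysqladmin --defaults-file=/etc/mysql/debian.cnf ping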

    Read the article

  • How to reproduce the behavior of Mac OS X's dead keys on Windows 7?

    - by Pascal Qyy
    I'm French, but I've chosen a QWERTY keyboard for my MacBook Pro for several reasons: first of all, the AZERTY keyboard is not at all ergonomic because it has no numeric keypad, and I must use Shift (MAJ) or CAPS LOCK to access the numeric keys; secondly, I bought this Mac for development, and characters such as {, }, etc. are not directly accessible on the Apple AZERTY keyboard; the last thing is that the diacritics are VERY easy to produce on an Apple keyboard with Mac OS X: Option + c for a ç, for example, and many dead keys are easy to use (e.g. Option + e, then e, gives you an é). So I have no difficulty writing in my native language with this keyboard under Mac OS X. BUT when I boot the Windows 7 Boot Camp partition, or when I use applications from it through VMware Unity, it is no longer the same comfort! Without a numeric keypad, it's impossible to use Alt codes for special characters (e.g. Alt + 0231 for ç). I've tried many solutions, like auto-replacement in Microsoft Office (e.g. ,,c being replaced by ç), but for all my diacritics I must type a space and then a backspace before the replacement works. I've also tried third-party software, such as Texter, but it is very buggy and doesn't work properly (or doesn't work at all) in many cases! So, is there a solution somewhere to get Mac OS X's nice and comfortable way of producing diacritics on Windows 7? Thanks in advance for your help and your time!

    Read the article

  • Fresh 12.04 install - MySQL not starting

    - by Lee Armstrong
    I have a freshly installed Ubuntu 12.04 x64 server and I installed Percona Server from their official repositories. The trouble is that it will not start! mysql-error.log shows nothing obvious:

      121129 12:16:54 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql/
      121129 12:16:54 [Note] Plugin 'FEDERATED' is disabled.
      121129 12:16:54 InnoDB: The InnoDB memory heap is disabled
      121129 12:16:54 InnoDB: Mutexes and rw_locks use GCC atomic builtins
      121129 12:16:54 InnoDB: Compressed tables use zlib 1.2.3
      121129 12:16:54 InnoDB: Using Linux native AIO
      121129 12:16:54 InnoDB: Initializing buffer pool, size = 12.0G
      121129 12:16:54 InnoDB: Completed initialization of buffer pool
      121129 12:16:54 InnoDB: highest supported file format is Barracuda.
      121129 12:16:55 InnoDB: Waiting for the background threads to start
      121129 12:16:56 Percona XtraDB (http://www.percona.com) 1.1.8-rel29.1 started; log sequence number 1598476
      121129 12:16:56 [Note] Server hostname (bind-address): '0.0.0.0'; port: 3306
      121129 12:16:56 [Note]   - '0.0.0.0' resolves to '0.0.0.0';
      121129 12:16:56 [Note] Server socket created on IP: '0.0.0.0'.
      121129 12:16:56 [Note] Event Scheduler: Loaded 0 events
      121129 12:16:56 [Note] /usr/sbin/mysqld: ready for connections.
      Version: '5.5.28-29.1-log'  socket: '/var/run/mysqld/mysql.sock'  port: 3306  Percona Server (GPL), Release 29.1
      121129 12:16:56 [Note] Event Scheduler: scheduler thread started with id 1

    And the syslog shows:

      Nov 29 12:17:07 V-PF-SQL1 /etc/init.d/mysql[2206]: 0 processes alive and '/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf ping' resulted in
      Nov 29 12:17:07 V-PF-SQL1 /etc/init.d/mysql[2206]: #007/usr/bin/mysqladmin: connect to server at 'localhost' failed
      Nov 29 12:17:07 V-PF-SQL1 /etc/init.d/mysql[2206]: error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)'
      Nov 29 12:17:07 V-PF-SQL1 /etc/init.d/mysql[2206]: Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!

    The socket file is being created, and I can access the server without using the socket via mysql -h 127.0.0.1 -P 3306 -u root -pPASSWORD
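    Note the mismatch visible above: mysqld creates /var/run/mysqld/mysql.sock, while the init script's ping looks for /var/run/mysqld/mysqld.sock. A quick check along those lines (a suggestion, not a verified fix) would be to make the socket path consistent across the configuration:

      grep -ri socket /etc/mysql/my.cnf /etc/mysql/conf.d/ /etc/mysql/debian.cnf
      # then align the [mysqld] and [client] socket settings, e.g. socket = /var/run/mysqld/mysqld.sock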

    Read the article

  • Windows 7 Install: No drives were found

    - by Albert Bori
    I was building a computer for my wife with an older SATA hard drive that I had lying around, and when attempting to do a new install of Windows 7 on it, the installer says: "No drives were found. Click Load Driver to provide a mass storage driver for installation." I ran the diskpart command list volume, and the drive showed up as "Raw". So I formatted it to NTFS, and then it showed up as a healthy drive in diskpart. I also ran check disk on it with no errors. The Windows 7 installer STILL can't find the drive. As far as BIOS settings go, I have tried "Native IDE", AHCI, and the mixed AHCI/IDE mode (SATA slots 0-2 AHCI, 3-4 IDE). I tried all combinations... still "no drives were found". At this point I'm just scratching my head. Using the installation command prompt I can see and talk to the drive just fine, but the installer doesn't see it at all. I've even written folders and files to the drive, and it still "can't be seen". Any help would be great. Items of interest:

      Motherboard model: Gigabyte GA-A75M-UD2H - BIOS Version F5 (latest)
      Hard drive model: 80GB Seagate Barracuda 7200.7 ST380817AS (no other drives)
      Installing Windows 7 using a FAT32-formatted USB drive, which I've used for other installs
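    Since the drive previously carried data, one avenue worth trying (a sketch, and it assumes everything on the disk is disposable) is to wipe the partition table from the installer's command prompt (Shift+F10) so Setup sees clean, unallocated space:

      diskpart
      list disk
      select disk 0     # assumption: the Seagate is disk 0 - check "list disk" output first
      clean
      exit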

    Read the article

  • Implications of Multiple JobTracker nodes in a Hadoop cluster?

    - by Jim Dennis
    I get the impression that one can, potentially, have multiple JobTracker nodes configured to share the same set of MR (TaskTracker) nodes. I know that, conventionally, all the nodes in a Hadoop cluster should have the same set of configuration files (conventionally under /etc/hadoop/conf/) — at least for the Cloudera Distribution of Hadoop (CDH). Can we define multiple JobTrackers in mapred-site.xml? Something like:

      <configuration>
        <property>
          <name>mapred.job.tracker</name>
          <value>jt01.mydomain.not:8021</value>
        </property>
        <property>
          <name>mapred.job.tracker</name>
          <value>jt02.mydomain.not:8021</value>
        </property>
        ...
      </configuration>

    Or is there some other allowed syntax for this? What are the implications of doing it? Does each JobTracker get information about the load on each TaskTracker node? In other words, can the two JobTrackers coordinate their scheduling across the TT nodes based only on the gossip information from the TTs, or would they need to talk to one another? Is this documented anywhere?

    Read the article

  • Proxychains, Tortunnel, Privoxy: cannot connect() to port

    - by Benjamin
    Hi all, I'm trying to do an nmap scan through Tor using tortunnel, privoxy and proxychains, as explained in the following video: http://vimeo.com/6238958. I'm getting rather weird results. I can successfully perform any SYN scan on any port. However, as soon as I try connect() scans, proxychains cannot connect to all ports. In other words, I can perform connect() scans to port 80:

      proxychains nmap -P0 -A -sV www.zzz.com -p80

    but not to port 21:

      proxychains nmap -P0 -A -sV www.zzz.net -p21

    I get the following error:

      Starting Nmap 4.62 ( http://nmap.org ) at 2010-06-02 08:34 UTC
      ProxyChains-2.1 (http://proxychains.sf.net)
      random chain (1):....127.0.0.1:5060....can't connect to..113.I2.1W1.YY:21
      random chain (1):....127.0.0.1:5060....can't connect to..113.I2.1W1.YY:21
      random chain (1):....127.0.0.1:5060....can't connect to..113.I2.1W1.YY:21
      random chain (1):....127.0.0.1:5060....can't connect to..113.I2.1W1.YY:21
      random chain (1):....127.0.0.1:5060....can't connect to..113.I2.1W1.YY:21
      random chain (1):....127.0.0.1:5060....can't connect to..113.I2.1W1.YY:21

    My only guess would be that the exit node I'm using does not allow connections to port 21. Would that be correct? How could I fix it? Thanks for your time.
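    Two things worth checking (suggestions, not verified for this setup): raw-socket SYN scans bypass proxychains entirely, since it only hooks connect() calls, so those results say nothing about the exit node; and the exit's policy for port 21 can be probed directly with a plain TCP connect through the same chain:

      proxychains nc -vz www.zzz.net 21    # FTP - fails if the exit policy rejects port 21
      proxychains nc -vz www.zzz.net 80    # HTTP - control case that is known to work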

    Read the article

  • Converting PDF eBooks into a Kindle format

    - by Ender
    Over the past couple of years I've amassed quite a collection of guides, tutorials and ebooks in PDF format. A lot of these are quite useful for work, especially PDF documentation, and rather than have to be at a computer every time I want to read how to do something in Sitecore, or to read through a software testing ebook, I'd like to do it on my brand-spanking-new Kindle. However, even though there is now a native PDF reader on the Kindle, due to the nature of PDFs they are practically unreadable. The text doesn't wrap, because of how PDFs are laid out, and so far, after a bunch of Google searches, I've yet to find a viable way to convert my PDFs into a readable Kindle format. Sometimes these books have code or pictures/tables in them, but most of the time they're text-heavy, and to be honest I'd be surprised if there wasn't a free tool to handle converting PDF to one of the (seemingly many) Kindle formats. So, can anyone help me out with this? EDIT: I've tried Calibre, and have checked their forums to play with some of the advanced settings, yet the solutions available seem to be extremely poor, especially if the book you're attempting to read contains equations, code, or anything outside of plain text. I've also tried Amazon's conversion service, which wasn't much help with such documents. The best way I have found so far is to rebuild the entire thing in ePub or RTF format and convert to MOBI from there. This works for text-heavy books with tables, but anything technical still isn't covered. Can anyone help with this?
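    For the command-line route, Calibre's converter can be driven directly; a minimal sketch (the file names are placeholders, and results on code- or equation-heavy PDFs will still vary):

      ebook-convert input.pdf output.mobi --enable-heuristics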

    Read the article

  • Debian x86_64 + Nginx + PHP5-FPM optimization

    - by user55859
    I used to have a VPS (512MB) from Linode and I was running nginx + php5-fpm (which comes with PHP 5.3.3) on Debian Lenny (i686). The total memory usage was about 90-100MB. Now I have another VPS (different hosting company) and I also run nginx + php5-fpm on Debian Lenny (x86_64). The system is 64-bit, so the memory usage is higher now, about 210-230MB, which I think is too much. Here is my php5-fpm.conf:

      pm = dynamic
      pm.max_children = 5
      pm.start_servers = 2
      pm.min_spare_servers = 2
      pm.max_spare_servers = 5
      pm.max_requests = 300

    That's what the top command tells me:

      top - 15:36:58 up 3 days, 16:05, 1 user, load average: 0.00, 0.00, 0.00
      Tasks: 209 total, 1 running, 208 sleeping, 0 stopped, 0 zombie
      Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 99.9%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st
      Mem: 532288k total, 469628k used, 62660k free, 28760k buffers
      Swap: 1048568k total, 408k used, 1048160k free, 210060k cached

        PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
      22806 www-data  20   0  178m  67m  31m S    1 13.1  0:05.02 php5-fpm
       8980 mysql     20   0  241m  55m 7384 S    0 10.6  2:42.42 mysqld
      22807 www-data  20   0  162m  43m  22m S    0  8.3  0:04.84 php5-fpm
      22808 www-data  20   0  160m  41m  23m S    0  8.0  0:04.68 php5-fpm
      25102 www-data  20   0  151m  30m  21m S    0  5.9  0:00.80 php5-fpm
      10849 root      20   0 44100 8352 1808 S    0  1.6  0:03.16 munin-node
      22805 root      20   0  145m 4712 1472 S    0  0.9  0:00.16 php5-fpm
      21859 root      20   0 66168 3248 2540 S    1  0.6  0:00.02 sshd
      21863 root      20   0 66028 3188 2548 S    0  0.6  0:00.06 sshd
       3956 www-data  20   0 31756 3052  928 S    0  0.6  0:06.42 nginx
       3954 www-data  20   0 31712 3036  928 S    0  0.6  0:06.74 nginx
       3951 www-data  20   0 31712 3008  928 S    0  0.6  0:06.42 nginx
       3957 www-data  20   0 31688 2992  928 S    0  0.6  0:06.56 nginx
       3950 www-data  20   0 31676 2980  928 S    0  0.6  0:06.72 nginx
       3955 www-data  20   0 31552 2896  928 S    0  0.5  0:06.56 nginx
       3953 www-data  20   0 31552 2888  928 S    0  0.5  0:06.42 nginx
       3952 www-data  20   0 31544 2880  928 S    0  0.5  0:06.60 nginx

    So, the question is: is there any way to use less memory? By the way, I have 16 cores and it would be nice to make use of them...
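    A rough way to see what the FPM workers really cost (a sketch; note that the RES column includes memory shared between workers, so simply summing it overstates actual usage):

      ps -o pid,rss,cmd -C php5-fpm
      # if per-worker RSS is legitimate, the usual levers are lowering pm.max_children /
      # pm.max_spare_servers, or recycling workers more aggressively via pm.max_requests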

    Read the article

  • Creating security permissions for a non-domain-member user in Windows Server 2008

    - by Overhed
    Hello everyone, I apologize in advance for incorrect use of terminology, as I'm not an IT person by trade. I'm doing some remote work via a VPN for a client, and I need to add some DCOM service security permissions for my remote user. Even though I'm on the VPN, the request for access to the DCOM service uses my PC's local user (and since I'm running Vista Home Premium it looks something like PC-NAME\Username). The request for access comes back as denied, and I cannot add this user to the security permissions because it "is not from a domain listed in the Select Location dialog box, and is therefore not valid". I'm pretty stuck and have no clue what steps I need to take here. Any help would be appreciated; thanks in advance. EDIT: I have no control over what credentials my computer passes to the server. This scenario occurs in an installation wizard that has a section asking you to point it to the machine running the "server" version of the software I'm installing (it then tries to invoke the relevant COM service, but my user does not have "Remote Activation" permissions on that service, so I get request denied).

    Read the article

  • How can I make IPv6 on OpenVPN work using a tap device?

    - by Lekensteyn
    I've managed to set up OpenVPN for full IPv4 connectivity using tap0. Now I want to do the same for IPv6. Addresses and network setup (note that my real prefix is replaced by 2001:db8):

      2001:db8::100:0:0/96      my assigned IPv6 range
      2001:db8::100:abc:0/112   OpenVPN IPv6 range
      2001:db8::100:abc:1       tap0 server side (set as gateway on client)
      2001:db8::100:abc:2       tap0 client side
      2001:db8::1:2:3:4         gateway for server

      Home laptop (tap0: 2001:db8::100:abc:2/112, gateway 2001:db8::100:abc:1/112)
        (running Kubuntu 10.10; OpenVPN 2.1.0-3ubuntu1)
          | wifi
        router
          | INTERNET (OpenVPN tunnel)
          | eth0 / tap0
      VPS (eth0: 2001:db8::1:2:3:4/64, gateway 2001:db8::1) (tap0: 2001:db8::100:abc:1/112)
        (running Debian 6; OpenVPN 2.1.3-2)

    The server has both native IPv4 and IPv6 connectivity; the client has only IPv4. I can ping6 to and from my server over OpenVPN, but not to other machines (for example, ipv6.google.com). Using tcpdump on both the server and the client, I can see that packets are actually transferred over tap0 to eth0. The router (2001:db8::1) sends a neighbor solicitation for the client (2001:db8::100:abc:2) to eth0 after it receives the ICMP6 echo-request. The server does not respond to that solicitation, which causes the ICMP6 echo-request not to be routed to the destination. How can I make this IPv6 connection work?
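    Since the router is looking for the client's address on the server's eth0, one approach (a sketch, untested here) is to have the server answer those solicitations itself by enabling forwarding plus NDP proxying for the tunnelled address:

      sysctl -w net.ipv6.conf.all.forwarding=1
      sysctl -w net.ipv6.conf.eth0.proxy_ndp=1
      ip -6 neigh add proxy 2001:db8::100:abc:2 dev eth0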

    Read the article

  • Play framework 2.2 using Upstart 1.5 (Ubuntu 12.04)

    - by Leon Radley
    I'm trying to get Play 2.2 working with Upstart. I've been running Play 2.x with Upstart since its release and it's never been a problem, but since the release of 2.2 and the change to http://www.scala-sbt.org/sbt-native-packager/, Play doesn't want to start any more. Here's the config I'm using:

      description "PlayFramework 2.2"
      version "2.2"

      env APP=myapp
      env USER=myuser
      env GROUP=www-data
      env HOME=/home/myuser/app
      env PORT=9000
      env ADDRESS=127.0.0.1
      env CONFIG=production.conf
      env JAVAOPTS="-J-Xms128M -J-Xmx512m -J-server"

      start on runlevel [2345]
      stop on runlevel [06]
      respawn
      respawn limit 10 5
      expect daemon

      # If you want the upstart script to build play with sbt
      pre-start script
          chdir $HOME
          sbt clean compile stage -mem $SBTMEM
      end script

      exec start-stop-daemon --pidfile ${HOME}/RUNNING_PID --chuid $USER:$GROUP --exec ${HOME}/bin/${APP} --background --start -- -Dconfig.resource=$CONFIG -Dhttp.address=$ADDRESS -Dhttp.port=$PORT $JAVAOPTS

    I've changed JAVAOPTS to include the -J- prefix, and I've also changed the path to use the new start script located in the bin/ directory. I've read that Upstart 1.4 has setuid and setgid. I've tried removing start-stop-daemon but I haven't got that working either. Any suggestions would be appreciated.
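    A first step that often narrows this down (just a debugging sketch, with paths taken from the job above) is to run the generated start script by hand as the target user, so errors that Upstart would swallow become visible, and to check for a stale PID file, which also blocks startup:

      sudo -u myuser /home/myuser/app/bin/myapp -Dconfig.resource=production.conf \
          -Dhttp.address=127.0.0.1 -Dhttp.port=9000 -J-Xms128M -J-Xmx512m
      ls -l /home/myuser/app/RUNNING_PID   # a leftover RUNNING_PID prevents Play from starting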

    Read the article

  • dom0 enable IPv6 for guests

    - by user98651
    I am looking at deploying IPv6 to my virtual machines. Right now I have v6 working great on the dom0 using a 6in4 tunnel provided by Hurricane Electric, as I do not have native v6. However, I would like to distribute some of the /48 I am receiving to the domUs (a /64 per machine would be ideal, but I am open to your suggestions). Static configuration on the domU side is fine. All I want to accomplish is getting the traffic to pass through the dom0 to the domU. To say the least, I'm still trying to wrap my head around all the virtual interfaces and bridges Xen creates. Yes, I have Googled around for this a bit and have not found anything great. I tried using two "vif-route6" bash scripts with no luck (possibly due to my ignorance of Xen networking). I am still stuck (mainly on how to configure the dom0). I would imagine this problem is relatively easy to solve, and I look forward to your suggestions and help! Edited to clarify my end goal: getting IPv6 to the domU guests. I am completely open to suggestions, but am hoping for something other than setting up a tunnel for every guest.
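    A minimal routed sketch of the dom0 side (untested; the /64 and the vif name are placeholders — substitute a subnet carved out of the delegated /48 and the actual guest vif once it exists):

      sysctl -w net.ipv6.conf.all.forwarding=1
      ip -6 addr add 2001:db8:abcd:1::1/64 dev vif1.0
      # the domU then statically configures e.g. 2001:db8:abcd:1::2/64 with ::1 as its gateway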

    Read the article

  • OpenVPN - stuck on connecting

    - by user224277
    I've got a problem with an OpenVPN server... every time I try to connect to the VPN, I get a window with login and password boxes, so I type my login and password (login = the Common Name (user1), and the password is the challenge password from the client certificate). Logs:

      Jun 7 17:03:05 test ovpn-openvpn[5618]: Authenticate/Decrypt packet error: packet HMAC authentication failed
      Jun 7 17:03:05 test ovpn-openvpn[5618]: TLS Error: incoming packet authentication failed from [AF_INET]80.**.**.***:54179

    Client.ovpn:

      client
      #dev tap
      dev tun
      #proto tcp
      proto udp
      remote [Server IP] 1194
      resolv-retry infinite
      nobind
      persist-key
      persist-tun
      ca ca.crt
      cert user1.crt
      key user1.key
      <tls-auth>
      -----BEGIN OpenVPN Static key V1-----
      d1e0...
      -----END OpenVPN Static key V1-----
      </tls-auth>
      ns-cert-type server
      cipher AES-256-CBC
      comp-lzo yes
      verb 0
      mute 20

    My openvpn.conf:

      port 1194
      #proto tcp
      proto udp
      #dev tap
      dev tun
      #dev-node MyTap
      ca /etc/openvpn/keys/ca.crt
      cert /etc/openvpn/keys/VPN.crt
      key /etc/openvpn/keys/VPN.key
      dh /etc/openvpn/keys/dh2048.pem
      server 10.8.0.0 255.255.255.0
      ifconfig-pool-persist ipp.txt
      #push "route 192.168.5.0 255.255.255.0"
      #push "route 192.168.10.0 255.255.255.0"
      keepalive 10 120
      tls-auth /etc/openvpn/keys/ta.key 0
      #cipher BF-CBC        # Blowfish
      #cipher AES-128-CBC   # AES
      #cipher DES-EDE3-CBC  # Triple-DES
      comp-lzo
      #max-clients 100
      #user nobody
      #group nogroup
      persist-key
      persist-tun
      status openvpn-status.log
      #log openvpn.log
      #log-append openvpn.log
      verb 3

    sysctl:

      net.ipv4.ip_forward=1
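    One mismatch visible above (an observation, not a certain diagnosis): the server uses tls-auth with key direction 0, while the client's inline <tls-auth> block declares no direction at all, and mismatched key directions produce exactly this "packet HMAC authentication failed" error. A one-line sketch of the complementary client setting:

      echo "key-direction 1" >> Client.ovpn    # complements the server's "tls-auth ... 0"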

    Read the article

  • TCP Keepalive and firewall killing idle sessions

    - by Carlos A. Ibarra
    At a customer site, the network team added a firewall between the client and the server. This is causing idle connections to get disconnected after about 40 minutes of idle time. The network people say that the firewall doesn't have any idle connection timeout, but the fact is that the idle connections get broken. To get around this, we first configured the server (a Linux machine) with TCP keepalives turned on, with tcp_keepalive_time=300, tcp_keepalive_intvl=300, and tcp_keepalive_probes=30000. This works, and the connections stay viable for days or more. However, we would also like the server to detect dead clients and kill the connection, so we changed the settings to time=300, intvl=180, probes=10, thinking that if the client was indeed alive, the server would probe every 300s (5 minutes), the client would respond with an ACK, and that would keep the firewall from seeing this as an idle connection and killing it. If the client was dead, after 10 probes the server would abort the connection. To our surprise, the idle but alive connections get killed after about 40 minutes, as before. Wireshark running on the client side shows no keepalives at all between the server and client, even when keepalives are enabled on the server. What could be happening here? If the keepalive settings on the server are time=300, intvl=180, probes=10, I would expect that if the client is alive but idle, the server would send keepalive probes every 300 seconds and leave the connection alone, and if the client is dead, it would send one after 300 seconds, then 9 more probes every 180 seconds before killing the connection. Am I right? One possibility is that the firewall is somehow intercepting the keepalive probes from the server and failing to pass them on to the client, and the fact that it got a probe makes it think the connection is active. Is this common behavior for a firewall? We don't know what kind of firewall is involved. The server is a Teradata node and the connection is from a Teradata client utility to the database server, port 1025 on the server side, but we have seen the same problem with an SSH connection, so we think it affects all TCP connections.
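    One more variable worth ruling out (a suggestion beyond the original diagnosis): the kernel's tcp_keepalive_* settings only apply to sockets on which the application has actually enabled SO_KEEPALIVE, and the timer state can be inspected on the server:

      ss -o state established '( sport = :1025 )'
      # connections with keepalive enabled show a timer:(keepalive,...) column in the output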

    Read the article

  • Setup: Eclipse in Ubuntu with Apache2 and Subversion

    - by Ricalsin
    Trying to set up Eclipse. I am running Ubuntu 10.10 (Maverick), Apache 2.2.16 and Subversion 1.6.12. The Eclipse Help > About > Installed Software dialog says:

      Eclipse Platform 3.5.2
      Subclipse 1.0.0
      Version Control with Subversion 1.1.1

    The Subclipse wiki I followed is here. I have installed the libsvn-java package as discussed. I added the line "-Djava.library.path=/usr/lib/jni" to the eclipse.ini file. I checked the Eclipse Help > About > Configuration settings and both of these lines are listed:

      eclipse.vmargs=-Djava.library.path=/usr/lib/jni
      java.library.path=/usr/lib/jni

    I checked that those files are in those directories. Still, when I check Preferences > Team > SVN, an error dialog shows:

      Failed to load JavaHL Library.
      These are the errors that were encountered:
      no libsvnjavahl.1 in java.library.path
      Incompatible JavaHL library loaded 1.3.x or later required

    I followed the "Testing JavaHL libraries" troubleshooting section at the bottom of the wiki: I downloaded the tarball and ran it in a folder on my desktop with no problems. Then, following the instructions, I placed that file INSIDE the path (/usr/lib/jni/testJavaHL) and ran it from there. There are 50 tests performed, and each one of them came back with this same error (posting only one for brevity):

      50) testCommitRevprops(org.tigris.subversion.javahl.BasicTests)java.io.FileNotFoundException: /usr/lib/jni/testJavaHL/local_tmp/greek_files/iota (No such file or directory)
          at java.io.FileOutputStream.open(Native Method)
          at java.io.FileOutputStream.<init>(FileOutputStream.java:209)
          at java.io.FileOutputStream.<init>(FileOutputStream.java:160)
          at org.tigris.subversion.javahl.WC.materialize(WC.java:70)
          at org.tigris.subversion.javahl.SVNTests.buildGreekFiles(SVNTests.java:303)
          at org.tigris.subversion.javahl.SVNTests.setUp(SVNTests.java:222)
          at org.tigris.subversion.javahl.RunTests.main(RunTests.java:111)
      FAILURES!!!
      Tests run: 50,  Failures: 0,  Errors: 50

    Any ideas as to how/why the "local_tmp/greek_files/iota" path gets appended to the directory? I assume that's my problem. I'm also having a problem with New Repository Location, as the directory location of my SVN repository is one level above the home directory, which is prepended to whatever I place in the dialog box, resulting in this error:

      svn: '/home/ricalsin/file:/home/svn' does not exist

    Thank you for any help.

    Read the article

  • Nginx proxy upstream cached?

    - by Julian H. Lam
    Attempting to resolve an issue that's been annoying me for a bit. I've distilled the symptoms into a set of reproducible steps: I have two sites, siteA and siteB. They are both Node.js applications running on different ports (for the sake of example, 4567 and 4568). Both applications have their own file in sites-available (plus a symlink from sites-enabled), which contain the directives proxy_pass http://node_siteA/ and proxy_pass http://node_siteB/ respectively, inside a location block. They also each have an upstream block (defined globally?):

      upstream node_siteA {
          server 127.0.0.1:4567;
      }
      upstream node_siteB {
          server 127.0.0.1:4568;
      }

    Site A and site B have nothing to do with each other. Yes, I am restarting (reloading, actually) nginx every time I make a change. If I take down site B and attempt to access it via the web, I am served site A. Why is this? Thoughts: other times, when I create a new site C, for example, nginx refuses to show me anything except "Welcome to nginx!" for about five minutes, which suggests a resolver timeout, perhaps? When I access site B after its config has been deleted and it sends me to site A, that sounds like nginx sending me to servers in a round-robin fashion...
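    It is worth verifying which server block actually catches the request once siteB's vhost is gone (a debugging sketch; the hostname is a placeholder). With no matching server_name, nginx falls back to the default server for that listen port, which would explain being handed site A:

      curl -sI -H "Host: siteB.example.com" http://127.0.0.1/ | head -n 5
      # marking the intended fallback explicitly avoids the surprise:
      #   listen 80 default_server;   (in the server block meant to catch unknown hosts)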

    Read the article

  • Upgrade SQL Server 2008 hardware

    - by John
    Forgive me if I'm not able to be totally clear here. It is not intentional; I'm a senior-level developer in a very small company having to act like a manager at the moment. Anyway, the story is that we have two older Dell servers with SQL Server 2008 Standard in a "cluster". I put that in quotes because I'm still not 100% clear on what that means. We have two brand-new blade servers and want to move the existing databases to the new hardware. OK, so here is the gotcha: we need to do this with little or no downtime. I'm being told that we can evict the passive node, then pull in one of the new servers. But I'm also being told that this is a dangerous step, because something could go wrong that would cause the cluster to fail, and then we would be left with nothing because the active server would not be able to come back up. Does anyone have any thoughts on how to handle this? I'm being told that the only way to ensure success is to have at least a day of downtime in which we bring up a new cluster on the new hardware and then migrate the databases one by one.

    Read the article

  • Find slow network nodes between two data centers

    - by 2called-chaos
    I've got a problem with syncing a large amount of data between two data centers. Both machines have a gigabit connection and are not fully utilized, but the fastest I am able to get is something between 6 and 10 Mbit — not acceptable! Yesterday I ran some traceroutes, which indicated huge load on a LEVEL3 router, but the problem has existed for weeks now and the high response time is gone (20 ms instead of 300 ms). How can I trace this to find the actual slow node? I thought about a traceroute with bigger packets, but will this work? In addition, this problem might not be related to one of our servers, as there are much higher transmission rates to other servers or clients. Actually, office -> server is faster than server <-> server! Any idea is appreciated ;) Update: we actually use rsync over ssh to copy the files. As encryption tends to have more bottlenecks, I tried an HTTP request, but unfortunately it is just as slow. We have an SLA with one of the data centers. They said they already tried to change the routing, because they say the problem is related to a cheap network that the traffic gets routed through. It is true that it routes through a "cheapnet", but only the other way around. Our direction goes through LEVEL3 and the other way goes through Lambdanet (which they say is not a good network). If I understood it correctly (I'm a networking intermediate), they simulated a longer path to force routing through LEVEL3, and they announce LEVEL3 in the AS path. I basically want to know if they're right, or if they're just trying to abdicate their responsibility. The thing is that the problem exists in both directions (while taking different routes), so I think it is the responsibility of our hoster. And honestly, I don't believe that there is a DC-to-DC connection which can only handle 600 kB/s - 1.5 MB/s for weeks! The question is how to detect WHERE this bottleneck is.
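    Two tools that usually localize this kind of per-hop loss and throughput cap (a sketch; the hostnames are placeholders):

      mtr --report --report-cycles 200 remote-dc.example.net    # per-hop loss and latency over time
      iperf -s                                                  # on the remote data-center host
      iperf -c remote-dc.example.net -t 30 -P 4                 # raw TCP throughput, 4 parallel streams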

    Read the article

  • High data on recv-q buffer and thread lock on java.io.BufferedInputStream in Linux

    - by Sagar Patel
    We have a Java application running on Linux (Ubuntu Server). We have been facing a high recv-q problem for quite some time. The application hangs and does not read data from the socket every few hours. In the thread dump, we have found the stack trace below:

      "Receiver-146" daemon prio=10 tid=0x00007fb3fc010000 nid=0x7642 runnable [0x00007fb5906c5000]
         java.lang.Thread.State: RUNNABLE
          at java.net.SocketInputStream.socketRead0(Native Method)
          at java.net.SocketInputStream.read(SocketInputStream.java:150)
          at java.net.SocketInputStream.read(SocketInputStream.java:121)
          at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
          at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
          at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
          - locked <0x00000007688f1ff0> (a java.io.BufferedInputStream)
          at org.smpp.TCPIPConnection.receive(TCPIPConnection.java:413)
          at org.smpp.ReceiverBase.receivePDUFromConnection(ReceiverBase.java:197)
          at org.smpp.Receiver.receiveAsync(Receiver.java:351)
          at org.smpp.ReceiverBase.process(ReceiverBase.java:96)
          at org.smpp.util.ProcessingThread.run(ProcessingThread.java:199)
          at java.lang.Thread.run(Thread.java:722)

    We are not able to trace the exact reason behind this; kindly help. We are using a 16-core machine and the load on the system is around 30-40 at the time of the issue. We use the command ss dst <ip> to find the recv-q. Recently we have been facing issues with the recv-q size getting stuck, where the receive buffer gets stuck at some point in time. The recv-q size is not decreasing, and as a result we are losing a lot of hits from the other side — our application is not accepting any data.
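    A way to watch the queue build up alongside the owning thread's state (a monitoring sketch; the peer address and the Java PID are placeholders):

      watch -n 5 "ss -ntp dst 203.0.113.10"         # Recv-Q/Send-Q per connection, with owning process
      jstack <java-pid> > /tmp/receiver-stack.txt   # capture the Receiver thread state at the same moment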

    Read the article

  • Strange sound behavior

    - by caarlos0
    First of all, sorry for the title; English isn't my native language, and I can't find a word to describe the strange behaviour I'm getting here. In the simplest terms, it's as if the sound keeps going down and then up again... Think of a kid with one of those old radios that have a round volume dial, and imagine the kid keeps turning the dial back and forth. That is the behaviour I'm getting here. At first I believed it was a PulseAudio issue, but it isn't. I followed the part of the wiki that I thought covered my problem, but it didn't work. After that, as I'm using XFCE, I didn't really need PulseAudio, so I removed it and stayed with plain ALSA, hoping that would fix my problem. Sweet mistake. It really looks like a kid looking for trouble. I believed it had worked, and then suddenly the same issue was back. BTW: I have a fully upgraded testing system (yeah, I upgraded to testing hoping for a newer PulseAudio version that would fix the issue), no PulseAudio at all, just XFCE, started with startxfce. What can I do to fix this? It's extremely annoying... sometimes I just want to throw my laptop at the wall because of it. Any extra info you need, please tell me. Thanks in advance. -- EDIT: My alsamixer looks like this: (screenshot) And here is a video with the sound behaviour.

    Read the article
