Search Results

Search found 15633 results on 626 pages for 'mysql cluster'.


  • Apache Web Server character encoding

    - by OBY
    I've recently transferred my webapp from my localhost (LH) to a VPS, and have had Hebrew character-encoding problems ever since. Whenever I send a request containing a Hebrew character, it ends up saved to the DB as "?????". My LH setup was Tomcat 6, MySQL, and CentOS 6.2, open to the web. In the VPS environment I'm behind an Apache web server, and the rest is much the same (though I haven't changed anything in its installation). Note that I already had this problem once before, on my LH, when the request was sent from IE/Chrome (not Firefox!); the solution then was to apply a filter on the context and set the character encoding to UTF-8. My webapp's content encoding is UTF-8, the MySQL server is set to utf8 using charset utf8;, and CentOS is set to iw_IL.UTF8 using export LANG=iw_IL.UTF8. When I run locale, the bash output looks correct. Any suggestions?
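
    When "?????" reaches the table, the bytes are usually being mangled on the way into MySQL rather than by Apache itself, so it is worth confirming that every hop negotiates UTF-8. A quick sketch for checking the server side (assuming shell access to the VPS and a stock MySQL install; the JDBC URL parameters in the comments are the standard Connector/J ones, not something taken from this particular setup):

        # What the server and the current session actually use
        mysql -u root -p -e "SHOW VARIABLES LIKE 'character_set%';"

        # If character_set_server/database are not utf8, set it in my.cnf and restart mysqld:
        #   [mysqld]
        #   character-set-server = utf8
        #
        # On the Tomcat side, the Connector/J URL usually needs the encoding spelled out:
        #   jdbc:mysql://localhost/mydb?useUnicode=true&characterEncoding=UTF-8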

    Read the article

  • XAMPP won't start because I'm already running MySQL on port 3306

    - by JellicleCat
    I'm trying to start XAMPP, but it fails with only the word 'Busy' in the Control Panel console. I ran xampp/install/portcheck.bat to see if the ports were available, and I see that port 3306 is busy (because I'm running MySQL as a service on it). I suppose that XAMPP wants to take over and run MySQL on that port. Can anyone tell me how to get XAMPP to just make use of the existing server and not try to control it itself?
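
    If the aim is simply to let XAMPP coexist with the MySQL service that is already installed, the two usual options are to stop the service while XAMPP runs, or to move XAMPP's bundled MySQL onto another port. A rough sketch (Windows commands; the service name "MySQL" and the XAMPP install path are assumptions and may differ on your machine):

        REM Option 1 - stop the existing Windows service before starting XAMPP
        net stop MySQL

        REM Option 2 - run XAMPP's bundled MySQL on a different port instead:
        REM edit C:\xampp\mysql\bin\my.ini and change "port = 3306" to e.g. "port = 3307"

    The Control Panel itself cannot adopt an external server as its MySQL module; if you want XAMPP's phpMyAdmin to talk to the already-running server, that is configured in phpMyAdmin's config.inc.php (host and port) rather than in the Control Panel.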

    Read the article

  • Clustering with vSphere and Apache load balancing

    - by Anonymous2011
    I was looking at vSphere to automatically load balance my Apache web servers and MySQL servers, but will it actually do the job? I know it says it does automatic load balancing, but I'm not sure that means what I want to achieve. Is there an easier way to set up a PHP/Apache and MySQL cluster within virtual machines? Or are there any guides to clustering? (I have tried googling without much luck.) Any help towards understanding and setting up clustering and load balancing within virtual machines is appreciated.
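
    It may help to separate the layers: vSphere's DRS balances virtual machines across physical hosts, but it does not distribute HTTP requests between two Apache VMs; that needs an application-level balancer in front of them. A minimal sketch of the HTTP side using Apache's own mod_proxy_balancer (the backend addresses are invented for illustration, and mod_proxy, mod_proxy_http and mod_proxy_balancer must be loaded):

        <Proxy balancer://webfarm>
            BalancerMember http://192.168.1.11:80
            BalancerMember http://192.168.1.12:80
        </Proxy>
        ProxyPass        / balancer://webfarm/
        ProxyPassReverse / balancer://webfarm/

    MySQL is a separate problem again; spreading database load normally means replication or MySQL Cluster rather than putting mysqld behind the same HTTP balancer.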

    Read the article

  • PHP and MySQL on IIS7: can't find php_mcrypt.dll in php.ini

    - by user46250
    I have installed PHP with the Microsoft Web Platform Installer, then installed MySQL. According to http://learn.iis.net/page.aspx/353/install-and-configure-mysql-for-php-applications-on-iis-7/ I have to uncomment the following lines by removing the semicolon: extension=php_mysqli.dll extension=php_mbstring.dll extension=php_mcrypt.dll. But there is no extension=php_mcrypt.dll line in the php.ini installed by Web PI. Should I add it by hand, and if so, where? And where should I check that php_mcrypt.dll actually exists? Nobody seems to know; would I be better off asking on a Microsoft forum?
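
    For what it's worth, a sketch of how that section of php.ini usually ends up (the extension_dir value shown is an assumption; use whatever path is already set in the php.ini that Web PI installed, and confirm php_mcrypt.dll really is in that folder):

        ; extensions are loaded from the directory named by extension_dir
        extension_dir = "C:\Program Files\PHP\ext"

        extension=php_mysqli.dll
        extension=php_mbstring.dll
        ; add this line by hand if it is missing, next to the other extension= lines
        extension=php_mcrypt.dll

    If php_mcrypt.dll is not present in that directory at all, the line on its own will not help; the DLL has to come from a Windows PHP build that matches your PHP version.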

    Read the article

  • mysqldump and wamp

    - by Adam
    I am running a WAMP server and trying to use mysqldump to back up a MySQL database I have. The following is the PHP code I am using to run mysqldump: exec("mysqldump backup -u$user -p$pass > $sql_file"); When I run the script the page just loads indefinitely and the backup is not created. A blank file is created, so I know something is happening. Extra info: exec() is not disabled, and PHP is not running in safe mode. Any ideas? Win XP, WAMP, MySQL 5.0.51b
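
    A blank file plus a page that never finishes often means the shell could not find mysqldump (the > redirection creates the empty file regardless) or that mysqldump is sitting waiting for input. One way to narrow it down is to run the same command by hand with an explicit path and capture the error output; the path below is only a guess based on the MySQL version mentioned and will need adjusting to the actual WAMP layout:

        C:\wamp\bin\mysql\mysql5.0.51b\bin\mysqldump.exe -u root -pSECRET backup > C:\backup\backup.sql 2> C:\backup\dump_errors.txt

    If that works from a command prompt, using the same full path inside exec() (with the backslashes escaped) is usually enough.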

    Read the article

  • Can't access phpMyAdmin because of host, username and password

    - by Engprof
    Hi everyone. When I try to access phpMyAdmin on Uniform Server I get the following error messages: "#1045 - Access denied for user 'root'@'localhost' (using password: YES)" and "phpMyAdmin tried to connect to the MySQL server, and the server rejected the connection. You should check the host, username and password in your configuration and make sure that they correspond to the information given by the administrator of the MySQL server." The funny thing is that my username and password are both set to "root", and I have changed the IP address in the httpd.conf file to my unique IP address, so I still don't know what the problem is. Could somebody please help me out? Any help would be much appreciated.
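
    Since the server itself is rejecting 'root'@'localhost', a useful first step is to find out which credentials MySQL really accepts from the command line, independently of phpMyAdmin. A quick sketch (assuming the bundled mysql client tools are on the PATH):

        # does root/root actually work at the server level?
        mysql -h 127.0.0.1 -u root -proot -e "SELECT USER(), CURRENT_USER();"

        # if not, set the root password explicitly (prompts for the current one),
        # then make phpMyAdmin's config.inc.php use exactly that username/password pair
        mysqladmin -h 127.0.0.1 -u root -p password "root"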

    Read the article

  • Is it safe to use up all memory on linux server, not leaving anything for the cache?

    - by Temnovit
    I have a CentOS server fully dedicated to MySQL 5.5 (with InnoDB tables mostly). The server has 32 GB RAM and SSD disks, and average memory usage looks like this (graph omitted): about 25 GB in use and about 6.5 GB cached. I am experiencing performance problems with WRITE queries, so I was wondering: is this the optimal cache size? I could increase the InnoDB buffer pool size, so that the Linux cache would become smaller, or decrease it so the cache would be bigger. What is the optimal used/cached memory balance for a busy MySQL server on Linux?
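
    For a box dedicated to InnoDB, the usual starting point is to give most of the RAM to the buffer pool and let InnoDB bypass the OS page cache for its data files, rather than keeping a large Linux cache. A minimal my.cnf sketch (the 24G figure is only an illustration for a 32 GB machine, not a tuned value):

        [mysqld]
        # let InnoDB cache data itself instead of relying on the Linux page cache
        innodb_buffer_pool_size = 24G
        # O_DIRECT avoids buffering the same pages twice (in InnoDB and in the OS cache)
        innodb_flush_method     = O_DIRECT

    Slow writes are also sensitive to innodb_log_file_size and innodb_flush_log_at_trx_commit, so the buffer pool split alone may not be the whole story.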

    Read the article

  • Upgrading Fedora to Current Version - Dependent Applications Concerns

    - by nicorellius
    Is it possible to upgrade to Fedora 11 and still keep the server functional (it's running Fedora Core release 6 now, with MySQL 5.0, PHP 5.1 and Apache 2.2)? I have a Customer Relationship Management system on there and it is needed on a daily basis. I am trying to figure out whether I should upgrade (and whether that will cause huge problems) or whether I should start over with a new OS (like Ubuntu) and install the CRM fresh, with updated versions of MySQL, PHP, Apache, etc. I ask because I am not really a sysadmin, but I have been put in charge of this server.

    Read the article

  • How to copy a file to a remote server using the command line?

    - by cool_cs
    I am trying to copy a file from my desktop to my remote server using the sudo command. I am doing this from the remote machine, since I know the password for that machine and I do not have a password for my local machine. sudo scp donj@localhost:/Desktop/my.cnf user@remotemachine:/app/MySQL/my.cnf This does not work, however. I want to overwrite the my.cnf file in the MySQL directory. I tried the su command, but I do not have the password to become a superuser.
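
    Note that donj@localhost in that command refers to whichever machine it is run on, so executed on the remote server it looks for /Desktop on the server itself. The more common pattern is to run scp on the machine that holds the file, copy to somewhere the remote user can write, and only then use sudo for the final move. A sketch (user names and destination path are taken from the question; the /tmp staging step is an assumption):

        # run on the desktop, the machine that actually holds the file
        scp ~/Desktop/my.cnf user@remotemachine:/tmp/my.cnf

        # then, on the remote machine, move it into place with sudo
        sudo mv /tmp/my.cnf /app/MySQL/my.cnf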

    Read the article

  • Access denied for user 'root@localhost' (using password:NO)

    - by murgatroid99
    I am attempting to install a network management package called cacti onto Ubuntu running under Windows Virtual PC. I attempted to install MySQL as it is one of cacti's dependencies. I can install and start the MySQL server, but whenever I try to access it in any other way, such as to change the password, I get the error message Access denied for user 'root@localhost' (using password:NO). I would like to know what is causing this and how to fix it. Edit: (just in case my comments are not visible) The answers from HD and Devin Ceartas did not work for me.
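
    As a side note, the "(using password: NO)" part of that message means the client sent no password at all, which on a fresh install usually just means the -p flag was left off. A short, hedged sketch of the two usual ways forward on Ubuntu (the exact mysql-server package name depends on the release):

        # ask for the password interactively instead of sending none
        mysql -u root -p

        # or reset the MySQL root password via the packaging hooks
        sudo dpkg-reconfigure mysql-server-5.1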

    Read the article

  • Should we regularly schedule mysqlcheck (or database optimization)

    - by scatteredbomb
    We run a forum with some 2 million posts, and I've noticed that if left untouched the overhead in MySQL (as listed in phpMyAdmin) can get quite large (hundreds of megabytes). I'm wondering whether scheduling a regular mysqlcheck to optimize the tables is good practice. Any reason not to do it, say, once a week at an off-peak hour? There was a time over the summer when our site was constantly crashing because MySQL was using up all its resources. That's when I noticed the huge amount of overhead; I optimized the database and haven't had any stability problems since. I figured that if that was helping alleviate the issues, I should just set up a cron job to do it automatically.
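
    A weekly off-peak run is easy to wire up with cron; a sketch (the schedule, log path and credentials file are placeholders, and bear in mind that OPTIMIZE TABLE locks each table while it runs, so a busy forum will see writes blocked briefly):

        # /etc/cron.d/mysql-optimize  (hypothetical file)
        # every Sunday at 04:30, optimize all databases
        30 4 * * 0  root  mysqlcheck --defaults-extra-file=/root/.my.cnf --optimize --all-databases >> /var/log/mysqlcheck.log 2>&1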

    Read the article

  • Fortran recursion segmentation faults

    - by ConnorG
    Hey all - I have to design and implement a Fortran routine to determine the size of clusters on a square lattice, and it seemed extremely convenient to code the subroutine recursively. However, whenever my lattice size grows beyond a certain value (around 200/side), the subroutine consistently segfaults. Here's my cluster-detection routine:

        RECURSIVE SUBROUTINE growCluster(lattice, adj, idx, area)
            INTEGER, INTENT(INOUT) :: lattice(:), area
            INTEGER, INTENT(IN)    :: adj(:,:), idx

            lattice(idx) = -1
            area = area + 1

            IF (lattice(adj(1,idx)).GT.0) &
                CALL growCluster(lattice,adj,adj(1,idx),area)
            IF (lattice(adj(2,idx)).GT.0) &
                CALL growCluster(lattice,adj,adj(2,idx),area)
            IF (lattice(adj(3,idx)).GT.0) &
                CALL growCluster(lattice,adj,adj(3,idx),area)
            IF (lattice(adj(4,idx)).GT.0) &
                CALL growCluster(lattice,adj,adj(4,idx),area)
        END SUBROUTINE growCluster

    where adj(1,n) represents the north neighbor of site n, adj(2,n) represents the west, and so on. What would cause the erratic segfault behavior? Is the cluster just "too huge" for large lattice sizes?
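
    One thing worth ruling out before touching the algorithm: every recursive call pushes a new stack frame, so a single cluster covering much of a 200x200 lattice can mean tens of thousands of nested calls, which easily exhausts a default process stack of a few megabytes. A quick check under bash (the executable name is hypothetical):

        # show the current stack limit, raise it for this shell, then re-run
        ulimit -s
        ulimit -s unlimited
        ./cluster_program

    If the segfaults stop, recursion depth was the culprit; the longer-term fix is usually an explicit stack or queue (iterative flood fill) instead of recursion.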

    Read the article

  • IP failover with 2 nodes on different subnet: cannot ping virtual IP from second node?

    - by quanta
    I'm going to setup redundant failover Redmine: another instance was installed on the second server without problem MySQL (running on the same machine with Redmine) was configured as master-master replication Because they are in different subnet (192.168.3.x and 192.168.6.x), it seems that VIPArip is the only choice. /etc/ha.d/ha.cf on node1 logfacility none debug 1 debugfile /var/log/ha-debug logfile /var/log/ha-log autojoin none warntime 3 deadtime 6 initdead 60 udpport 694 ucast eth1 node2.ip keepalive 1 node node1 node node2 crm respawn /etc/ha.d/ha.cf on node2: logfacility none debug 1 debugfile /var/log/ha-debug logfile /var/log/ha-log autojoin none warntime 3 deadtime 6 initdead 60 udpport 694 ucast eth0 node1.ip keepalive 1 node node1 node node2 crm respawn crm configure show: node $id="6c27077e-d718-4c82-b307-7dccaa027a72" node1 node $id="740d0726-e91d-40ed-9dc0-2368214a1f56" node2 primitive VIPArip ocf:heartbeat:VIPArip \ params ip="192.168.6.8" nic="lo:0" \ op start interval="0" timeout="20s" \ op monitor interval="5s" timeout="20s" depth="0" \ op stop interval="0" timeout="20s" \ meta is-managed="true" property $id="cib-bootstrap-options" \ stonith-enabled="false" \ dc-version="1.0.12-unknown" \ cluster-infrastructure="Heartbeat" \ last-lrm-refresh="1338870303" crm_mon -1: ============ Last updated: Tue Jun 5 18:36:42 2012 Stack: Heartbeat Current DC: node2 (740d0726-e91d-40ed-9dc0-2368214a1f56) - partition with quorum Version: 1.0.12-unknown 2 Nodes configured, unknown expected votes 1 Resources configured. ============ Online: [ node1 node2 ] VIPArip (ocf::heartbeat:VIPArip): Started node1 ip addr show lo: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet 192.168.6.8/32 scope global lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever I can ping 192.168.6.8 from node1 (192.168.3.x): # ping -c 4 192.168.6.8 PING 192.168.6.8 (192.168.6.8) 56(84) bytes of data. 64 bytes from 192.168.6.8: icmp_seq=1 ttl=64 time=0.062 ms 64 bytes from 192.168.6.8: icmp_seq=2 ttl=64 time=0.046 ms 64 bytes from 192.168.6.8: icmp_seq=3 ttl=64 time=0.059 ms 64 bytes from 192.168.6.8: icmp_seq=4 ttl=64 time=0.071 ms --- 192.168.6.8 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3000ms rtt min/avg/max/mdev = 0.046/0.059/0.071/0.011 ms but cannot ping virtual IP from node2 (192.168.6.x) and outside. Did I miss something? PS: you probably want to set IP2UTIL=/sbin/ip in the /usr/lib/ocf/resource.d/heartbeat/VIPArip resource agent script if you get something like this: Jun 5 11:08:10 node1 lrmd: [19832]: info: RA output: (VIPArip:stop:stderr) 2012/06/05_11:08:10 ERROR: Invalid OCF_RESK EY_ip [192.168.6.8] http://www.clusterlabs.org/wiki/Debugging_Resource_Failures Reply to @DukeLion: Which router receives RIP updates? When I start the VIPArip resource, ripd was run with below configuration file (on node1): /var/run/resource-agents/VIPArip-ripd.conf: hostname ripd password zebra debug rip events debug rip packet debug rip zebra log file /var/log/quagga/quagga.log router rip !nic_tag no passive-interface lo:0 network lo:0 distribute-list private out lo:0 distribute-list private in lo:0 !metric_tag redistribute connected metric 3 !ip_tag access-list private permit 192.168.6.8/32 access-list private deny any

    Read the article

  • Unable to receive any emails using postfix, dovecot, mysql, and virtual domain/mailboxes

    - by stkdev248
    I have been working on configuring my mail server for the last couple of weeks using postfix, dovecot, and mysql. I have one virtual domain and a few virtual mailboxes. Using squirrelmail I have been able to log into my accounts and send emails out (e.g. I can send to googlemail just fine), however I am not able to receive any emails--not from the outside world nor from within my own network. I am able to telnet in using localhost, my private ip, and my public ip on port 25 without any problems (I've tried it from the server itself and from another computer on my network). This is what I get in my logs when I send an email from my googlemail account to my mail server: mail.log Apr 14 07:36:06 server1 postfix/qmgr[1721]: BE01B520538: from=, size=733, nrcpt=1 (queue active) Apr 14 07:36:06 server1 postfix/pipe[3371]: 78BC0520510: to=, relay=dovecot, delay=45421, delays=45421/0/0/0.13, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied) Apr 14 07:36:06 server1 postfix/pipe[3391]: 8261B520534: to=, relay=dovecot, delay=38036, delays=38036/0.06/0/0.12, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ) Apr 14 07:36:06 server1 postfix/pipe[3378]: 63927520532: to=, relay=dovecot, delay=38105, delays=38105/0.02/0/0.17, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ) Apr 14 07:36:06 server1 postfix/pipe[3375]: 07F65520522: to=, relay=dovecot, delay=39467, delays=39467/0.01/0/0.17, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ) Apr 14 07:36:06 server1 postfix/pipe[3381]: EEDE9520527: to=, relay=dovecot, delay=38361, delays=38360/0.04/0/0.15, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ) Apr 14 07:36:06 server1 postfix/pipe[3379]: 67DFF520517: to=, relay=dovecot, delay=40475, delays=40475/0.03/0/0.16, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ) Apr 14 07:36:06 server1 postfix/pipe[3387]: 3C7A052052E: to=, relay=dovecot, delay=38259, delays=38259/0.05/0/0.13, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ) Apr 14 07:36:06 server1 postfix/pipe[3394]: BE01B520538: to=, relay=dovecot, delay=37682, delays=37682/0.07/0/0.11, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ) Apr 14 07:36:07 server1 postfix/pipe[3384]: 3C7A052052E: to=, relay=dovecot, delay=38261, delays=38259/0.04/0/1.3, dsn=4.3.0, status=deferred (temporary failure. 
Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ) Apr 14 07:39:23 server1 postfix/anvil[3368]: statistics: max connection rate 1/60s for (smtp:209.85.213.169) at Apr 14 07:35:32 Apr 14 07:39:23 server1 postfix/anvil[3368]: statistics: max connection count 1 for (smtp:209.85.213.169) at Apr 14 07:35:32 Apr 14 07:39:23 server1 postfix/anvil[3368]: statistics: max cache size 1 at Apr 14 07:35:32 Apr 14 07:41:06 server1 postfix/qmgr[1721]: ED6005203B7: from=, size=1463, nrcpt=1 (queue active) Apr 14 07:41:06 server1 postfix/pipe[4594]: ED6005203B7: to=, relay=dovecot, delay=334, delays=334/0.01/0/0.13, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ) Apr 14 07:51:06 server1 postfix/qmgr[1721]: ED6005203B7: from=, size=1463, nrcpt=1 (queue active) Apr 14 07:51:06 server1 postfix/pipe[4604]: ED6005203B7: to=, relay=dovecot, delay=933, delays=933/0.02/0/0.12, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ) mail-dovecot-log (the log I set for debugging): Apr 14 07:28:26 auth: Info: mysql(127.0.0.1): Connected to database postfixadmin Apr 14 07:28:26 auth: Debug: sql([email protected],127.0.0.1): query: SELECT password FROM mailbox WHERE username = '[email protected]' Apr 14 07:28:26 auth: Debug: client out: OK 1 [email protected] Apr 14 07:28:26 auth: Debug: master in: REQUEST 1809973249 3356 1 7cfb822db820fc5da67d0776b107cb3f Apr 14 07:28:26 auth: Debug: sql([email protected],127.0.0.1): SELECT '/home/vmail/mydomain.com/some.user1' as home, 5000 AS uid, 5000 AS gid FROM mailbox WHERE username = '[email protected]' Apr 14 07:28:26 auth: Debug: master out: USER 1809973249 [email protected] home=/home/vmail/mydomain.com/some.user1 uid=5000 gid=5000 Apr 14 07:28:26 imap-login: Info: Login: user=, method=PLAIN, rip=127.0.0.1, lip=127.0.0.1, mpid=3360, secured Apr 14 07:28:26 imap([email protected]): Debug: Effective uid=5000, gid=5000, home=/home/vmail/mydomain.com/some.user1 Apr 14 07:28:26 imap([email protected]): Debug: maildir++: root=/home/vmail/mydomain.com/some.user1/Maildir, index=/home/vmail/mydomain.com/some.user1/Maildir/indexes, control=, inbox=/home/vmail/mydomain.com/some.user1/Maildir Apr 14 07:48:31 imap([email protected]): Info: Disconnected: Logged out bytes=85/681 From the output above I'm pretty sure that my problems all stem from (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ), but I have no idea why I'm getting that error. I've have the permissions to that log set just like the other mail logs: root@server1:~# ls -l /var/log/mail* -rw-r----- 1 syslog adm 196653 2012-04-14 07:58 /var/log/mail-dovecot.log -rw-r----- 1 syslog adm 62778 2012-04-13 21:04 /var/log/mail.err -rw-r----- 1 syslog adm 497767 2012-04-14 08:01 /var/log/mail.log Does anyone have any idea what I may be doing wrong? Here are my main.cf and master.cf files: main.cf: # See /usr/share/postfix/main.cf.dist for a commented, more complete version # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. #myorigin = /etc/mailname smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no # appending .domain is the MUA's job. 
append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = no # TLS parameters smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key smtpd_use_tls=yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for # information on enabling SSL in the smtp client. myhostname = server1.mydomain.com alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = relayhost = mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_command = procmail -a "$EXTENSION" mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all # Virtual Configs virtual_uid_maps = static:5000 virtual_gid_maps = static:5000 virtual_mailbox_base = /home/vmail virtual_mailbox_domains = mysql:/etc/postfix/mysql_virtual_mailbox_domains.cf virtual_mailbox_maps = mysql:/etc/postfix/mysql_virtual_mailbox_maps.cf virtual_alias_maps = mysql:/etc/postfix/mysql_virtual_alias_maps.cf relay_domains = mysql:/etc/postfix/mysql_relay_domains.cf smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_non_fqdn_hostname, reject_non_fqdn_sender, reject_non_fqdn_recipient, reject_unauth_destination, reject_unauth_pipelining, reject_invalid_hostname smtpd_sasl_auth_enable = yes smtpd_sasl_security_options = noanonymous virtual_transport=dovecot dovecot_destination_recipient_limit = 1 master.cf: # # Postfix master process configuration file. For details on the format # of the file, see the master(5) manual page (command: "man 5 master"). # # Do not forget to execute "postfix reload" after editing this file. # # ========================================================================== # service type private unpriv chroot wakeup maxproc command + args # (yes) (yes) (yes) (never) (100) # ========================================================================== smtp inet n - - - - smtpd #smtp inet n - - - 1 postscreen #smtpd pass - - - - - smtpd #dnsblog unix - - - - 0 dnsblog #tlsproxy unix - - - - 0 tlsproxy #submission inet n - - - - smtpd # -o smtpd_tls_security_level=encrypt # -o smtpd_sasl_auth_enable=yes # -o smtpd_client_restrictions=permit_sasl_authenticated,reject # -o milter_macro_daemon_name=ORIGINATING #smtps inet n - - - - smtpd # -o smtpd_tls_wrappermode=yes # -o smtpd_sasl_auth_enable=yes # -o smtpd_client_restrictions=permit_sasl_authenticated,reject # -o milter_macro_daemon_name=ORIGINATING #628 inet n - - - - qmqpd pickup fifo n - - 60 1 pickup cleanup unix n - - - 0 cleanup qmgr fifo n - n 300 1 qmgr #qmgr fifo n - - 300 1 oqmgr tlsmgr unix - - - 1000? 1 tlsmgr rewrite unix - - - - - trivial-rewrite bounce unix - - - - 0 bounce defer unix - - - - 0 bounce trace unix - - - - 0 bounce verify unix - - - - 1 verify flush unix n - - 1000? 
0 flush proxymap unix - - n - - proxymap proxywrite unix - - n - 1 proxymap smtp unix - - - - - smtp # When relaying mail as backup MX, disable fallback_relay to avoid MX loops relay unix - - - - - smtp -o smtp_fallback_relay= # -o smtp_helo_timeout=5 -o smtp_connect_timeout=5 showq unix n - - - - showq error unix - - - - - error retry unix - - - - - error discard unix - - - - - discard local unix - n n - - local virtual unix - n n - - virtual lmtp unix - - - - - lmtp anvil unix - - - - 1 anvil scache unix - - - - 1 scache # # ==================================================================== # Interfaces to non-Postfix software. Be sure to examine the manual # pages of the non-Postfix software to find out what options it wants. # # Many of the following services use the Postfix pipe(8) delivery # agent. See the pipe(8) man page for information about ${recipient} # and other message envelope options. # ==================================================================== # # maildrop. See the Postfix MAILDROP_README file for details. # Also specify in main.cf: maildrop_destination_recipient_limit=1 # maildrop unix - n n - - pipe flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient} # # ==================================================================== # # Recent Cyrus versions can use the existing "lmtp" master.cf entry. # # Specify in cyrus.conf: # lmtp cmd="lmtpd -a" listen="localhost:lmtp" proto=tcp4 # # Specify in main.cf one or more of the following: # mailbox_transport = lmtp:inet:localhost # virtual_transport = lmtp:inet:localhost # # ==================================================================== # # Cyrus 2.1.5 (Amos Gouaux) # Also specify in main.cf: cyrus_destination_recipient_limit=1 # #cyrus unix - n n - - pipe # user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user} # # ==================================================================== # Old example of delivery via Cyrus. # #old-cyrus unix - n n - - pipe # flags=R user=cyrus argv=/cyrus/bin/deliver -e -m ${extension} ${user} # # ==================================================================== # # See the Postfix UUCP_README file for configuration details. # uucp unix - n n - - pipe flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient) # # Other external delivery methods. # ifmail unix - n n - - pipe flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient) bsmtp unix - n n - - pipe flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient scalemail-backend unix - n n - 2 pipe flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension} mailman unix - n n - - pipe flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user} dovecot unix - n n - - pipe flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver -d ${recipient}
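
    Judging from the quoted master.cf entry, the dovecot transport runs the deliver agent as user=vmail:vmail, while the log it is told to write (/var/log/mail-dovecot.log) is owned by syslog:adm with mode 640, so every delivery fails with the permission error shown above and the mail stays deferred. A minimal sketch of one way to reconcile the two (assuming you want to keep a dedicated debug log owned by the vmail user):

        sudo touch /var/log/mail-dovecot.log
        sudo chown vmail:vmail /var/log/mail-dovecot.log
        sudo chmod 640 /var/log/mail-dovecot.log

    Alternatively, pointing Dovecot's log_path (or info_log_path) at a file the vmail user can write, or letting it log to syslog, avoids the clash entirely.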

    Read the article

  • "Can't create table" when having to many partitions

    - by Chris
    I am currently having a problem I don't understand. Wherever I look it says MySQL (5.5) / InnoDB doesn't have a table limit. I wanted to test the InnoDB compression and was about to create an empty copy of an existing table when I ran into the following problem. This one works:

        CREATE TABLE `hsc` ( LOTS OF STUFF ) ENGINE=InnoDB CHARSET=utf8
        PARTITION BY RANGE (pid)
        SUBPARTITION BY HASH (cons) SUBPARTITIONS 2
        (PARTITION hsc_p0 VALUES LESS THAN (10000) ,
         PARTITION hsc_p1 VALUES LESS THAN (20000) ,
         PARTITION hsc_p2 VALUES LESS THAN (30000) ,
         PARTITION hsc_p3 VALUES LESS THAN (40000) ,
         PARTITION hsc_p4 VALUES LESS THAN (50000) ,
         PARTITION hsc_p40 VALUES LESS THAN (4000000) );

    This one doesn't:

        CREATE TABLE `hsc` ( LOTS OF STUFF ) ENGINE=InnoDB CHARSET=utf8
        PARTITION BY RANGE (pid)
        SUBPARTITION BY HASH (cons) SUBPARTITIONS 2
        (PARTITION hsc_p0 VALUES LESS THAN (10000) ,
         PARTITION hsc_p1 VALUES LESS THAN (20000) ,
         PARTITION hsc_p2 VALUES LESS THAN (30000) ,
         PARTITION hsc_p3 VALUES LESS THAN (40000) ,
         PARTITION hsc_p4 VALUES LESS THAN (50000) ,
         PARTITION hsc_p5 VALUES LESS THAN (75000) ,
         PARTITION hsc_p6 VALUES LESS THAN (100000) ,
         PARTITION hsc_p7 VALUES LESS THAN (125000) ,
         PARTITION hsc_p8 VALUES LESS THAN (150000) ,
         PARTITION hsc_p9 VALUES LESS THAN (175000) ,
         PARTITION hsc_p40 VALUES LESS THAN (4000000) );

        ERROR 1005 (HY000): Can't create table 'hsc' (errno: 1)

    It's reproducible by removing some of the partitions and adding them again. It has nothing to do with the name of the table, as I tried various names. There is also enough empty space on the HDD:

        /dev/simfs 230G 26G 192G 12% /var/lib/mysql.mnt

    There should be no limit on the partitions (http://dev.mysql.com/doc/refman/5.5/en/partitioning-limitations.html): "Maximum number of partitions. The maximum possible number of partitions for a given table (that does not use the NDB storage engine) is 1024. This number includes subpartitions." I have increased both open_files values:

        show variables where variable_name LIKE '%open_files%';
        +-------------------+-------+
        | Variable_name     | Value |
        +-------------------+-------+
        | innodb_open_files | 512   |
        | open_files_limit  | 1536  |
        +-------------------+-------+

    No change. Any clues where I should start looking? UPDATE: the whole thing is running in an OpenVZ environment. I saw in user_beancounters that numflock was a problem, so I increased it, but the problem still persists.
maybe this helps: ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 515011 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 10240 cpu time (seconds, -t) unlimited max user processes (-u) 515011 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited cat /proc/user_beancounters Version: 2.5 uid resource held maxheld barrier limit failcnt 200: kmemsize 9309653 13357056 14372700 14790164 0 lockedpages 0 1008 2048 2048 0 privvmpages 675424 686528 1048576 1572864 0 shmpages 33 673 21504 21504 0 dummy 0 0 9223372036854775807 9223372036854775807 0 numproc 49 90 240 240 0 physpages 243761 246945 0 9223372036854775807 0 vmguarpages 0 0 1048576 1048576 0 oomguarpages 81672 83305 1048576 1048576 0 numtcpsock 6 8 360 360 0 numflock 175 188 512 512 8 numpty 1 9 16 16 0 numsiginfo 0 48 256 256 0 tcpsndbuf 104640 263912 1720320 2703360 0 tcprcvbuf 98304 131072 1720320 2703360 0 othersockbuf 32368 89304 1126080 2097152 0 dgramrcvbuf 0 2312 262144 262144 0 numothersock 19 28 360 360 0 dcachesize 2285052 3624426 3409920 3624960 0 numfile 616 870 9312 9312 0 dummy 0 0 9223372036854775807 9223372036854775807 0 dummy 0 0 9223372036854775807 9223372036854775807 0 dummy 0 0 9223372036854775807 9223372036854775807 0 numiptent 24 24 128 128 0
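
    Since errno 1 maps to the operating-system error "Operation not permitted" and the beancounters above already show numflock failures (failcnt 8), the OpenVZ container limits are a reasonable place to keep digging: every extra partition and subpartition means more files, file locks and open file descriptors for a single CREATE TABLE. A hedged sketch of how to inspect and raise them (perror ships with MySQL; the vzctl command must be run on the hardware node, and the container ID and values are placeholders):

        # translate the errno reported by ERROR 1005
        perror 1

        # inside the container: watch failcnt while re-running the CREATE TABLE
        grep -E 'numflock|numfile' /proc/user_beancounters

        # on the OpenVZ host: raise the limits for container <CTID> (illustrative values)
        vzctl set <CTID> --numflock 1024:1024 --save

    Raising the per-process open files limit (ulimit -n, 1024 above) for mysqld may also be needed, since it is lower than the server's open_files_limit of 1536.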

    Read the article

  • Good Enough Failover Strategy for DNS / MySQL / Email

    - by IMB
    I've asked and read a lot of questions regarding DNS failover, but the more I read the more complicated it becomes: some people say it's good enough, some say it isn't, and there are no clear answers in what I've read. I was wondering if we can set it straight once and for all, at least for the requirements of most websites out there. Right now let's assume the following: we don't really need load balancing; what we need is a failover solution. We are running a LAMP-based website on a VPS. We need to make sure that the web server, MySQL and email are always accessible, or at least 99% of the time. Basically here's my idea and my questions about it: Web server: We need at least one failover server (another VPS in a separate data center). Is DNS failover via round robin good, and if not, what's best? And how exactly do you implement it? How do you make sure the files you upload/delete on Server A are also on Server B? MySQL: I've only read a brief intro to MySQL replication, and I assume I can replicate Server A to Server B and vice versa on the fly, right? So in case Server A fails and Server B is now running, it will continue to work and replicate back to Server A when it becomes available. In essence Server B is then the primary server, and will later fail over to Server A, should a failure happen again. Email: If we are going to use DNS failover, using webmail or relying on emails stored on the server is probably not a good idea, right? Since some emails might be on Server A while some might be on Server B? I assume a basic email forwarder to a third party (like Gmail, for example) is good enough to ensure all emails are kept in one place. Here's a basic diagram for a better picture: http://i.stack.imgur.com/KWSIi.png
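
    On the MySQL point, the two-way "replicate to whichever server is currently live" arrangement described above is usually built as master-master replication. A minimal my.cnf sketch of the options involved (values are illustrative; each server additionally needs a replication user and a CHANGE MASTER TO statement pointing at the other):

        [mysqld]                              # on Server A
        server-id                = 1          # Server B uses server-id = 2
        log-bin                  = mysql-bin
        auto_increment_increment = 2          # same on both servers
        auto_increment_offset    = 1          # Server B uses offset = 2, so the two
                                              # masters never hand out the same id

    The auto-increment settings are what keep writes accepted on A and B at the same time from colliding on primary keys.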

    Read the article

  • Using a virtual machine as a MySQL server

    - by ffmm
    I'm developing a Java program and I need a database. Right now I'm using MAMP and it's pretty easy, but I would rather use a virtual machine (Ubuntu Server) and connect my Java program to that virtual machine through VirtualBox. The situation:

    - I installed VirtualBox on my Mac and installed an ubuntu-server machine
    - I set "bridged adapter" in the network settings of VirtualBox
    - I installed MySQL on ubuntu-server and created a simple database (everything works fine on Ubuntu)
    - running ifconfig on Ubuntu gives me the IP 192.168.1.217

    So in the Java program I wrote this function:

        public static Connection connect(String host, int port, String dbName, String user, String passwd) {
            Connection dbConnection = null;
            try {
                String dbString = null;
                Class.forName("com.mysql.jdbc.Driver").newInstance();
                dbString = "jdbc:mysql://" + host + ":" + port + "/" + dbName;
                dbConnection = DriverManager.getConnection(dbString, user, passwd);
            } catch (Exception e) {
                System.err.println("Failed to connect with the DB");
                e.printStackTrace();
            }
            return dbConnection;
        }

    and in main() I use:

        Connection con = connect("192.168.1.217", 3306, "Ciao", "root", "cocacola");

    3306 was a default value; I don't know if it is correct. It works on MAMP, but how can I find the correct port to use with VirtualBox? When I run the program I get the catch exception… what's wrong? PS: do I have to install Apache or something else?
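
    With a bridged adapter the guest behaves like any other machine on the LAN, so the guest's IP plus the standard port 3306 is the right target and no VirtualBox port forwarding is involved. What usually bites instead is that Ubuntu's default MySQL config only listens on 127.0.0.1 and the account is only allowed to connect locally. A hedged sketch of opening that up (Ubuntu default paths; granting to 'root'@'%' is shown only because those are the credentials in the question, a dedicated user would be safer):

        # on the ubuntu-server guest: let mysqld listen on the bridged interface
        sudo sed -i 's/^bind-address.*/bind-address = 0.0.0.0/' /etc/mysql/my.cnf
        sudo service mysql restart    # or /etc/init.d/mysql restart on older releases

        # allow the account used by the Java program to connect from other hosts
        mysql -u root -p -e "GRANT ALL PRIVILEGES ON Ciao.* TO 'root'@'%' IDENTIFIED BY 'cocacola'; FLUSH PRIVILEGES;"

    No Apache is needed just to reach MySQL over JDBC.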

    Read the article

  • grails run-app connects to mysql but grails run-war doesn't

    - by damian
    Hi, I have an unexpected problem. I created a war with grails war and then deployed it in Tomcat. To my surprise the CRUD works fine, but I don't know what persistence it is using. So I did this test: compare grails prod run-app with grails prod run-war. The first works fine and does connect to the MySQL database; the other doesn't. This is my DataSource.groovy:

        dataSource {
            pooled = true
            driverClassName = "com.mysql.jdbc.Driver"
            username = "grails"
            password = "mysqlgrails"
        }
        hibernate {
            cache.use_second_level_cache=false
            cache.use_query_cache=false
            cache.provider_class='net.sf.ehcache.hibernate.EhCacheProvider'
        }
        // environment specific settings
        environments {
            development {
                dataSource {
                    dbCreate = "update" // one of 'create', 'create-drop','update'
                    url = "jdbc:mysql://some-key.amazonaws.com/MyDB"
                }
            }
            test {
                dataSource {
                    dbCreate = "update" // one of 'create', 'create-drop','update'
                    url = "jdbc:mysql://some-key.amazonaws.com/MyDB"
                }
            }
            production {
                dataSource {
                    dbCreate = "update"
                    url = "jdbc:mysql://some-key.amazonaws.com/MyDB"
                }
            }
        }

    I also extracted the war to see whether I could find some data source configuration file, without success. More info: Grails version 1.2.1, JVM version 1.6.0_17. I also think this question is similar to another one, but that one doesn't have an answer.

    Read the article

  • BizTalk Server 2009 - Architecture Options

    - by StuartBrierley
    I recently needed to put forward a proposal for a BizTalk 2009 implementation and as a part of this needed to describe some of the basic architecture options available for consideration.  While I already had an idea of the type of environment that I would be looking to recommend, I felt that presenting a range of options while trying to explain some of the strengths and weaknesses of those options was a good place to start.  These outline architecture options should be equally valid for any version of BizTalk Server from 2004, through 2006 and R2, up to 2009.   The following diagram shows a crude representation of the common implementation options to consider when designing a BizTalk environment.         Each of these options provides differing levels of resilience in the case of failure or disaster, with the later options also providing more scope for performance tuning and scalability.   Some of the options presented above make use of clustering. Clustering may best be described as a technology that automatically allows one physical server to take over the tasks and responsibilities of another physical server that has failed. Given that all computer hardware and software will eventually fail, the goal of clustering is to ensure that mission-critical applications will have little or no downtime when such a failure occurs. Clustering can also be configured to provide load balancing, which should generally lead to performance gains and increased capacity and throughput.   (A) Single Servers   This option is the most basic BizTalk implementation that should be considered. It involves the deployment of a single BizTalk server in conjunction with a single SQL server. This configuration does not provide for any resilience in the case of the failure of either server. It is however the cheapest and easiest to implement option of those available.   Using a single BizTalk server does not provide for the level of performance tuning that is otherwise available when using more than one BizTalk server in a cluster.   The common edition of BizTalk used in single server implementations is the standard edition. It should be noted however that if future demand requires increased capacity for a solution, this BizTalk edition is limited to scaling up the implementation and not scaling out the number of servers in use. Any need to scale out the solution would require an upgrade to the enterprise edition of BizTalk.   (B) Single BizTalk Server with Clustered SQL Servers   This option uses a single BizTalk server with a cluster of SQL servers. By utilising clustered SQL servers we can ensure that there is some resilience to the implementation in respect of the databases that BizTalk relies on to operate. The clustering of two SQL servers is possible with the standard edition but to go beyond this would require the enterprise level edition. While this option offers improved resilience over option (A) it does still present a potential single point of failure at the BizTalk server.   Using a single BizTalk server does not provide for the level of performance tuning that is otherwise available when using more than one BizTalk server in a cluster.   The common edition of BizTalk used in single server implementations is the standard edition. It should be noted however that if future demand requires increased capacity for a solution, this BizTalk edition is limited to scaling up the implementation and not scaling out the number of servers in use. 
You are also unable to take advantage of multiple message boxes, which would allow us to balance the SQL load in the event of any bottlenecks in this area of the implementation. Any need to scale out the solution would require an upgrade to the enterprise edition of BizTalk.   (C) Clustered BizTalk Servers with Clustered SQL Servers   This option makes use of a cluster of BizTalk servers with a cluster of SQL servers to offer high availability and resilience in the case of failure of either of the server types involved. Clustering of BizTalk is only available with the enterprise edition of the product. Clustering of two SQL servers is possible with the standard edition but to go beyond this would require the enterprise level edition.    The use of a BizTalk cluster also provides for the ability to balance load across the servers and gives more scope for performance tuning any implemented solutions. It is also possible to add more BizTalk servers to an existing cluster, giving scope for scaling out the solution as future demand requires.   This might be seen as the middle cost option, providing a good level of protection in the case of failure, a decent level of future proofing, but at a higher cost than the single BizTalk server implementations.   (D) Clustered BizTalk Servers with Clustered SQL Servers – with disaster recovery/service continuity   This option is similar to that offered by (C) and makes use of a cluster of BizTalk servers with a cluster of SQL servers to offer high availability and resilience in case of failure of either of the server types involved. Clustering of BizTalk is only available with the enterprise edition of the product. Clustering of two SQL servers is possible with the standard edition but to go beyond this would require the enterprise level edition.    As with (C) the use of a BizTalk cluster also provides for the ability to balance load across the servers and gives more scope for performance tuning the implemented solution. It is also possible to add more BizTalk servers to an existing cluster, giving scope for scaling the solution out as future demand requires.   In this scenario however, we would be including some form of disaster recovery or service continuity. An example of this would be making use of multiple sites, with the BizTalk server cluster operating across sites to offer resilience in case of the loss of one or more sites. In this scenario there are options available for the SQL implementation depending on the network implementation; making use of either one cluster per site or a single SQL cluster across the network. A multi-site SQL implementation would require some form of data replication across the sites involved.   This is obviously an expensive and complex option, but does provide an extraordinary amount of protection in the case of failure.

    Read the article

  • Application throws NotSerializableException when run on an jboss cluster

    - by Kalpana
    Environment: JBoss 5.1.0, JBoss Seam 2.2.0 While trying to get my application running in a clustered environment after login I am getting the following exception. Post login we try to store the currentUser in jboss seam session context. java.io.NotSerializableException: org.jboss.seam.util.AnnotatedBeanProperty How to resolve this? java.io.NotSerializableException: org.jboss.seam.util.AnnotatedBeanProperty at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1156) at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java :1509) at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:14 74) at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.jav a:1392) at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150) at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java :1509) at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:14 74) at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.jav a:1392) at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150) at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:326) at java.util.ArrayList.writeObject(ArrayList.java:570) at sun.reflect.GeneratedMethodAccessor339.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces sorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:94 5) at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:14 61) at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.jav a:1392) at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150) at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java :1509) at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:14 74) at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.jav a:1392) at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150) at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:326) at java.util.HashMap.writeObject(HashMap.java:1001) at sun.reflect.GeneratedMethodAccessor338.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces sorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:94 5) at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:14 61) at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.jav a:1392) at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150) at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:326) at org.jboss.ha.framework.server.SimpleCachableMarshalledValue.serialize (SimpleCachableMarshalledValue.java:271) at org.jboss.ha.framework.server.SimpleCachableMarshalledValue.writeExte rnal(SimpleCachableMarshalledValue.java:252) at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java: 1421) at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.jav a:1390) at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150) at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:326) at org.jboss.cache.marshall.CacheMarshaller200.marshallObject(CacheMarsh aller200.java:460) at org.jboss.cache.marshall.CacheMarshaller300.marshallObject(CacheMarsh aller300.java:47) at 
org.jboss.cache.marshall.CacheMarshaller200.marshallMap(CacheMarshall er200.java:569) at org.jboss.cache.marshall.CacheMarshaller200.marshallObject(CacheMarsh aller200.java:370) at org.jboss.cache.marshall.CacheMarshaller300.marshallObject(CacheMarsh aller300.java:47) at org.jboss.cache.marshall.CacheMarshaller200.marshallCommand(CacheMars haller200.java:519) at org.jboss.cache.marshall.CacheMarshaller200.marshallObject(CacheMarsh aller200.java:314) at org.jboss.cache.marshall.CacheMarshaller300.marshallObject(CacheMarsh aller300.java:47) at org.jboss.cache.marshall.CacheMarshaller200.marshallCommand(CacheMars haller200.java:519) at org.jboss.cache.marshall.CacheMarshaller200.marshallObject(CacheMarsh aller200.java:314) at org.jboss.cache.marshall.CacheMarshaller300.marshallObject(CacheMarsh aller300.java:47) at org.jboss.cache.marshall.CacheMarshaller200.objectToObjectStream(Cach eMarshaller200.java:191) at org.jboss.cache.marshall.CacheMarshaller200.objectToObjectStream(Cach eMarshaller200.java:136) at org.jboss.cache.marshall.VersionAwareMarshaller.objectToBuffer(Versio nAwareMarshaller.java:182) at org.jboss.cache.marshall.VersionAwareMarshaller.objectToBuffer(Versio nAwareMarshaller.java:52) at org.jboss.cache.marshall.CommandAwareRpcDispatcher$ReplicationTask.ca ll(CommandAwareRpcDispatcher.java:369) at org.jboss.cache.marshall.CommandAwareRpcDispatcher$ReplicationTask.ca ll(CommandAwareRpcDispatcher.java:341) at org.jboss.cache.util.concurrent.WithinThreadExecutor.submit(WithinThr eadExecutor.java:82) at org.jboss.cache.marshall.CommandAwareRpcDispatcher.invokeRemoteComman ds(CommandAwareRpcDispatcher.java:206) at org.jboss.cache.RPCManagerImpl.callRemoteMethods(RPCManagerImpl.java: 748) at org.jboss.cache.RPCManagerImpl.callRemoteMethods(RPCManagerImpl.java: 716) at org.jboss.cache.RPCManagerImpl.callRemoteMethods(RPCManagerImpl.java: 721) at org.jboss.cache.interceptors.BaseRpcInterceptor.replicateCall(BaseRpc Interceptor.java:161) at org.jboss.cache.interceptors.BaseRpcInterceptor.replicateCall(BaseRpc Interceptor.java:135) at org.jboss.cache.interceptors.BaseRpcInterceptor.replicateCall(BaseRpc Interceptor.java:107) at org.jboss.cache.interceptors.ReplicationInterceptor.handleCrudMethod( ReplicationInterceptor.java:160) at org.jboss.cache.interceptors.ReplicationInterceptor.visitPutDataMapCo mmand(ReplicationInterceptor.java:113) at org.jboss.cache.commands.write.PutDataMapCommand.acceptVisitor(PutDat aMapCommand.java:104) at org.jboss.cache.interceptors.base.CommandInterceptor.invokeNextInterc eptor(CommandInterceptor.java:116) at org.jboss.cache.interceptors.base.CommandInterceptor.handleDefault(Co mmandInterceptor.java:131) at org.jboss.cache.commands.AbstractVisitor.visitPutDataMapCommand(Abstr actVisitor.java:60) at org.jboss.cache.commands.write.PutDataMapCommand.acceptVisitor(PutDat aMapCommand.java:104) at org.jboss.cache.interceptors.base.CommandInterceptor.invokeNextInterc eptor(CommandInterceptor.java:116) at org.jboss.cache.interceptors.TxInterceptor.attachGtxAndPassUpChain(Tx Interceptor.java:301) at org.jboss.cache.interceptors.TxInterceptor.handleDefault(TxIntercepto r.java:283) at org.jboss.cache.commands.AbstractVisitor.visitPutDataMapCommand(Abstr actVisitor.java:60) at org.jboss.cache.commands.write.PutDataMapCommand.acceptVisitor(PutDat aMapCommand.java:104) at org.jboss.cache.interceptors.base.CommandInterceptor.invokeNextInterc eptor(CommandInterceptor.java:116) at org.jboss.cache.interceptors.CacheMgmtInterceptor.visitPutDataMapComm 
and(CacheMgmtInterceptor.java:97) at org.jboss.cache.commands.write.PutDataMapCommand.acceptVisitor(PutDat aMapCommand.java:104) at org.jboss.cache.interceptors.base.CommandInterceptor.invokeNextInterc eptor(CommandInterceptor.java:116) at org.jboss.cache.interceptors.InvocationContextInterceptor.handleAll(I nvocationContextInterceptor.java:178) at org.jboss.cache.interceptors.InvocationContextInterceptor.visitPutDat aMapCommand(InvocationContextInterceptor.java:64) at org.jboss.cache.commands.write.PutDataMapCommand.acceptVisitor(PutDat aMapCommand.java:104) at org.jboss.cache.interceptors.InterceptorChain.invoke(InterceptorChain .java:287) at org.jboss.cache.invocation.CacheInvocationDelegate.invokePut(CacheInv ocationDelegate.java:705) at org.jboss.cache.invocation.CacheInvocationDelegate.put(CacheInvocatio nDelegate.java:519) at org.jboss.ha.cachemanager.CacheManagerManagedCache.put(CacheManagerMa nagedCache.java:277) at org.jboss.web.tomcat.service.session.distributedcache.impl.jbc.JBossC acheWrapper.put(JBossCacheWrapper.java:148) at org.jboss.web.tomcat.service.session.distributedcache.impl.jbc.Abstra ctJBossCacheService.storeSessionData(AbstractJBossCacheService.java:405) at org.jboss.web.tomcat.service.session.ClusteredSession.processSessionR eplication(ClusteredSession.java:1166) at org.jboss.web.tomcat.service.session.JBossCacheManager.processSession Repl(JBossCacheManager.java:1937) at org.jboss.web.tomcat.service.session.JBossCacheManager.storeSession(J BossCacheManager.java:309) at org.jboss.web.tomcat.service.session.InstantSnapshotManager.snapshot( InstantSnapshotManager.java:51) at org.jboss.web.tomcat.service.session.ClusteredSessionValve.handleRequ est(ClusteredSessionValve.java:147) at org.jboss.web.tomcat.service.session.ClusteredSessionValve.invoke(Clu steredSessionValve.java:94) at org.jboss.web.tomcat.service.session.LockingValve.invoke(LockingValve .java:62) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(Authentica torBase.java:433) at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValv e.java:92) at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.proce ss(SecurityContextEstablishmentValve.java:126) at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invok e(SecurityContextEstablishmentValve.java:70) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.j ava:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.j ava:102) at org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedC onnectionValve.java:158) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineVal ve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.jav a:330) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java :829) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.proce ss(Http11Protocol.java:598) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:44 7) at java.lang.Thread.run(Thread.java:619) 16:38:35,789 ERROR [CommandAwareRpcDispatcher] java.io.NotSerializableException: org.jboss.seam.util.AnnotatedBeanProperty 16:38:35,789 WARN [/a12] Failed to replicate session YwBL69cG-zdm0m5CvzNj3Q__ java.lang.RuntimeException: Failure to marshal argument(s) at org.jboss.cache.marshall.CommandAwareRpcDispatcher$ReplicationTask.ca ll(CommandAwareRpcDispatcher.java:374) at org.jboss.cache.marshall.CommandAwareRpcDispatcher$ReplicationTask.ca ll(CommandAwareRpcDispatcher.java:341) at 
org.jboss.cache.util.concurrent.WithinThreadExecutor.submit(WithinThr eadExecutor.java:82) at org.jboss.cache.marshall.CommandAwareRpcDispatcher.invokeRemoteComman ds(CommandAwareRpcDispatcher.java:206) at org.jboss.cache.RPCManagerImpl.callRemoteMethods(RPCManagerImpl.java: 748) at org.jboss.cache.RPCManagerImpl.callRemoteMethods(RPCManagerImpl.java: 716) at org.jboss.cache.RPCManagerImpl.callRemoteMethods(RPCManagerImpl.java: 721) at org.jboss.cache.interceptors.BaseRpcInterceptor.replicateCall(BaseRpc Interceptor.java:161) at org.jboss.cache.interceptors.BaseRpcInterceptor.replicateCall(BaseRpc Interceptor.java:135) at org.jboss.cache.interceptors.BaseRpcInterceptor.replicateCall(BaseRpc Interceptor.java:107) at org.jboss.cache.interceptors.ReplicationInterceptor.handleCrudMethod( ReplicationInterceptor.java:160) at org.jboss.cache.interceptors.ReplicationInterceptor.visitPutDataMapCo mmand(ReplicationInterceptor.java:113) at org.jboss.cache.commands.write.PutDataMapCommand.acceptVisitor(PutDat aMapCommand.java:104) at org.jboss.cache.interceptors.base.CommandInterceptor.invokeNextInterc eptor(CommandInterceptor.java:116) at org.jboss.cache.interceptors.base.CommandInterceptor.handleDefault(Co mmandInterceptor.java:131) at org.jboss.cache.commands.AbstractVisitor.visitPutDataMapCommand(Abstr actVisitor.java:60) at org.jboss.cache.commands.write.PutDataMapCommand.acceptVisitor(PutDat aMapCommand.java:104) at org.jboss.cache.interceptors.base.CommandInterceptor.invokeNextInterc eptor(CommandInterceptor.java:116) at org.jboss.cache.interceptors.TxInterceptor.attachGtxAndPassUpChain(Tx Interceptor.java:301) at org.jboss.cache.interceptors.TxInterceptor.handleDefault(TxIntercepto r.java:283) at org.jboss.cache.commands.AbstractVisitor.visitPutDataMapCommand(Abstr actVisitor.java:60) at org.jboss.cache.commands.write.PutDataMapCommand.acceptVisitor(PutDat aMapCommand.java:104) at org.jboss.cache.interceptors.base.CommandInterceptor.invokeNextInterc eptor(CommandInterceptor.java:116) at org.jboss.cache.interceptors.CacheMgmtInterceptor.visitPutDataMapComm and(CacheMgmtInterceptor.java:97) at org.jboss.cache.commands.write.PutDataMapCommand.acceptVisitor(PutDat aMapCommand.java:104) at org.jboss.cache.interceptors.base.CommandInterceptor.invokeNextInterc eptor(CommandInterceptor.java:116) at org.jboss.cache.interceptors.InvocationContextInterceptor.handleAll(I nvocationContextInterceptor.java:178) at org.jboss.cache.interceptors.InvocationContextInterceptor.visitPutDat aMapCommand(InvocationContextInterceptor.java:64) at org.jboss.cache.commands.write.PutDataMapCommand.acceptVisitor(PutDat aMapCommand.java:104) at org.jboss.cache.interceptors.InterceptorChain.invoke(InterceptorChain .java:287) at org.jboss.cache.invocation.CacheInvocationDelegate.invokePut(CacheInv ocationDelegate.java:705) at org.jboss.cache.invocation.CacheInvocationDelegate.put(CacheInvocatio nDelegate.java:519) at org.jboss.ha.cachemanager.CacheManagerManagedCache.put(CacheManagerMa nagedCache.java:277) at org.jboss.web.tomcat.service.session.distributedcache.impl.jbc.JBossC acheWrapper.put(JBossCacheWrapper.java:148) at org.jboss.web.tomcat.service.session.distributedcache.impl.jbc.Abstra ctJBossCacheService.storeSessionData(AbstractJBossCacheService.java:405) at org.jboss.web.tomcat.service.session.ClusteredSession.processSessionR eplication(ClusteredSession.java:1166) at org.jboss.web.tomcat.service.session.JBossCacheManager.processSession Repl(JBossCacheManager.java:1937) at 
org.jboss.web.tomcat.service.session.JBossCacheManager.storeSession(J BossCacheManager.java:309) at org.jboss.web.tomcat.service.session.InstantSnapshotManager.snapshot( InstantSnapshotManager.java:51) at org.jboss.web.tomcat.service.session.ClusteredSessionValve.handleRequ est(ClusteredSessionValve.java:147) at org.jboss.web.tomcat.service.session.ClusteredSessionValve.invoke(Clu steredSessionValve.java:94) at org.jboss.web.tomcat.service.session.LockingValve.invoke(LockingValve .java:62) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(Authentica torBase.java:433) at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValv e.java:92) at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.proce ss(SecurityContextEstablishmentValve.java:126) at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invok e(SecurityContextEstablishmentValve.java:70) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.j ava:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.j ava:102) at org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedC onnectionValve.java:158) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineVal ve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.jav a:330) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java :829) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.proce ss(Http11Protocol.java:598) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:44 7) at java.lang.Thread.run(Thread.java:619)

    Read the article

  • Python Django MySQLdb setup problem: setup.py doesn't build due to incorrect location of mysql

    - by 108860375137931889948
    I'm trying to install MySQLdb for Python, but when I run the setup this is the error I get. I know why it's giving all the missing-file messages, but I don't know where the bold-marked location (the MySQL include path) is configured. Please help.

        gaurav-toshniwals-macbook-7:MySQL-python-1.2.3c1 gauravtoshniwal$ python setup.py build
        running build
        running build_py
        copying MySQLdb/release.py -> build/lib.macosx-10.3-fat-2.6/MySQLdb
        running build_ext
        building '_mysql' extension
        gcc-4.0 -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -O3 -Dversion_info=(1,2,3,'gamma',1) -D_version_=1.2.3c1 -I/Applications/MAMP/Library/include/mysql -I/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -c _mysql.c -o build/temp.macosx-10.3-fat-2.6/_mysql.o
        _mysql.c:36:23: error: my_config.h: No such file or directory
        _mysql.c:38:19: error: mysql.h: No such file or directory
        _mysql.c:39:26: error: mysqld_error.h: No such file or directory
        _mysql.c:40:20: error: mysql.h: No such file or directory
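
    MySQLdb's build derives its include and library paths from the site.cfg file in the source tarball, typically via the mysql_config entry, so that is where the location gets changed. A hedged sketch (the path below is only a placeholder; it must point at a mysql_config belonging to a MySQL installation that actually ships the development headers, which MAMP's bundled MySQL often does not):

        # site.cfg, in the MySQL-python-1.2.3c1 source directory
        mysql_config = /usr/local/mysql/bin/mysql_config

    After editing site.cfg, rerun python setup.py build.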

    Read the article

  • Sharing transactions between web applications, which run in the same cluster

    - by pihentagy
    We (will) have the following architecture:

    - Base.war will be a self-contained Spring/Hibernate application
    - All applications will run under GlassFish, and may be clustered
    - E1.war will sit on top of Base.war, extending its functionality
    - There could be further extensions (E2.war, E3.war, …) sitting on top of Base.war
    - Any of the wars could start a transaction, and transactions could span wars
    - Without shutting down Base.war, or any other Ex.war, it should be possible to upgrade an Ey.war

    Is there a solution for this in a Spring/Hibernate/GlassFish environment?

    Read the article

  • grails prod run-app connects to mysql but grails prod run-war doesn't

    - by damian
    Hi, I have an unexpected problem. I created a war with grails war and then deployed it in Tomcat. To my surprise the CRUD works fine, but I don't know what persistence it is using. So I did this test: compare grails prod run-app with grails prod run-war. The first works fine and does connect to the MySQL database; the other doesn't. This is my DataSource.groovy:

        dataSource {
            pooled = true
            driverClassName = "com.mysql.jdbc.Driver"
            username = "grails"
            password = "mysqlgrails"
        }
        hibernate {
            cache.use_second_level_cache=false
            cache.use_query_cache=false
            cache.provider_class='net.sf.ehcache.hibernate.EhCacheProvider'
        }
        // environment specific settings
        environments {
            development {
                dataSource {
                    dbCreate = "update" // one of 'create', 'create-drop','update'
                    url = "jdbc:mysql://some-key.amazonaws.com/MyDB"
                }
            }
            test {
                dataSource {
                    dbCreate = "update" // one of 'create', 'create-drop','update'
                    url = "jdbc:mysql://some-key.amazonaws.com/MyDB"
                }
            }
            production {
                dataSource {
                    dbCreate = "update"
                    url = "jdbc:mysql://some-key.amazonaws.com/MyDB"
                }
            }
        }

    I also extracted the war to see whether I could find some data source configuration file, without success. More info: Grails version 1.2.1, JVM version 1.6.0_17. I also think this question is similar to another one, but that one doesn't have an answer.

    Read the article
