Search Results



  • Error 2020: Got packet bigger than 'max_allowed_packet' bytes when dumping table

    - by Imagineer
    I'm getting the above-mentioned error when backing up with ZRM, which uses mysqldump for the backup:

        mysqldump --opt --extended-insert --single-transaction --create-options \
            --default-character-set=utf8 --user=" " -p --all-databases \
            "/nfs/backup/mysql01/dailyrun/20091216043001/backup.sql"

        mysqldump: Error 2020: Got packet bigger than 'max_allowed_packet' bytes
        when dumping table TICKET_ATTACHMENT at row: 2286

    I have increased 'max_allowed_packet' to 1G in /etc/my.cnf, which is the server setting, and for the client side I set it by running:

        mysql -u -p --max_allowed_packet=1G

    I have verified that the client and server values are the same. This checks the client-side value, per this forum posting http://forums.mysql.com/read.php?35,75794,261640:

        mysql> SELECT @@MAX_ALLOWED_PACKET;
        +----------------------+
        | @@MAX_ALLOWED_PACKET |
        +----------------------+
        |           1073741824 |
        +----------------------+
        1 row in set (0.00 sec)

    And this checks the server-side value:

        mysql> SHOW VARIABLES;
        | max_allowed_packet | 1073741824 |

    I have run out of ideas, and have tried searching Experts Exchange and Google for solutions, but so far nothing has worked. Reference: http://dev.mysql.com/doc/refman/5.1/en/packet-too-large.html. Anyone please advise, thank you.
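
    One detail worth checking (a hedged suggestion, not something the poster confirmed): mysqldump does not read the interactive client's session setting — it reads its own [mysqldump] option group in my.cnf, and it also accepts the option directly on its command line. A minimal sketch:

        # /etc/my.cnf — illustrative excerpt; the [mysqldump] group applies only to mysqldump
        [mysqld]
        max_allowed_packet = 1G

        [mysqldump]
        max_allowed_packet = 1G

    Alternatively, passing --max_allowed_packet=1G on the mysqldump invocation itself (for example via the ZRM configuration) should have the same effect.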


  • NTFS-3G is only mounting external drives as read-only

    - by Phanto
    I'm currently running RHEL 5.5, and I installed the ntfs-3g utility from http://www.tuxera.com/community/ntfs-3g-download/. I have also followed their instructions for auto-mounting NTFS USB drives: http://www.tuxera.com/community/ntfs-3g-faq/#plugandplay. The problem I'm experiencing is that ntfs-3g automatically mounts the volumes as root, so to get write support I have to navigate to the mounted device as root and perform every write with elevated privileges. Is there a way to mount USB NTFS volumes automatically without needing to sudo every write command? Thanks!
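
    A common way to make an NTFS volume writable by a normal user (a sketch, assuming the drive appears as /dev/sdb1 and your account's uid/gid is 1000 — check with `id`) is to hand ownership to that user via ntfs-3g's mount options, for example in /etc/fstab:

        # /etc/fstab — hypothetical entry; adjust device, mount point, and uid/gid
        /dev/sdb1  /media/usbntfs  ntfs-3g  uid=1000,gid=1000,umask=022  0  0

    The same uid/gid/umask options can be passed with -o on a manual mount.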


  • Puppet master fails to run under nginx+passenger configuration as rack app, works when run as system service

    - by Anadi Misra
    I get this error when I start nginx with the Passenger module and Puppet master configured to run through Rack:

        [anadi@bangda ~]# tail -f /var/log/nginx/error.log
        [ pid=19741 thr=23597654217140 file=utils.rb:176 time=2012-09-17 12:52:43.307 ]:
        *** Exception LoadError in PhusionPassenger::Rack::ApplicationSpawner
        (no such file to load -- puppet/application/master)
        (process 19741, thread #<Thread:0x2aec83982368>):
        from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
        from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
        from config.ru:13
        from /usr/local/lib/ruby/gems/1.8/gems/rack-1.4.1/lib/rack/builder.rb:51:in `instance_eval'
        from /usr/local/lib/ruby/gems/1.8/gems/rack-1.4.1/lib/rack/builder.rb:51:in `initialize'
        from config.ru:1:in `new'
        from config.ru:1

    Here is the config.ru:

        [anadi@bangda ~]# cat /etc/puppet/rack/config.ru
        # a config.ru, for use with every rack-compatible webserver.
        # SSL needs to be handled outside this, though.
        # if puppet is not in your RUBYLIB:
        #$:.unshift('/usr/share/puppet/lib')
        $0 = "master"
        # if you want debugging:
        # ARGV << "--debug"
        ARGV << "--rack"
        require 'puppet/application/master'
        # we're usually running inside a Rack::Builder.new {} block,
        # therefore we need to call run *here*.
        run Puppet::Application[:master].run

    And the nginx configuration for the Puppet master is as follows:

        [anadi@bangda ~]# cat /etc/nginx/conf.d/puppet-master.conf
        server {
            listen 8140 ssl;
            server_name bangda.mycompany.com;

            passenger_enabled on;
            passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn;
            passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify;

            access_log /var/log/nginx/puppet/master.access.log;
            error_log /var/log/nginx/puppet/master.error.log;

            root /etc/puppet/rack/public;

            ssl_certificate /var/lib/puppet/ssl/certs/bangda.mycompany.com.pem;
            ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangda.mycompany.com.pem;
            ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem;
            ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem;
            ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA;
            ssl_prefer_server_ciphers on;
            ssl_verify_client optional;
            ssl_verify_depth 1;
            ssl_session_cache shared:SSL:128m;
            ssl_session_timeout 5m;
        }

    However, when I run Puppet through the usual puppetmasterd daemon, it works perfectly with no errors. Somehow the nginx + Passenger + Rack setup fails to initialize while the native puppetmasterd daemon works. Is there any configuration I am missing?
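
    The LoadError says that Ruby under Passenger cannot find puppet/application/master on its load path, while puppetmasterd (which sets up its own paths) can. The commented-out hint in the config.ru above points at one possible fix — a sketch, assuming Puppet's libraries actually live in /usr/share/puppet/lib on this box (verify with `gem which puppet` or `locate application/master.rb`):

        # /etc/puppet/rack/config.ru — uncomment the unshift and point it at the real lib dir
        $:.unshift('/usr/share/puppet/lib')
        $0 = "master"
        ARGV << "--rack"
        require 'puppet/application/master'
        run Puppet::Application[:master].run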


  • Varnish / Apache redirecting to backend port 8080

    - by deko
    I'm running Varnish 2 with an Apache backend on port 8080 on the same machine. Everything is working fine except one problem: sometimes Apache(?) redirects to the backend port :8080, especially when I'm using .htaccess. Users see the 8080 port in the URL, and Google is crawling my site on the backend port as well, which is not desirable. I want Apache on 8080 to be accessible only to Varnish on localhost, and never to redirect to or display the backend port. What would be a quick way to prevent users being directed to :8080 and to deny search engines crawling the backend? Here is an example .htaccess line:

        redirect /promotion /register.php?promotion=june

    which causes www.domain.com/promotion to redirect to www.domain.com:8080/register.php?promotion=june
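
    Two hedged adjustments that usually cover this (sketches — directive placement depends on the actual config): bind Apache to the loopback interface so the backend port is unreachable from outside, and make the redirect target absolute so Apache never constructs a self-referential URL containing :8080.

        # httpd.conf — listen on localhost only
        Listen 127.0.0.1:8080

        # .htaccess — an absolute target keeps the backend port out of the Location header
        Redirect /promotion http://www.domain.com/register.php?promotion=june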


  • Homebrew on Mac OS X Lion

    - by user975352
    I'm a beginner on Mac OS X Lion (10.7.2). I don't know the Mac well, only Ubuntu. I installed Homebrew on my Mac and ran the commands below.

        $ brew install git

    and then

        $ brew update
        error: Could not resolve host: github.com; nodename nor servname provided, or not known
        while accessing https://github.com/mxcl/homebrew.git/info/refs
        fatal: HTTP request failed
        Error: Failed while executing git pull origin refs/heads/master:refs/remotes/origin/master

    What is happening on my Mac? How can I resolve this? Would you help me?
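
    The failure is DNS resolution rather than Homebrew itself — git simply cannot resolve github.com. Some hedged first checks (assuming no proxy is required on your network):

        # does the name resolve at all?
        $ ping -c 1 github.com

        # what do the configured resolvers return?
        $ dig github.com +short

        # which resolvers is OS X actually using?
        $ scutil --dns | head

    If these fail too, the culprit is usually the machine's DNS or proxy settings (System Preferences > Network) rather than anything brew-specific.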


  • eMail with Conflicting Headers not blocked in MS365

    - by John Meredith Langstaff
    On occasion, a company receives eMail with two header fields ("Received" and "From") containing data that contradict each other drastically. Should they not expect their anti-spam system to flag or block items with contradictions in these fields? For example, they received an eMail which contained [almost exactly] these two headers:

        Received: from [107.52.51.26] by web315204.mail.ne1.yahoo.com via HTTP;
            Mon, 28 Oct 2013 04:28:04 PDT
        From: Barry Smith [email protected]

    Obviously, eMail from an @att.net address isn't coming from a server on the domain yahoo.com, and Yahoo isn't forwarding AT&T's eMail. There were no other headers indicating that the item was sent "OnBehalfOf", "Forwarded-by", "By_Proxy" or any other such. Should I write a utility to scan incoming eMail for such conflicts, or look more closely at their spam filtering to block this kind of eMail? Their eMail system is Hosted Exchange on MS-365. My central question is: where specifically do I look in MS-365 to get this type of conflicted eMail blocked?
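
    For background (a hedged aside, not MS-365-specific advice): mismatches between the From domain and the sending server are normally caught by sender-authentication checks — SPF, DKIM and DMARC — rather than by hand-written header-comparison rules. SPF, for instance, works because a domain publishes its legitimate outbound servers in a DNS TXT record, which receiving filters evaluate automatically. An illustrative record (hypothetical values, not att.net's actual policy):

        ; zone file excerpt — v=spf1 lists permitted senders; -all fails everyone else
        example.net.  IN  TXT  "v=spf1 ip4:192.0.2.0/24 include:mail.example.net -all"

    So the more promising route is probably verifying that SPF/DKIM/DMARC filtering is enabled and enforced in the hosted Exchange tenant, rather than writing a scanning utility.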


  • MS Exchange -- running code against outbound email

    - by user32680
    I would like to know if, using MS Exchange, there is a way to run code against outbound emails. The code would need to trigger on emails sent to a specific domain, connect to a database, check for an email address related to the one the mail was sent to, and carbon-copy the message to that related address. What I'm trying to do: when Jack@example.com gets an email, his auditor George@example.com gets CC'd. Jack is in an MSSQL DB table, related to his auditor's email address. Are there any samples of things like this being done?
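
    The usual extensibility point for this in Exchange is a custom transport agent (or an SMTP event sink on older versions), which can inspect each message in the transport pipeline and add recipients. The database side could be as simple as this hedged sketch (table and column names are hypothetical):

        -- look up the auditor to CC for a given mailbox
        SELECT AuditorEmail
        FROM dbo.EmployeeAuditors
        WHERE EmployeeEmail = 'Jack@example.com';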


  • Bypass spam check for Auth users in postfix

    - by magiza83
    I would like to know if there is any option to "FILTER" authenticated users in Postfix. Let me explain better: I have the amavis and dspam services between postfix(25) and postfix(10026), but I would like to skip those checks when the user is authenticated.

        postfix(25) -> policyd(10031) -> amavis(10024) -> postfix(10025) -> dspam(dspam.sock) -> postfix(10026) -> cyrus
            |                                                                                      /|\
            |___________________________________ auth users _________________________________________|

    My configuration is:

        # main.cf
        ...
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_security_options = noanonymous
        broken_sasl_auth_clients = yes
        smtpd_sasl_path = smtpd
        smtpd_recipient_restrictions =
            permit_sasl_authenticated,
            reject_unauth_destination,
            check_policy_service inet:127.0.0.1:10040,
            reject_invalid_hostname,
            reject_rbl_client multi.uribl.com,
            reject_rbl_client dsn.rfc-ignorant.org,
            reject_rbl_client dul.dnsbl.sorbs.net,
            reject_rbl_client list.dsbl.org,
            reject_rbl_client sbl-xbl.spamhaus.org,
            reject_rbl_client bl.spamcop.net,
            reject_rbl_client dnsbl.sorbs.net,
            reject_rbl_client cbl.abuseat.org,
            reject_rbl_client ix.dnsbl.manitu.net,
            reject_rbl_client combined.rbl.msrbl.net,
            reject_rbl_client rabl.nuclearelephant.com,
            check_policy_service inet:127.0.0.1:10031,
            permit_mynetworks,
            reject
        ...

    I would like something like "FILTER smtp:localhost:10026" when the client is authenticated, because my current configuration only bypasses policyd, not amavis and dspam. Thanks.
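
    One well-trodden pattern for exactly this (a sketch — untested against this particular chain) is to let authenticated clients submit on port 587 and override the content filter for that service only, so port 25 keeps the full policyd/amavis/dspam chain while submission skips it:

        # master.cf — hypothetical submission entry
        submission inet n       -       n       -       -       smtpd
          -o smtpd_sasl_auth_enable=yes
          -o content_filter=
          -o smtpd_client_restrictions=permit_sasl_authenticated,reject

    The empty content_filter= override is what disables the amavis/dspam hand-off for that listener.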


  • nginx dynamic servername with regular expression doesn't work for co.uk

    - by redn0x
    I'm trying to set up an nginx server which dynamically loads content from a folder per domain. To do this I'm using regular expressions in the server name, like so:

        server_name ((?<subdomain>.+)\.)?(?<domain>.+)\.(?<tld>.*);

    This creates three variables for nginx to use later on. For example, with the URL test.foo.example.com it evaluates to:

        $subdomain = test.foo
        $domain    = example
        $tld       = com

    The problem arises when the co.uk top-level domain is used. For the URL test.foo.example.co.uk it evaluates to:

        $subdomain = test.foo.example
        $domain    = co
        $tld       = uk

    How can I edit the regular expression so that it will also work for co.uk?
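
    One hedged way to handle this is to list the two-part public suffixes you care about in an alternation, make the subdomain capture lazy, and keep the domain capture dot-free, so backtracking settles on the right split (a sketch — note the leading ~ that marks a regex server_name; extend the suffix list as needed):

        server_name ~^((?<subdomain>.+?)\.)?(?<domain>[^.]+)\.(?<tld>(?:co|org|ac)\.uk|[^.]+)$;

    With this, test.foo.example.co.uk yields $subdomain = test.foo, $domain = example, $tld = co.uk, while test.foo.example.com still splits as before.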


  • backing up a virtual machine

    - by ErocM
    I asked the support team at justcloud.com whether a VMware VM can be backed up while in use. I can back up the VM once it is shut down, but I was wondering if their "shadow copy" would back it up while running. This was their response:

        Thank you for your email. I am really very sorry but virtual machines can't
        be backed up for a simple reason that they are virtual, they have virtual
        memory, not physical memory. Please let me know if there is anything else
        I can help with.
        Kind Regards, Barry James
        User Experience Team www.justcloud.com

    These are physical files, so I'm not sure I even understand the response. Am I wrong in thinking that a VM can be backed up while in use? Does this response even make sense? I need a cheap way to back up the VM off the server in case it goes down. Any suggestions?
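
    For what it's worth, live backups of VMware guests are routine: the usual trick is to take a snapshot so the base disk files stop changing, copy them elsewhere, then remove the snapshot. A hedged sketch using the vmrun tool (paths, VM name and the -T product flag are placeholders — adjust for your VMware product):

        # quiesce via snapshot, copy the directory, release the snapshot
        vmrun -T ws snapshot /vms/myvm/myvm.vmx pre-backup
        rsync -a /vms/myvm/ /nfs/backup/myvm/
        vmrun -T ws deleteSnapshot /vms/myvm/myvm.vmx pre-backup

    Whether the result is crash-consistent or application-consistent depends on guest tooling, but it is certainly not impossible "because the memory is virtual".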


  • Configuring Apache reverse proxy

    - by Martin
    I have a load-balancer server and edges. I am trying to configure a reverse proxy in order to hide the backend servers PL1, PL2 and PL3. They are not located in the same subnet; they are in different locations:

        Lb1 -> PL1
            -> PL2
            -> PL3

    I tried to configure an Apache reverse proxy, but it is not sending requests to PL1, PL2, PL3. The reverse proxy only worked when I configured Apache to send requests to a local server on another port.

        ProxyRequests Off
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyPass /PL1 http://PL1server.com/
        ProxyPassReverse /PL1 http://PL1server.com/

    The above configuration did not work. Could you help me solve the issue? Or would another proxy, like Squid or SOCKS5, solve it? Does a reverse proxy fail if we use an IP address or a domain URL in ProxyPass and ProxyPassReverse?
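
    Two hedged checks before switching proxies (the Order/Allow syntax suggests Apache 2.2): proxying to an http:// backend needs both mod_proxy and mod_proxy_http loaded, and the backend must be reachable from the load balancer itself, independent of Apache.

        # httpd.conf — both modules are required for http:// backends
        LoadModule proxy_module modules/mod_proxy.so
        LoadModule proxy_http_module modules/mod_proxy_http.so

        # from the LB shell, verify reachability without Apache in the path:
        #   curl -v http://PL1server.com/

    ProxyPass works with either an IP address or a hostname, so that by itself should not be the failure.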


  • iTunes Title Bar is Equal to the Location of the Library file?

    - by Urda
    I have never been able to get a clear answer on why the latest version of iTunes does this. I have my entire iTunes library located in C:\itunes\ and the library data files inside C:\itunes\!library_info for backup purposes. However, when version 9 of iTunes came out, the title bar went from saying "iTunes" to "!library_info". Is there any way to get around this without moving my data files? It's an annoying "feature", if that is what it is. Again, Apple support and forums were of no help to me. Anyone have insight on this? System info: Windows Vista x86 Ultimate, latest updates; iTunes version 9.0.3.15. Screenshot: http://farm5.static.flickr.com/4072/4414557544_d0b25eb64c_o.jpg (Flickr page: http://www.flickr.com/photos/urda/4414557544/)


  • 403 Forbidden Error when trying to view localhost on Apache

    - by misbehavens
    I think my Apache must be all screwed up. I don't know if it ever worked. I just upgraded to Snow Leopard, and the first step in this tutorial is to start Apache and check that it's working by opening http://localhost. It starts fine, but when I go to localhost I get a 403 Forbidden error. I don't know where to start figuring out how to fix it, so I wonder if a fresh install of Apache would do the trick. What do you think?

    Update: I found some error logs in /private/var/log/apache2/. This was in one of them; not sure what it means:

        [Tue Nov 10 17:53:08 2009] [notice] caught SIGTERM, shutting down
        [Tue Nov 10 21:49:17 2009] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
        Warning: DocumentRoot [/usr/docs/dummy-host.example.com] does not exist
        Warning: DocumentRoot [/usr/docs/dummy-host2.example.com] does not exist
        httpd: Could not reliably determine the server's fully qualified domain name, using Andrews-Mac-Pro.local for ServerName
        mod_bonjour: Skipping user 'andrew' - cannot read index file '/Users/andrew/Sites/index.html'.
        [Tue Nov 10 21:49:19 2009] [notice] Digest: generating secret for digest authentication ...
        [Tue Nov 10 21:49:19 2009] [notice] Digest: done
        [Tue Nov 10 21:49:19 2009] [notice] Apache/2.2.11 (Unix) mod_ssl/2.2.11 OpenSSL/0.9.8k DAV/2 PHP/5.3.0 configured -- resuming normal operations

    Update: I also found something in the dummy-host.example.com-error_log file. (I didn't set up these dummy-host entries, by the way — is that the default configuration?)

        [Tue Nov 10 21:59:57 2009] [error] [client ::1] client denied by server configuration: /usr/docs

    Update: Woohoo! I found the file with the virtual host definitions: /etc/apache2/extra/httpd-vhosts.conf. It had those two dummy virtual host entries in it. I removed them, added a localhost virtual host (not sure if that's necessary, but since it wasn't working before I did it anyway), restarted Apache, and it seems to work. So whenever I want to add a virtual host, do I only need to add it to this file? Or is there a hosts file somewhere, like there is on Linux?

    Update: Yes, there is an /etc/hosts file that needs to be changed too. Add the virtual host name to that file.
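
    For reference, a minimal localhost vhost of the sort described (a sketch for Apache 2.2 on Snow Leopard — paths are illustrative; the stock docroot is /Library/WebServer/Documents):

        # /etc/apache2/extra/httpd-vhosts.conf
        NameVirtualHost *:80    # usually already present in this file

        <VirtualHost *:80>
            ServerName localhost
            DocumentRoot "/Library/WebServer/Documents"
            <Directory "/Library/WebServer/Documents">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    The log line "client denied by server configuration: /usr/docs" fits the diagnosis: the dummy vhosts pointed at directories that don't exist and aren't permitted.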


  • 255 Character limit on VLOOKUP

    - by zod
    Using Excel 2003, the formula

        =VLOOKUP(D1, A1:B135, 2)

    fails if the length of D1 exceeds 255 characters (i.e. the list has some text longer than 255 characters, D1 has the same text value, and VLOOKUP returns #VALUE!). MATCH seems to suffer from the same character limit. I cannot find any official confirmation of these limits, for example here: http://office.microsoft.com/en-us/excel-help/vlookup-HP005209335.aspx or here: http://office.microsoft.com/en-us/excel-help/excel-specifications-and-limits-HP005199291.aspx?CTT=3. I know that Excel has a 255-character limit on the length of text literals used in formulae, but that limit shouldn't apply here (I am not using string literals in the formula, but referencing another cell). Can somebody confirm that these limits exist (it is always possible I am doing something else wrong)? More importantly, does anyone know of a way around them? Thanks
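
    A workaround often suggested for long lookup values (a hedged sketch — in Excel 2003 this must be entered as an array formula with Ctrl+Shift+Enter) replaces MATCH's comparison with EXACT, which does not share the 255-character restriction:

        =INDEX(B1:B135, MATCH(TRUE, EXACT(D1, A1:A135), 0))

    EXACT compares D1 against each cell in A1:A135, MATCH finds the first TRUE, and INDEX returns the paired value from column B.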


  • Best practice for making code portable for domains, subdomains or directories

    - by Duopixel
    I recently coded something where it wasn't known whether the end code would reside in a subdomain (http://user.domain.com/) or in a subdirectory (http://domain.com/user), and I was lost as to the best practice for these unknown scenarios. I can think of a couple:

    - Use absolute paths (/css/styles.css) and mod_rewrite if it ends up being /user
    - Have a settings file and declare a variable with the path (<?php echo $domain . "/css/styles.css" ?>)
    - Use relative paths (../css/styles.css)

    What is the best way to handle this?
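
    A sketch of the settings-file approach (hypothetical names — the point is that only one constant changes between deployments):

        <?php
        // config.php — '' when served from a (sub)domain root, '/user' in a subdirectory
        define('BASE_URL', '/user');
        ?>
        <!-- in a template -->
        <link rel="stylesheet" href="<?php echo BASE_URL; ?>/css/styles.css">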


  • Postfix tutorial inconsistency

    - by Desmond Hume
    I'm following this tutorial to set up a Postfix/Dovecot mail server with Postfix Admin as a web front end. As regards the directory structure for virtual mail users, the author of the tutorial writes:

        Virtual mail users are those that do not exist as Unix system users. They
        thus don't use the standard Unix methods of authentication or mail delivery
        and don't have home directories. That is how we are managing things here:
        mail users are defined in the database created by Postfix Admin rather than
        existing as system users. Mail will be kept in subfolders per domain and
        account under /var/vmail - e.g. me@example.com will have a mail directory
        of /var/vmail/example.com/me.

    But when he gives instructions for configuring Postfix Admin, he suggests this should be in Postfix Admin's config.inc.php:

        // Mailboxes
        // If you want to store the mailboxes per domain set this to 'YES'.
        // Examples:
        //   YES: /usr/local/virtual/domain.tld/[email protected]
        //   NO:  /usr/local/virtual/[email protected]
        $CONF['domain_path'] = 'NO';

    Is there an inconsistency?
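
    It does read like one: the layout described earlier (/var/vmail/example.com/me) is the per-domain scheme, which the config comment itself maps to the 'YES' value. Presumably, to match the tutorial's own directory structure, the setting would need to be:

        // config.inc.php — matches the /var/vmail/<domain>/<user> layout described above
        $CONF['domain_path'] = 'YES';

    (Likely in combination with $CONF['domain_in_mailbox'] = 'NO', so the folder is named after the local part alone rather than the full address — a hedged reading of Postfix Admin's options, not something the tutorial states.)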


  • Single file changed: intrusion or corruption?

    - by Michaël Witrant
    rkhunter reported a single file change on a virtual server (netstat binary). It didn't report any other warning. The change was not the result of a package upgrade (I reinstalled it and the checksum is back as it was before). I'm wondering whether this is a file corruption or an intrusion. I guess an intrusion would have changed many other files watched by rkhunter (or none if the intruder had access to rkhunter's database). I disassembled both binaries with objdump -d and stored the diff here: https://gist.github.com/3972886 The full dump diff generated with objdump -s is here : https://gist.github.com/3972937 I guess a file corruption would have changed either large blocks or single bits, not small blocks like this. Do these changes look suspicious? How could I investigate more? The system is running Debian Squeeze.
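
    On Debian, a couple of checks can help separate package drift from tampering (a sketch, assuming netstat comes from the net-tools package as it does on Squeeze; debsums may need `apt-get install debsums` first):

        # which package owns the binary?
        dpkg -S /bin/netstat

        # compare every installed file of that package against its recorded checksums
        debsums net-tools

    Whether the diff looks like bit rot or deliberate patching is easier to judge once you know that no other file in the package changed.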


  • Gmail POP3 with openssl command line: hangs while RETR-ing

    - by sabya
    I want to use openssl s_client to access Gmail's POP3S server. I am doing the following:

        $ openssl s_client -connect pop.gmail.com:995
        +OK Gpop ready for requests from <removed: ip> d11pf35377217wam.36
        USER <removed: [email protected]>
        +OK send PASS
        PASS <removed: password>
        +OK Welcome.
        LIST
        +OK 1 messages (2197 bytes)
        1 2197
        .
        STAT
        +OK 1 2197
        RETR 1
        RENEGOTIATING

    The problem is I am never able to execute the RETR command. It always hangs at "RENEGOTIATING". What am I missing?
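
    This one has a known cause: in interactive mode, openssl s_client treats an input line beginning with an uppercase R as a command to renegotiate the TLS session (and Q as quit) instead of transmitting it — which is exactly what a POP3 RETR triggers. Starting the client in quiet mode disables that key-letter handling:

        # -quiet implies -ign_eof and turns off the R/Q command keys
        openssl s_client -connect pop.gmail.com:995 -quiet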


  • Running PHP 5.2 FastCGI + Apache on CentOS 5 issue

    - by Goran
    I am trying to set up two versions of PHP on CentOS 5.9 using this tutorial: http://linuxplayer.org/2011/05/intall-multiple-version-of-php-on-one-server. I installed the default 5.4.19, then tried to set up a second PHP, 5.2.17, to run via FastCGI, following the second part of the tutorial completely. However, when I try to open http://web2.example.com it returns a 500 error. In the Apache log there are only two lines, repeating:

        [notice] mod_fcgid: call /var/www/web2/index.php with wrapper /usr/local/php52/bin/fcgiwrapper.sh
        [notice] mod_fcgid: process /var/www/web2/index.php(25250) exit(server exited), terminated by calling exit(), return code: 255

    Please note that I had to add .php at the end of the FCGIWrapper line, because Apache would not start without it:

        FCGIWrapper /usr/local/php52/bin/fcgiwrapper.sh .php

    Also note that http://web1.example.com with PHP 5.4.19 is working absolutely fine. Please help. Thank you very much in advance.
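
    Exit code 255 from the spawned process usually means the wrapper or the php-cgi binary itself is failing before FastCGI ever starts. A hedged first step is to run both by hand and watch for the real error (missing shared libraries are a common culprit for a hand-built PHP):

        # does the 5.2 binary run at all?
        /usr/local/php52/bin/php-cgi -v

        # trace the wrapper script itself
        sh -x /usr/local/php52/bin/fcgiwrapper.sh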


  • Postfix smtp error 450 (failed to add recipient)

    - by culter
    I have a Debian server with Postfix and Roundcube. After an attack we are on two blacklists, but I don't think that is the main problem: I can't send mail to any address. Trying to find the cause, I checked /var/spool/postfix/etc/resolv.conf and /etc/resolv.conf; they are the same, with this content:

        nameserver 127.0.0.1
        nameserver localhost

    In /var/log/mail.err I found:

        cyrus/imap[25452]: DBERROR: opening /var/lib/cyrus/user/m/marcel@mydomain.com.seen: cyrusdb error
        cyrus/imap[25452]: DBERROR: skiplist recovery /var/lib/cyrus/user/m/marcel@mydomain.com.seen: ADD at 1FC0 exists

    When I try to send email from Roundcube, I get the message in the title. When I send it from Opera or any other mail client, I get nothing, but the email isn't sent. Thank you for any advice.
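
    One concrete problem is visible already: resolv.conf's nameserver directive only accepts IP addresses, so the "nameserver localhost" line is ignored by the resolver — and if the first entry (127.0.0.1) isn't actually running a DNS service, outbound DNS lookups fail, which by itself produces deferrals like 450. A sketch of a sane file (the fallback address is hypothetical; use your ISP's resolver):

        # /etc/resolv.conf
        nameserver 127.0.0.1
        nameserver 8.8.8.8

    On Debian, restarting Postfix normally re-copies /etc/resolv.conf into the chrooted copy under /var/spool/postfix/etc/.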


  • How to connect a VirtualBox guest OS and the local machine

    - by Nrew
    This question is connected to an earlier question — http://superuser.com/questions/73470/virtualbox-vdi-file-to-vmware — about converting a VDI to VMDK or VMX using VMware Converter. How do I network the Windows XP guest in VirtualBox with the local computer (Windows 7)? I ask because I got an error while following this instruction:

        Give the IP address, username and password of the remote machine that
        you would like to convert and then hit next

    VMware Converter reported:

        Unable to connect the specified host

    for 10.0.2.15, which is the IP address of the XP machine inside VirtualBox. It also said that there is a network configuration problem. When I instead entered the IP address from whatismyip.com, which should be the same as the local machine's address, I didn't get the previous error but another one:

        insufficient permissions to connect to "ip address"

    What solution can you suggest for this problem?
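
    On the first error there is a likely explanation: 10.0.2.15 is the address VirtualBox hands out on its default NAT network, and it is not reachable from the host. Switching the guest's adapter to bridged mode gives XP an address on the real LAN that the host (and Converter) can reach — a sketch, with a hypothetical VM name and host adapter:

        VBoxManage modifyvm "WinXP" --nic1 bridged --bridgeadapter1 eth0

    The same change can be made in the VirtualBox GUI under the VM's Settings > Network.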


  • How to subscribe to a youtube feed from linux command line?

    - by Tim
    I want to subscribe to a YouTube channel and automatically download new videos to my Linux machine. I know I could do this with e.g. Miro, but I will not watch the videos in Miro, I want to choose the quality, and I would like to run it as a cron job. It should be able to:

    - know which feed entries are new, and not download old entries
    - resume (or at least re-download) failed/incomplete downloads from older sessions

    Are there any complete solutions for this? If not, it would be enough for me (maybe even preferable) to have a command-line RSS reader that remembers which entries it has already seen and writes the new video URLs (e.g. http://www.youtube.com/watch?v=FodYFMaI4vQ&feature=youtube_gdata from http://gdata.youtube.com/feeds/api/users/tedxtalks/uploads) into a file. I could then accomplish the rest with a bash script and youtube-dl. What programs are usable for this purpose?
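
    youtube-dl itself can cover both requirements if you feed it the channel rather than individual URLs — a hedged sketch (paths and channel are placeholders; --download-archive is what skips already-fetched entries, and --continue resumes partial files):

        #!/bin/sh
        # run from cron, e.g.:  0 4 * * * /home/tim/bin/yt-fetch.sh
        youtube-dl --download-archive "$HOME/yt/archive.txt" \
                   --continue \
                   --output "$HOME/yt/%(uploader)s/%(title)s.%(ext)s" \
                   http://www.youtube.com/user/tedxtalks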


  • In IIS, why do HTTP requests use the host header while FTP requests do not?

    - by Keeno
    So... in IIS, if you use the built-in FTP, you need to combine the FTP host header into the FTP username, e.g.:

        www.hello.com|domain/username

    So the FTP service gets its "hook" from the username. However, you can connect to the FTP site using www.hello.com:21 over the FTP port. Why, then, doesn't the FTP service work the same way as the HTTP service? IIS knows which site to serve based on the host header, after all... Thanks!


  • PHP mail arrives at Gmail, but not at local server

    - by thomas
    The PHP mail function I am using only partly works: it sends mail to Gmail easily enough, but emails routed directly to our internally hosted Exchange server are not getting through. The servers/domains are set up as follows:

    - The domains are registered with Network Solutions (www.independentsservice.com & www.isco.net)
    - Network Solutions directs all traffic to our ISP (Socket.net)
    - Socket directs mail to our local server, FTP to our local server, and HTTP to our website hosted on Chihost.com
    - Traffic to our local server goes through a WatchGuard firewall, which routes mail traffic to our locally hosted Exchange server

    Is there some reason why Exchange won't accept these emails? Thanks!
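
    A hedged first check is to confirm where the MX records actually point, from the machine that runs the PHP mail() call — delivery then depends on the ISP forwarding and the firewall, not on Exchange itself:

        dig MX independentsservice.com +short
        dig MX isco.net +short

    Comparing that answer with what the Exchange server expects (and checking its SMTP logs for the attempts) usually narrows down which hop is dropping the mail.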

