Search Results

Search found 6425 results on 257 pages for 'master man'.


  • Remove specific definitions from a variable in PHP

    - by Amit
    Hi everyone, I have a PHP mail script that validates name, email address, and phone number before sending the mail. This then means that the Name, Email address, and Phone fields are Required Fields. I want to have it so that the Name and EITHER Email or Phone are required. Such that if a name and phone are inputted, it sends the mail, or if a name and an email are inputted, it also sends the email. The way the script works right now is it has several IF statements that check for (1) name, (2) email and (3) phone. Here's an example of an if statement of the code: if ( ($email == "") ) { $errors .= $emailError; // no email address entered $email_error = true; } if ( !(preg_match($match,$email)) ) { $errors .= $invalidEmailError; // checks validity of email $email_error = true; } And here's how it sends the mail: if ( !($errors) ) { mail ($to, $subject, $message, $headers); echo "<p id='correct'>"; echo "?????? ????? ??????!"; echo "</p>"; } else { if (($email_error == true)) { $errors != $phoneError; /*echo "<p id='errors'>"; echo $errors; echo "</p>";*/ } if (($phone_error == true)) { $errors != $emailError; $errors != $invalidEmailError; /*echo "<p id='errors'>"; echo $errors; echo "</p>";*/ } echo "<p id='errors'>"; echo $errors; echo "</p>"; } This doesn't work though. Basically this is what I want to do: If no email address was entered or if it was entered incorrectly, set a variable called $email_error to be true. Then check for that variable, and if it's true, then remove the $phoneError part of the $errors variable. Man I hope I'm making some sense here. Does anyone know why this doesn't work? It reports all errors if all fields are left empty :( Thanks! Amit
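
    For reference, here is a minimal sketch of the "name plus either email or phone" rule described above. The error-message variables ($nameError, $contactError) and the phone pattern ($phoneMatch) are placeholders, not the script's real names:

        $errors = '';
        $email_ok = ($email != '') && preg_match($match, $email);        // a usable email was entered
        $phone_ok = ($phone != '') && preg_match($phoneMatch, $phone);   // a usable phone was entered

        if ($name == '') {
            $errors .= $nameError;        // name is always required
        }
        if (!$email_ok && !$phone_ok) {
            $errors .= $contactError;     // need at least one of email / phone
        }

        if ($errors == '') {
            mail($to, $subject, $message, $headers);
        } else {
            echo "<p id='errors'>" . $errors . "</p>";
        }

    Deciding the two "ok" flags up front avoids having to strip $phoneError or $emailError back out of $errors afterwards. (Also note that $errors != $phoneError in the posted code only compares the two strings; it doesn't remove anything.)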


  • NSNotifications vs delegate for multiple instances of same protocol

    - by Brent Traut
    I could use some architectural advice. I've run into the following problem a few times now and I've never found a truly elegant way to solve it. The issue, described at the highest level possible:I have a parent class that would like to act as the delegate for multiple children (all using the same protocol), but when the children call methods on the parent, the parent no longer knows which child is making the call. I would like to use loose coupling (delegates/protocols or notifications) rather than direct calls. I don't need multiple handlers, so notifications seem like they might be overkill. To illustrate the problem, let me try a super-simplified example: I start with a parent view controller (and corresponding view). I create three child views and insert each of them into the parent view. I would like the parent view controller to be notified whenever the user touches one of the children. There are a few options to notify the parent: Define a protocol. The parent implements the protocol and sets itself as the delegate to each of the children. When the user touches a child view, its view controller calls its delegate (the parent). In this case, the parent is notified that a view is touched, but it doesn't know which one. Not good enough. Same as #1, but define the methods in the protocol to also pass some sort of identifier. When the child tells its delegate that it was touched, it also passes a pointer to itself. This way, the parent know exactly which view was touched. It just seems really strange for an object to pass a reference to itself. Use NSNotifications. The parent defines a separate method for each of the three children and then subscribes to the "viewWasTouched" notification for each of the three children as the notification sender. The children don't need to attach themselves to the user dictionary, but they do need to send the notification with a pointer to themselves as the scope. Same as #4, but rather than using separate methods, the parent could just use one with a switch case or other branching along with the notification's sender to determine which path to take. Create multiple man-in-the-middle classes that act as the delegates to the child views and then call methods on the parent either with a pointer to the child or with some other differentiating factor. This approach doesn't seem scalable. Are any of these approaches considered best practice? I can't say for sure, but it feels like I'm missing something more obvious/elegant.
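
    For what it's worth, option 2 is essentially the pattern Cocoa itself uses: delegate methods take the sending object as their first argument (tableView:didSelectRowAtIndexPath: and friends). A rough Objective-C sketch with made-up names:

        #import <UIKit/UIKit.h>

        @class ChildView;

        @protocol ChildViewDelegate <NSObject>
        - (void)childViewWasTouched:(ChildView *)child;   // the child passes itself to the delegate
        @end

        @interface ChildView : UIView
        @property (nonatomic, assign) id<ChildViewDelegate> delegate;   // non-retained, pre-ARC style
        @end

        @implementation ChildView
        @synthesize delegate;

        - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
            [super touchesEnded:touches withEvent:event];
            [self.delegate childViewWasTouched:self];     // "here is who was touched"
        }
        @end

    The parent just compares the incoming pointer against the children it created (or reads a tag property) to decide what to do, so passing self isn't as strange as it feels.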


  • How to get all captures of subgroup matches with preg_match_all()?

    - by hakre
    Update/Note: I think what I'm probably looking for is to get the captures of a group in PHP. Referenced: PCRE regular expressions using named pattern subroutines. (Read carefully:) I have a string that contains a variable number of segments (simplified): $subject = 'AA BB DD '; // could be 'AA BB DD CC EE ' as well I would like now to match the segments and return them via the matches array: $pattern = '/^(([a-z]+) )+$/i'; $result = preg_match_all($pattern, $subject, $matches); This will only return the last match for the capture group 2: DD. Is there a way that I can retrieve all subpattern captures (AA, BB, DD) with one regex execution? Isn't preg_match_all suitable for this? This question is a generalization. Both the $subject and $pattern are simplified. Naturally with such the general list of AA, BB, .. is much more easy to extract with other functions (e.g. explode) or with a variation of the $pattern. But I'm specifically asking how to return all of the subgroup matches with the preg_...-family of functions. For a real life case imagine you have multiple (nested) level of a variant amount of subpattern matches. Example This is an example in pseudo code to describe a bit of the background. Imagine the following: Regular definitions of tokens: CHARS := [a-z]+ PUNCT := [.,!?] WS := [ ] $subject get's tokenized based on these. The tokenization is stored inside an array of tokens (type, offset, ...). That array is then transformed into a string, containing one character per token: CHARS -> "c" PUNCT -> "p" WS -> "s" So that it's now possible to run regular expressions based on tokens (and not character classes etc.) on the token stream string index. E.g. regex: (cs)?cp to express one or more group of chars followed by a punctuation. As I now can express self-defined tokens as regex, the next step was to build the grammar. This is only an example, this is sort of ABNF style: words = word | (word space)+ word word = CHARS+ space = WS punctuation = PUNCT If I now compile the grammar for words into a (token) regex I would like to have naturally all subgroup matches of each word. words = (CHARS+) | ( (CHARS+) WS )+ (CHARS+) # words resolved to tokens words = (c+)|((c+)s)+c+ # words resolved to regex I could code until this point. Then I ran into the problem that the sub-group matches did only contain their last match. So I have the option to either create an automata for the grammar on my own (which I would like to prevent to keep the grammar expressions generic) or to somewhat make preg_match working for me somehow so I can spare that. That's basically all. Probably now it's understandable why I simplified the question. Related: pcrepattern man page Get repeated matches with preg_match_all()
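
    A minimal script reproducing the limitation described above:

        $subject = 'AA BB DD ';
        $pattern = '/^(([a-z]+) )+$/i';

        preg_match_all($pattern, $subject, $matches);
        var_dump($matches[2]);    // array(1) { [0] => "DD" } - only the last repetition of group 2 survives

        preg_match_all('/([a-z]+) /i', $subject, $per);
        var_dump($per[1]);        // ["AA", "BB", "DD"] - works, but this is exactly the pattern variation the question wants to avoid

    As far as I can tell this is a PCRE-level limitation rather than a preg_match_all one: for a quantified group, only the text of its final iteration is reported back to the caller.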


  • Can't get MySQL to install

    - by James Marthenal
    I'd like to think I know what I'm doing in a Unix shell but maybe not. I made a mistake in a configuration file for MySQL, so I decided to just uninstall it and then reinstall it, so I did: sudo apt-get --purge remove mysql-server mysql-server-5.0 mysql-client The files were deleted, so I then tried to install it, but it didn't ask me for a root password or anything else, so I uninstalled it using the above command again and then did sudo rm -rf /etc/mysql sudo rm /etc/init.d/mysql sudo rm -rf /var/lib/mysql* I then restarted the computer then installed it again: sudo apt-get install mysql-server mysql-client It asked for a root password, and everything looked like it would work, until I saw this: $ sudo apt-get install mysql-server mysql-client Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: mysql-server-5.0 Suggested packages: tinyca The following NEW packages will be installed: mysql-client mysql-server mysql-server-5.0 0 upgraded, 3 newly installed, 0 to remove and 1 not upgraded. Need to get 0B/27.4MB of archives. After this operation, 86.7MB of additional disk space will be used. Do you want to continue [Y/n]? y WARNING: The following packages cannot be authenticated! mysql-server-5.0 mysql-client mysql-server Authentication warning overridden. Preconfiguring packages ... Can't exec "/tmp/mysql-server-5.0.config.28101": Permission denied at /usr/share/perl/5.10/IPC/Open3.pm line 168. open2: exec of /tmp/mysql-server-5.0.config.28101 configure failed at /usr/share/perl5/Debconf/ConfModule.pm line 59 mysql-server-5.0 failed to preconfigure, with exit status 255 Selecting previously deselected package mysql-server-5.0. (Reading database ... 160284 files and directories currently installed.) Unpacking mysql-server-5.0 (from .../mysql-server-5.0_5.0.51a-24+lenny5_amd64.deb) ... Selecting previously deselected package mysql-client. Unpacking mysql-client (from .../mysql-client_5.0.51a-24+lenny5_all.deb) ... Selecting previously deselected package mysql-server. Unpacking mysql-server (from .../mysql-server_5.0.51a-24+lenny5_all.deb) ... Processing triggers for man-db ... Setting up mysql-server-5.0 (5.0.51a-24+lenny5) ... Stopping MySQL database server: mysqld. /var/lib/dpkg/info/mysql-server-5.0.postinst: line 144: /etc/mysql/conf.d/old_passwords.cnf: No such file or directory dpkg: error processing mysql-server-5.0 (--configure): subprocess post-installation script returned error exit status 1 Setting up mysql-client (5.0.51a-24+lenny5) ... dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.0; however: Package mysql-server-5.0 is not configured yet. dpkg: error processing mysql-server (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: mysql-server-5.0 mysql-server E: Sub-process /usr/bin/dpkg returned an error code (1) Now I can't seem to figure out what to do. I just want to get a clean MySQL installation at this point. I'm running the latest stable release of Debian. All help is appreciated—thanks! 
Edit: I looked at this similar question, which suggests that I uninstall mysql-common, but when I try to do so I see: The following packages will be REMOVED: apache2 apache2-mpm-prefork apache2-utils apache2.2-common git-svn libapache2-mod-php5 libapache2-mod-python libapache2-svn libaprutil1 libdbd-mysql-perl libdbd-mysql-rubygem libmysql-ruby libmysql-ruby1.8 libmysql-rubygem libmysqlclient15-dev libmysqlclient15off librdf-perl librdf0 libserf-0-0 libsvn-perl libsvn1 mysql-client-5.0 mysql-common mytop ndn-apache22-php5 ndn-apache22-svn ndn-interpreters ndn-lighttpd ndn-netsaint-plugins ndn-perl-modules ndn-php5-cgi ndn-php5-xcache ndn-php53 ndn-php53-suhosin ndn-rubygems php5 php5-mcrypt php5-mysql proftpd proftpd-mod-mysql python-django python-mysqldb python-subversion python-svn subversion subversion-tools trac zendoptimizer 0 upgraded, 0 newly installed, 48 to remove and 1 not upgraded. Eeek! Any suggestions?
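
For what it's worth, the step that actually fails is the mysql-server-5.0 postinst trying to write /etc/mysql/conf.d/old_passwords.cnf into a directory that no longer exists after the rm -rf. A hedged sketch of putting the directory back and letting dpkg finish (plus forcing mysql-common to restore its config files if /etc/mysql/my.cnf is also gone):

    sudo mkdir -p /etc/mysql/conf.d
    sudo dpkg --configure -a        # let the half-configured mysql packages finish their postinst
    # if /etc/mysql/my.cnf is still missing, ask dpkg to restore deleted conffiles:
    sudo apt-get -o Dpkg::Options::="--force-confmiss" install --reinstall mysql-common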


  • OpenSSL: certificate signature failure error

    - by e-t172
    I'm trying to wget La Banque Postale's website. $ wget https://www.labanquepostale.fr/ --2009-10-08 17:25:03-- https://www.labanquepostale.fr/ Resolving www.labanquepostale.fr... 81.252.54.6 Connecting to www.labanquepostale.fr|81.252.54.6|:443... connected. ERROR: cannot verify www.labanquepostale.fr's certificate, issued by `/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)06/CN=VeriSign Class 3 Extended Validation SSL SGC CA': certificate signature failure To connect to www.labanquepostale.fr insecurely, use `--no-check-certificate'. Unable to establish SSL connection. I'm using Debian Sid. On another machine which is running Debian Sid with same software versions the command works perfectly. ca-certificates is installed on both machines (I tried removing it and reinstalling it in case a certificate got corrupted somehow, no luck). Opening https://www.labanquepostale.fr/ in Iceweasel on the same machine works perfectly. Additional information: $ openssl s_client -CApath /etc/ssl/certs -connect www.labanquepostale.fr:443 CONNECTED(00000003) depth=3 /C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority verify error:num=7:certificate signature failure verify return:0 --- Certificate chain 0 s:/1.3.6.1.4.1.311.60.2.1.3=FR/2.5.4.15=V1.0, Clause 5.(b)/serialNumber=421100645/C=FR/postalCode=75006/ST=PARIS/L=PARIS/streetAddress=115 RUE DE SEVRES/O=LA BANQUE POSTALE/OU=DISF2/CN=www.labanquepostale.fr i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)06/CN=VeriSign Class 3 Extended Validation SSL SGC CA 1 s:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)06/CN=VeriSign Class 3 Extended Validation SSL SGC CA i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5 2 s:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5 i:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority 3 s:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority i:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority --- Server certificate -----BEGIN CERTIFICATE----- <base64-encoded certificate removed for lisibility> -----END CERTIFICATE----- subject=/1.3.6.1.4.1.311.60.2.1.3=FR/2.5.4.15=V1.0, Clause 5.(b)/serialNumber=421100645 /C=FR/postalCode=75006/ST=PARIS/L=PARIS/streetAddress=115 RUE DE SEVRES/O=LA BANQUE POSTALE/OU=DISF2/CN=www.labanquepostale.fr issuer=/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)06/CN=VeriSign Class 3 Extended Validation SSL SGC CA --- No client certificate CA names sent --- SSL handshake has read 5101 bytes and written 300 bytes --- New, TLSv1/SSLv3, Cipher is RC4-MD5 Server public key is 1024 bit Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 Cipher : RC4-MD5 Session-ID: 0009008CB3ADA9A37CE45B464E989C82AD0793D7585858584ACE056700035363 Session-ID-ctx: Master-Key: 1FB7DAD98B6738BEA7A3B8791B9645334F9C760837D95E3403C108058A3A477683AE74D603152F6E4BFEB6ACA48BC2C3 Key-Arg : None Start Time: 1255015783 Timeout : 300 (sec) Verify return code: 7 (certificate signature failure) --- Any idea why I get certificate signature failure? 
As if this wasn't strange enough, copy-pasting the "server certificate" mentioned in the output and running openssl verify on it returns OK...
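
That would be consistent with only the depth=3 root being the problem: the old "Class 3 Public Primary Certification Authority" root is self-signed with MD2, and an OpenSSL/ca-certificates combination that no longer accepts MD2 signatures reports exactly this kind of "certificate signature failure" at the root while the leaf on its own still verifies. A hedged way to confirm which element of the chain fails (file names are made up):

    openssl s_client -showcerts -connect www.labanquepostale.fr:443 < /dev/null > chain.txt
    # copy each BEGIN/END CERTIFICATE block from chain.txt into cert0.pem ... cert3.pem, then:
    openssl verify -CApath /etc/ssl/certs cert0.pem cert1.pem cert2.pem cert3.pem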



  • Uwsgi starts from root but not as a service

    - by vittore
    I have nginx + uwsgi setup for flask website. thats my nginx server { listen 80; server_name _; location /static/ { alias /var/www/site/app/static/; } location / { uwsgi_pass 127.0.0.1:5080; include uwsgi_params; } } And here is my uwsgi config.xml <uwsgi> <socket>127.0.0.1:5080</socket> <autoload/> <daemonize>/var/log/uwsgi_webapp.log</daemonize> <pythonpath>/var/www/site/</pythonpath> <module>run:app</module> <plugins>python27</plugins> <virtualenv>/var/www/venv/</virtualenv> <processes>1</processes> <enable-threads/> <master /> <harakiri>60</harakiri> <max-requests>2000</max-requests> <limit-as>512</limit-as> <reload-on-as>256</reload-on-as> <reload-on-rss>192</reload-on-rss> <no-orphans/> <vacuum/> </uwsgi> When I trying to start uwsgi service (service uwsgi start) it says ok but there is no uwsgi process and I see the following in the log: *** Starting uWSGI 1.0.3-debian (64bit) on [Fri Oct 25 00:43:13 2013] *** compiled with version: 4.6.3 on 17 July 2012 02:26:54 current working directory: / writing pidfile to /run/uwsgi/app/gsk/pid detected binary path: /usr/bin/uwsgi-core setgid() to 33 setuid() to 33 limiting address space of processes... your process address space limit is 536870912 bytes (512 MB) your memory page size is 4096 bytes *** WARNING: you have enabled harakiri without post buffering. Slow upload could be rejected on post-unbuffered webservers *** uwsgi socket 0 bound to TCP address 127.0.0.1:5080 fd 6 bind(): Permission denied [socket.c line 107] However when I start uwsgi as a root uwsgi --socket 127.0.0.1:5080 --module run --callab app --harakiri 15 --harakiri-verbose --logto2 tmp/uwsgi.log It starts just fine and after restarting nginx I can access website. What can be an issue ?


  • Globe SSL with NGINX SSL certificate problem, please help

    - by PartySoft
    I have a big problem with installing a certificat for nginx (same happends with apache though) I have 3 files __domain_com.crt __domain_com.ca-bundle and ssl.key. I tried to append cat __domain_com.crt __leechpack_com.ca-bundle bundle.crt but if I do it like this i get an error: [emerg]: SSL_CTX_use_certificate_chain_file("/etc/nginx/__leechpack_com.crt") failed (SSL: error:0906D066:PEM routines:PEM_read_bio:bad end line error:140DC009:SSL routines:SSL_CTX_use_certificate_chain_file:PEM lib) And that's because the delimiters of the certificates arren't separated. ZqTjb+WBJQ== -----END CERTIFICATE----------BEGIN CERTIFICATE----- MIIE6DCCA9CgAwIBAgIQdIYhlpUQySkmKUvMi/gpLDANBgkqhkiG9w0BAQUFADBv If i separate them with an enter between certificated it will at least start but i will get the same warning from Firefox: This Connection is Untrusted You have asked Firefox to connect securely to domain.com, but we can't confirm that your connection is secure. The concatenate solution it is given by Globe SSL and the NGINX site but it doesn't work. I think the bundle is ignored though. http://customer.globessl.com/knowledgebase/55/Certificate-Installation--Nginx.html http://nginx.org/en/docs/http/configuring_https_servers.html#chains%20http://wiki.nginx.org/NginxHttpSslModule if i do openssl s_client -connect down.leechpack.com:443 CONNECTED(00000003) depth=0 /OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com verify error:num=20:unable to get local issuer certificate verify return:1 depth=0 /OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com verify error:num=27:certificate not trusted verify return:1 depth=0 /OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com verify error:num=21:unable to verify the first certificate verify return:1 --- Certificate chain 0 s:/OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com i:/C=RO/O=GLOBE HOSTING CERTIFICATION AUTHORITY/CN=GLOBE SSL Domain Validated CA 1 s:/C=US/O=Globe Hosting, Inc./OU=GlobeSSL DV Certification Authority/CN=GlobeSSL CA i:/C=SE/O=AddTrust AB/OU=AddTrust External TTP Network/CN=AddTrust External CA Root --- Server certificate -----BEGIN CERTIFICATE----- MIIFQzCCBCugAwIBAgIQRnpCmtwX7z7GTla0QktE6DANBgkqhkiG9w0BAQUFADBl MQswCQYDVQQGEwJSTzEuMCwGA1UEChMlR0xPQkUgSE9TVElORyBDRVJUSUZJQ0FU SU9OIEFVVEhPUklUWTEmMCQGA1UEAxMdR0xPQkUgU1NMIERvbWFpbiBWYWxpZGF0 ZWQgQ0EwHhcNMTAwMjExMDAwMDAwWhcNMTEwMjExMjM1OTU5WjCBjTEhMB8GA1UE CxMYRG9tYWluIENvbnRyb2wgVmFsaWRhdGVkMSgwJgYDVQQLEx9Qcm92aWRlZCBi eSBHbG9iZSBIb3N0aW5nLCBJbmMuMSQwIgYDVQQLExtHbG9iZSBTdGFuZGFyZCBX aWxkY2FyZCBTU0wxGDAWBgNVBAMUDyoubGVlY2hwYWNrLmNvbTCCASIwDQYJKoZI hvcNAQEBBQADggEPADCCAQoCggEBAKX7jECMlYEtcvqVWQVUpXNxO/VaHELghqy/ Ml8dOfOXG29ZMZsKUMqS0jXEwd+Bdpm31lBxOALkj8o79hX0tspLMjgtCnreaker 49y62BcjfguXRFAaiseXTNbMer5lDWiHlf1E7uCoTTiczGqBNfl6qSJlpe4rYBtq XxBAiygaNba6Owghuh19+Uj8EICb2pxbJNFfNzU1D9InFdZSVqKHYBem4Cdrtxua W4+YONsfLnnfkRQ6LOLeYExHziTQhSavSv9XaCl9Zqzm5/eWbQqLGRpSJoEPY/0T GqnmeMIq5M35SWZgOVV10j3pOCS8o0zpp7hMJd2R/HwVaPCLjukCAwEAAaOCAcQw ggHAMB8GA1UdIwQYMBaAFB9UlnKtPUDnlln3STFTCWb5DWtyMB0GA1UdDgQWBBT0 8rPIMr7JDa2Xs5he5VXAvMWArjAOBgNVHQ8BAf8EBAMCBaAwDAYDVR0TAQH/BAIw ADAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwVQYDVR0gBE4wTDBKBgsr BgEEAbIxAQICGzA7MDkGCCsGAQUFBwIBFi1odHRwOi8vd3d3Lmdsb2Jlc3NsLmNv bS9kb2NzL0dsb2JlU1NMX0NQUy5wZGYwRgYDVR0fBD8wPTA7oDmgN4Y1aHR0cDov 
L2NybC5nbG9iZXNzbC5jb20vR0xPQkVTU0xEb21haW5WYWxpZGF0ZWRDQS5jcmww dwYIKwYBBQUHAQEEazBpMEEGCCsGAQUFBzAChjVodHRwOi8vY3J0Lmdsb2Jlc3Ns LmNvbS9HTE9CRVNTTERvbWFpblZhbGlkYXRlZENBLmNydDAkBggrBgEFBQcwAYYY aHR0cDovL29jc3AuZ2xvYmVzc2wuY29tMCkGA1UdEQQiMCCCDyoubGVlY2hwYWNr LmNvbYINbGVlY2hwYWNrLmNvbTANBgkqhkiG9w0BAQUFAAOCAQEAB2Y7vQsq065K s+/n6nJ8ZjOKbRSPEiSuFO+P7ovlfq9OLaWRHUtJX0sLntnWY1T9hVPvS5xz/Ffl w9B8g/EVvvfMyOw/5vIyvHq722fAAC1lWU1rV3ww0ng5bgvD20AgOlIaYBvRq8EI 5Dxo2og2T1UjDN44GOSWsw5jetvVQ+SPeNPQLWZJS9pNCzFQ/3QDWNPOvHqEeRcz WkOTCqbOSZYvoSPvZ3APh+1W6nqiyoku/FCv9otSCtXPKtyVa23hBQ+iuxqIM4/R gncnUKASi6KQrWMQiAI5UDCtq1c09uzjw+JaEzAznxEgqftTOmXAJSQGqZGd6HpD ZqTjb+WBJQ== -----END CERTIFICATE----- subject=/OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com issuer=/C=RO/O=GLOBE HOSTING CERTIFICATION AUTHORITY/CN=GLOBE SSL Domain Validated CA --- No client certificate CA names sent --- SSL handshake has read 3313 bytes and written 343 bytes --- New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA Server public key is 2048 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 Cipher : DHE-RSA-AES256-SHA Session-ID: 5F9C8DC277A372E28A4684BAE5B311533AD30E251369D144A13DECA3078E067F Session-ID-ctx: Master-Key: 9B531A75347E6E7D19D95365C1208F2ED37E4004AA8F71FC614A18937BEE2ED9F82D58925E0B3931492AD3D2AA6EFD3B Key-Arg : None Start Time: 1288618211 Timeout : 300 (sec) Verify return code: 21 (unable to verify the first certificate) ---
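
In other words the two files only need a newline between the END and BEGIN markers when they are joined. A sketch of doing that explicitly, using the file names from above:

    (cat __domain_com.crt; echo; cat __domain_com.ca-bundle) > bundle.crt
    grep -c 'BEGIN CERTIFICATE' bundle.crt   # should equal the number of certificates, each starting on its own line

Then point ssl_certificate in the nginx server block at bundle.crt. If the browser warning persists after that, the order inside the bundle (server certificate first, intermediates after) is the usual thing to re-check.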


  • Can't install MySQL

    - by James Marthenal
    I have a Debian machine that I have previously installed MySQL on. In an attempt to delete it, I stupidly deleted the directories/files /etc/mysql/, /etc/init.d/mysql, /usr/lib/mysql/, /var/lib/mysql/. I then later did sudo apt-get purge mysql-server mysql-server-5.0. Now, when I try to install mysql-server, I get: $ sudo apt-get install mysql-server Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: mysql-server-5.0 The following NEW packages will be installed: mysql-server mysql-server-5.0 0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded. Need to get 0B/27.4MB of archives. After this operation, 86.6MB of additional disk space will be used. Do you want to continue [Y/n]? y WARNING: The following packages cannot be authenticated! mysql-server-5.0 mysql-server Authentication warning overridden. Preconfiguring packages ... Can't exec "/tmp/mysql-server-5.0.config.122781": Permission denied at /usr/share/perl/5.10/IPC/Open3.pm line 168. open2: exec of /tmp/mysql-server-5.0.config.122781 configure failed at /usr/share/perl5/Debconf/ConfModule.pm line 59 mysql-server-5.0 failed to preconfigure, with exit status 255 Selecting previously deselected package mysql-server-5.0. (Reading database ... 158138 files and directories currently installed.) Unpacking mysql-server-5.0 (from .../mysql-server-5.0_5.0.51a-24+lenny5_amd64.deb) ... Selecting previously deselected package mysql-server. Unpacking mysql-server (from .../mysql-server_5.0.51a-24+lenny5_all.deb) ... Processing triggers for man-db ... Setting up mysql-server-5.0 (5.0.51a-24+lenny5) ... Stopping MySQL database server: mysqld. 110206 19:31:13 [ERROR] /usr/sbin/mysqld: Can't find file: './mysql/user.frm' (errno: 13) 110206 19:31:13 [ERROR] /usr/sbin/mysqld: Can't find file: './mysql/user.frm' (errno: 13) ERROR: 1017 Can't find file: './mysql/user.frm' (errno: 13) 110206 19:31:13 [ERROR] Aborting 110206 19:31:13 [Note] /usr/sbin/mysqld: Shutdown complete /etc/init.d/mysql: WARNING: /etc/mysql/my.cnf cannot be read. See README.Debian.gz (warning). Starting MySQL database server: mysqld . . . . . . . . . . . . . . failed! invoke-rc.d: initscript mysql, action "start" failed. dpkg: error processing mysql-server-5.0 (--configure): subprocess post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.0; however: Package mysql-server-5.0 is not configured yet. dpkg: error processing mysql-server (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: mysql-server-5.0 mysql-server E: Sub-process /usr/bin/dpkg returned an error code (1) I have tried to search for a solution via Google and have found lots of suggestions for this problem, but ultimately it seems like the problem is that by deleting the files manually, I messed up the mysql-common package. I have tried to do sudo apt-get install --reinstall mysql-common followed by installing mysql-server, but it does the exact same thing. I previously had MySQL working great, I just want to get it back to that state. Thanks so much for your help.
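
    For reference, this run gets past preconfiguration but mysqld aborts because the mysql.* system tables (mysql/user.frm) were wiped along with /var/lib/mysql, and errno 13 is a permission error on whatever was recreated in their place. A hedged sketch of rebuilding them before letting dpkg retry, assuming the standard Debian paths:

        sudo mkdir -p /var/lib/mysql /etc/mysql/conf.d
        sudo mysql_install_db --user=mysql        # recreate the mysql system database (user.frm etc.)
        sudo chown -R mysql:mysql /var/lib/mysql
        sudo dpkg --configure -a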


  • sudo in Debian squeeze inside linux-vserver always wants password

    - by mark
    Every since I upgraded all my linux-vserver Debian guests from Lenny to Squeeze I've the apparent problem that whenever I want to use sudo it asks me for my password. Every time. I've configured sudo to have a timeout of 30 minutes: Defaults timestamp_timeout=30 . This has been configured when it was still Lenny (note: as suggested by EightBitTony I've also tried without this setting - no change). I've a hard time figuring out what the problem here is, since I think my configuration is right. I thought about it being a problem with the file used to record the timestamp, maybe a permission issue, but was unlucky to find any hard evidence. I've compared the contents of /var/lib/sudo/ between a working and a non-working system but couldn't spot any difference. The version of sudo used in both environments is 1.7.4p4-2.squeeze.3. My non-working system(s): find /var/lib/sudo/ -ls 17319289 4 drwx------ 4 root root 4096 Jan 1 1985 /var/lib/sudo/ 17319286 4 drwx------ 2 root mark 4096 Jan 1 1985 /var/lib/sudo/mark 17319312 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/6 17319361 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/9 17319490 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/10 17319326 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/4 17319491 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/2 A working system: find /var/lib/sudo -ls 2598921 4 drwx------ 5 root root 4096 Jan 1 1985 /var/lib/sudo 1999522 4 drwx------ 2 root mark 4096 Jan 1 1985 /var/lib/sudo/mark 2000781 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/8 1998998 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/17 1999459 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/26 1998930 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/24 2000771 4 -rw------- 1 root mark 40 Jun 25 11:39 /var/lib/sudo/mark/4 2000773 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/5 1999223 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/0 1998908 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/14 2000769 4 -rw------- 1 root mark 40 Jul 9 13:30 /var/lib/sudo/mark/2 2000770 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/3 2000782 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/9 2000778 4 -rw------- 1 root mark 40 Jul 8 00:11 /var/lib/sudo/mark/7 1998892 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/19 1999264 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/23 2000789 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/12 1999093 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/25 1998880 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/18 1998853 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/20 2000790 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/15 1998878 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/16 1998874 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/13 2000774 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/6 2000786 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/11 1998893 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/22 2000783 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/10 1998949 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/1 Despite the obvious (some up2date timestamps on the working system) I don't see anything wrong here, so it could be as well be a wrong track. 
Here's my current /etc/sudoers: # /etc/sudoers # # This file MUST be edited with the 'visudo' command as root. # # See the man page for details on how to write a sudoers file. # Defaults env_reset # Host alias specification # User alias specification User_Alias FULLADMIN = user1, user2, user3 # Cmnd alias specification # User privilege specification root ALL=(ALL) ALL FULLADMIN ALL = (ALL) ALL # Allow members of group sudo to execute any command # (Note that later entries override this, so you might need to move # it further down) %sudo ALL=(ALL) ALL # #includedir /etc/sudoers.d #Defaults always_set_home,timestamp_timeout=30


  • dovecot imap ssl certificate issues

    - by mulllhausen
    i have been trying to configure my dovecot imap server (version 1.0.10 - upgrading is not an option at this stage) with a new ssl certificate on ubuntu like so: $ grep ^ssl /etc/dovecot/dovecot.conf ssl_disable = no ssl_cert_file = /etc/ssl/certs/mydomain.com.crt.20120904 ssl_key_file = /etc/ssl/private/mydomain.com.key.20120904 $ /etc/init.t/dovecot stop $ sudo dovecot -p $ [i enter the ssl password here] it doesn't show any errors and when i run ps aux | grep dovecot i get root 21368 0.0 0.0 12452 688 ? Ss 15:19 0:00 dovecot -p root 21369 0.0 0.0 71772 2940 ? S 15:19 0:00 dovecot-auth dovecot 21370 0.0 0.0 14140 1904 ? S 15:19 0:00 pop3-login dovecot 21371 0.0 0.0 14140 1900 ? S 15:19 0:00 pop3-login dovecot 21372 0.0 0.0 14140 1904 ? S 15:19 0:00 pop3-login dovecot 21381 0.0 0.0 14280 2140 ? S 15:19 0:00 imap-login dovecot 21497 0.0 0.0 14280 2116 ? S 15:29 0:00 imap-login dovecot 21791 0.0 0.0 14148 1908 ? S 15:48 0:00 imap-login dovecot 21835 0.0 0.0 14148 1908 ? S 15:53 0:00 imap-login dovecot 21931 0.0 0.0 14148 1904 ? S 16:00 0:00 imap-login me 21953 0.0 0.0 5168 944 pts/0 S+ 16:02 0:00 grep --color=auto dovecot which looks like it is all running fine. so then i test to see if i can telnet to the dovecot server, and this works fine: $ telnet localhost 143 Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. * OK Dovecot ready. but when i test whether dovecot has configured the ssl certificates properly, it appears to fail: $ sudo openssl s_client -connect localhost:143 -starttls imap CONNECTED(00000003) depth=0 /description=xxxxxxxxxxxxxxxxx/C=AU/ST=xxxxxxxx/L=xxxx/O=xxxxxx/CN=*.mydomain.com/[email protected] verify error:num=20:unable to get local issuer certificate verify return:1 depth=0 /description=xxxxxxxxxxx/C=AU/ST=xxxxxx/L=xxxx/O=xxxx/CN=*.mydomain.com/[email protected] verify error:num=27:certificate not trusted verify return:1 depth=0 /description=xxxxxxxx/C=AU/ST=xxxxxxxxxx/L=xxxx/O=xxxxx/CN=*.mydomain.com/[email protected] verify error:num=21:unable to verify the first certificate verify return:1 --- Certificate chain 0 s:/description=xxxxxxxxxxxx/C=AU/ST=xxxxxxxxxx/L=xxxxxxxx/O=xxxxxxx/CN=*.mydomain.com/[email protected] i:/C=IL/O=StartCom Ltd./OU=Secure Digital Certificate Signing/CN=StartCom Class 2 Primary Intermediate Server CA --- Server certificate -----BEGIN CERTIFICATE----- xxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxx . . . xxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxx== -----END CERTIFICATE----- subject=/description=xxxxxxxxxx/C=AU/ST=xxxxxxxxx/L=xxxxxxx/O=xxxxxx/CN=*.mydomain.com/[email protected] issuer=/C=IL/O=StartCom Ltd./OU=Secure Digital Certificate Signing/CN=StartCom Class 2 Primary Intermediate Server CA --- No client certificate CA names sent --- SSL handshake has read 2831 bytes and written 342 bytes --- New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA Server public key is 2048 bit Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 Cipher : DHE-RSA-AES256-SHA Session-ID: xxxxxxxxxxxxxxxxxxxx Session-ID-ctx: Master-Key: xxxxxxxxxxxxxxxxxx Key-Arg : None Start Time: 1351661960 Timeout : 300 (sec) Verify return code: 21 (unable to verify the first certificate) --- . OK Capability completed. at least, i'm assuming this is a failure???
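
    For reference, "Verify return code: 21 (unable to verify the first certificate)" in that s_client output usually just means the intermediate (the StartCom Class 2 intermediate shown as the issuer above) is not being served alongside the certificate. With Dovecot the chain normally goes into the same file as the server certificate, so a hedged sketch (the intermediate's file name is an assumption):

        cat /etc/ssl/certs/mydomain.com.crt.20120904 /etc/ssl/certs/startcom-class2-intermediate.pem \
            > /etc/ssl/certs/mydomain.com.chained.crt
        # point ssl_cert_file at the chained file, restart dovecot, then re-test:
        openssl s_client -connect localhost:143 -starttls imap -CApath /etc/ssl/certs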


  • Rsync: how to mount truecrypt on-the-fly on the receiving side?

    - by deepc
    The short version: how can I keep an rsync backup on a truecrypt volume? The hard part is to mount/unmount this volume on the fly when it is needed for rsync. Details This is my current backup configuration (which works fairly well for the most part): backup source is on Win7 64 bit, destination is a remote Linux box (Debian) actual data transfer is done by rsync via ssh (cwRsync with cygwin) rsync daemon is started on demand via ssh On the Linux box the backup is protected by file permissions only. I want to increase security here and put the backup into a truecrypt volume. I can fuse-mount that volume manually in the shell. The question is now how can I make rsync not only open an ssh connection and starting the rsync daemon, but also to mount the truecrypt volume before (and unmount it after)? My money is on option --rsync-path which can be used to pass a command line to ssh - provided that stdin and stdout still work the same. I guess that command would have to be a shell script. Is this possible, and what would the script look like? For reference, here's a quote of that option: --rsync-path=PROGRAM Use this to specify what program is to be run on the remote machine to start-up rsync. Often used when rsync is not in the default remote-shell's path (e.g. --rsync-path=/usr/local/bin/rsync). Note that PROGRAM is run with the help of a shell, so it can be any program, script, or command sequence you'd care to run, so long as it does not corrupt the standard-in & standard-out that rsync is using to communicate. One tricky example is to set a different default directory on the remote machine for use with the --relative option. For instance: rsync -avR --rsync-path="cd /a/b && rsync" host:c/d /e/ This is the full rsync man page. Truecrypt volume auto-mount Solved! Turns out this option is actually key to auto-mounting the truecrypt volume on the remote side. The following command line does the trick (one line!): rsync $options -e "ssh -p $port -i ../.ssh/id_dsa" --rsync-path="/usr/local/bin/truecrypt -d && /usr/local/bin/truecrypt --fs-options=rw,sync,utf8,uid=$UID,umask=0007 --non-interactive -p $password $pathToVolume $remoteMountDir && rsync" $localSourceDir $user:$remoteMountMountDir Truecrypt volume auto-dismount Still open: how can I unmount the volume when rsync is done? Not sure if the following makes sense to anyone but I give it a try... Right now I am unmounting (truecrypt -d), then mounting again, then continuing with rsync. At this time rsync needs to do its thing but I dont know when its done. Adding ... rsync && truecrypt -d to the command line does not work because then the rsync daemon does not start. This is because rsync starts the daemon with parameter --server on the remote side and that parameter would go to the final truecrypt -d.
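
    One hedged idea for the dismount half (untested): since anything appended after rsync inside --rsync-path swallows the --server arguments, the dismount can instead be a second, separate ssh command that runs once rsync has exited, e.g. (variable names as above):

        rsync $options -e "ssh -p $port -i ../.ssh/id_dsa" \
            --rsync-path="/usr/local/bin/truecrypt -d && /usr/local/bin/truecrypt --fs-options=rw,sync,utf8,uid=$UID,umask=0007 --non-interactive -p $password $pathToVolume $remoteMountDir && rsync" \
            $localSourceDir $user:$remoteMountDir \
        && ssh -p $port -i ../.ssh/id_dsa $user "/usr/local/bin/truecrypt -d"

    Using ';' instead of the final '&&' would dismount even when rsync exits with an error, which is probably what you want for an unattended backup.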


  • Disk is spinning down each minute, unable to disable it

    - by lzap
    I played with spindown and APM settings of my Samsung discs and now they spin down every minute. I want to disable it, but it seems it does not accept any of the spindown time or APM values. Nothing works, it's all the same. Please help what values should be proper for it. I do not want it to spin down at all. /dev/sda: ATA device, with non-removable media Model Number: SAMSUNG HD154UI Serial Number: S1Y6J1KZ206527 Firmware Revision: 1AG01118 Standards: Used: ATA-8-ACS revision 3b Supported: 7 6 5 4 Configuration: Logical max current cylinders 16383 16383 heads 16 16 sectors/track 63 63 -- CHS current addressable sectors: 16514064 LBA user addressable sectors: 268435455 LBA48 user addressable sectors: 2930277168 Logical/Physical Sector size: 512 bytes device size with M = 1024*1024: 1430799 MBytes device size with M = 1000*1000: 1500301 MBytes (1500 GB) cache/buffer size = unknown Capabilities: LBA, IORDY(can be disabled) Queue depth: 32 Standby timer values: spec'd by Standard, no device specific minimum R/W multiple sector transfer: Max = 16 Current = 16 Advanced power management level: 60 Recommended acoustic management value: 254, current value: 0 DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6 udma7 Cycle time: min=120ns recommended=120ns PIO: pio0 pio1 pio2 pio3 pio4 Cycle time: no flow control=120ns IORDY flow control=120ns Commands/features: Enabled Supported: * SMART feature set Security Mode feature set * Power Management feature set * Write cache * Look-ahead * Host Protected Area feature set * WRITE_BUFFER command * READ_BUFFER command * NOP cmd * DOWNLOAD_MICROCODE * Advanced Power Management feature set Power-Up In Standby feature set * SET_FEATURES required to spinup after power up SET_MAX security extension Automatic Acoustic Management feature set * 48-bit Address feature set * Device Configuration Overlay feature set * Mandatory FLUSH_CACHE * FLUSH_CACHE_EXT * SMART error logging * SMART self-test Media Card Pass-Through * General Purpose Logging feature set * 64-bit World wide name * WRITE_UNCORRECTABLE_EXT command * {READ,WRITE}_DMA_EXT_GPL commands * Segmented DOWNLOAD_MICROCODE * Gen1 signaling speed (1.5Gb/s) * Gen2 signaling speed (3.0Gb/s) * Native Command Queueing (NCQ) * Host-initiated interface power management * Phy event counters * NCQ priority information DMA Setup Auto-Activate optimization Device-initiated interface power management * Software settings preservation * SMART Command Transport (SCT) feature set * SCT Long Sector Access (AC1) * SCT LBA Segment Access (AC2) * SCT Error Recovery Control (AC3) * SCT Features Control (AC4) * SCT Data Tables (AC5) Security: Master password revision code = 65534 supported not enabled not locked frozen not expired: security count supported: enhanced erase 326min for SECURITY ERASE UNIT. 326min for ENHANCED SECURITY ERASE UNIT. Logical Unit WWN Device Identifier: 50024e900300cca3 NAA : 5 IEEE OUI : 0024e9 Unique ID : 00300cca3 Checksum: correct I have the very same disc which I did not "tuned" and it does not spin. But I do not know where to read the settings from. The hdparm only shows this: Advanced power management level: 60 Recommended acoustic management value: 254, current value: 0 Edit: It seems the issue was tuned daemon in RHEL6. It was too aggressive, I turned off disc tuning and it seems they are no longer spinning down.
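
    For reference, the two hdparm knobs involved here are -B (the APM level, shown as 60 in the output above; anything below 128 permits spin-down) and -S (the standby timer). A hedged sketch of turning spin-down off directly, assuming /dev/sda and that nothing like the tuned daemon re-applies its own values afterwards:

        sudo hdparm -B 254 /dev/sda   # APM 254 = maximum performance; 255 disables APM if the drive accepts it
        sudo hdparm -S 0 /dev/sda     # standby timeout 0 = never spin down
        sudo hdparm -C /dev/sda       # check the drive's current power state afterwards

    These settings do not survive a reboot by themselves, so they need to be reapplied from a boot script (or /etc/hdparm.conf on distributions that ship it).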


  • Postfix/SMTPD Relay Access Denied when sending outside the network

    - by David
    I asked a very similar question some 4 or 5 months ago, but haven't tracked down a suitable answer. I decided to post a new question so that I can ... a) Post updated info b) post my most current postconf -n output When a user sends mail from inside the network (via webmail) to email addresses both inside and outside the network, the email is delivered. When a user with an email account on the system sends mail from outside the network, using the server as the relay, to addresses inside the network, the email is delivered. But [sometimes] when a user connects via SMTPD to send email to an external address, a Relay Access Denied error is returned: Feb 25 19:33:49 myers postfix/smtpd[8044]: NOQUEUE: reject: RCPT from host-68-169-158-182.WISOLT2.epbfi.com[68.169.158.182]: 554 5.7.1 <host-68-169-158-182.WISOLT2.epbfi.com[68.169.158.182]>: Client host rejected: Access denied; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<my-computer-name> Feb 25 19:33:52 myers postfix/smtpd[8044]: disconnect from host-68-169-158-182.WISOLT2.epbfi.com[68.169.158.182] Sending this through Microsoft Outlook 2003 generates the above log. However, sending through my iPhone, with the exact same settings, goes through fine: Feb 25 19:37:18 myers postfix/qmgr[3619]: A2D861302C9: from=<[email protected]>, size=1382, nrcpt=1 (queue active) Feb 25 19:37:18 myers amavis[2799]: (02799-09) FWD via SMTP: <[email protected]> -> <[email protected]>,BODY=7BIT 250 2.0.0 Ok, id=02799-09, from MTA([127.0.0.1]:10025): 250 2.0.0 Ok: queued as A2D861302C9 Feb 25 19:37:18 myers amavis[2799]: (02799-09) Passed CLEAN, [68.169.158.182] [68.169.158.182] <[email protected]> -> <[email protected]>, Message-ID: <[email protected]>, mail_id: yMLvzVQJloFV, Hits: -9.607, size: 897, queued_as: A2D861302C9, 6283 ms Feb 25 19:37:18 myers postfix/lmtp[8752]: 2ED3A1302C8: to=<[email protected]>, relay=127.0.0.1[127.0.0.1]:10024, delay=6.6, delays=0.25/0.01/0.19/6.1, dsn=2.0.0, status=sent (250 2.0.0 Ok, id=02799-09, from MTA([127.0.0.1]:10025): 250 2.0.0 Ok: queued as A2D861302C9) Feb 25 19:37:18 myers postfix/qmgr[3619]: 2ED3A1302C8: removed Outgoing Settings on Outlook 2003 match the settings on my iPhone: SMTP server: mail.my-domain.com Username: My full email address Uses SSL Server Port 587 Now, here's postconf -n. I realize the "My Networks" Parameter is a bit nasty. 
I have these IP addresses in here for just this reason, as others have been complaining of this problem too: alias_database = hash:/etc/postfix/aliases alias_maps = $alias_database append_dot_mydomain = no biff = no broken_sasl_auth_clients = yes command_directory = /usr/sbin config_directory = /etc/postfix content_filter = amavisfeed:[127.0.0.1]:10024 daemon_directory = /usr/libexec/postfix debug_peer_level = 2 disable_vrfy_command = yes html_directory = no inet_interfaces = all mail_owner = postfix mail_spool_directory = /var/spool/mail mailbox_size_limit = 0 mailq_path = /usr/bin/mailq manpage_directory = /usr/share/man message_size_limit = 20480000 mydestination = $myhostname, localhost, localhost.$mydomain mydomain = my-domain.com myhostname = myers.my-domain.com mynetworks = 127.0.0.0/8, 74.125.113.27, 74.125.82.49, 74.125.79.27, 209.85.161.0/24, 209.85.214.0/24, 209.85.216.0/24, 209.85.212.0/24, 209.85.160.0/24 myorigin = $myhostname newaliases_path = /usr/bin/newaliases queue_directory = /var/spool/postfix readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES receive_override_options = no_address_mappings recipient_delimiter = + relay_domains = $mydestination sample_directory = /usr/share/doc/postfix-2.3.3/samples sendmail_path = /usr/sbin/sendmail setgid_group = postdrop smtp_bind_address = my-primary-server's IP address smtpd_banner = mail.my-domain.com smtpd_helo_required = yes smtpd_sasl_auth_enable = yes smtpd_sasl_path = private/auth smtpd_sasl_type = dovecot smtpd_tls_auth_only = yes smtpd_tls_cert_file = /etc/ssl/mailserver/postfix.pem smtpd_tls_key_file = /etc/ssl/mailserver/private/postfix.pem smtpd_tls_loglevel = 3 smtpd_tls_received_header = no smtpd_tls_session_cache_timeout = 3600s smtpd_use_tls = yes tls_random_source = dev:/dev/urandom unknown_local_recipient_reject_code = 554 virtual_alias_maps = mysql:/etc/postfix/mysql-virtual-alias-maps.cf,mysql:/etc/postfix/mysql-email2email.cf virtual_gid_maps = static:5000 virtual_mailbox_base = /var/vmail virtual_mailbox_domains = mysql:/etc/postfix/mysql-virtual-mailbox-domains.cf virtual_mailbox_maps = mysql:/etc/postfix/mysql-virtual-mailbox-maps.cf virtual_minimum_uid = 5000 virtual_transport = dovecot virtual_uid_maps = static:5000 If anyone has any ideas and can help me finally solve this issue once and for all, I'd be eternally grateful.
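
Since that postconf -n output shows no smtpd_*_restrictions at all, the rejection may well be coming from -o overrides on the submission (port 587) service in master.cf, which postconf -n does not display. For comparison, a common submission-service setup that lets SASL-authenticated clients relay looks roughly like this (a sketch of the stock Postfix example, not this server's actual file):

    submission inet n       -       n       -       -       smtpd
      -o smtpd_tls_security_level=encrypt
      -o smtpd_sasl_auth_enable=yes
      -o smtpd_client_restrictions=permit_sasl_authenticated,reject
      -o smtpd_recipient_restrictions=permit_sasl_authenticated,reject_unauth_destination

With permit_sasl_authenticated in place, the extra public IP ranges in mynetworks shouldn't be needed for roaming users at all.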


  • Reality behind wireless security - the weakness of encrypting

    - by Cawas
    I welcome better key-wording here, both on tags and title, and I'll add more links as soon as possible. For some years I'm trying to conceive a wireless environment that I'd setup anywhere and advise for everyone, including from big enterprises to small home networks of 1 machine. I've always had the feeling using any kind of the so called "wireless security" methods is actually a bad design. I'm talking mostly about encrypting and pass-phrasing (which are actually two different concepts), since I won't even considering hiding SSID and mac filtering. I understand it's a natural way of thinking. With cable networking nobody can access the network unless they have access to the physical cable, so you're "secure" in the physical way. In a way, encrypting is for wireless what walling (building walls) is for the cables. And giving pass-phrases is adding a door with a key. But the cabling without encryption is also insecure. Someone just need to plugin and get your data! And while I can see the use for encrypting data, I don't think it's a security measure in wireless networks. As I said elsewhere, I believe we should encrypt only sensitive data regardless of wires. And passwords should be added to the users, always, not to wifi. For securing files, truly, best solution is backup. Sure all that doesn't happen that often, but I won't consider the most situations where people just don't care. I think there are enough situations where people actually care on using passwords on their OS users, so let's go with that in mind. For being able to break the walls or the door someone will need proper equipment such as a hammer or a master key of some kind. Same is true for breaking the wireless walls in the analogy. But, I'd say true data security is at another place. I keep promoting the Fonera concept as an instance. It opens up a free wifi port, if you choose so, and anyone can connect to the internet through that, without having any access to your LAN. It also uses a QoS which will never let your bandwidth drop from that public usage. That's security, and it's open. And who doesn't want to be able to use internet freely anywhere you can find wifi spots? I have 3G myself, but that's beyond the point here. If I have a wifi at home I want to let people freely use it for internet as to not be an hypocrite and even guests can easily access my files, just for reading access, so I don't need to keep setting up encryption and pass-phrases that are not whole compatible. I'll probably be bashed for promoting the non-usage of WPA 2 with AES or whatever, but I wanted to know from more experienced (super) users out there: what do you think? Is there really a need for encryption to have true wireless security?


  • Can't ssh tunnel to access a remote mysql server

    - by hobbes3
    I can't seem to figure out why I can't use ssh tunnel to connect to my remote MySQL server. I do ssh tunnel with [hobbes3@hobbes3] ~ $ ssh linode -L 3307:localhost:3306 Then on another terminal, I try [hobbes3@hobbes3] ~ $ mysql -h localhost -P 3307 -u root --protocol=tcp -p Enter password: ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 2 On the server, it shows this: root@li534-120 ~ # channel 4: open failed: connect failed: Connection refused Here is my my.cnf on the server: [mysqld] # Settings user and group are ignored when systemd is used (fedora >= 15). # If you need to run mysqld under different user or group, # customize your systemd unit file for mysqld according to the # instructions in http://fedoraproject.org/wiki/Systemd user=mysql datadir=/var/lib/mysql socket=/var/lib/mysql/mysql.sock # Disabling symbolic-links is recommended to prevent assorted security risks symbolic-links=0 # Semisynchronous Replication # http://dev.mysql.com/doc/refman/5.5/en/replication-semisync.html # uncomment next line on MASTER ;plugin-load=rpl_semi_sync_master=semisync_master.so # uncomment next line on SLAVE ;plugin-load=rpl_semi_sync_slave=semisync_slave.so # Others options for Semisynchronous Replication ;rpl_semi_sync_master_enabled=1 ;rpl_semi_sync_master_timeout=10 ;rpl_semi_sync_slave_enabled=1 # http://dev.mysql.com/doc/refman/5.5/en/performance-schema.html ;performance_schema [mysqld_safe] log-error=/var/log/mysqld.log pid-file=/var/run/mysqld/mysqld.pid [mysqld] port = 3306 socket=/var/lib/mysql/mysql.sock skip-external-locking key_buffer_size = 64M max_allowed_packet = 128M sort_buffer_size = 512K net_buffer_length = 8K read_buffer_size = 256K read_rnd_buffer_size = 512K myisam_sort_buffer_size = 8M thread_cache = 8 max_connections = 25 query_cache_size = 16M table_open_cache = 1024 table_definition_cache = 1024 tmp_table_size = 32M max_heap_table_size = 32M bind-address = 0.0.0.0 Now sure if this helps but here is the MySQL user list: mysql> select * from mysql.user; +-----------+------+-------------------------------------------+-------------+-------------+-------------+-------------+-------------+-----------+-------------+---------------+--------------+-----------+------------+-----------------+------------+------------+--------------+------------+-----------------------+------------------+--------------+-----------------+------------------+------------------+----------------+---------------------+--------------------+------------------+------------+--------------+------------------------+----------+------------+-------------+--------------+---------------+-------------+-----------------+----------------------+--------+-----------------------+ | Host | User | Password | Select_priv | Insert_priv | Update_priv | Delete_priv | Create_priv | Drop_priv | Reload_priv | Shutdown_priv | Process_priv | File_priv | Grant_priv | References_priv | Index_priv | Alter_priv | Show_db_priv | Super_priv | Create_tmp_table_priv | Lock_tables_priv | Execute_priv | Repl_slave_priv | Repl_client_priv | Create_view_priv | Show_view_priv | Create_routine_priv | Alter_routine_priv | Create_user_priv | Event_priv | Trigger_priv | Create_tablespace_priv | ssl_type | ssl_cipher | x509_issuer | x509_subject | max_questions | max_updates | max_connections | max_user_connections | plugin | authentication_string | 
+-----------+------+-------------------------------------------+-------------+-------------+-------------+-------------+-------------+-----------+-------------+---------------+--------------+-----------+------------+-----------------+------------+------------+--------------+------------+-----------------------+------------------+--------------+-----------------+------------------+------------------+----------------+---------------------+--------------------+------------------+------------+--------------+------------------------+----------+------------+-------------+--------------+---------------+-------------+-----------------+----------------------+--------+-----------------------+ | localhost | root | *664328D3C5E263F4FB25185681AAE7E92B01B2B0 | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | | | | | 0 | 0 | 0 | 0 | | | | 127.0.0.1 | root | *664328D3C5E263F4FB25185681AAE7E92B01B2B0 | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | | | | | 0 | 0 | 0 | 0 | | | | ::1 | root | *664328D3C5E263F4FB25185681AAE7E92B01B2B0 | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | | | | | 0 | 0 | 0 | 0 | | | +-----------+------+-------------------------------------------+-------------+-------------+-------------+-------------+-------------+-----------+-------------+---------------+--------------+-----------+------------+-----------------+------------+------------+--------------+------------+-----------------------+------------------+--------------+-----------------+------------------+------------------+----------------+---------------------+--------------------+------------------+------------+--------------+------------------------+----------+------------+-------------+--------------+---------------+-------------+-----------------+----------------------+--------+-----------------------+ 3 rows in set (0.00 sec) I read about how MySQL treats localhost vs 127.0.0.1 as connecting via a socket or TCP, respectively. But I'm starting to get confused on what's really going on or if socket vs TCP is even the issue. Thanks in advance and I'm open for any tips and suggestions! Some more info: My MySQL client, running OS X 10.8.4, is mysql Ver 14.14 Distrib 5.6.10, for osx10.8 (x86_64) using EditLine wrapper My MySQL server, running on CentOS 6.4 32-bit, is mysql> SHOW VARIABLES LIKE "%version%"; +-------------------------+--------------------------------------+ | Variable_name | Value | +-------------------------+--------------------------------------+ | innodb_version | 1.1.8 | | protocol_version | 10 | | slave_type_conversions | | | version | 5.5.28 | | version_comment | MySQL Community Server (GPL) by Remi | | version_compile_machine | i686 | | version_compile_os | Linux | +-------------------------+--------------------------------------+ 7 rows in set (0.00 sec)

    Read the article

  • FFmpeg extract clip - stream frame rate differs from container frame rate (x264, aac)

    - by fideli
    Summary H.264 video seems to have a really high frame rate that requires a scaling factor to the applied to the duration of video that I'm trying to extract (900x lower). Body I'm trying to extract a clip from a movie that I have in MP4 format (created using Handbrake). After trying mencoder and VLC, I decided to give FFmpeg a shot since it was the least troublesome when it came to copying the codecs. That is, compared to mencoder and VLC, the resulting file was still playable in QuickTime (I know about Perian, etc, I'm just trying to learn how all this works). Anyway, my command was as follows: ffmpeg -ss 01:15:51 -t 00:05:59 -i outofsight.mp4 \ -acodec copy -vcodec copy clip.mp4 During the copy, The following comes up: Seems stream 0 codec frame rate differs from container frame rate: 45000.00 (45000/1) -> 25.00 (25/1) Input #0, mov,mp4,m4a,3gp,3g2,mj2, from outofsight.mp4': Duration: 01:57:42.10, start: 0.000000, bitrate: 830 kb/s Stream #0.0(und): Video: h264, yuv420p, 720x384, 25 tbr, 22500 tbn, 45k tbc Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16 Output #0, mp4, to 'out.mp4': Stream #0.0(und): Video: libx264, yuv420p, 720x384, q=2-31, 90k tbn, 22500 tbc Stream #0.1(eng): Audio: libfaac, 48000 Hz, stereo, s16 Stream mapping: Stream #0.0 -> #0.0 Stream #0.1 -> #0.1 Press [q] to stop encoding frame= 2591 fps=2349 q=-1.0 size= 8144kB time=101.60 bitrate= 656.7kbits/s … Instead of a 5:59 duration clip, I get the entire rest of the movie. So, to test this, I ran the ffmpeg command with -t 00:00:01. What I got was exactly a 15:00 minute clip. So I did some black box engineering and decided to scale my -t option by calculating what value to enter given that 1 second was interpreted as 900 s. For my desired 359 s clip, I calculated 0.399 s and so my ffmpeg command became: ffmpeg -ss 01:15.51 -t 00:00:00.399 -i outofsight.mp4 \ -acodec copy -vcodec copy clip.mp4 This works, but I have no idea why the duration is scaled by 900. Investigating further, each ffmpeg run has the line: Seems stream 0 codec frame rate differs from container frame rate: 45000.00 (45000/1) -> 25.00 (25/1) 45000/25 = 1800. Must be a relation somewhere. Somehow, the obscenely high frame rate is causing issues with the timing. How is that frame rate so high? The best part about this is that the resulting clip.mp4 has the exact same feature (due to the copied video codec), and taking further clips from this needs the same scaling for the -t duration option. Therefore, I've made it available for anyone willing to check this out. Appendix The preamble for ffmpeg on my system (built using MacPorts ffmpeg port): FFmpeg version 0.5, Copyright (c) 2000-2009 Fabrice Bellard, et al. configuration: --prefix=/opt/local --disable-vhook --enable-gpl --enable-postproc --enable-swscale --enable-avfilter --enable-avfilter-lavf --enable-libmp3lame --enable-libvorbis --enable-libtheora --enable-libdirac --enable-libschroedinger --enable-libfaac --enable-libfaad --enable-libxvid --enable-libx264 --mandir=/opt/local/share/man --enable-shared --enable-pthreads --cc=/usr/bin/gcc-4.2 --arch=x86_64 libavutil 49.15. 0 / 49.15. 0 libavcodec 52.20. 0 / 52.20. 0 libavformat 52.31. 0 / 52.31. 0 libavdevice 52. 1. 0 / 52. 1. 0 libavfilter 1. 4. 0 / 1. 4. 0 libswscale 1. 7. 1 / 1. 7. 1 libpostproc 51. 2. 0 / 51. 2. 0 built on Jan 4 2010 21:51:51, gcc: 4.2.1 (Apple Inc. 
build 5646) (dot 1) EDIT Not sure whether it was a bug or not, but it seems to be fixed now in my current version of ffmpeg, at least for this video (version 0.6.1 from MacPorts).
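    One hedged observation on the factor of 900: the stream line in the question reads "25 tbr, 22500 tbn, 45k tbc", and 22500 / 25 = 900, so the old build may have been interpreting -t in container time-base ticks rather than in seconds. That would also explain why clips cut from clip.mp4 need the same scaling, since stream copy preserves the time bases. ffprobe (if your ffmpeg build ships it) will print those values for any file:

        # show the declared tbr/tbn/tbc for each stream (ffprobe writes this to stderr)
        ffprobe outofsight.mp4 2>&1 | grep Stream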

    Read the article

  • Ubuntu Server 12.04 auto installation freezes at "Kickseed Running" if ks.cfg has %post scripts

    - by john206
    I'm trying to make a custom Ubuntu Server iso file. The kickstart file (ks.cfg) runs smoothly when there is no %post section in it, and Ubuntu installs correctly with the ks configuration. The installation finishes installing base, apt and grub, then it echoes: Kickseed Running... and freezes at 0%. I thought maybe apt-get update doesn't work in the ks file, so I tried installing other apps like apache2, but no luck. I have created a dozen iso images and installed them in VirtualBox. I have been googling for 3 days and checked the Ubuntu forums but haven't figured out the issue. I appreciate your help. This is how I made the iso image; my ks.cfg and txt.cfg files are located in the isolinux directory: root@ubuntu:/home/work mount -o loop ubuntu-12.04-amd64.iso original-iso/ rsync -a original-iso/ custom-iso/ cp ks.cfg custom-iso/isolinux/ cp txt.cfg custom-iso/isolinux/ chmod -R 777 custom-iso/ #Creating Iso image mkisofs -D -r -V “$IMAGE_NAME” -cache-inodes -J -l -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -o ~/ubuntu-12.04-alternate-custom-amd64.iso custom-iso/ ks.cfg #Generated by Kickstart Configurator #platform=AMD64 or Intel EM64T #System language lang en_US #Language modules to install langsupport en_US #System keyboard keyboard us #System mouse mouse #System timezone timezone America/Los_Angeles #Root password rootpw --iscrypted somethingsomething #Initial user user ubuntu --fullname "ubuntu" --iscrypted --password somethingsomething. #Reboot after installation reboot #Use text mode install text #Install OS instead of upgrade install #Use CDROM installation media cdrom #System bootloader configuration bootloader --location=mbr #Clear the Master Boot Record zerombr yes #Partition clearing information clearpart --all --initlabel #Disk partitioning information part /boot --size 128 --fstype=ext3 --asprimary part / --size 512 --fstype=ext3 --asprimary part swap --size 512 part /tmp --size 512 --fstype=ext3 part /var --size 512 --fstype=ext3 part /usr --size 4096 --fstype=ext3 part /home --size 2048 --fstype=ext3 #System authorization infomation auth --useshadow --enablemd5 #Network information network --bootproto=dhcp --device=eth0 #Firewall configuration firewall --disabled --http --ftp --ssh #X Window System configuration information xconfig --depth=32 --resolution=1024x768 --defaultdesktop=GNOME %post apt-get update mkdir /home/user txt.cfg default autoinstall label autoinstall menu label ^Install Custom Ubuntu Server kernel /install/vmlinuz append file=/cdrom/preseed/ubuntu-server.seed initrd=/install/initrd.gz quiet ks=cdrom:/isolinux/ks.cfg -- label install menu label ^Install Ubuntu Server kernel /install/vmlinuz append file=/cdrom/preseed/ubuntu-server.seed vga=788 initrd=/install/initrd.gz quiet -- label cloud menu label ^Multiple server install with MAAS kernel /install/vmlinuz append modules=maas-enlist-udeb vga=788 initrd=/install/initrd.gz quiet -- label check menu label ^Check disc for defects kernel /install/vmlinuz append MENU=/bin/cdrom-checker-menu vga=788 initrd=/install/initrd.gz quiet -- label memtest menu label Test ^memory kernel /install/mt86plus label hd menu label ^Boot from first hard disk localboot 0x80
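    If the freeze really is tied to %post (kickseed's %post support is much thinner than Red Hat's kickstart), one hedged alternative is to move that work into the debian-installer preseed layer. preseed/late_command and in-target are standard d-i directives, and the commands below simply mirror the %post from the question, so treat this as a sketch rather than a verified fix:

        # append a late_command to the seed file that txt.cfg already references
        echo 'd-i preseed/late_command string in-target apt-get update; in-target mkdir -p /home/user' >> custom-iso/preseed/ubuntu-server.seed
        # then rebuild the iso with the same mkisofs command as before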

    Read the article

  • Properly Configured Rsyslog on CentOS

    - by Gaia
    I'm trying to configure Rsyslog 5.8.10 on CentOS 6.4 to send Apache's error and access logs to a remote server. It's working, but I have a couple questions. A) I would like to use as few queues (and resources) as possible. I send error logs to server A, send access logs to server A, send both logs in one stream to server B. Should I specify one queue per external service (2 queues) or one queue per stream (3 queues, as I have now)? This is what I have: $ActionResumeInterval 10 $ActionQueueSize 100000 $ActionQueueDiscardMark 97500 $ActionQueueHighWaterMark 80000 $ActionQueueType LinkedList $ActionQueueFileName logglyaccessqueue $ActionQueueCheckpointInterval 100 $ActionQueueMaxDiskSpace 1g $ActionResumeRetryCount -1 $ActionQueueSaveOnShutdown on $ActionQueueTimeoutEnqueue 10 $ActionQueueDiscardSeverity 0 if $syslogtag startswith 'www-access' then @@logs-01.loggly.com:514;logglyaccess $ActionResumeInterval 10 $ActionQueueSize 100000 $ActionQueueDiscardMark 97500 $ActionQueueHighWaterMark 80000 $ActionQueueType LinkedList $ActionQueueFileName logglyerrorsqueue $ActionQueueCheckpointInterval 100 $ActionQueueMaxDiskSpace 1g $ActionResumeRetryCount -1 $ActionQueueSaveOnShutdown on $ActionQueueTimeoutEnqueue 10 $ActionQueueDiscardSeverity 0 if $syslogtag startswith 'www-errors' then @@logs-01.loggly.com:514;logglyerrors $DefaultNetstreamDriverCAFile /etc/syslog.papertrail.crt # trust these CAs $ActionSendStreamDriver gtls # use gtls netstream driver $ActionSendStreamDriverMode 1 # require TLS $ActionSendStreamDriverAuthMode x509/name # authenticate by hostname $ActionResumeInterval 10 $ActionQueueSize 100000 $ActionQueueDiscardMark 97500 $ActionQueueHighWaterMark 80000 $ActionQueueType LinkedList $ActionQueueFileName papertrailqueue $ActionQueueCheckpointInterval 100 $ActionQueueMaxDiskSpace 1g $ActionResumeRetryCount -1 $ActionQueueSaveOnShutdown on $ActionQueueTimeoutEnqueue 10 $ActionQueueDiscardSeverity 0 *.* @@logs.papertrailapp.com:XXXXX;papertrailstandard & ~ B) Does a queue block get used over and over by every send action that follows it or only by the first one or only until it encounters a send followed by a discard action (~)? C) How do I reset a queue block so that an upcoming send action does not use a queue at all? D) Does a TLS block get used over and over by every send action that follows it or only by the first one or only until it encounters a send followed by a discard action (~)? E) How do I reset a TLS block so that an upcoming send action does not use TLS at all? F) If I run rsyslog -N1 I get: rsyslogd -N1 rsyslogd: version 5.8.10, config validation run (level 1), master config /etc/rsyslog.conf rsyslogd: WARNING: rsyslogd is running in compatibility mode. Automatically generated config directives may interfer with your rsyslog.conf settings. We suggest upgrading your config and adding -c5 as the first rsyslogd option. rsyslogd: Warning: backward compatibility layer added to following directive to rsyslog.conf: ModLoad immark rsyslogd: Warning: backward compatibility layer added to following directive to rsyslog.conf: MarkMessagePeriod 1200 rsyslogd: Warning: backward compatibility layer added to following directive to rsyslog.conf: ModLoad imuxsock rsyslogd: End of config validation run. Bye. Where do I place the -c5 so that it doesnt run in compatibility mode anymore?
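    On the last point (F): on CentOS 6 the init script reads the daemon's startup flags from /etc/sysconfig/rsyslog, so putting -c5 into SYSLOGD_OPTIONS there and restarting the service is where the warning wants it to go. For C and E, the usual approach with the legacy syntax is to explicitly switch the relevant $Action... directives back to their defaults before the next action rather than relying on any implicit reset. A hedged sketch (directive names are standard rsyslog v5 legacy syntax; verify against your version's docs):

        # /etc/sysconfig/rsyslog
        SYSLOGD_OPTIONS="-c5"
        # then: service rsyslog restart
        #
        # in rsyslog.conf, before an action that should use neither TLS nor a dedicated queue:
        # $ActionSendStreamDriver ptcp        (plain TCP instead of gtls)
        # $ActionSendStreamDriverMode 0       (do not require TLS)
        # $ActionQueueType Direct             (the default, i.e. no separate action queue)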

    Read the article

  • Building nginx 1.0.4 on Amazon EC2 micro - perl and python problems

    - by digitaltoast
    I'd like to run nginx as a reverse proxy with apache2 on my EC2 micro instance. yum install nginx gives me nginx-0.8.53-1.2.amzn1.x86_64.rpm The current nginx is 1.0.4 I found and followed this guide: http://kdn2.info/2011/05/install-nginx-on-amazon-ec2/ It works fine up to and including "make". When I get to checkinstall --fstrans=no I get ERROR: ld.so: object '/usr/lib/installwatch.so' from LD_PRELOAD cannot be preloaded: ignored. test -d '/var/log/nginx' || mkdir -p '/var/log/nginx' ERROR: ld.so: object '/usr/lib/installwatch.so' from LD_PRELOAD cannot be preloaded: ignored. make[1]: Leaving directory `/root/src/nginx-1.0.4' ======================== Installation successful ========================== Copying documentation directory... ./ ./CHANGES ./LICENSE ./README cp: cannot stat `//var/tmp/gRWoVgIcdbmjfTjoVGBM/newfiles.tmp': No such file or directory Copying files to the temporary directory...OK Striping ELF binaries and libraries...OK Compressing man pages...OK Building file list...OK Building RPM package... FAILED! *** Failed to build the package ...and the logfile is full of: Building target platforms: x86_64 Building for target x86_64 Processing files: nginx-1.0.4-1.x86_64 error: File not found: /usr/src/rpm/BUILDROOT/nginx-1.0.4-1.x86_64/usr error: File not found: /usr/src/rpm/BUILDROOT/nginx-1.0.4-1.x86_64/usr/doc There IS /usr/src/rpm/BUILDROOT/nginx-1.0.4-1.x86_64/ but no /usr Following further down the page, it says: "If we want to use, for example, PHP 5.2 we can download PHP and Nginx compatible with Amazon Kernel(Xen Kernel) from the CentosALT Repository." So I install the two repositories, but when I yum install http://centos.alt.ru/pub/nginx/1.0/RPMS/x86_64/nginx-stable-1.0.4-1.el5.x86_64.rpm I get Error: Package: nginx-stable-1.0.4-1.el5.x86_64 (/nginx-stable-1.0.4-1.el5.x86_64) Requires: perl(:MODULE_COMPAT_5.8.8) You could try using --skip-broken to work around the problem but that doesn't fix it. When I do yum update, I get --> Finished Dependency Resolution Error: Package: python-distribute-0.6.19-10.1.x86_64 (devel_languages_python) Requires: python < 2.5 Installed: 1:python-2.6-1.19.amzn1.noarch (@amzn-main) python = 1:2.6-1.19.amzn1 Error: Package: python-distribute-0.6.19-10.1.i586 (devel_languages_python) Requires: python < 2.5 Installed: 1:python-2.6-1.19.amzn1.noarch (@amzn-main) python = 1:2.6-1.19.amzn1 I've tried everything - yum clean all and various other suggestions found on other sites. If anyone has any suggestions or a known package of the current 1.04 nginx working on EC2 Micro (Linux ip-10-56-63-85 2.6.35.11-83.9.amzn1.x86_64 #1 SMP Sat Feb 19 23:42:04 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux - which I think is RHEL 5?) then I'd be grateful. Incidentally, does this repolist look right? 
repo id repo name status CentALT CentALT Packages for Enterprise Linux 5 - x86_64 enabled: 112+157 amzn-main amzn-main-Base enabled: 2,706 amzn-main-debuginfo amzn-main-debuginfo disabled amzn-main-nosrc amzn-main-nosrc disabled amzn-updates amzn-updates-Base enabled: 328 amzn-updates-debuginfo amzn-updates-debuginfo disabled amzn-updates-nosrc amzn-updates-nosrc disabled devel_languages_python Python and Python Modules (SLE_10) enabled: 1,452+768 epel Extra Packages for Enterprise Linux 5 - x86_64 enabled: 5,892+604 epel-debuginfo Extra Packages for Enterprise Linux 5 - x86_64 - Debug disabled epel-source Extra Packages for Enterprise Linux 5 - x86_64 - Source disabled epel-testing Extra Packages for Enterprise Linux 5 - Testing - x86_64 disabled epel-testing-debuginfo Extra Packages for Enterprise Linux 5 - Testing - x86_64 - Debug disabled epel-testing-source Extra Packages for Enterprise Linux 5 - Testing - x86_64 - Source disabled s3tools Tools for managing Amazon S3 - Simple Storage Service (RHEL_6) enabled: 2+1 repolist: 10,492
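    A hedged fallback, if the goal is simply a current nginx on the micro instance rather than a managed RPM: skip checkinstall (which is the step that is failing) and install straight from the source tree that already builds, accepting that yum will not track it. The prefix and module below are illustrative choices, not requirements:

        ./configure --prefix=/opt/nginx --with-http_ssl_module
        make
        sudo make install
        /opt/nginx/sbin/nginx -V     # confirm version 1.0.4 and the compile options

    The perl(:MODULE_COMPAT_5.8.8) error from the CentALT package appears to be a separate issue: that RPM is built against EL5's perl 5.8.8, which Amazon Linux does not provide.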

    Read the article

  • Iptables config breaks Java + Elastic Search communication

    - by Agustin Lopez
    I am trying to set up a firewall for a server hosting a java app and ES. Both are on the same server and communicate to each other. The problem I am having is that my firewall configuration prevents java from connecting to ES. Not sure why really.... I have tried lot of stuff like opening the port range 9200:9400 to the server ip without any luck but from what I know all communication inside the server should be allowed with this configuration. The idea is that ES should not be accessible from outside but it should be accessible from this java app and ES uses the port range 9200:9400. This is my iptables script: echo -e Deleting rules for INPUT chain iptables -F INPUT echo -e Deleting rules for OUTPUT chain iptables -F OUTPUT echo -e Deleting rules for FORWARD chain iptables -F FORWARD echo -e Setting by default the drop policy on each chain iptables -P INPUT DROP iptables -P OUTPUT ACCEPT iptables -P FORWARD DROP echo -e Open all ports from/to localhost iptables -A INPUT -i lo -j ACCEPT echo -e Open SSH port 22 with brute force security iptables -A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set --name SSH --rsource iptables -A INPUT -p tcp -m tcp --dport 22 -m recent --rcheck --seconds 30 --hitcount 4 --rttl --name SSH --rsource -j REJECT --reject-with tcp-reset iptables -A INPUT -p tcp -m tcp --dport 22 -m recent --rcheck --seconds 30 --hitcount 3 --rttl --name SSH --rsource -j LOG --log-prefix "SSH brute force " iptables -A INPUT -p tcp -m tcp --dport 22 -m recent --update --seconds 30 --hitcount 3 --rttl --name SSH --rsource -j REJECT --reject-with tcp-reset iptables -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT echo -e Open NGINX port 80 iptables -A INPUT -p tcp --dport 80 -j ACCEPT echo -e Open NGINX SSL port 443 iptables -A INPUT -p tcp --dport 443 -j ACCEPT echo -e Enable DNS iptables -A INPUT -p tcp -m tcp --sport 53 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT iptables -A INPUT -p udp -m udp --sport 53 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT And I get this in the java app when this config is in place: org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master]; at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:292) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1185) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:537) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:475) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:304) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:228) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:300) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:195) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:700) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:760) at 
org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:482) at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:403) Do any of you see any problem with this configuration and ES? Thanks in advance
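    Since the OUTPUT policy is ACCEPT and loopback is open, one way to see what is actually being dropped is to log just before the DROP policy takes effect and watch the kernel log (/var/log/messages or kern.log, depending on distro). If the ES traffic turns out to arrive on eth0 (for example because the node publishes its eth0 address, or because multicast discovery is in play) rather than on lo, an explicit rule for the 9200:9400 range from the host's own address may be what is missing. All of this is a diagnostic sketch; <server-ip> is a placeholder:

        # accept replies to connections this host initiated
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        # appended last, so it logs whatever is about to fall through to the DROP policy
        iptables -A INPUT -j LOG --log-prefix "iptables dropped: "
        # if the log then shows ES ports being dropped, insert an explicit accept at the top:
        # iptables -I INPUT -p tcp -s <server-ip> --dport 9200:9400 -j ACCEPT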

    Read the article

  • How to get rid of a stubborn 'removed' device in mdadm

    - by T.J. Crowder
    One of my server's drives failed and so I removed the failed drive from all three relevant arrays, had the drive swapped out, and then added the new drive to the arrays. Two of the arrays worked perfectly. The third added the drive back as a spare, and there's an odd "removed" entry in the mdadm details. I tried both mdadm /dev/md2 --remove failed and mdadm /dev/md2 --remove detached as suggested here and here, neither of which complained, but neither of which had any effect, either. Does anyone know how I can get rid of that entry and get the drive added back properly? (Ideally without resyncing a third time, I've already had to do it twice and it takes hours. But if that's what it takes, that's what it takes.) The new drive is /dev/sda, the relevant partition is /dev/sda3. Here's the detail on the array: # mdadm --detail /dev/md2 /dev/md2: Version : 0.90 Creation Time : Wed Oct 26 12:27:49 2011 Raid Level : raid1 Array Size : 729952192 (696.14 GiB 747.47 GB) Used Dev Size : 729952192 (696.14 GiB 747.47 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 2 Persistence : Superblock is persistent Update Time : Tue Nov 12 17:48:53 2013 State : clean, degraded Active Devices : 1 Working Devices : 2 Failed Devices : 0 Spare Devices : 1 UUID : 2fdbf68c:d572d905:776c2c25:004bd7b2 (local to host blah) Events : 0.34665 Number Major Minor RaidDevice State 0 0 0 0 removed 1 8 19 1 active sync /dev/sdb3 2 8 3 - spare /dev/sda3 If it's relevant, it's a 64-bit server. It normally runs Ubuntu, but right now I'm in the data centre's "rescue" OS, which is Debian 7 (wheezy). The "removed" entry was there the last time I was in Ubuntu (it won't, currently, boot from the disk), so I don't think that's not some Ubuntu/Debian conflict (and they are, of course, closely related). Update: Having done extensive tests with test devices on a local machine, I'm just plain getting anomalous behavior from mdadm with this array. For instance, with /dev/sda3 removed from the array again, I did this: mdadm /dev/md2 --grow --force --raid-devices=1 And that got rid of the "removed" device, leaving me just with /dev/sdb3. Then I nuked /dev/sda3 (wrote a file system to it, so it didn't have the raid fs anymore), then: mdadm /dev/md2 --grow --raid-devices=2 ...which gave me an array with /dev/sdb3 in slot 0 and "removed" in slot 1 as you'd expect. Then mdadm /dev/md2 --add /dev/sda3 ...added it — as a spare again. (Another 3.5 hours down the drain.) So with the rebuilt spare in the array, given that mdadm's man page says RAID-DEVICES CHANGES ... When the number of devices is increased, any hot spares that are present will be activated immediately. ...I grew the array to three devices, to try to activate the "spare": mdadm /dev/md2 --grow --raid-devices=3 What did I get? Two "removed" devices, and the spare. And yet when I do this with a test array, I don't get this behavior. So I nuked /dev/sda3 again, used it to create a brand-new array, and am copying the data from the old array to the new one: rsync -r -t -v --exclude 'lost+found' --progress /mnt/oldarray/* /mnt/newarray This will, of course, take hours. Hopefully when I'm done, I can stop the old array entirely, nuke /dev/sdb3, and add it to the new array. Hopefully, it won't get added as a spare!
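    For what it's worth, a hedged sequence that sometimes clears stale slot records is to wipe the member's superblock before adding it back, so mdadm cannot "remember" it as the old spare. It still costs a resync and may not remove the phantom "removed" slot, so treat it as an experiment rather than a fix:

        mdadm --examine /dev/sda3 /dev/sdb3        # see what each member's superblock claims
        mdadm /dev/md2 --remove /dev/sda3          # take the spare back out
        mdadm --zero-superblock /dev/sda3          # forget its old membership
        mdadm /dev/md2 --add /dev/sda3             # re-add and let it resync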

    Read the article

  • How to diagnose repeated "Starting up database '<dbname>'"

    - by Richard Slater
    I have a SQL 2008 server which is predominantly used as a development server. In the last two weeks it has been having occasional "fits", and I have isolated the cause of these fits as CHECKDB being run almost continuously; the following log information is logged to the Windows Event Log (Source: MSSQLSERVER, Category: Server): Event: 1073758961, Message: Starting up database 'DBName1'. Event: 1073758961, Message: Starting up database 'DBName2'. Event: 1073759397, Message: CHECKDB for database 'DBName1' finished without errors on 2010-07-19 20:29:26.993 (local time). This is an informational message only; no user action is required. Event: 1073759397, Message: CHECKDB for database 'DBName1' finished without errors on 2010-07-19 20:29:26.993 (local time). This is an informational message only; no user action is required. This is repeated every 1-2 seconds until SQL Server is restarted or the offending databases are detached. I initially thought that it was a problem with the databases, so I took a backup and restored them to a SQL Express instance; all of the data is intact, and CHECKDB runs without problem. The two databases that were causing a problem last week were not being used, so I took full backups of them and detached the databases, which resolved the problem. However, at 0100 GMT this morning two other totally unrelated databases started showing the same problems. There is nothing in the event log to suggest that something happened to the server such as a restart, and there are no messages about processes crashing or issues being detected with the storage controller. Speaking to the owner of the company, this computer has suffered from "gremlins" in the past; however, advice was taken and the motherboard was replaced and the computer rebuilt; memory and processor are the same. Stats: O/S: Windows 2008 Standard Build 6002 CPU: 2x Pentium Dual-Core E5200 @ 2.5GHz RAM: 2GB SQL: 2008 Standard 10.0.2531 Edit: someone posted then deleted a comment about AutoClose; it was turned on on the databases affected. It seems that best practice is to disable it, so I have done that with the following. EXECUTE sp_MSforeachdb 'IF (''?'' NOT IN (''master'', ''tempdb'', ''msdb'', ''model'')) EXECUTE (''ALTER DATABASE [?] SET AUTO_CLOSE OFF WITH NO_WAIT'')' I won't know for some time if the problem recurs, so I am still open to further answers.
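    A quick, hedged way to confirm the sp_MSforeachdb change actually took (and to spot any database it skipped) is to query sys.databases from a command prompt; sqlcmd ships with SQL Server 2008 and -E uses the current Windows login:

        sqlcmd -S . -E -Q "SELECT name, is_auto_close_on FROM sys.databases ORDER BY name"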

    Read the article

  • Globe SSL with NGINX SSL certificate problem, please help

    - by PartySoft
    Hello, I have a big problem with installing a certificat for nginx (same happends with apache though) I have 3 files __domain_com.crt __domain_com.ca-bundle and ssl.key. I tried to append cat __domain_com.crt __leechpack_com.ca-bundle bundle.crt but if I do it like this i get an error: [emerg]: SSL_CTX_use_certificate_chain_file("/etc/nginx/__leechpack_com.crt") failed (SSL: error:0906D066:PEM routines:PEM_read_bio:bad end line error:140DC009:SSL routines:SSL_CTX_use_certificate_chain_file:PEM lib) And that's because the delimiters of the certificates arren't separated. ZqTjb+WBJQ== -----END CERTIFICATE----------BEGIN CERTIFICATE----- MIIE6DCCA9CgAwIBAgIQdIYhlpUQySkmKUvMi/gpLDANBgkqhkiG9w0BAQUFADBv If i separate them with an enter between certificated it will at least start but i will get the same warning from Firefox: This Connection is Untrusted You have asked Firefox to connect securely to domain.com, but we can't confirm that your connection is secure. The concatenate solution it is given by Globe SSL and the NGINX site but it doesn't work. I think the bundle is ignored though. http://customer.globessl.com/knowledgebase/55/Certificate-Installation--Nginx.html http://nginx.org/en/docs/http/configuring_https_servers.html#chains%20http://wiki.nginx.org/NginxHttpSslModule if i do openssl s_client -connect down.leechpack.com:443 CONNECTED(00000003) depth=0 /OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com verify error:num=20:unable to get local issuer certificate verify return:1 depth=0 /OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com verify error:num=27:certificate not trusted verify return:1 depth=0 /OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com verify error:num=21:unable to verify the first certificate verify return:1 --- Certificate chain 0 s:/OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com i:/C=RO/O=GLOBE HOSTING CERTIFICATION AUTHORITY/CN=GLOBE SSL Domain Validated CA 1 s:/C=US/O=Globe Hosting, Inc./OU=GlobeSSL DV Certification Authority/CN=GlobeSSL CA i:/C=SE/O=AddTrust AB/OU=AddTrust External TTP Network/CN=AddTrust External CA Root --- Server certificate -----BEGIN CERTIFICATE----- MIIFQzCCBCugAwIBAgIQRnpCmtwX7z7GTla0QktE6DANBgkqhkiG9w0BAQUFADBl MQswCQYDVQQGEwJSTzEuMCwGA1UEChMlR0xPQkUgSE9TVElORyBDRVJUSUZJQ0FU SU9OIEFVVEhPUklUWTEmMCQGA1UEAxMdR0xPQkUgU1NMIERvbWFpbiBWYWxpZGF0 ZWQgQ0EwHhcNMTAwMjExMDAwMDAwWhcNMTEwMjExMjM1OTU5WjCBjTEhMB8GA1UE CxMYRG9tYWluIENvbnRyb2wgVmFsaWRhdGVkMSgwJgYDVQQLEx9Qcm92aWRlZCBi eSBHbG9iZSBIb3N0aW5nLCBJbmMuMSQwIgYDVQQLExtHbG9iZSBTdGFuZGFyZCBX aWxkY2FyZCBTU0wxGDAWBgNVBAMUDyoubGVlY2hwYWNrLmNvbTCCASIwDQYJKoZI hvcNAQEBBQADggEPADCCAQoCggEBAKX7jECMlYEtcvqVWQVUpXNxO/VaHELghqy/ Ml8dOfOXG29ZMZsKUMqS0jXEwd+Bdpm31lBxOALkj8o79hX0tspLMjgtCnreaker 49y62BcjfguXRFAaiseXTNbMer5lDWiHlf1E7uCoTTiczGqBNfl6qSJlpe4rYBtq XxBAiygaNba6Owghuh19+Uj8EICb2pxbJNFfNzU1D9InFdZSVqKHYBem4Cdrtxua W4+YONsfLnnfkRQ6LOLeYExHziTQhSavSv9XaCl9Zqzm5/eWbQqLGRpSJoEPY/0T GqnmeMIq5M35SWZgOVV10j3pOCS8o0zpp7hMJd2R/HwVaPCLjukCAwEAAaOCAcQw ggHAMB8GA1UdIwQYMBaAFB9UlnKtPUDnlln3STFTCWb5DWtyMB0GA1UdDgQWBBT0 8rPIMr7JDa2Xs5he5VXAvMWArjAOBgNVHQ8BAf8EBAMCBaAwDAYDVR0TAQH/BAIw ADAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwVQYDVR0gBE4wTDBKBgsr BgEEAbIxAQICGzA7MDkGCCsGAQUFBwIBFi1odHRwOi8vd3d3Lmdsb2Jlc3NsLmNv bS9kb2NzL0dsb2JlU1NMX0NQUy5wZGYwRgYDVR0fBD8wPTA7oDmgN4Y1aHR0cDov 
L2NybC5nbG9iZXNzbC5jb20vR0xPQkVTU0xEb21haW5WYWxpZGF0ZWRDQS5jcmww dwYIKwYBBQUHAQEEazBpMEEGCCsGAQUFBzAChjVodHRwOi8vY3J0Lmdsb2Jlc3Ns LmNvbS9HTE9CRVNTTERvbWFpblZhbGlkYXRlZENBLmNydDAkBggrBgEFBQcwAYYY aHR0cDovL29jc3AuZ2xvYmVzc2wuY29tMCkGA1UdEQQiMCCCDyoubGVlY2hwYWNr LmNvbYINbGVlY2hwYWNrLmNvbTANBgkqhkiG9w0BAQUFAAOCAQEAB2Y7vQsq065K s+/n6nJ8ZjOKbRSPEiSuFO+P7ovlfq9OLaWRHUtJX0sLntnWY1T9hVPvS5xz/Ffl w9B8g/EVvvfMyOw/5vIyvHq722fAAC1lWU1rV3ww0ng5bgvD20AgOlIaYBvRq8EI 5Dxo2og2T1UjDN44GOSWsw5jetvVQ+SPeNPQLWZJS9pNCzFQ/3QDWNPOvHqEeRcz WkOTCqbOSZYvoSPvZ3APh+1W6nqiyoku/FCv9otSCtXPKtyVa23hBQ+iuxqIM4/R gncnUKASi6KQrWMQiAI5UDCtq1c09uzjw+JaEzAznxEgqftTOmXAJSQGqZGd6HpD ZqTjb+WBJQ== -----END CERTIFICATE----- subject=/OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com issuer=/C=RO/O=GLOBE HOSTING CERTIFICATION AUTHORITY/CN=GLOBE SSL Domain Validated CA --- No client certificate CA names sent --- SSL handshake has read 3313 bytes and written 343 bytes --- New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA Server public key is 2048 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 Cipher : DHE-RSA-AES256-SHA Session-ID: 5F9C8DC277A372E28A4684BAE5B311533AD30E251369D144A13DECA3078E067F Session-ID-ctx: Master-Key: 9B531A75347E6E7D19D95365C1208F2ED37E4004AA8F71FC614A18937BEE2ED9F82D58925E0B3931492AD3D2AA6EFD3B Key-Arg : None Start Time: 1288618211 Timeout : 300 (sec) Verify return code: 21 (unable to verify the first certificate) ---
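    The symptoms here (a "bad end line" when the files are joined directly, then an untrusted warning once they are separated, with openssl reporting "unable to get local issuer certificate") are consistent with the intermediate chain not being served, often because the first file lacks a trailing newline. A hedged rebuild and check, using only the files named in the question:

        # rebuild the bundle with an explicit newline between the two files
        # (a blank line between PEM blocks should be harmless to the PEM reader)
        (cat __domain_com.crt; echo; cat __domain_com.ca-bundle) > bundle.crt
        # there should now be one BEGIN line per certificate in the chain
        grep -c 'BEGIN CERTIFICATE' bundle.crt
        # verify the server cert against the intermediates before handing it to nginx
        openssl verify -CAfile __domain_com.ca-bundle __domain_com.crt

    nginx expects the server certificate first in the ssl_certificate file, followed by the intermediates, which is what the concatenation above produces.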

    Read the article

  • How to run Django 1.3/1.4 on uWSGI on nginx on EC2 (Apache2 works)

    - by Tadeck
    I am posting a question on behalf of my administrator. Basically he wants to set up Django app (made on Django 1.3, but will be moving to Django 1.4, so it should not really matter which one of these two will work, I hope) on WSGI on nginx, installed on Amazon EC2. The app runs correctly when Django's development server is used (with ./manage.py runserver 0.0.0.0:8080 for example), also Apache works correctly. The only problem is with nginx and it looks there is something else wrong with nginx / WSGI or Django configuration. His description is as follows: Server has been configured according to many tutorials, but unfortunately Nginx and uWSGI still do not work with application. ProjectName.py: import os, sys, wsgi os.environ.setdefault("DJANGO_SETTINGS_MODULE", "ProjectName.settings") from django.core.wsgi import get_wsgi_application application = get_wsgi_application() I run uWSGI by comand: uwsgi -x /etc/uwsgi/apps-enabled/projectname.xml XML file: <uwsgi> <chdir>/home/projectname</chdir> <pythonpath>/usr/local/lib/python2.7</pythonpath> <socket>127.0.0.1:8001</socket> <daemonize>/var/log/uwsgi/proJectname.log</daemonize> <processes>1</processes> <uid>33</uid> <gid>33</gid> <enable-threads/> <master/> <vacuum/> <harakiri>120</harakiri> <max-requests>5000</max-requests> <vhost/> </uwsgi> In logs from uWSGI: *** no app loaded. going in full dynamic mode *** In logs from Nginx: XXX.com [pid: XXX|app: -1|req: -1/1] XXX.XXX.XXX.XXX () {48 vars in 989 bytes} [Date] GET / => generated 46 bytes in 77 m secs (HTTP/1.1 500) 2 headers in 63 bytes (0 switches on core 0) added /usr/lib/python2.7/ to pythonpath. Traceback (most recent call last): File "./ProjectName.py", line 26, in <module> from django.core.wsgi import get_wsgi_application ImportError: No module named wsgi unable to load app SCRIPT_NAME=XXX.com| Example tutorials that were used: http://projects.unbit.it/uwsgi/wiki/RunOnNginx https://docs.djangoproject.com/en/1.4/howto/deployment/wsgi/ Do you have any idea what has been done incorrectly, or what should be done to make Django work on uWSGI on nginx on EC2?
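    The traceback is the real clue: django.core.wsgi only exists from Django 1.4 onwards, so the interpreter embedded in uWSGI is importing an older Django (or a different site-packages) than the one the dev server and Apache use. The stray wsgi in "import os, sys, wsgi" at the top of ProjectName.py also looks suspect, but the failing line in the traceback is the 1.4-only import. A hedged way to check, reusing the paths and module name from the question:

        # which Django does the system interpreter see?
        python2.7 -c "import django; print(django.get_version())"
        # run uwsgi in the foreground against the same callable to watch the import directly
        uwsgi --socket 127.0.0.1:8001 --chdir /home/projectname \
              --pythonpath /home/projectname --module ProjectName:application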

    Read the article
