Search Results

Search found 6654 results on 267 pages for 'socket io'.

Page 186/267

  • Can't set up a 3-node MongoDB replica set

    - by Victor Lin
    I just followed the instructions in the MongoDB document Replica Sets - Basics to set up a 3-node replica set. Everything goes fine when I do the initiate and add the first node on the primary.

      [foo@host-a mongodb]$ bin/mongo localhost
      MongoDB shell version: 1.8.2
      connecting to: localhost
      > rs.initiate()
      {
          "info2" : "no configuration explicitly specified -- making one",
          "info" : "Config now saved locally. Should come online in about a minute.",
          "ok" : 1
      }
      > rs.add("host-b")
      { "ok" : 1 }

    So far so good, but when I try to add the third node:

      myset:PRIMARY> rs.addArb("host-c")
      Sun Aug 7 22:57:09 MessagingPort recv() errno:104 Connection reset by peer 127.0.0.1:27017
      Sun Aug 7 22:57:09 SocketException: remote: error: 9001 socket exception [1]
      Sun Aug 7 22:57:09 DBClientCursor::init call() failed
      Sun Aug 7 22:57:09 query failed : local.$cmd { count: "system.replset", query: {}, fields: {} } to: 127.0.0.1
      Sun Aug 7 22:57:09 Error: error doing query: failed shell/collection.js:150
      Sun Aug 7 22:57:09 trying reconnect to 127.0.0.1
      Sun Aug 7 22:57:09 reconnect 127.0.0.1 ok

    As a result, the current primary became a secondary, and host-b was marked as dead, but actually it is still alive.

      myset:SECONDARY> rs.status()
      {
          "set" : "myset",
          "date" : ISODate("2011-08-08T04:03:23Z"),
          "myState" : 2,
          "members" : [
              {
                  "_id" : 0,
                  "name" : "host-a:27017",
                  "health" : 1,
                  "state" : 2,
                  "stateStr" : "SECONDARY",
                  "optime" : { "t" : 1312775799000, "i" : 1 },
                  "optimeDate" : ISODate("2011-08-08T03:56:39Z"),
                  "self" : true
              },
              {
                  "_id" : 1,
                  "name" : "host-b",
                  "health" : 0,
                  "state" : 6,
                  "stateStr" : "(not reachable/healthy)",
                  "uptime" : 0,
                  "optime" : { "t" : 0, "i" : 0 },
                  "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
                  "lastHeartbeat" : ISODate("2011-08-08T04:03:22Z"),
                  "errmsg" : "still initializing"
              }
          ],
          "ok" : 1
      }

    How could this happen? I just followed the guide in the document; did I do something wrong? Moreover, I can't do anything on the current secondary server. It doesn't allow me to reconfigure on the secondary node, but the problem is that there is no primary node.

      myset:SECONDARY> rs.reconfig({})
      {
          "errmsg" : "replSetReconfig command must be sent to the current replica set primary.",
          "ok" : 0
      }

    Any ideas?
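    A hedged sketch of one way out of the "no primary" state: MongoDB 2.0 and later accept a force option to rs.reconfig(), which lets a surviving member adopt a shrunken member list; the 1.8 shell shown above does not support it, so treat this as an upgrade-path assumption rather than a confirmed fix. Run in the mongo shell on host-a:

      // keep only the member we know is healthy (index 0 = host-a is an
      // assumption based on the rs.status() output above)
      cfg = rs.conf()
      cfg.members = [cfg.members[0]]
      rs.reconfig(cfg, {force: true})   // force reconfig requires MongoDB 2.0+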

  • cyrus-imapd does not work with sasldb2, but postfix does

    - by Felix Chang
    I'm on CentOS 6, 64-bit. When I use POP3 to access cyrus-imapd:

      S: +OK li557-53 Cyrus POP3 v2.3.16-Fedora-RPM-2.3.16-6.el6_2.5 server ready <3176565056.1354071404@li557-53>
      C: USER [email protected]
      S: +OK Name is a valid mailbox
      C: PASS abcabc
      S: -ERR [AUTH] Invalid login
      C: QUIT

    It failed with USER "abc" too. My imapd.conf:

      configdirectory: /var/lib/imap
      partition-default: /var/spool/imap
      admins: cyrus
      sievedir: /var/lib/imap/sieve
      sendmail: /usr/sbin/sendmail
      hashimapspool: true
      sasl_pwcheck_method: auxprop
      sasl_mech_list: PLAIN LOGIN
      tls_cert_file: /etc/pki/cyrus-imapd/cyrus-imapd.pem
      tls_key_file: /etc/pki/cyrus-imapd/cyrus-imapd.pem
      tls_ca_file: /etc/pki/tls/certs/ca-bundle.crt
      allowplaintext: true
      #defaultdomain: myabc.com
      loginrealms: myabc.com

    sasldblistusers2 shows:

      [email protected]: userPassword

    But postfix works with the same user. /etc/sasl2/smtpd.conf:

      pwcheck_method: auxprop
      mech_list: plain login
      log_level: 7
      saslauthd_path: /var/run/saslauthd/mux

    /etc/postfix/main.cf:

      queue_directory = /var/spool/postfix
      command_directory = /usr/sbin
      daemon_directory = /usr/libexec/postfix
      data_directory = /var/lib/postfix
      mail_owner = postfix
      myhostname = localhost
      mydomain = myabc.com
      myorigin = $mydomain
      inet_interfaces = all
      inet_protocols = all
      mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
      local_recipient_maps =
      unknown_local_recipient_reject_code = 550
      mynetworks_style = subnet
      mynetworks = 192.168.0.0/24, 127.0.0.0/8
      relay_domains = $mydestination
      alias_maps = hash:/etc/aliases
      alias_database = hash:/etc/aliases
      home_mailbox = Maildir/
      mailbox_transport = lmtp:unix:/var/lib/imap/socket/lmtp
      debug_peer_level = 2
      debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5
      sendmail_path = /usr/sbin/sendmail.postfix
      newaliases_path = /usr/bin/newaliases.postfix
      mailq_path = /usr/bin/mailq.postfix
      setgid_group = postdrop
      html_directory = no
      manpage_directory = /usr/share/man
      sample_directory = /usr/share/doc/postfix-2.6.6/samples
      readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
      smtpd_sasl_auth_enable = yes
      smtpd_sasl_local_domain = $myhostname
      smtpd_sasl_security_options = noanonymous
      smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
      smtpd_sasl_security_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
      message_size_limit = 15728640
      broken_sasl_auth_clients = yes

    Please help.
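    A hedged way to narrow this down, assuming the stock cyrus-sasl tools are installed: list what sasldb2 actually holds and test the IMAP login path directly, since realm handling (the bare user versus the user@realm form, which is an assumption here) is a common mismatch between postfix and cyrus-imapd. Also check that the cyrus user can read the database file.

      sasldblistusers2 -f /etc/sasldb2
      ls -l /etc/sasldb2                     # must be readable by the cyrus user
      imtest -m login -a abc@myabc.com localhost   # prompts for the password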

  • Thunderbird doesn't show folders on a new Dovecot install

    - by Zoran Zaric
    Hey, I set up a new mail server with Postfix and Dovecot some days ago. Everything is working except that Thunderbird doesn't show any folders; Evolution shows me all folders. I migrated from a Courier install using imapsync. In the filesystem the folders don't have an INBOX prefix in their name, so the folders are called .Folder 1, not .INBOX.Folder 1. This is the output of dovecot -n:

      # 1.0.10: /etc/dovecot/dovecot.conf
      Warning: mail_extra_groups setting was often used insecurely so it is now deprecated, use mail_access_groups or mail_privileged_group instead
      base_dir: /var/run/dovecot/
      log_timestamp: "%Y-%m-%d %H:%M:%S "
      protocols: imap pop3
      listen(default): *:143
      listen(imap): *:143
      listen(pop3): *:110
      disable_plaintext_auth: no
      login_dir: /var/run/dovecot//login
      login_executable(default): /usr/lib/dovecot/imap-login
      login_executable(imap): /usr/lib/dovecot/imap-login
      login_executable(pop3): /usr/lib/dovecot/pop3-login
      first_valid_uid: 1001
      last_valid_uid: 1001
      mail_extra_groups: vmail
      mail_access_groups: vmail
      mail_location: maildir:/var/vmail/%d/%u
      maildir_copy_with_hardlinks: yes
      mail_executable(default): /usr/lib/dovecot/imap
      mail_executable(imap): /usr/lib/dovecot/imap
      mail_executable(pop3): /usr/lib/dovecot/pop3
      mail_plugin_dir(default): /usr/lib/dovecot/modules/imap
      mail_plugin_dir(imap): /usr/lib/dovecot/modules/imap
      mail_plugin_dir(pop3): /usr/lib/dovecot/modules/pop3
      pop3_uidl_format(default):
      pop3_uidl_format(imap):
      pop3_uidl_format(pop3): %08Xu%08Xv
      auth default:
        user: nobody
        passdb:
          driver: sql
          args: /etc/dovecot/dovecot-sql.conf
        userdb:
          driver: sql
          args: /etc/dovecot/dovecot-sql.conf
        socket:
          type: listen
          client:
            path: /var/spool/postfix/private/auth
            mode: 432
            user: postfix
            group: postfix
          master:
            path: /var/run/dovecot/auth-master
            mode: 432
            user: vmail
            group: vmail

    Thanks!
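    A sketch of one likely cause, stated as an assumption: Courier uses an "INBOX." namespace prefix, while Dovecot defaults to none, and Thunderbird tends to cache the old prefix in its account settings. Dovecot 1.x lets you declare the namespace explicitly in dovecot.conf (exact syntax for 1.0.10 should be checked against its docs), and clearing the "IMAP server directory" setting in Thunderbird may also be needed:

      namespace private {
        separator = .
        prefix =
        inbox = yes
      }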

  • Recommended setting for using Apache mod_mono with a different user

    - by Korrupzion
    Hello, I'm setting up an ASP.NET script on my Linux machine using mod_mono. The script spawns processes of a binary that belongs to another user, but the process is spawned as www-data because Apache runs as that user, and I need to spawn the process as the user that owns the file. I tried the setuid bit but it has no effect. I discovered that if I kill mod-mono-server2.exe and run it as the user that I need, everything works right, but I want to know the proper way to do this, because after a while Apache runs mod-mono-server2.exe as www-data again. The Mono Project webpage says:

      How can I run mod-mono-server as a different user?
      Due to apache's design, there is no straightforward way to start processes from inside of an apache child as a specific user. Apache's SuExec wrapper is targeting CGI and is useless for modules. Mod_mono provides the MonoStartXSP option. You can set it to "False" and start mod-mono-server manually as the specific user. Some tinkering with the Unix socket's permissions might be necessary, unless MonoListenPort is used, which turns on TCP between mod_mono and mod-mono-server. Another (very risky) way: use a setuid 'root' wrapper for the mono executable, inspired by the sources of Apache's SuExec.

    I want to know how to use the setuid wrapper, because I tried adding the setuid bit to the 'mono' binary and changing the owner to the user that I want, but that made mono crash. Or maybe there is a way to keep mod-mono-server2.exe running separately from Apache without being closed (anyone have a script?). My environment:

      Debian Lenny 2.6.26-2-amd64
      Mono 1.9.1, mod_mono from the Debian repository
      Dedicated server (root access and stuff), using Apache vhosts
      I use mono for only that script

    Thanks!
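    A minimal sketch of the MonoStartXSP route the Mono docs describe, with all names, paths and option spellings assumed rather than taken from a working setup: tell mod_mono not to start the backend, switch to TCP with MonoListenPort, and start mod-mono-server2 yourself as the owning user (for example from an init script) so Apache never respawns it as www-data:

      # Apache vhost snippet (application alias "myapp" is an assumption)
      MonoStartXSP myapp false
      MonoListenPort myapp 9000

      # started at boot as the owning user, e.g. from /etc/rc.local or an init script:
      su - appuser -c 'mod-mono-server2 --applications /:/var/www/myapp --port 9000 --nonstop &'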

  • Making OpenSSL work on PHP Windows 2008 server with FastCGI

    - by KacieHouser
    I have been researching all day. Here is what I have done:

      - In C:/PHP/php.ini and C:/PHP/php-cgi-fcgi.ini I set extension_dir = "C:/PHP/ext" and uncommented extension=php_openssl.dll
      - I went to http://windows.php.net/download/ and got the thread-safe version of the DLLs matching PHP 5.4 (5.4.8)
      - In C:/PHP/ext I replaced php_openssl.dll with the one I downloaded
      - In System32 and SysWOW64 I added the following DLLs: ssleay.dll, libeay.dll
      - I restarted the IIS server in the Server Manager under Web Server, and stopped and started the World Wide Web Publishing Service

    That didn't work, so I tried the same thing with the non-thread-safe versions. I still get:

      Fatal error: Call to undefined function ftp_ssl_connect() in C:\inetpub\wwwroot\REMOVED_dev\save_data.php on line 5

    Here are related things from phpinfo():

      System: Windows NT DEV-WEB1 6.1 build 7601 (Windows Server 2008 R2 Standard Edition Service Pack 1) i586
      Compiler: MSVC9 (Visual C++ 2008)
      Architecture: x86
      Configure Command: cscript /nologo configure.js "--enable-snapshot-build" "--enable-debug-pack" "--disable-zts" "--disable-isapi" "--disable-nsapi" "--without-mssql" "--without-pdo-mssql" "--without-pi3web" "--with-pdo-oci=C:\php-sdk\oracle\instantclient10\sdk,shared" "--with-oci8=C:\php-sdk\oracle\instantclient10\sdk,shared" "--with-oci8-11g=C:\php-sdk\oracle\instantclient11\sdk,shared" "--with-enchant=shared" "--enable-object-out-dir=../obj/" "--enable-com-dotnet" "--with-mcrypt=static" "--disable-static-analyze" "--with-pgo"
      Server API: CGI/FastCGI
      Configuration File (php.ini) Path: C:\Windows
      Loaded Configuration File: C:\PHP\php-cgi-fcgi.ini
      Scan this dir for additional .ini files: (none)
      Additional .ini files parsed: (none)
      Registered PHP Streams: php, file, glob, data, http, ftp, zip, compress.zlib, compress.bzip2, https, ftps, sqlsrv, phar
      Registered Stream Socket Transports: tcp, udp, ssl, sslv3, sslv2, tls
      FTP support: enabled
      Protocols: dict, file, ftp, ftps, gopher, http, https, imap, imaps, ldap, pop3, pop3s, rtsp, scp, sftp, smtp, smtps, telnet, tftp
      openssl: OpenSSL support enabled
      OpenSSL Library Version: OpenSSL 0.9.8t 18 Jan 2012
      OpenSSL Header Version: OpenSSL 0.9.8x 10 May 2012

    What am I missing here?
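    Two quick hedged checks that may help isolate this: confirm which ini file and which modules the CGI binary (not the CLI) actually loads, and whether ftp_ssl_connect() exists at all in this build. Historically, PHP's manual notes that ftp_ssl_connect() requires the ftp extension and OpenSSL to be compiled in statically, and official Windows builds of PHP 5.x did not include it, which would match the fatal error regardless of DLLs; that reading is an assumption to verify against the docs for 5.4.8.

      C:\PHP> php-cgi.exe -i | findstr /C:"Loaded Configuration"
      C:\PHP> php-cgi.exe -m | findstr /i "ftp openssl"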

  • Building new computer, turns on, but no POST

    - by addybojangles
    Pardon my ignorance here; I finally decided to put together a computer, and egads. I purchased a new motherboard, power supply, processor, video card and memory:

      ASUS M4A79XTD EVO AM3 AMD 790X ATX AMD Motherboard
      OCZ Fatal1ty OCZ550FTY 550W ATX12V v2.2 / EPS12V SLI Ready 80 PLUS Certified Modular Active PFC Power Supply
      AMD Phenom II X4 965 Black Edition Deneb 3.4GHz 4 x 512KB L2 Cache 6MB L3 Cache Socket AM3 125W Quad-Core Processor
      XFX HD-577A-ZNFC Radeon HD 5770 (Juniper XT) 1GB 128-bit GDDR5 PCI Express 2.0 x16 HDCP Ready CrossFireX Support Video Card
      G.SKILL 4GB (2 x 2GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Dual Channel Kit Desktop Memory Model F3-12800CL9D-4GBNQ

    (I originally had links for you guys, but I lack the rep, sorry!) I've got it all in the tower. I put in the power supply, installed the processor on the motherboard, installed the heatsink, put in the RAM, and I am using an older IDE hard disk. When I start the computer, the monitor tells me "check signal cable". As far as I can tell, the heatsink fan on the processor is spinning, the power supply is on (obviously), and the green LED on the motherboard is on. I originally only had the bigger connector plugged in to the motherboard (what I saw in a YouTube vid as well as the mobo instructions), but after doing some research, it said to plug in the other ATX power connector too. Which I did. And trying to power the computer results in nothing: no beeps on startup, no POST. Anyone have any ideas? Your ideas and help are greatly appreciated.

  • Xen HVM networking won't work

    - by Nathan
    I'm trying to get Xen HVM networking working using route, however I am failing. Xen PV works fine using Ubuntu, but when installing on HVM it fails to pick up the network. I'll say up front that I'm not that experienced with Xen, so I would appreciate any help. vm104 is the HVM that's causing me the problems; here are the configs that I believe should help resolve the problem.

      [root@eros vm104]# cat vm104.cfg
      import os, re
      arch = os.uname()[4]
      if re.search('64', arch):
          arch_libdir = 'lib64'
      else:
          arch_libdir = 'lib'
      kernel = '/usr/lib/xen/boot/hvmloader'
      builder = 'hvm'
      memory = 6000
      shadow_memory = '8'
      cpu_weight = 256
      name = 'vm104'
      vif = ['type=ioemu, ip=85.25.x.y, vifname=vifvm104.0, mac=00:16:3e:52:3d:fe, bridge=xenbr0']
      acpi = 1
      apic = 1
      vnc = 1
      vcpus = 4
      vncdisplay = 3
      vncviewer = 0
      vncconsole = 1
      vnclisten = '217.118.x.y'
      vncpasswd = 'kCfb5S4tE7'
      serial = 'pty'
      disk = ['phy:/dev/vpsvg/vm104_img,hda,w', 'file:/home/solusvm/xen/iso/Windows-Server-2008-RC2.iso,hdc:cdrom,r']
      device_model = '/usr/' + arch_libdir + '/xen/bin/qemu-dm'
      boot = 'cd'
      sdl = '0'
      usbdevice = 'tablet'
      pae = 1

      [root@eros /]# cat /etc/xen/xend-config.sxp | egrep -v "(^#.*|^$)"
      (xend-unix-server yes)
      (xend-unix-path /var/lib/xend/xend-socket)
      (xend-relocation-hosts-allow '^localhost$ ^localhost\\.localdomain$')
      (network-script network-route)
      (vif-script vif-route)
      (network-script 'network-route netdev=eth0')
      (dom0-min-mem 256)
      (dom0-cpus 0)
      (vnc-listen '0.0.0.0')
      (vncpasswd '')
      (keymap 'en-us')

    The Windows install will not pick up the network. I've tried setting the IP manually, using the Xen server's IP as the gateway and setting the main IP in Windows, but no luck. If anyone needs any more information, let me know; I appreciate any input!
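    One hedged thing to try, since an HVM guest sees an emulated NIC provided by qemu rather than the PV vif: specify an emulated model that Windows has in-box drivers for, and check on the dom0 that the vif-route script actually installed a route for the guest's IP. The model choice and device names below are assumptions:

      vif = ['type=ioemu, model=e1000, ip=85.25.x.y, vifname=vifvm104.0, mac=00:16:3e:52:3d:fe']

      # on the dom0 after the guest boots:
      ip route | grep 85.25.        # expect a host route towards the guest's interface
      ifconfig -a | grep vifvm104   # the emulated/tap counterpart must be up as well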

  • Node.js, Nginx and Varnish with WebSockets

    - by Joe S
    I'm in the process of architecting the backend of a new Node.js web app that I'd like to be pretty scalable, but not overkill. In all of my previous Node.js deployments, I have used Nginx to serve static assets such as JS/CSS and to reverse proxy to Node (as I've heard Nginx does a much better job of this, and Express is not really production-ready). However, Nginx does not support WebSockets. I am making extensive use of Socket.IO for the first time and discovered many articles detailing this limitation. Most of them suggest using Varnish to direct the WebSockets traffic directly to Node, bypassing Nginx. This is my current setup:

      Varnish : port 80 - routing HTTP requests to Nginx and WebSockets directly to Node
      Nginx : port 8080 - serving static assets like CSS/JS
      Node.js Express : port 3000 - serving the app, over HTTP + WebSockets

    However, there is now the added complexity that Varnish doesn't support HTTPS, which requires Stunnel or some other solution, and it's also not load balanced yet (perhaps I will use HAProxy or something). The complexity is stacking up! I would like to keep things simpler than this if possible. Is it still necessary to reverse proxy Node.js using Nginx when Varnish is also present? Even if Express is slow at serving static files, they should theoretically be cached by Varnish. Or are there better ways to implement this?
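    For what it's worth, nginx gained WebSocket proxying in 1.3.13 via the Upgrade/Connection header mechanism, which removes the main reason for Varnish in this stack. A minimal sketch, with the ports matching the setup above and everything else (paths, names) assumed:

      server {
          listen 80;
          location /socket.io/ {
              proxy_pass http://127.0.0.1:3000;
              proxy_http_version 1.1;
              proxy_set_header Upgrade $http_upgrade;
              proxy_set_header Connection "upgrade";
          }
          location / {
              root /var/www/app/public;   # static assets served by nginx
              try_files $uri @node;
          }
          location @node {
              proxy_pass http://127.0.0.1:3000;
          }
      }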

  • Installing a SATA DVD burner on a machine with no spare SATA ports/connectors

    - by Faheem Mitha
    Greetings. I have the following motherboard:

      Tyan Thunder K8WE S2895A2NRF Motherboard - extended ATX - nForce Pro 2200/2050 - Socket 940 - UDMA133, Serial ATA-300 (RAID) - 2 x Gigabit Ethernet - FireWire - 6-1 channel audio

    This is part of a computer that was assembled in the winter of 2006/2007. The user manual says the following with regard to SATA:

      Integrated SATA II Generation 1 Controllers (from NForce Professional 2200)
      Two integrated dual-port SATA II controllers
      Four SATA connectors support up to four drives
      3 Gb/s per direction per channel
      NvRAID v2.0 support
      Supports RAID 0, 1, 0+1 and JBOD

    I just purchased a SATA DVD burner; here is the page for the product: http://www.amazon.ca/gp/product/B002QGDWLK/ The problem I am facing is that I already have 4 SATA drives installed. I don't want to remove any of them; however, I want the DVD burner installed as well. The person I am consulting with here (Bombay, India) tells me that my four available SATA ports are filled, and that my only option is to install a SATA card into the one free PCI slot on the motherboard. However, he says that with this setup I will not be able to boot from the DVD drive. Are these statements correct, and what are my other options, if any? Even if the statements in the last paragraph are true, I suppose I could use one of the motherboard connectors/ports that are currently being used by a hard drive for the DVD drive, and use the "add-on" connector with that hard drive. Not all 4 hard drives need to be bootable. BTW, despite having read through http://en.wikipedia.org/wiki/Serial_ATA#Cables.2C_connectors.2C_and_ports I am fuzzy on the differences between connectors, cables and ports. Thanks in advance.

  • Cannot upload files bigger than 8GB to Amazon S3 by multi-part upload due to broken pipe

    - by spencerho
    I implemented S3 multi-part upload, both the high-level and low-level versions, based on the sample code from http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?HLuploadFileJava.html and http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?llJavaUploadFile.html When I uploaded files of size less than 4 GB, the uploads completed without any problem. When I uploaded a file of size 13 GB, the code started to throw an IO exception, broken pipe. After retries, it still failed. Here is the way to repeat the scenario:

      1. Take the 1.1.7.1 release
      2. Create a new bucket in the US Standard region
      3. Create a large EC2 instance as the client to upload the file
      4. Create a file of 13 GB in size on the EC2 instance
      5. Run the sample code from either one of the high-level or low-level API S3 documentation pages from the EC2 instance
      6. Test any one of three part sizes: the default part size (5 MB), or a part size of 100,000,000 or 200,000,000 bytes

    So far the problem shows up consistently. I attached a tcpdump file here for you to compare. In there, the host on the S3 side kept resetting the socket.
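    As a hedged sketch of a mitigation to experiment with (not a confirmed fix): the SDK's ClientConfiguration lets you raise the socket timeout and retry count, and TransferManager can be tuned to larger parts so a 13 GB object needs fewer of them. This is a fragment against the AWS SDK for Java 1.x that the sample code targets; credsFile, bucketName, key and largeFile are placeholders, and all numeric values are assumptions:

      // hedged sketch: tune timeouts, retries and part size for a large multi-part upload
      ClientConfiguration cfg = new ClientConfiguration();
      cfg.setSocketTimeout(5 * 60 * 1000);   // ms; raise the socket read timeout
      cfg.setMaxErrorRetry(10);              // retry transient resets more aggressively
      AmazonS3 s3 = new AmazonS3Client(new PropertiesCredentials(credsFile), cfg);

      TransferManager tm = new TransferManager(s3);
      TransferManagerConfiguration tmc = new TransferManagerConfiguration();
      tmc.setMinimumUploadPartSize(100L * 1000 * 1000);  // ~100 MB parts, fewer parts overall
      tm.setConfiguration(tmc);
      tm.upload(bucketName, key, largeFile).waitForCompletion();  // throws InterruptedException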

  • dnsmasq (as DHCP server) isn't working in a KVM+libvirt environment

    - by user2681054
    I'm using dnsmasq as DHCP server in VM environment. But It didn't working. I disabled basic DHCP feature in libvirt. <network> <name>default</name> <uuid>84da0678-e56d-8fc2-6f8b-e8eba784849a</uuid> <forward mode='nat'/> <bridge name='virbr0' stp='on' delay='0' /> <mac address='52:54:00:7B:64:0B'/> <ip address='192.168.122.1' netmask='255.255.255.0'> </ip> </network> As you can see, I removed this tag! <dhcp> <range start='192.168.122.2' end='192.168.122.254' /> </dhcp> And I installed dnsmasq in Host machine. During installation dnsmasq, there was an error message about 127.0.0.1.(dnsmasq: failed to create listening socket for 127.0.0.1) So I commented out listen-address option, and added dhcp-range/dhcp-option options, like this. listen-address=127.0.0.1 dhcp-range=192.168.122.100,192.168.122.200,24h dhcp-option=option:router,192.168.122.1 That's all I've done with dnsmasq. But guest VM couldn't get IP address from host which is dnsmasq server running. After that , I installed isc-dhcp-server instead of dnsmasq.... and it works! But I still want to use dnsmasq instead of isc-dhcp-server. Are there any helping hands? I disabled host machine's firewall. I've heard that libvirt basically use dnsmasq. Is this the reason why I couldn't use dnsmasq in libvirt environment?
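    A hedged configuration sketch: the "failed to create listening socket" error is usually a port clash with the dnsmasq instance libvirt spawns for its networks, so scoping the second, host-wide instance to the bridge and disabling its DNS side avoids the conflict. The options below go in /etc/dnsmasq.conf and assume virbr0 is the only interface it should serve:

      port=0                     # DHCP only; turns DNS off, avoiding the port 53 clash
      interface=virbr0
      bind-interfaces
      dhcp-range=192.168.122.100,192.168.122.200,24h
      dhcp-option=option:router,192.168.122.1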

  • Can't connect to MySQL server from Java application

    - by RN
    This is on a VPS/CentOS server; the MySQL server is preconfigured, and I am running the Java application on Tomcat. My Java web application is not able to connect to the MySQL server. I get an error:

      Caused by: java.net.ConnectException: Connection refused

    I suspect this to be a configuration problem rather than a coding problem, hence I have posted it on Server Fault. And yes, the same web app is able to connect to MySQL on a different Linux box. This is the URL that I provided to my Java application (note: it assumes the default port):

      url = "jdbc:mysql://localhost/pickupgames"

    My first suspicion was that I am running on a non-default port, so I tried to find the port where the MySQL server is running. I tried every trick mentioned in http://serverfault.com/questions/116100/how-to-check-what-port-mysql-is-running-on but no luck!

      SHOW GLOBAL VARIABLES LIKE 'PORT';   -- shows port 0
      netstat -tlnp                        -- doesn't show mysql at all
      /etc/my.cnf                          -- has no port entry
      telnet localhost 3306                -- doesn't connect

    And in case you are wondering whether the MySQL server is running at all or not: it is. And I know for sure, because I have been able to log in using the mysql command. Also:

      # ps -ef|grep 'mysql'
      root 31839 27662 0 00:49 pts/3 00:00:00 grep mysql
      root 32452 1 0 Apr02 ? 00:00:00 /bin/sh /usr/bin/mysqld_safe --skip-grant-tables --skip-networking
      mysql 32504 32452 0 Apr02 ? 00:00:06 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --skip-external-locking --socket=/var/lib/mysql/mysql.sock --skip-grant-tables --skip-networking

    Please note the --skip-networking parameter. Does this have something to do with the issue? Any explanation why I can't connect to the MySQL server on port 3306 by telnet? Or why it doesn't show up under netstat? Any suggestion on what I should try next?
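    That parameter very likely is the answer, stated here as a strong assumption: --skip-networking makes mysqld ignore TCP entirely, which is exactly why port 3306 is absent from netstat and telnet fails while the socket-based mysql client works. A sketch of the change, with file locations assumed (also note the instance was started with --skip-grant-tables, which suggests a leftover maintenance launch rather than the normal init script):

      # find where the option comes from; check the config and any wrapper script
      grep -rn "skip-networking" /etc/my.cnf /etc/init.d/mysqld
      # remove/comment the option (or restart via the init script instead of the
      # manual mysqld_safe invocation), then:
      service mysqld restart
      netstat -tlnp | grep 3306     # verify TCP is back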

  • MySQL binlogs seem incomplete?

    - by warl0ck
    I created a database, a table, and inserted some data, and found this binlog.000001 in my log folder. But when I run mysqlbinlog binlog.000001, it only shows the output below; it seems incomplete. (There are only two files in the log dir: binlog.000001 and binlog.index.)

      /*!40019 SET @@session.max_insert_delayed_threads=0*/;
      /*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
      DELIMITER /*!*/;
      # at 4
      #120924 21:12:56 server id 1 end_log_pos 107 Start: binlog v 4, server v 5.5.24-0ubuntu0.12.04.1-log created 120924 21:12:56 at startup
      # Warning: this binlog is either in use or was not closed properly.
      ROLLBACK/*!*/;
      BINLOG '
      GAVhUA8BAAAAZwAAAGsAAAABAAQANS41LjI0LTB1YnVudHUwLjEyLjA0LjEtbG9nAAAAAAAAAAAA
      AAAAAAAAAAAAAAAAAAAYBWFQEzgNAAgAEgAEBAQEEgAAVAAEGggAAAAICAgCAA==
      '/*!*/;
      DELIMITER ;
      # End of log file
      ROLLBACK /* added by mysqlbinlog */;
      /*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;

    If this warning was the cause ("this binlog is either in use or was not closed properly"), how do I force-close the log?

    EDIT: After a FLUSH LOGS command, I see "0 rows affected" and a few new files: binlog.000001, binlog.000002, binlog.000003, binlog.000004, binlog.index. Their contents are nearly the same as binlog.000001's. I then dropped the database and tried to restore it with mysqlbinlog binlog.0* | mysql -u root -p, but the database wasn't recovered.

    EDIT 2: My configuration:

      [mysqld]
      user = mysql
      pid-file = /var/run/mysqld/mysqld.pid
      socket = /var/run/mysqld/mysqld.sock
      port = 3306
      basedir = /usr
      datadir = /var/lib/mysql
      tmpdir = /tmp
      lc-messages-dir = /usr/share/mysql
      skip-external-locking
      log-bin = /var/log/mysql/binlog
      binlog-do-db = mydb
      bind-address = 127.0.0.1
      key_buffer = 16M
      max_allowed_packet = 16M
      thread_stack = 192K
      thread_cache_size = 8
      myisam-recover = BACKUP
      query_cache_limit = 1M
      query_cache_size = 16M
      expire_logs_days = 10
      max_binlog_size = 100M
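    A hedged explanation to test before assuming corruption: binlog-do-db filters on the *default* database of each statement, so anything executed without USE mydb first is skipped, which can leave binlogs containing only the header/rotate events shown above. A quick experiment, with names taken from the config and the table name assumed:

      mysql -u root -p -e "USE mydb; CREATE TABLE t1 (id INT); INSERT INTO t1 VALUES (1);"
      mysqladmin -u root -p flush-logs
      mysqlbinlog /var/log/mysql/binlog.000001 | grep -i insert

    If the insert shows up only when mydb is the default database, the filter, not a broken log, explains the "empty" binlogs.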

  • Why do I get xfs_freeze "Operation not supported" error with ec2-consistent-snapshot? Debian Squeeze w/ext4 filesystem

    - by Michael Endsley
    I'm running the following command:

      [root@somehost ~]# ec2-consistent-snapshot --aws-credentials-file '/some/dir/file' --mysql --mysql-socket '/var/run/mysqld/mysql.sock' --mysql-username 'backup' --mysql-password 'password' --freeze-filesystem '/dev/xvda1' vol-xxxxxx

    It returns this error:

      xfs_freeze: cannot freeze filesystem at /dev/xvda1: Operation not supported
      ec2-consistent-snapshot: ERROR: xfs_freeze -f /dev/xvda1: failed(256) snap-eeb66393
      xfs_freeze: cannot unfreeze filesystem mounted at /dev/xvda1: Invalid argument
      ec2-consistent-snapshot: ERROR: xfs_freeze -u /dev/xvda1: failed(256)

    This is being run on Debian Squeeze with the ext4 Linux filesystem. Can anyone explain this error to me, or what might be its cause? When googling, I found information about it needing to be executed with sudo, but I'm performing the entire operation as root. I also found some posts about trying to run it after a CentOS upgrade using yum, but the situation appeared dissimilar. It's difficult to find things referring to this exact situation. xfs_freeze is available for use on the filesystem. Is it possible that the filesystem, despite being ext4, somehow doesn't support freezing? Sorry if I've missed some bit of StackExchange etiquette with this post; it's my first venture here!
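    One hedged reading of the error: xfs_freeze takes a *mount point*, not a block device, and FIFREEZE support for ext4 needs a reasonably recent kernel, so both are worth checking before blaming the filesystem. A sketch (note that freezing / can wedge a system, so test carefully):

      mount | grep xvda1        # find the mount point, e.g. /
      xfs_freeze -f /           # freeze by mount point, then unfreeze immediately:
      xfs_freeze -u /
      # if that works, pass the mount point (not /dev/xvda1) to the snapshot tool:
      ec2-consistent-snapshot ... --freeze-filesystem / vol-xxxxxx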

  • Can't connect to local MySQL using IP address, but CAN connect from remote server

    - by user1782041
    Here's an interesting one that does not seem to fall into any of the MySQL connection issues I've read about or searched for. On an Ubuntu 12.04 box I had some system updates waiting to install, and I took care of that this evening. After the install, I started seeing some errors in my syslog complaining about a particular PHP script that could no longer connect to the MySQL instance on the box. Here is the specific error:

      PHP Warning: mysql_connect(): Can't connect to MySQL server on '192.168.0.40' (4)

    Now, the server's IP address is 192.168.0.40, and I've checked to make sure that I have MySQL listening on 0.0.0.0 so that I can connect using either "localhost" or "192.168.0.40". Here's where things get odd. From the local machine, if I try the following:

      mysql -uroot -p -h192.168.0.40

    I get this error:

      ERROR 2003 (HY000): Can't connect to MySQL server on '192.168.0.40' (110)

    I've checked, and error 110 indicates an OS timeout, and error 2003 is the MySQL generic "can't connect" error. This indicates that it is not a permissions issue with the user. However, if I do the same thing from a remote machine (say, from 192.168.0.30), I log right in with no problems. Further, other scripts on the local machine that connect to MySQL using "localhost" for the host rather than "192.168.0.40" connect with no problems. Also, I can connect via the MySQL socket with no problems, both from the command line and from PHP scripts. So this feels like a networking issue of some kind on the local box, but there are no iptables rules on this box (it is firewalled externally) and I can't figure out what else may be causing this. The problematic script worked perfectly prior to the latest system update. For now, I'll simply change the script to connect via localhost, but I'd really like to know why it broke, for two reasons:

      1. There may be other scripts that connect using 192.168.0.40 that don't run very often and are now broken. Auditing them all will take more time than I feel like devoting at the moment.
      2. I'm curious, and want to know why it broke so I can fix it correctly.

    Any help?
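    A few hedged diagnostics, since this smells like a local networking or firewall change from the update rather than MySQL itself (error 110 is an OS-level timeout). All commands assume the Ubuntu 12.04 box itself:

      netstat -lnt | grep 3306            # confirm 0.0.0.0:3306 is really listening
      sudo iptables -L -n -v              # rules can exist even on an "unfirewalled" box
      ip route get 192.168.0.40           # how the box reaches its own address
      sudo tcpdump -i any port 3306 and host 192.168.0.40   # watch where the SYN goes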

  • WAMP server not starting and a mystery service is using port 80

    - by David Gard
    I'm using WAMP on Windows 7 and tonight I'm unable to start it (it's been working fine for about 3 years previously). I first checked the logs, but there's nothing in there. So I checked the Windows Event Viewer (eventvwr), where I saw two errors for each attempted start:

      The Apache service named reported the following error: (OS 10048) Only one usage of each socket address (protocol/network address/port) is normally permitted. : make_sock: could not bind to address 0.0.0.0:80.
      The Apache service named reported the following error: no listening sockets available, shutting down.

    After this, I opened the command prompt and ran netstat -a, and to my surprise there were 3 entries for port 80:

      Prot  Local Address  Foreign Address  State
      TCP   127.0.0.1:80   MyPCName:53831   TIME_WAIT
      TCP   127.0.0.1:80   MyPCName:57834   TIME_WAIT
      TCP   127.0.0.1:80   MyPCName:57839   TIME_WAIT

    From here, I'm kind of lost. As far as I can tell, something is utilising port 80, but I don't know what or why. If anyone is able to suggest what I might do next, I'd be most grateful. Thanks.
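    A hedged way to identify the listener from an elevated command prompt; on Windows 7 a common "stealth" port-80 owner is the kernel-mode http.sys driver (used by IIS features, WCF, SQL Server Reporting Services, Skype, etc.), which netstat attributes to PID 4:

      netstat -ano | findstr :80
      :: look up the owning process by the PID from the last column
      tasklist /fi "PID eq 1234"
      :: assumption: only if the HTTP service turns out to hold the port
      net stop http /y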

  • General logging won't work in MySQL

    - by leonstr
    I saw on SF that there's an option in MySQL to log all queries. In my version (mysql-server-5.0.45-7.el5 on CentOS 5.2) this appears to be a case of enabling the 'log' option, so I edited /etc/my.cnf to add this:

      [mysqld]
      datadir=/var/lib/mysql
      socket=/var/lib/mysql/mysql.sock
      user=mysql
      old_passwords=
      log=/var/log/mysql-general.log

      [mysqld_safe]
      log-error=/var/log/mysqld.log
      pid-file=/var/run/mysqld/mysqld.pid

    I then created the file and set permissions:

      # touch /var/log/mysql-general.log
      # chown mysql. /var/log/mysql-general.log
      # ls -l /var/log/mysql-general.log
      -rw-r--r-- 1 mysql mysql 0 Jan 18 15:22 /var/log/mysql-general.log

    But when I start mysqld I get:

      120118 15:24:18 mysqld started
      ^G/usr/libexec/mysqld: File '/var/log/mysql-general.log' not found (Errcode: 13)
      120118 15:24:18 [ERROR] Could not use /var/log/mysql-general.log for logging (error 13). Turning logging off for the whole duration of the MySQL server process. To turn it on again: fix the cause, shutdown the MySQL server and restart it.
      120118 15:24:18 InnoDB: Started; log sequence number 0 182917764
      120118 15:24:18 [Note] /usr/libexec/mysqld: ready for connections.

    Can anyone suggest why this isn't working?
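    Error 13 is EACCES, so a hedged place to look on CentOS is SELinux, since mysqld is typically only allowed to write logs carrying the mysqld_log_t label; logging into the datadir is a simpler workaround because mysqld can always write there. Both suggestions below are assumptions to verify, not confirmed causes:

      ls -lZ /var/log/mysql-general.log        # check the SELinux label
      chcon -t mysqld_log_t /var/log/mysql-general.log
      # or sidestep it entirely in /etc/my.cnf:
      #   log=/var/lib/mysql/mysql-general.log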

  • Django, mod_wsgi, MySQL high CPU - problems

    - by Red Rover
    Good evening, and thank you for reading this post. I am having a problem with Django after migrating the DB from SQLite to MySQL. Initially, for the first 48 hours, all ran well, but now we are experiencing high CPU about every 30 minutes. This is a production ESX4i VM host with 2 x 2.8 GHz CPUs and 12 GB RAM; I have allocated 4 CPUs and 4 GB of memory to this VM. Any insight into this configuration and help with the spikes in CPU would be appreciated. It is configured to use the prefork MPM. Outlined below are the configs for the different services:

      MySQL: Server version 5.1.61 Source distribution
      Django 1.3
      mod_wsgi
      Apache/2.2.15

    httpd.conf:

      Timeout 120
      KeepAlive Off
      MaxKeepAliveRequests 400
      KeepAliveTimeout 3

    prefork MPM:

      StartServers 8
      MinSpareServers 8
      MaxSpareServers 16
      ServerLimit 40
      MaxClients 40
      MaxRequestsPerChild 0

    worker MPM:

      StartServers 16
      MaxClients 1024
      MinSpareThreads 64
      MaxSpareThreads 256
      ThreadsPerChild 64
      MaxRequestsPerChild 10240

    MySQL my.cnf:

      [mysqld]
      datadir=/var/lib/mysql
      socket=/var/lib/mysql/mysql.sock
      user=mysql
      symbolic-links=0

      [mysqld_safe]
      log-error=/var/log/mysqld.log
      pid-file=/var/run/mysqld/mysqld.pid

    wsgi.conf (/etc/httpd/conf.d/wsgi.conf):

      LoadModule wsgi_module modules/mod_wsgi.so
      WSGISocketPrefix /var/run/wsgi
      WSGIPythonEggs /var/tmp
      WSGIDaemonProcess SITE maximum-requests=10000
      WSGIProcessGroup SITE
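    One hedged observation on the wsgi.conf above: WSGIDaemonProcess with no processes/threads arguments defaults to a single process with 15 threads, so the daemon group, not Apache's prefork settings, is what actually bounds the Django workload. An explicitly sized daemon group is usually the first knob for periodic CPU spikes; the numbers here are starting-point assumptions, not a tuning verdict:

      WSGIDaemonProcess SITE processes=4 threads=8 maximum-requests=10000 display-name=%{GROUP}
      WSGIProcessGroup SITE
      WSGIApplicationGroup %{GLOBAL}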

  • What's up with OCFS2?

    - by wcoekaer
    On Linux there are many filesystem choices, and even from Oracle we provide a number of filesystems, all with their own advantages and use cases. Customers often confuse ACFS with OCFS or OCFS2, which then causes assumptions to be made such as one replacing the other, etc... I thought it would be good to write up a summary of how OCFS2 got to where it is, what we're up to still, how it is different from other options, and how this really is a cool native Linux cluster filesystem that we worked on for many years and is still widely used.

    Work on a cluster filesystem at Oracle started many years ago, in the early 2000's, when the Oracle Database Cluster development team wrote a cluster filesystem for Windows that was primarily focused on providing an alternative to raw disk devices and helping customers with the deployment of Oracle Real Application Clusters (RAC). Oracle RAC is a cluster technology that lets us make a cluster of Oracle Database servers look like one big database. The RDBMS runs on many nodes and they all work on the same data. It's a Shared Disk database design. There are many advantages to doing this, but I will not go into detail as that is not the purpose of my write-up. Suffice it to say that Oracle RAC expects all the database data to be visible in a consistent, coherent way, across all the nodes in the cluster. To do that, there were/are a few options:

      1) use raw disk devices that are shared, through SCSI, FC, or iSCSI
      2) use a network filesystem (NFS)
      3) use a cluster filesystem (CFS), which basically gives you a filesystem that's coherent across all nodes using shared disks. It is sort of (but not quite) combining options 1 and 2, except that you don't do network access to the files; the files are effectively locally visible as if it was a local filesystem.

    So OCFS (Oracle Cluster FileSystem) on Windows was born. Since Linux was becoming a very important and popular platform, we decided that we would also make this available on Linux, and thus the porting of OCFS/Windows started. The first version of OCFS was really primarily focused on replacing the use of raw devices with a simple filesystem that lets you create files and provides direct IO to these files to get basically native raw disk performance. The filesystem was not designed to be fully POSIX compliant and it did not have anywhere near good/decent performance for regular file create/delete/access operations. Cache coherency was easy since it was basically always direct IO down to the disk device, and this ensured that any time one issues a write() command it would go directly down to the disk and not return until the write() was completed. Same for read(): any sort of read from a datafile would be a read() operation that went all the way to disk and returned. We did not cache any data when it came down to Oracle data files.

    So while OCFS worked well for that, since it did not have much of a normal filesystem feel, it was not something that could be submitted to the kernel mailing list for inclusion into Linux as another native Linux filesystem (setting aside the Windows porting code...). It did its job well, it was very easy to configure, node membership was simple, locking was disk based (so very slow, but it existed), and you could create regular files and do regular filesystem operations to a certain extent, but anything that was not database datafile related was just not very useful in general. Logfiles, ok; standard filesystem use, not so much. Up to this point, all the work was done, at Oracle, by Oracle developers.
    Once OCFS (1) was out for a while and there was a lot of use in the database RAC world, many customers wanted to do more and were asking for features that you'd expect in a normal native filesystem, a real "general purpose cluster filesystem". So the team sat down and basically started from scratch to implement what's now known as OCFS2 (Oracle Cluster FileSystem release 2). Some basic criteria were:

      Design it with a real Distributed Lock Manager and use the network for lock negotiation instead of the disk
      Make it a Linux native filesystem instead of a native shim layer and a portable core
      Support standard POSIX compliance and be fully cache coherent with all operations
      Support all the filesystem features Linux offers (ACLs, extended attributes, quotas, sparse files, ...)
      Be modern: support large files, 32/64-bit, journaling, data ordered journaling, endian neutral (we can mount on both endians/across architectures), ...

    Needless to say, this was a huge development effort that took many years to complete, and a few big milestones happened along the way. OCFS2 was developed in the open; we did not have a private tree that we worked on without external code review from the Linux filesystem maintainers. Great folks like Christoph Hellwig reviewed the code regularly to make sure we were not doing anything out of line, and we submitted the code for review on lkml a number of times to see if we were getting close for it to be included into the mainline kernel. Using this development model is standard practice for anyone that wants to write code that goes into the kernel and have any chance of doing so without a complete rewrite or, shall I say, flamefest when submitted. It saved us a tremendous amount of time by not having to re-fit code for it to be in a Linus-acceptable state. Some other filesystems that were trying to get into the kernel that didn't follow an open development model had a much harder time and much harsher criticism.

    In March 2006, when Linus released 2.6.16, OCFS2 officially became part of the mainline kernel (it was accepted a little earlier in the release candidates, but in 2.6.16 OCFS2 became officially part of the mainline Linux kernel tree as one of the many filesystems). It was the first cluster filesystem to make it into the kernel tree. Our hope was that it would then end up getting picked up by the distribution vendors to make it easy for everyone to have access to a CFS. Today the source code for OCFS2 is approximately 85,000 lines of code.

    We made OCFS2 production-ready with full support for customers that ran Oracle Database on Linux; no extra or separate support contract was needed. OCFS2 1.0.0 started being built for RHEL4 for x86, x86-64, ppc, s390x and ia64, and for RHEL5 starting with OCFS2 1.2. SuSE was very interested in high availability and clustering and decided to build and include OCFS2 with SLES9 for their customers, and was, next to Oracle, the main contributor to the filesystem for both new features and bug fixes. Source code was always available even prior to inclusion into mainline, and as of 2.6.16, the source code was just part of a Linux kernel download from kernel.org, which it still is today. So the latest OCFS2 code is always the upstream mainline Linux kernel. OCFS2 is the cluster filesystem used in Oracle VM 2 and Oracle VM 3 as the virtual disk repository filesystem.
    Since the filesystem is in the Linux kernel, it's released under the GPL v2. The release model has always been that new feature development happened in the mainline kernel and we then built consistent, well-tested snapshots that had versions: 1.2, 1.4, 1.6, 1.8. But these releases were effectively just snapshots in time that were tested for stability and release quality.

    OCFS2 is very easy to use: there's a simple text file that contains the node information (hostname, node number, cluster name) and a file that contains the cluster heartbeat timeouts. It is very small and very efficient. As Sunil Mushran wrote in the manual:

      OCFS2 is an efficient, easily configured, quickly installed, fully integrated and compatible, feature-rich, architecture and endian neutral, cache coherent, ordered data journaling, POSIX-compliant, shared disk cluster file system.

    Here is a list of some of the important features that are included:

      Variable block and cluster sizes - Supports block sizes ranging from 512 bytes to 4 KB and cluster sizes ranging from 4 KB to 1 MB (increments in powers of 2).
      Extent-based allocations - Tracks the allocated space in ranges of clusters, making it especially efficient for storing very large files.
      Optimized allocations - Supports sparse files, inline-data, unwritten extents, hole punching and allocation reservation for higher performance and efficient storage.
      File cloning/snapshots - REFLINK is a feature which introduces copy-on-write clones of files in a cluster-coherent way.
      Indexed directories - Allows efficient access to millions of objects in a directory.
      Metadata checksums - Detects silent corruption in inodes and directories.
      Extended attributes - Supports attaching an unlimited number of name:value pairs to filesystem objects like regular files, directories, symbolic links, etc.
      Advanced security - Supports POSIX ACLs and SELinux in addition to the traditional file access permission model.
      Quotas - Supports user and group quotas.
      Journaling - Supports both ordered and writeback data journaling modes to provide filesystem consistency in the event of power failure or system crash.
      Endian and architecture neutral - Supports a cluster of nodes with mixed architectures. Allows concurrent mounts on nodes running 32-bit and 64-bit, little-endian (x86, x86_64, ia64) and big-endian (ppc64) architectures.
      In-built cluster stack with DLM - Includes an easy-to-configure, in-kernel cluster stack with a distributed lock manager.
      Buffered, direct, asynchronous, splice and memory-mapped I/O - Supports all modes of I/O for maximum flexibility and performance.
      Comprehensive tools support - Provides a familiar EXT3-style tool-set that uses similar parameters for ease of use.

    The filesystem was distributed for Linux distributions in separate RPM form, and this had to be built for every single kernel errata release or every updated kernel provided by the vendor. We provided builds from Oracle for Oracle Linux and all kernels released by Oracle, and for Red Hat Enterprise Linux. SuSE provided the modules directly for every kernel they shipped. With the introduction of the Unbreakable Enterprise Kernel for Oracle Linux and our interest in reducing the overhead of building filesystem modules for every minor release, we decided to make OCFS2 available as part of UEK. There was no more need for separate kernel modules; everything was built in, and a kernel upgrade automatically updated the filesystem, as it should.
    UEK allowed us to avoid having to backport new upstream filesystem code into an older kernel version. Backporting features into older versions introduces risk and requires extra testing because the code is basically partially rewritten. The UEK model works really well for continuing to provide OCFS2 without that extra overhead. Because the RHEL kernel does not ship OCFS2 as a kernel module (it is in the source tree, but it is not built by the vendor in kernel module form), we stopped adding the extra packages for Oracle Linux's RHEL-compatible kernel and for RHEL. Oracle Linux customers/users obviously get OCFS2 included as part of the Unbreakable Enterprise Kernel, SuSE customers get it distributed by SuSE with SLES, and Red Hat can decide to distribute OCFS2 to their customers if they choose to, as it's just a matter of compiling the module and making it available.

    OCFS2 today, in the mainline kernel, is pretty much feature complete in terms of integration with every filesystem feature Linux offers, and it is still actively maintained, with Joel Becker being the primary maintainer. Since we use OCFS2 as part of Oracle VM, we continue to look at interesting new functionality to add (REFLINK was a good example), and as such we continue to enhance the filesystem where it makes sense. Bug fixes and any sort of code that goes into the mainline Linux kernel that affects filesystems automatically also modify OCFS2, so it is in-kernel and actively maintained, but not a lot of new development is happening at this time. We continue to fully support OCFS2 as part of Oracle Linux and the Unbreakable Enterprise Kernel, and other vendors make their own decisions on support, as it's really a Linux cluster filesystem now more than something that we provide to customers. It really just is part of Linux, like EXT3 or BTRFS etc.; the OS distribution vendors decide.

    Do not confuse OCFS2 with ACFS (ASM Cluster FileSystem), also known as Oracle Cloud FileSystem. ACFS is a filesystem that's provided by Oracle on various OS platforms and really integrates into Oracle ASM (Automatic Storage Management). It's a very powerful cluster filesystem, but it's not distributed as part of the operating system; it's distributed with the Oracle Database product and installs with and lives inside Oracle ASM. ACFS obviously is fully supported on Linux (Oracle Linux, Red Hat Enterprise Linux), but OCFS2, independently as a native Linux filesystem, is also, and continues to be, supported. ACFS is very much tied into the Oracle RDBMS; OCFS2 is just a standard native Linux filesystem with no ties into Oracle products. Customers running the Oracle database and ASM really should consider using ACFS, as it also provides storage/clustered volume management. Customers wanting to use a simple, easy-to-use, generic Linux cluster filesystem should consider using OCFS2.

    To learn more about OCFS2 in detail, you can find good documentation on http://oss.oracle.com/projects/ocfs2 in the Documentation area, or get the latest mainline kernel from http://kernel.org and read the source.

    One final, unrelated note: since I am not always able to publicly answer or respond to comments, I do not want to selectively publish comments from readers. Sometimes I forget to publish comments, sometimes I publish them, and sometimes I would publish them but cannot publicly comment on them for some reason, so it becomes a very one-sided stream. For now I am going to not publish comments from anyone, to be fair to all sides.
    You are always welcome to email me and I will do my best to respond to technical questions; questions about strategy or direction are sometimes not possible to answer, for obvious reasons.

  • MySQL fails to start

    - by John Naegle
    I'm running an Ubuntu 12.04 LTS virtual machine. Last week, the VM stopped unexpectedly, and now MySQL will not start on it. These two events may be related; they may not be. When I try to connect:

      $ mysql
      ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    Then:

      $ sudo service mysql start
      start: Job failed to start

    And:

      $ dmesg
      [ 1838.218400] type=1400 audit(1374633238.253:50): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=18473 comm="apparmor_parser"
      [ 1838.358656] init: mysql main process (18477) terminated with status 1
      [ 1838.358695] init: mysql main process ended, respawning
      [ 1839.269303] init: mysql post-start process (18478) terminated with status 1

    And:

      $ service mysql status
      mysql stop/waiting

    I think this means MySQL is crashing when it starts:

      $ sudo mysqld start
      130723 21:51:24 InnoDB: Assertion failure in thread 3064211200 in file fut0lst.ic line 83
      InnoDB: Failing assertion: addr.page == FIL_NULL || addr.boffset >= FIL_PAGE_DATA
      InnoDB: We intentionally generate a memory trap.
      InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
      InnoDB: If you get repeated assertion failures or crashes, even
      InnoDB: immediately after the mysqld startup, there may be
      InnoDB: corruption in the InnoDB tablespace. Please refer to
      InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
      InnoDB: about forcing recovery.
      02:51:24 UTC - mysqld got signal 6 ;

    Per the manual, I went to the data directory (/var/lib/mysql) and ran this:

      myisamchk --silent --force */*.MYI

    Then:

      $ sudo mysqld
      ...
      InnoDB: Your database may be corrupt or you may have copied the InnoDB
      InnoDB: tablespace but not the InnoDB log files. See
      InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
      InnoDB: for more information.
      ...

    Is my database corrupt? What can I do to recover? Reinstall MySQL? Something less drastic? I'm fine with losing the database; I just want a working system.
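    A hedged recovery sketch, following the forcing-recovery procedure the error message itself points to (myisamchk only touches MyISAM tables, and the assertion is in InnoDB). Paths assume Ubuntu's packaging; take a copy of the datadir first, and raise the level from 1 only as far as needed, since higher levels are increasingly lossy:

      sudo cp -a /var/lib/mysql /var/lib/mysql.bak
      # in /etc/mysql/my.cnf, under [mysqld]:
      #   innodb_force_recovery = 1     # try 1..6
      sudo service mysql start
      mysqldump --all-databases -u root -p > all-databases.sql
      # then remove the option, re-initialise the datadir (or reinstall), and re-import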

  • Nginx: all subdomains are served by the gitlab rule

    - by Alkimake
    I have installed gitlab on my server and use nginx as the HTTP server. I simply used the recipe for gitlab on nginx:

      # GITLAB
      # Maintainer: @randx
      # App Version: 3.0

      upstream gitlab {
          server unix:/home/gitlab/gitlab/tmp/sockets/gitlab.socket;
      }

      server {
          listen 192.168.250.81:80;       # e.g., listen 192.168.1.1:80;
          server_name gitlab.xxx.com;     # e.g., server_name source.example.com;
          root /home/gitlab/gitlab/public;

          # individual nginx logs for this gitlab vhost
          access_log /var/log/nginx/gitlab_access.log;
          error_log  /var/log/nginx/gitlab_error.log;

          location / {
              # serve static files from defined root folder;
              # @gitlab is a named location for the upstream fallback, see below
              try_files $uri $uri/index.html $uri.html @gitlab;
          }

          # if a file which is not found in the root folder is requested,
          # then the proxy passes the request to the upstream (gitlab unicorn)
          location @gitlab {
              proxy_read_timeout 300;     # https://github.com/gitlabhq/gitlabhq/issues/694
              proxy_connect_timeout 300;  # https://github.com/gitlabhq/gitlabhq/issues/694
              proxy_redirect off;
              proxy_set_header X-Forwarded-Proto $scheme;
              proxy_set_header Host $http_host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_pass http://gitlab;
          }
      }

    gitlab.xxx.com works fine and I get the gitlab web pages. But if I visit another subdomain I use for Jira (jira.xxx.com) on port 80 (I normally run Jira on port 8080), I get the gitlab website as well. How can I restrict this rule to serving only gitlab, or maybe redirect jira.xxx.com to jira.xxx.com:8080?
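    A hedged sketch of the usual fix: nginx hands any request whose Host header matches no server_name to the default server for that listen address, and with only one vhost defined the gitlab server is that default. Adding an explicit catch-all, plus a real vhost for Jira, keeps gitlab scoped to its own name; everything below assumes nginx defaults otherwise:

      # catch-all: refuse anything that isn't an explicit vhost
      server {
          listen 192.168.250.81:80 default_server;
          server_name _;
          return 444;
      }

      # optional: publish Jira on port 80 under its own name
      server {
          listen 192.168.250.81:80;
          server_name jira.xxx.com;
          location / {
              proxy_pass http://127.0.0.1:8080;
              proxy_set_header Host $http_host;
          }
      }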

  • Joining two routers together, but I have no access to the second router, although I know its IP address and gateway

    - by JohnnyVegas
    I have temporarily moved into a rented apartment for 4 months, which has wireless. The trouble I am having is that the access points here are wifi-only with no RJ45, and I need RJ45 to connect some equipment that I am working with. I have purchased an RT-N66U and installed Tomato (Shibby version 1.28) and successfully replaced the existing access point, but now I want to re-enable the access point that I replaced, as it links wirelessly to 3 others. Can I plug a cable from the access point into my RT-N66U and have it access the internet via my router? I have no access to the existing wireless access point, and don't want to reset it, as it's not mine. There is another router situated in the roof somewhere which I also have no access to, but it supplies my RT-N66U with internet, and I most definitely have a double NAT; although that isn't the best way of doing things, I am limited in what I can do. Any suggestions on routing tables, VLANs etc. would be helpful, but I have no experience in these fields; I do know the Tomato firmware can cater for this, though.

    My router is set to IP 10.0.1.1, and DHCP is 10.0.1.100-200. The wireless access point's address was 192.168.1.2, assigned by the router in the roof, which has the address 192.168.1.1. There is a cable from this router going to a wall socket, which I now have my RT-N66U attached to via the WAN port. I understand it's scruffy and it isn't the way to do things, but I have tried to ask for the admin details, and as the wireless network is looked after by a third party and nobody knows their details, I am stuck with this dilemma. I could buy three wireless access points and replace the existing ones, but this isn't what I want to do, and although I have installed plenty of DD-WRT wireless repeater bridges, they simply don't work here for some unknown reason. The phone line here is very noisy too, I don't have the right to install ADSL in a building that isn't mine, and 3G coverage isn't good enough either. Thanks for your time.

  • Intel Z77 vs H77 for intensive compiling, gaming [closed]

    - by Bilal Akhtar
    I'm in the market for a desktop motherboard (preferably ATX) that works well with the Intel i7-3770 Ivy Bridge processor at 3.4 GHz with the LGA1155 socket. That processor is very fast, and it should handle all my tasks. My question is about the type of motherboard chipset I should choose to accompany it. I plan to use my rig for compiling and developing Debian packages and other OS components, web development, occasional Android apps, chroots, VMs, FlightGear, other gaming (but nothing serious), and heavy multitasking, all on Ubuntu. I do NOT plan to overclock, and I never will, so that's not a concern for me. That said, I'm down to three chipset choices:

      Intel H77
      Intel Z68
      Intel Z77

    I'm planning to go for the H77 since I don't need any of the new features in the Z77. I don't plan to use a second GPU and I will never overclock my CPU/GPU. My question is: will H77-based MoBos handle all my tasks well? Intel advertises that chipset for "everyday computing", but other sites say its base functionality is the same as the Z77's. Intel rather advertises the Z77 for "serious multitaskers, hardcore gamers and overclocking enthusiasts". But the problem with all the Z77 motherboards I've seen is that they're way too expensive, and their main feature seems to be overclocking, which won't be useful to me. Will I lose any raw CPU/GPU performance or HDD r/w with the H77 compared to the Z77? Will heat etc. be an issue? From what I've seen, Z77 motherboards have larger heat sinks compared to H77 ones. Will that be an issue if I go with an H77 motherboard with no heat sinks for the chipset? The CPU will have a fan in both cases, of course.

    tl;dr: When it comes to CPU/GPU performance and HDD r/w, is the Intel H77 chipset slower than the Z77? I don't care about overclocking or multiple GPUs, and for the processor, I'm set on the Ivy Bridge i7-3770.

  • Ubuntu server PPTPD problems with OS X clients

    - by Nakedsteve
    I'm trying to get a PPTP server running on an Ubuntu server, but I've run into some issues with it. I followed this guide on how to set up pptpd on my server, and everything went smoothly, but when I try to connect with my Mac it gives me an error (shown, along with my configuration, in screenshots that did not survive in this listing). Does anyone have any idea as to what I'm doing wrong here?

    Update: here's what pptpd.log has to say about it:

      steve@debian:~$ sudo tail /var/log/pptpd.log
      sudo: unable to resolve host debian
      Sep 3 21:46:43 debian pptpd[2485]: MGR: Manager process started
      Sep 3 21:46:43 debian pptpd[2485]: MGR: Maximum of 11 connections available
      Sep 3 21:46:43 debian pptpd[2485]: MGR: Couldn't create host socket
      Sep 3 21:46:43 debian pptpd[2485]: createHostSocket: Address already in use
      Sep 3 21:46:56 debian pptpd[2486]: CTRL: Client 192.168.1.101 control connection started
      Sep 3 21:46:56 debian pptpd[2486]: CTRL: Starting call (launching pppd, opening GRE)
      Sep 3 21:46:56 debian pptpd[2486]: GRE: read(fd=6,buffer=204d0,len=8196) from PTY failed: status = -1 error = Input/output error, usually caused by unexpected termination of pppd, check option syntax and pppd logs
      Sep 3 21:46:56 debian pptpd[2486]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7)
      Sep 3 21:46:56 debian pptpd[2486]: CTRL: Reaping child PPP[2487]
      Sep 3 21:46:56 debian pptpd[2486]: CTRL: Client 192.168.1.101 control connection finished

    My pptpd options are:

      asyncmap 0
      noauth
      crtscts
      lock
      hide-password
      modem
      debug
      proxyarp
      lcp-echo-interval 30
      lcp-echo-failure 4
      nopix
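    A hedged starting point, reading the log: "createHostSocket: Address already in use" says another listener already owned TCP 1723 when the manager started (often a second pptpd instance), and the immediate GRE read failure suggests pppd died right away, which the log itself attributes to option syntax or pppd errors. Commands below assume a standard Ubuntu layout:

      sudo netstat -tlnp | grep 1723            # who owns the PPTP port already?
      sudo killall pptpd; sudo service pptpd start
      sudo tail -f /var/log/syslog | grep -i ppp   # pppd logs its exit reason here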

  • Odd IIS FTP Failure

    - by Monkey Boson
    We're running a script on our production box that zips up our database and FTPs it to a backup box every night. Our production box runs Red Hat Enterprise 5; our backup box runs Windows XP Pro / IIS 5.1. Both machines are on the same VLAN (not sure if this is important). The backup file usually clocks in at around 3 GB. Every now and again (~5% of the time), the backup script fails. The shell script on the "client side", which looks at return codes, never identifies any problem, since ftp always returns 0. On the "server side", IIS writes out a log that looks like this:

      #Software: Microsoft Internet Information Services 5.1
      #Version: 1.0
      #Date: 2009-08-08 07:04:25
      #Fields: time c-ip cs-method cs-uri-stem sc-status sc-win32-status
      07:04:25 192.168.111.235 [15]USER backup 331 0
      07:04:25 192.168.111.235 [15]PASS - 230 0
      07:05:54 192.168.111.235 [15]created backup_20090808.zip 426 10035
      07:06:16 192.168.111.235 [15]QUIT - 426 0

    Now, I know that 426 means "Connection closed, transfer aborted", which is sort of a catch-all for "IIS was not happy". The real puzzler is the Win32 status code: 10035 (WSAEWOULDBLOCK, "Resource temporarily unavailable"). My understanding is that this code is normal when using non-blocking socket calls, which would almost certainly be used by any FTP server implementation. My first guess, that it might be a timeout issue, doesn't make sense, since we're only talking about a few minutes here and the timeout was left at the default 900 s. Does anybody have any ideas about what is causing this problem, and how it may be fixed? Thanks!
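    One hedged knob to try, since the abort happens mid-transfer on a large file: IIS 5.x exposes the FTP site's ConnectionTimeout (the 900 s default noted above) through the metabase script shipped in AdminScripts, and raising it rules the timeout theory in or out. The path and site instance number are assumptions for a default install:

      cscript C:\Inetpub\AdminScripts\adsutil.vbs SET MSFTPSVC/1/ConnectionTimeout 3600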
