Search Results

Search found 14310 results on 573 pages for 'mysql sock'.


  • Scoping a home dev server

    - by AbhikRK
    Hi. I'm looking to build a multi-purpose home development server. In this post I want to outline what I want from such a system and, to a limited extent, the whys of it, and finally some rudiments of how I'm looking to go about it. I'm mostly a developer with only some sysadmin familiarity, so please excuse and correct any ignorance that comes across in the following ;-) It will serve the following goals to start with:
    1. NAS (looking at using ZFS)
    2. Source control repo, e.g. a Git server
    3. Database, e.g. a MySQL server
    4. Continuous integration, e.g. a Hudson server
    5. Other stuff as and when it comes up, e.g. RabbitMQ etc.
    6. A development sandbox to play around with new stuff
    I want to achieve high uptime for 2-5 as much as possible. They should run as independent services with minimal maintenance (e.g. TurnKey Linux appliances). I'm thinking of running them as individual Xen DomUs; then maybe the NAS can be the Dom0 and 6 can be another DomU. The user for this would be mostly me. I can see 2-4 sometimes being used by 2-3 users, but that would be infrequent. I'm looking for a repeatable setup; ideally I'd like to automate it through Chef or Puppet or something similar. Once everything runs, I want to be able to ssh/screen/tmux into 1-6 from my laptop or any other computer on the LAN or on the go. My queries are:
    1. Is putting 1-6, all of them, on a single box a good idea? If so, what kind of hardware should I be looking at for a low-cost, low-power setup?
    2. Although not at present, in future I might be looking at adding audio/media servers to the mix. Would that impact the answers to 1?
    3. I have an old Pentium 3 and 810e motherboard combination. Is there any way I could put it to use?
    4. I had a look at the SheevaPlug and was wondering if I could split off the NAS on its own using that, but ruled it out preliminarily due to its reported heating issues. Is it something I should still consider?
    Thanks in advance. I posted this question previously on SuperUser but got no responses yet, so I was wondering if this is a more apt forum for it.
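    If the per-service DomU route is taken, each appliance ends up as a small PV guest with its own definition. Purely as a sketch (the guest name, volume path, bridge and sizes below are invented, and assume an LVM-backed Xen host booting guests via pygrub):

        # /etc/xen/git01.cfg -- illustrative guest definition only
        name       = "git01"
        memory     = 512
        vcpus      = 1
        bootloader = "pygrub"
        disk       = ['phy:/dev/vg0/git01-disk,xvda,w']
        vif        = ['bridge=xenbr0']
        on_crash   = 'restart'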

    Read the article

  • Dovecot Virtual Users Not Authenticating

    - by blankabout
    We have a standard Postfix/Dovecot installation working perfectly with real users but cannot work out how to add virtual users, all virtual user login attempts fail with authentication errors. Following are snippets from the configuration files: /etc/postfix/main.cf: virtual_mailbox_domains = virtualexample.com virtual_mailbox_base = /var/spool/vhosts virtual_mailbox_recipients = hash:/etc/postfix/virtual_mailbox_recipients /etc/dovecot/dovecot.conf: !include conf.d/*.conf /etc/dovecot/conf.d/10-auth.conf auth_mechanisms = cram-md5 digest-md5 plain passdb { driver = passwd-file # Path for passwd-file. Also set the default password scheme. args = scheme=cram-md5 /etc/cram-md5.pwd } /etc/cram-md5.pwd [email protected]{MD5}$1$uIMvzy92$9Xt67B/qw4u6txkkxzne80 This is a snippet from the log when a login attempt is made: auth: Debug: Loading modules from directory: /usr/lib64/dovecot/auth auth: Debug: Module loaded: /usr/lib64/dovecot/auth/libauthdb_ldap.so auth: Debug: Module loaded: /usr/lib64/dovecot/auth/libdriver_sqlite.so auth: Debug: Module loaded: /usr/lib64/dovecot/auth/libmech_gssapi.so auth: Debug: passwd-file /etc/cram-md5.pwd: Read 1 users auth: Debug: auth client connected (pid=21990) auth: Debug: client in: AUTH#0111#011CRAM-MD5#011service=imap#011lip=1.1.1.1#011rip=2.2.2.2#011lport=143#011rport=51774 auth: Debug: client out: CONT#0111#011PDI1Njc0NjQ1NzQ3MTY0NTkuMTM0MTIxNzkwN0BncDM+ auth: Debug: client in: CONT auth: Debug: passwd-file([email protected],2.2.2.2): lookup: [email protected] file=/etc/cram-md5.pwd auth: Debug: client out: OK#0111#[email protected] auth: Debug: master in: REQUEST#0111630404609#01121990#0111#011b66b5f46b520a08e1d19d3d249be7073 auth: Debug: passwd([email protected],2.2.2.2): lookup auth: passwd([email protected],2.2.2.2): unknown user auth: Error: userdb([email protected],2.2.2.2): user not found from userdb passwd auth: Debug: master out: NOTFOUND#0111630404609 imap: Error: Authenticated user not found from userdb, auth lookup id=1630404609 (client-pid=21990 client-id=1) imap-login: Internal login failure (pid=21990 id=1) (auth failed, 1 attempts): user=, method=CRAM-MD5, rip=2.2.2.2, lip=1.1.1.1, mpid=21993 auth: Debug: auth client connected (pid=22010) auth: Debug: client in: AUTH#0111#011CRAM-MD5#011service=imap#011lip=1.1.1.1#011rip=2.2.2.2#011lport=143#011rport=51775 auth: Debug: client out: CONT#0111#011PDcxMDkwNDY1NTQzODUzMDkuMTM0MTIxNzkyOEBncDM+ auth: Debug: client in: CONT auth: Debug: passwd-file([email protected],2.2.2.2): lookup: [email protected] file=/etc/cram-md5.pwd auth: Debug: client out: OK#0111#[email protected] auth: Debug: master in: REQUEST#011343539713#01122010#0111#011e47b1345784e2845d59e794afa9a6bbe auth: Debug: passwd([email protected],2.2.2.2): lookup auth: passwd([email protected],2.2.2.2): unknown user auth: Error: userdb([email protected],2.2.2.2): user not found from userdb passwd auth: Debug: master out: NOTFOUND#011343539713 imap: Error: Authenticated user not found from userdb, auth lookup id=343539713 (client-pid=22010 client-id=1) imap-login: Internal login failure (pid=22010 id=1) (auth failed, 1 attempts): user=, method=CRAM-MD5, rip=2.2.2.2, lip=1.1.1.1, mpid=22011 It would appear that the user lookup is not working, even tho' the log suggests that Dovecot is using the /etc/cram-md5.pwd file and the user is configured in that same file. 
    There are of course dozens of examples of using virtual users with Dovecot, but all the ones we have found either refer to Dovecot 1.x (we are using 2.x), use only virtual users (we must use real AND virtual users), or want to use a MySQL db (we need to use a text file). Some hints about where we are going wrong would be very much appreciated.
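    Judging by the "user not found from userdb passwd" lines, the CRAM-MD5 passdb lookup succeeds but no userdb knows about the virtual user, so only the system-user lookup runs. This is not a verified fix, just a sketch of the kind of userdb pair a Dovecot 2.x setup with both real and virtual users typically needs (the vmail uid/gid and mail path are assumptions to adapt):

        # /etc/dovecot/conf.d/10-auth.conf (sketch)
        userdb {
          driver = passwd          # keeps real system users working
        }
        userdb {
          driver = static          # virtual users; uid/gid/path are placeholders
          args = uid=vmail gid=vmail home=/var/spool/vhosts/%d/%n allow_all_users=yes
        }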

    Read the article

  • Windows Server 2008 Migration - Did I miss something?

    - by DevNULL
    I'm running into a few complications in my migration process. My main role has been as a Linux / Sun administrator for 15 yrs, so the Windows Server 2008 environment is a bit new to me, but understandable. Here's our situation and reason for migrating... We have a group of developers that develop VERY low-level software in Visual C with some inline assembler. All the workstations were separate from each other, which caused consistency problems with development libraries, versions, etc... Our goal was to throw them all onto a Windows domain where we can control workstation installations, hot fixes (which can cause enormous problems), software versions, etc... All development workstations are running Windows XP x32 (SP3) and x64 (SP2). I'm running into user permission problems and I was wondering if maybe I missed one, two or a handful of things during my deployment. Here is what I have currently done:
    Installed and activated Windows Server 2008
    Added roles for DNS and Active Directory
    Configured DNS with WINS for NetBIOS name usage
    Added developers to AD and mapped their shared folders to their profile
    Added roles for IIS7 and configured the developers' SVN
    Installed MySQL Enterprise Edition for development usage
    Not having a firm understanding of Group Policy, I haven't delved deeply into that realm yet. Problems I'm encountering:
    1. When I configure any XP workstation to log on to our domain, once a user uses their new AD login everything goes well, except they have very restrictive permissions. (E.g.: if a user opens any existing file, they don't have write access, except in their Documents folder.) Since these guys are working on low system-level events, they need to r/w all files. All I'm looking to restrict is software installations. Do I have to configure and define a group policy for the domain users?
    2. Am I correct to assume that I can use WSUS to maintain the domain's hot fixes and updates pushed to the workstations?
    3. I need to map a centralized, shared development drive upon the user's login. This is open to EVERYONE. Right now I have the users' folders mapped upon login through their AD profile. But how do I map a share if I've already defined one within their profile in AD? (See the sketch after this list for one way.)
    4. Can I use Volume Mirroring to mirror / sync two drives on two separate servers, or should I just script an rsync or MS Synctool? The drives simply store nightly system images.
    I would be very grateful for any responses.
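    For question 3, one common approach (offered only as an illustration; the server and share names are made up) is to keep the home-folder mapping in the AD profile and add the shared development drive from a logon script assigned in the profile or via Group Policy:

        REM logon.bat -- illustrative logon script; server, share and drive letter are placeholders
        REM Maps the centralized development share for every user at logon
        net use S: \\devserver\devshare /persistent:no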

    Read the article

  • How to get more information from the system crash

    - by viraptor
    I'd like to debug an issue I'm having with a Linux (Debian stable) server, but I'm running out of ideas of how to confirm any diagnosis. Some background: the servers are DL160-class machines with hardware RAID between two disks. They're running a lot of services, mostly utilising the network interface and CPU. There are 8 CPUs, and the 7 "main", most CPU-hungry processes are bound to one core each via CPU affinity. Other random background scripts are not forced anywhere. The filesystem is writing ~1.5k blocks/s the whole time (going above 2k/s in peak times). Normal CPU usage for those servers is ~60% on 7 cores and some minimal usage on the last (whatever's running on shells, usually). What actually happens is that the "main" services start using 100% CPU at some point, mainly stuck in kernel time. After a couple of seconds, LA goes over 400 and we lose any way to connect to the box (KVM is on its way, but not there yet). Sometimes we see the kernel reporting a hung task (but not always):
    [118951.272884] INFO: task zsh:15911 blocked for more than 120 seconds.
    [118951.272955] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    [118951.273037] zsh D 0000000000000000 0 15911 1
    [118951.273093] ffff8101898c3c48 0000000000000046 0000000000000000 ffffffffa0155e0a
    [118951.273183] ffff8101a753a080 ffff81021f1c5570 ffff8101a753a308 000000051f0fd740
    [118951.273274] 0000000000000246 0000000000000000 00000000ffffffbd 0000000000000001
    [118951.273335] Call Trace:
    [118951.273424] [<ffffffffa0155e0a>] :ext3:__ext3_journal_dirty_metadata+0x1e/0x46
    [118951.273510] [<ffffffff804294f6>] schedule_timeout+0x1e/0xad
    [118951.273563] [<ffffffff8027577c>] __pagevec_free+0x21/0x2e
    [118951.273613] [<ffffffff80428b0b>] wait_for_common+0xcf/0x13a
    [118951.273692] [<ffffffff8022c168>] default_wake_function+0x0/0xe
    ....
    This would point at RAID / disk failure; however, sometimes the tasks are hung on the kernel's gettsc, which would indicate some general weird hardware behaviour. It's also running MySQL (almost read-only, 99% cache hit), which seems to spawn a lot more threads during the system problems. During the day it does ~200kq/s (selects) and ~10q/s (writes). The host is never running out of memory or swapping, and no OOM reports are spotted. We've got many boxes with similar/same hardware and they all seem to behave that way, but I'm not sure which part fails, so it's probably not a good idea to just grab something more powerful and hope the problem goes away. The applications themselves don't really report anything wrong when they're running. I can run anything safely on the same hardware in an isolated environment. What can I do to narrow down the problem? Where else should I look for an explanation?
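    A few knobs that sometimes help gather more evidence in this kind of lockup, sketched here on the assumption of a Debian-era 2.6 kernel (the threshold value is only an example): make the hung-task detector report sooner, and use magic SysRq to dump task states to the console before the box becomes completely unreachable.

        # Report hung tasks sooner (example value; the default is the 120s seen in the log above)
        sysctl -w kernel.hung_task_timeout_secs=60

        # Enable magic SysRq, then dump blocked tasks / all kernel stacks when it starts to wedge
        sysctl -w kernel.sysrq=1
        echo w > /proc/sysrq-trigger   # show blocked (D state) tasks
        echo t > /proc/sysrq-trigger   # show all tasks and their kernel stacks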

    Read the article

  • Postfix flow/hook reference, or high-level overview?

    - by threecheeseopera
    The Postfix MTA consists of several components/services that work together to perform the different stages of delivery and receipt of mail; these include the smtp daemon, the pickup and cleanup processes, the queue manager, the smtp service, pipe/spawn/virtual/rewrite ... and others (including the possibility of custom components). Postfix also provides several types of hooks that allow it to integrate with external software, such as policy servers, filters, bounce handlers, loggers, and authentication mechanisms; these hooks can be connected to different components/stages of the delivery process, and can communicate via (at least) IPC, network, database, several types of flat files, or a predefined protocol (e.g. milter). An old and very limited example of this is shown at this page. My question: Does anyone have access to a resource that describes these hooks, the components/delivery stages that the hook can interact with, and the supported communication methods? Or, more likely, documentation of the various Postfix components and the hooks/methods that they support? For example: Given the requirement "if the recipient primary MX server matches 'shadysmtpd', check the recipient address against a list; if there is a match, terminate the SMTP connection without notice". My software would need to 1) integrate into the proper part of the SMTP process, 2) use some method to perform the address check (TCP map server? regular expressions? mysql?), and 3) implement the required action (connection termination). Additionally, there will probably be several methods to accomplish this, and another requirement would be to find that which best fits (ex: a network server might be faster than a flat-file lookup; or, if a large volume of mail might be affected by this check, it should be performed as early in the mail process as possible). Real-world example: The apolicy policy server (performs checks on addresses according to user-defined rules) is designed as a standalone TCP server that hooks into Postfix inside the smtpd component via the directive 'check_policy_service inet:127.0.0.1:10001' in the 'smtpd_client_restrictions' configuration option. This means that, when Postfix first receives an item of mail to be delivered, it will create a TCP connection to the policy server address:port for the purpose of determining if the client is allowed to send mail from this server (in addition to whatever other restrictions / restriction lookup methods are defined in that option); the proper action will be taken based on the server's response. Notes: 1)The Postfix architecture page describes some of this information in ascii art; what I am hoping for is distilled, condensed, reference material. 2) Please correct me if I am wrong on any level; there is a mountain of material, and I am just one man ;) Thanks!
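    As a partial illustration of one of those hooks (and only the address-list half of the hypothetical requirement -- it rejects with an SMTP error rather than silently dropping the connection, and does no MX check), a recipient lookup can be wired into smtpd with an access map; the file name and address below are invented:

        # main.cf (sketch)
        smtpd_recipient_restrictions =
            check_recipient_access hash:/etc/postfix/blocked_recipients,
            permit_mynetworks,
            reject_unauth_destination

        # /etc/postfix/blocked_recipients
        [email protected]    REJECT not accepted here

        # build the map and reload
        postmap /etc/postfix/blocked_recipients
        postfix reload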

    Read the article

  • Postfix / Dovecot and Email Retrieval

    - by Eric J.
    I have setup Postfix and Dovecot on an Ubuntu box following the instructions http://www.exratione.com/2012/05/a-mailserver-on-ubuntu-1204-postfix-dovecot-mysql/ I can see that email is being delivered to and accepted by the server, but the email is not available for retrieval via POP3. What could be missing in my configuraton? It seems that email is not being properly handed off to Dovecot. Here are what I believe are the relevant /var/log/mail.log entries for an attempt to send email from another domain (hosted by Gmail) to the domain I have setup: Logged during SMTP connection postfix/smtpd[14689]: connect from mail-vb0-f50.google.com[209.85.212.50] postfix/smtpd[14689]: Anonymous TLS connection established from mail-vb0-f50.google.com[209.85.212.50]: TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits) postfix/smtpd[14689]: 5782740ACF: client=mail-vb0-f50.google.com[209.85.212.50] postfix/cleanup[14696]: 5782740ACF: message-id=<CAEjmKcjHnTY4yk=3QXoNrD76=04g-s9utPguTFB02Fx53GMPmw@mail.gmail.com> postfix/qmgr[14687]: 5782740ACF: from=<[email protected]>, size=1947, nrcpt=1 (queue active) postfix/smtpd[14702]: connect from mail.destinationdomain.com[127.0.0.1] postfix/smtpd[14702]: 2940A41AA9: client=mail.destinationdomain.com[127.0.0.1] postfix/cleanup[14696]: 2940A41AA9: message-id=<CAEjmKcjHnTY4yk=3QXoNrD76=04g-s9utPguTFB02Fx53GMPmw@mail.gmail.com> postfix/qmgr[14687]: 2940A41AA9: from=<[email protected]>, size=2450, nrcpt=1 (queue active) amavis[21309]: (21309-02) Passed CLEAN, [209.85.212.50] <[email protected]> -> <[email protected]>, Message-ID: <CAEjmKcjHnTY4yk=3QXoNrD76=04g-s9utPguTFB02Fx53GMPmw@mail.gmail.com>, mail_id: W52ZB8FAAA+8, Hits: -0.101, size: 1946, queued_as: 2940A41AA9, [email protected], 784 ms postfix/smtpd[14702]: disconnect from mail.destinationdomain.com[127.0.0.1] postfix/smtp[14698]: 5782740ACF: to=<[email protected]>, relay=127.0.0.1[127.0.0.1]:10024, delay=1.1, delays=0.29/0.01/0/0.79, dsn=2.0.0, status=sent (250 2.0.0 from MTA([127.0.0.1]:10025): 250 2.0.0 Ok: queued as 2940A41AA9) postfix/qmgr[14687]: 5782740ACF: removed dovecot: lda([email protected]): msgid=<CAEjmKcjHnTY4yk=3QXoNrD76=04g-s9utPguTFB02Fx53GMPmw@mail.gmail.com>: saved mail to INBOX postfix/pipe[14703]: 2940A41AA9: to=<[email protected]>, relay=dovecot, delay=0.08, delays=0.02/0.02/0/0.04, dsn=2.0.0, status=sent (delivered via dovecot service) postfix/qmgr[14687]: 2940A41AA9: removed Logged during POP3 retrieval attempts dovecot: pop3-login: Login: user=<[email protected]>, method=PLAIN, rip=209.85.220.135, lip=10.195.83.10, mpid=14706 dovecot: pop3([email protected]): Disconnected: Logged out top=0/0, retr=1/2557, del=1/1, size=2540 postfix/smtpd[14689]: disconnect from mail-vb0-f50.google.com[209.85.212.50] dovecot: pop3-login: Login: user=<[email protected]>, method=PLAIN, rip=209.85.212.31, lip=10.195.83.10, mpid=14708 dovecot: pop3([email protected]): Disconnected: Logged out top=0/0, retr=0/0, del=0/0, size=0
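    One common cause of "delivered but not visible over POP3" is the LDA writing to a different location than the one the POP3 service reads (mail_location vs. the userdb's home). These are only inspection commands for Dovecot 2.x to compare the two -- the address is the placeholder from the logs above, and nothing here is a confirmed fix:

        # What mail_location (and other non-default settings) is Dovecot using?
        doveconf mail_location
        doveconf -n

        # What does the userdb return for this user, and how many messages are in the INBOX?
        doveadm user [email protected]
        doveadm mailbox status -u [email protected] messages INBOX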

    Read the article

  • isa 2004 - banned site rule cause slow internet

    - by Holian
    Hi Gods, We have Windows Server 2003 with ISA 2004. Our clients use the internet through the proxy. We have two ISA rules:
    order  name    action  protocols     from/listener                 to            condition
    1.     trafic  ALLOW   all outbound  all networks                  all networks  all users
    2.     FTP     ALLOW   FTP Server    EXTERNAL/INTERNAL/Local host  10.1.1.1
    We have to "ban" a few webpages (like facebook, youtube... etc...), so we made a new rule:
    0.     banned  DENY    HTTP          internal                      denied pages  all users
    In the denied pages we have the *.facebook.com domain set. After we enable this rule, the entire internet slows down. The banning rule works well, redirecting to an internal site, but the other sites... If I open a page, it normally takes 3-10 sec to load, but after this rule the time is 2-4 minutes. In the monitor / logging menu we get a few FAILED CONNECTION ATTEMPT entries like:
    Log type: Web Proxy (Forward)
    Status: 304 Not Modified
    Rule: All local traffic
    Source: Internal ( 10.1.1.1:0 )
    Destination: External ( 172.24.28.22:3128 )
    Request: GET http://www.konyvelozona.hu/wp-content/uploads/nyugdijas-holgy-2.jpg
    Filter information: Req ID: 17270b72
    Protocol: http
    User: anonymous
    Additional information
    Client agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.3072...
    Object source: Verified Cache
    Processing time: 9047
    Cache info: 0x18801002
    MIME type: -
    In the event log we get a few entries like:
    Description: The Web Proxy filter failed to bind its socket to 10.1.1.1 port 80. This may have been caused by another service that is already using the same port or by a network adapter that is not functional. To resolve this issue, restart the Microsoft Firewall service. The error code specified in the data area of the event properties indicates the cause of the failure. The failure is due to error: 0x8007271d
    The Web Proxy filter failed to bind its socket to 127.0.0.1 port 80. This may have been caused by another service that is already using the same port or by a network adapter that is not functional. To resolve this issue, restart the Microsoft Firewall service. The error code specified in the data area of the event properties indicates the cause of the failure. The failure is due to error: 0x8007271d
    If I type: netstat -o -n -a | findstr 0.0:80 then I get:
    tcp  0.0.0.0:80    0.0.0.0:0  LISTEN  4
    udp  0.0.0.0:8031  *.*                2780
    udp  0.0.0.0:8082  *.*                2780
    Some months ago we installed XAMPP, but now we only use MySQL; the Apache service is stopped. In the XAMPP port check menu I see:
    Service        Port  Status
    Apache (http)  80    Process: System
    Maybe this is the problem? I don't know what I should do now... Thank you folks.
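    The event log says the Web Proxy filter cannot bind port 80, and the netstat output shows PID 4 (System, i.e. the kernel HTTP listener) already holding 0.0.0.0:80, which matches the XAMPP port check. Purely as a way to confirm what owns the port (standard Windows tools; stopping the HTTP driver is only one possible follow-up, not a verified fix):

        rem Which process owns the port 80 listener?
        netstat -ano | findstr :80

        rem List the services hosted in that PID (4 = System / http.sys kernel listener)
        tasklist /svc /FI "PID eq 4"

        rem If the kernel HTTP listener is what XAMPP left behind, it can be stopped,
        rem then the Microsoft Firewall service restarted as the event log suggests.
        net stop http /y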

    Read the article

  • Should use EXT4 or XFS to be able to 'sync'/backup to S3?

    - by Rafa
    It's my first message here, so bear with me... (I have already checked quite a few of the "Related Questions" suggested by the editor) Here's the setup, a brand new dedicated server (8GB RAM, some 140+ GB disk, Raid 1 via HW controller, 15000 RPM) it's a production web server (with MySQL in it, too, not just serving web requests); not a personal desktop computer or similar. Ubuntu Server 64bit 10.04 LTS We have an Amazon EC2+EBS setup with the EBS volume formatted as XFS for easily taking snapshots to S3, via AWS' console. We are now migrating to the dedicated server and I want to be able to backup our data to Amazon's S3. The main reason being the possibility of using the latest snapshot from an EC2 instance in case of hardware failure on the dedicated server. There are two approaches I am thinking of: do a "simple" file-based backup with rsync, dumping the database' and other files, and uploading to amazon via S3 API commands, or to an EC2 instance, or something. do a file-system "freeze" (using XFS) with the usual ebs/ec2 snapshot tool to take part of the file system, take a snapshot, and upload it to Amazon. Here's my question (or series of questions): Can I safely use XFS for the whole system as the main and only format on the dedicated server? If not, is it safe to use EXT4? Or should I use something else? would then be possible to make snapshots of the system to upload to Amazon? Is it possible/feasible/practical to do what I want to do, anyway? any recommendations? When searching around for S3/EBS/XFS, anything relevant to my problem is usually focused on taking snapshots of a XFS system that is already an EBS volume. My intention is to do it in a "real"/metal dedicated server. Update: I just saw this on Wikipedia: XFS does not provide direct support for snapshots, as it expects the snapshot process to be implemented by the volume manager. I had always assumed that I could choose 2 ways of doing snapshots: via LVM or via XFS (without LVM). After reading this, I realize these 2 options are more like it: With XFS: 1) do xfs_freeze; 2) copy the frozen files via, eg, rsync; 3) unfreeze xfs With LVM and XFS: 1) do xfs_freeze; 2) make a binary copy of the frozen fs via lvcreate and related commands; 3) unfreeze xfs; 4) somehow backup the LVM snapshot. Thanks a lot in advance, Let me know if I need to clarify something.
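    For the XFS-without-LVM route, the freeze / copy / unfreeze cycle is easy to script. This is only a sketch (the mount point and destination are invented), anything writing during the freeze -- MySQL included -- will block, and the database would normally get its own mysqldump or FLUSH TABLES WITH READ LOCK first:

        #!/bin/sh
        # Freeze the filesystem so the copy sees a consistent view
        xfs_freeze -f /data

        # Copy the frozen tree off-site (an EC2 instance here; S3 tooling would also work)
        rsync -aH --delete /data/ backup@ec2-host.example.com:/backups/data/

        # Thaw once the copy is done (for long copies, an LVM snapshot avoids a long freeze)
        xfs_freeze -u /data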

    Read the article

  • Monitoring tools that can take high rate and high volume?

    - by Jon Watte
    We're using Cacti with RRDTool to monitor and graph about 100,000 counters spread across about 1,000 Linux-based nodes. However, our current setup generally only gives us 5-minute graphs (with some data being minute-based); we often make changes where seeing feedback in "near real time" would be of value. I'd like approximately a week of 5- or 10-second data, a year of 1-minute data, and 5 years of 10-minute data. I have SSD disks and a dual-hexa-core server to spare. I tried setting up a Graphite/carbon/whisper server, and had about 15 nodes pipe to it, but it only has "average" for the retention function when promoting to older buckets. This is almost useless -- I'd like min, max, average, standard deviation, and perhaps "total sum" and "number of samples" or perhaps "95th percentile" available. The developer claims there's a new back-end "in beta" that allows you to write your own function, but this appears to still only do 1:1 retention (when saving older data, you really want the statistics calculated into many streams from a single input. Also, "in beta" seems a little risky for this installation. If I'm wrong about this assumption, I'd be happy to be shown my error! I've heard Zabbix recommended, but it puts data into MySQL or some other SQL database. 100,000 counters on a 5 second interval means 20,000 tps, and while I have an SSD, I don't have an 8-way RAID-6 with battery backup cache, which I think I'd need for that to work out :-) Again, if that's actually something that's not a problem, I'd be happy to be shown the error of my ways. Also, can Zabbix do the single data stream - promote with statistics thing? Finally, Munin claims to have a new 2.0 coming out "in beta" right now, and it boasts custom retention plans. However, again, it's that "in beta" part -- has anyone used that for real, and at scale? How did it perform, if so? I'm almost thinking about using a graphing front-end (such as Graphite) and rolling my own retention backend with a simple layer on top of mmap() and some stats. That wouldn't be particularly hard, and would probably perform very well, letting the kernel figure out the balance between frequency of flushing to disk and process operations. Any other suggestions I should look into? Note: it has to have shown itself able to sustain the kinds of data loads I'm suggesting above; if you can point at the specific implementation you're referencing, so much the better!
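    For what it's worth, the retention tiers described above map fairly directly onto carbon's storage-schemas.conf; this sketch assumes a whisper/carbon back end recent enough to accept the unit shorthand (older versions want seconds:datapoints), and it does nothing for the min/max/stddev complaint, which would still need separate metrics or a different back end:

        # /opt/graphite/conf/storage-schemas.conf (sketch)
        [everything]
        pattern = .*
        retentions = 10s:7d,1m:1y,10m:5y
        # per-metric rollup behaviour (xFilesFactor, aggregationMethod) lives in storage-aggregation.conf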

    Read the article

  • startup Error for Zend Server CE

    - by Jamison
    Hello! I've got a strange startup error for Zend Server CE - it's probably easy to fix, but I don't have much experience with Zend Server! I'm running the latest OSX 10.6.6 and the latest Zend Server CE for Mac. When I run the "start" command from the command line, here is what I get: /usr/local/zend/bin/apachectl start [OK] spawn-fcgi: child spawned successfully: PID: 4206 /usr/local/zend/bin/shell_functions.rc: line 133: 4210 Bus error $WATCHDOG -i $BINARY 1>&3 2>&4 /usr/local/zend/bin/shell_functions.rc: line 133: 4211 Bus error $WATCHDOG -u $WD_UID -g $WD_GID -s $BINARY 1>&3 2>&4 Starting Zend Server GUI [Lighttpd] [FAILED] /usr/local/zend/bin/lighttpdctl.sh: line 46: 4212 Bus error $WATCHDOG -i $BINARY Starting MySQL SUCCESS! /usr/local/zend/bin/shell_functions.rc: line 133: 4304 Bus error $WATCHDOG -i $BINARY 1>&3 2>&4 /usr/local/zend/bin/shell_functions.rc: line 133: 4425 Bus error $WATCHDOG -u $WD_UID -g $WD_GID -s $BINARY 1>&3 2>&4 Starting Java bridge [FAILED] /usr/local/zend/bin/java_bridge.sh: line 39: 4426 Bus error $WATCHDOG -i $BINARY Zend Server started... The challenge is that ZEND SERVER wont open the GUI with this error, and seemingly I can click on Zend Server in the Applications folder and it opens for a second and immediately closes. I've made sure that Web Sharing is turned off to avoid conflicts, and I've run Disk Utility from my recovery disk to make sure there are no file system errors. Here is what the lines that are referenced in the errors have in terms of code: shell_functions.rc: (starting on line 132 - the error message says line 133...): launch() { if [ -z "$DEBUG" ]; then exec 3>/dev/null 4>&3 else exec 3>&1 4>&2 fi $WATCHDOG -i $BINARY 1>&3 2>&4 RET=$? if [ $RET -eq 0 ];then $ECHO_CMD "$BINARY watchdog is up and running.. ${OK_COLOR}[OK]${T_RESET}" return $RET else #$WATCHDOG -u $WD_UID -g $WD_GID -s $BINARY >> "$PREFIX/logs/watchdog_$BINARY.log" 2>&1 $WATCHDOG -u $WD_UID -g $WD_GID -s $BINARY 1>&3 2>&4 report $? "Starting" fi } _kill() { $WATCHDOG -i $BINARY > /dev/null 2>&1 if [ $? -eq 1 ];then $ECHO_CMD "$BINARY is not running" else $WATCHDOG -t $BINARY > /dev/null 2>&1 report $? "Stopping" fi } lighttpdctl.sh: (starting on line 45 - the error message says line 46...): status() { $WATCHDOG -i $BINARY } case "$1" in start) start status ;; stop) stop ;; restart) stop sleep 1 start ;; status) status ;; *) usage exit 1 esac exit $? java_bridge.sh: (starting on line 38 - the error message says line 39...): status() { $WATCHDOG -i $BINARY } Question: "Watchdog" is library in this zend BIN folder - it seems to handle error reporting? all the errors in my start command seem to deal with this Watchdog thing, but I don't know what to do about it... Thanks!

    Read the article

  • NGiNX performance degrades over time.

    - by Rylea Stark
    So here's the situation, I run a small cluster, Dedicated box for MySQL, and a dedicated PHP-FPM/NGINX box, Nginx talks to php-fpm via socket, As far as i can tell the problem does not lie in php-fpm, it lies somewhere in my configuration. What happens, is the site loads instant for a few moments after starting and slowly starts to degrade to load times of greater than 2 seconds, eventually taking 12 seconds to complete a load, PHP is configured to close a child after 175 requests, and spawn 20 at start and have a max of 60. Not really sure where the bottle neck is, most of my code is optimized and works flawlessly, but these issues with nginx will most likely force me to switch back over to Apache, And I really dont want to do that, NGINX.conf configuration below. user www-data; worker_processes 4; worker_cpu_affinity 0001 0010 0100 1000; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 512; multi_accept on; use epoll; } http { include /etc/nginx/mime.types; access_log /var/log/nginx/access.log; resolver_timeout 5s; satisfy all; ## Size Limits limit_zone brainbug $binary_remote_addr 5m; client_body_buffer_size 8k; client_header_buffer_size 75M; client_max_body_size 1k; large_client_header_buffers 2 1k; ## Timeouts client_body_timeout 60; client_header_timeout 60; keepalive_timeout 60; send_timeout 60; ## General Options ignore_invalid_headers on; recursive_error_pages on; sendfile on; server_name_in_redirect off; server_tokens off; ## TCP options tcp_nodelay on; #tcp_nopush on; output_buffers 128 512k; gzip on; gzip_http_version 1.0; gzip_comp_level 7; gzip_proxied any; gzip_min_length 0; gzip_buffers 32 32k; gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript image/jpeg image/png image/gif; ## Disable GZIP for MSIE 1-6 gzip_disable "MSIE [1-6].(?!.*SV1)"; ## Set a vary header so downstream proxies don't send cached gzipped content to IE6 gzip_vary on; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; }
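    Not a fix, but for reference the numbers described above correspond to these PHP-FPM pool directives, and a status page is the quickest way to see whether requests are queueing in FPM or in nginx; the socket path and spare-server values are placeholders:

        ; /etc/php5/fpm/pool.d/www.conf (sketch)
        listen = /var/run/php5-fpm.sock
        pm = dynamic
        pm.max_children = 60
        pm.start_servers = 20
        pm.min_spare_servers = 10
        pm.max_spare_servers = 30
        pm.max_requests = 175
        ; expose a status page (active/idle workers, listen queue) to watch during the slowdown
        pm.status_path = /fpm-status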

    Read the article

  • Seeking past end of file causes Apache hang, and it never restarts.

    - by talkingnews
    I've actually solved my problem with a better script, but I'm still left wondering why Apache2 hung completely - this is an out-of-the-box ISPConfig 3.03 install, everything bang up to date, running perfectly. Until... The troublesome but innocent-looking script:
    $fp = fopen("/var/log/ispconfig/cron.log", "r");
    fseek($fp, -5000, SEEK_END);
    $line_buffer = array();
    while (!feof($fp)) {
        $line = fgets($fp, 1024);
        $line_buffer[] = $line;
        $line_buffer = array_slice($line_buffer, -10, 10);
    }
    foreach ($line_buffer as $line) {
        echo $line;
    }
    You get the idea, just a script I found on a forum somewhere. I did this for various logs, since it's a nice easy window on what's occurring (in a protected dir, of course!). One day, the logs having grown large and me having sorted all my cron, scripting and mail queue errors, I thought it was time to start afresh. Updated, rebooted, archived and deleted the logs. When I ran my script a couple of hours later, it hung. And hung. 8 minutes I waited. Chrome timed the page out, of course, but the server never came back to life. htop showed /usr/sbin/apache2 -k restart using 100% CPU. It never came back until I did a service apache2 restart. It ran fine, but as soon as I hit that logfile again... dead. So I worked out it was the logfile script, and I worked out that seeking beyond the end of the file wasn't good, and I found a better script: http://www.php.net/manual/en/function.fseek.php#90450 But what I'm left wondering is... why didn't something restart or kill the process? How was one hanging page able to bring down the whole server? It's running suPHP. I say "out of the box", but I've tweaked MySQL and Apache to fork and reserve sensible amounts of processes for the 512MB RAM the VPS has, and it'll handle multiple refreshes of large pages, and hadn't hung before. Any ideas how I'd avoid this? Google isn't my friend in this instance beyond the recommendations above about number of processes vs RAM available.
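    For reference, the hang apparently comes from seeking back 5000 bytes on a log that has just been emptied, which leaves the file pointer in a state the feof() loop never recovers from. A minimal defensive version along the lines of the linked manual comment (a sketch, not the exact script the poster ended up with) clamps the offset to the file size and loops on fgets() instead of feof():

        <?php
        // Tail the last few lines without seeking before the start of the file.
        // Path and sizes are the ones from the original script.
        $path = "/var/log/ispconfig/cron.log";
        $fp = fopen($path, "r");
        $offset = min(5000, filesize($path));   // never seek further back than the file is long
        fseek($fp, -$offset, SEEK_END);

        $line_buffer = array();
        while (($line = fgets($fp, 1024)) !== false) {
            $line_buffer[] = $line;
            $line_buffer = array_slice($line_buffer, -10);
        }
        fclose($fp);

        foreach ($line_buffer as $line) {
            echo $line;
        }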

    Read the article

  • PHP hits 100% CPU and eats RAM at the same time Monday to Friday

    - by Daniel Samuels
    We run a learning platform for primary schools here in the UK and it's all been running extremely well. However, at around 4PM Monday to Friday we see the same issue arise -- 1-2 PHP threads will spike to 100% CPU and gradually start eating up RAM until the server(s) fall over. 98%+ of our requests are HTTPS; these come into our Layer 7 load balancer, which then decrypts the SSL data, adds the X-HTTP-Forwarded-For header and forwards the data onto an application server (we have 2 of those at the moment) on port 80. Our application servers have Varnish on port 80, which takes in the request from the load balancer and passes it through to Nginx on port 81. Nginx then works out which 'vhost' it needs to use and passes any PHP processing through to PHP-CGI, which is listening on a socket (managed through spawn-fcgi). There's an instance of Memcached running too; MySQL runs on a separate server / slave setup. Throughout the day the load will typically go no higher than 0.8 on either of the application servers, however at around 4PM our problem arises. I've managed to run strace on a few of the actual threads when they cause the problem and I always see the same thing:
    stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644,st_size=3661, ...}) = 0
    stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644,st_size=3661, ...}) = 0
    This is repeated infinitely and never stops until you SIGKILL the process or the oom-killer kills it. There are no cron jobs scheduled to run at that time and I don't have any way of seeing exactly what Nginx request is associated with the PHP process which is running. We are running PHP 5.3.14, which we upgraded to from 5.3.8 last week to rule out the older version being the problem. This issue has been going on a few months now and we have no idea what is causing it. We deploy our software very frequently, so it's difficult to track down a specific release which may have started the problem - especially as we do not know the date of the first occurrence of this issue. Varnish is version 3.0.1, Nginx is 1.0.6 (which I understand is about a year old now), our servers are running CentOS release 5.7 (Final) and they have Intel i3 540s at 3.07GHz and 8GB of RAM. There's a discussion on the Debian mailing list about something very similar, you can find that here. Has anyone seen anything like this in the past? Does anyone have any ideas or suggestions? Is there a way of linking an Nginx request directly to a PHP thread? Is there a better way of seeing what the PHP process is doing? (I've seen GDB mentioned, though I'll have to recompile PHP.) Thanks!
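    On the "better way of seeing what the PHP process is doing" question in general terms (a sketch of standard inspection steps, not a diagnosis of the zoneinfo loop): the spinning php-cgi worker's PID from top can be examined in place, and its open files usually point back to the vhost/script even without an nginx-to-worker mapping.

        # Watch the system calls of the spinning worker (PID is illustrative)
        strace -p 12345 -f -tt -e trace=file

        # Which scripts, sockets and files does it have open right now?
        lsof -p 12345

        # A C-level backtrace without a PHP recompile
        gdb -batch -ex "bt" -p 12345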

    Read the article

  • How to tune system settings for mongoDB on Linux?

    - by jsh
    Trying to squeeze a lot out of one question here -- please bear with me. Although the MongoDB man pages make several useful recommendations about system settings like ulimit (http://docs.mongodb.org/manual/reference/ulimit/), and other production factors (http://docs.mongodb.org/manual/administration/production-notes/) they seem mysteriously silent on things like virtual memory and swap settings. The closest we get to a hint is that "...the operating system’s virtual memory subsystem manages MongoDB’s memory..." (http://docs.mongodb.org/manual/faq/fundamentals/#does-mongodb-require-a-lot-of-ram). Running the same job - high writes and high reads on about 10,000,000 records in a single collection -- on my 4-processor, 4GB RAM macbook and an 8-core ubuntu box with 64GB RAM I saw dramatically WORSE read performance on the linux box with factory settings, and could hear the disk constantly spinning, indicating high I/O and presumably swapping. Yes, other things were happening on the box, but there was plenty of free RAM, disk space, etc.; furthermore, I did not see evidence that Mongo was expanding to take advantage of all that free RAM as it is touted to do. Linux box default settings were as follows: vm.swappiness =60 vm.dirty_background_ratio = 10 vm.dirty_ratio = 20 vm.dirty_expire_centisecs =3000 vm.dirty_writeback_centisecs=500 I hazarded some guesses looking at docs and blogs for other types of databases (Oracle, MYSQL, etc.), experimented, and adjusted as below. vm.swappiness=10 vm.dirty_background_ratio=5 vm.dirty_ratio=5 vm.dirty_writeback_centisecs=250 vm.dirty_expire_centisecs=500 I saw some immediate apparent improvements in read time. However, when I ran my test jobs again, read performance continued to be painfully sluggish during heavy writes. Then, I REBUILT the collection from an available data source - and suddenly I can read at 1ms or less per record WHILE doing the write job! So the question is really two-fold: 1) What are appropriate VM settings for MongoDB on Linux? 2) (bonus) Does Mongo do some checking or optimization with the OS while data is being built? In other words, if I have built a large data set with suboptimal VM or I/O settings, does Mongo make assumptions during the memory-mapping process that will fail to take advantage of optimizations down the road? Obviously I don't fully grok memory mapping under the hood (I was hoping I wouldn't have to). Any help appreciated...thanks! -j
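    For question 1, the usual knobs beyond ulimit are exactly the vm.* settings listed above plus block-device readahead, which matters a lot for memory-mapped, mostly-random reads; the values and device name below are examples to test against the workload rather than recommendations:

        # Keep the page cache, don't swap mongod out eagerly (example values)
        sysctl -w vm.swappiness=10
        sysctl -w vm.dirty_background_ratio=5
        sysctl -w vm.dirty_ratio=10

        # Check and lower readahead on the volume holding the data files
        # (units are 512-byte sectors, so 32 = 16KB)
        blockdev --getra /dev/sda
        blockdev --setra 32 /dev/sda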

    Read the article

  • many unknow process name as "sudo"

    - by joaner
    my server free memoney is less and less, And many process COMMAND are"sudo" when use top and enter M. I don't understand root user need to use "sudo". I want to know the way these processes are generated ? Can I kill ? Tasks: 185 total, 1 running, 184 sleeping, 0 stopped, 0 zombie Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 3967848k total, 3484196k used, 483652k free, 218532k buffers Swap: 4112376k total, 0k used, 4112376k free, 2932864k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 22219 mysql 20 0 582m 67m 5492 S 0.0 1.7 0:01.75 mysqld 22337 daemon 20 0 327m 31m 3440 S 0.0 0.8 0:01.58 httpd 22252 daemon 20 0 321m 26m 3416 S 0.0 0.7 0:01.25 httpd 22263 daemon 20 0 319m 23m 3396 S 0.0 0.6 0:00.71 httpd 22253 daemon 20 0 310m 18m 3444 S 0.0 0.5 0:00.69 httpd 22251 root 20 0 28392 12m 3640 S 0.0 0.3 0:00.09 httpd 2422 root 20 0 9192 3608 2184 S 0.0 0.1 0:00.32 ssh 13613 root 20 0 38220 3572 1044 S 0.0 0.1 0:22.31 rsyslogd 2423 root 20 0 11556 3420 2692 S 0.0 0.1 0:00.11 sshd 22570 root 20 0 11716 3408 2676 S 0.0 0.1 0:00.08 sshd 3351 root 20 0 10384 2540 2000 S 0.0 0.1 0:00.06 sudo 30870 root 20 0 10384 2528 2000 S 0.0 0.1 0:00.06 sudo 14356 dkim-mil 20 0 49664 2444 1468 S 0.0 0.1 0:03.91 dkim-filter 2085 root 20 0 10376 2344 1824 S 0.0 0.1 0:00.00 sudo 7741 root 20 0 10376 2344 1824 S 0.0 0.1 0:00.00 sudo 29838 root 20 0 10376 2344 1824 S 0.0 0.1 0:00.00 sudo 2006 root 20 0 10376 2340 1824 S 0.0 0.1 0:00.00 sudo 29747 root 20 0 10376 2340 1824 S 0.0 0.1 0:00.00 sudo 30602 root 20 0 10376 2340 1824 S 0.0 0.1 0:00.00 sudo 30935 root 20 0 10376 2340 1824 S 0.0 0.1 0:00.00 sudo 2259 root 20 0 10376 2336 1824 S 0.0 0.1 0:00.00 sudo 2503 root 20 0 10376 2336 1824 S 0.0 0.1 0:00.00 sudo 2515 root 20 0 10376 2336 1824 S 0.0 0.1 0:00.00 sudo 7718 root 20 0 10376 2336 1824 S 0.0 0.1 0:00.00 sudo 7745 root 20 0 10376 2336 1824 S 0.0 0.1 0:00.00 sudo 29845 root 20 0 10376 2336 1824 S 0.0 0.1 0:00.00 sudo 30172 root 20 0 10376 2336 1824 S 0.0 0.1 0:00.00 sudo 30352 root 20 0 10376 2336 1824 S 0.0 0.1 0:00.00 sudo 30548 root 20 0 10376 2336 1824 S 0.0 0.1 0:00.00 sudo 30598 root 20 0 10376 2336 1824 S 0.0 0.1 0:00.00 sudo 30897 root 20 0 10376 2336 1824 S 0.0 0.1 0:00.00 sudo 30899 root 20 0 10376 2336 1824 S 0.0 0.1 0:00.00 sudo
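    Before killing anything it is worth seeing what spawned those sudo entries; the commands below are generic inspection steps (PID 3351 is just one taken from the top output above), not a judgment on whether they are safe to kill:

        # Every sudo process with its parent PID, owner and start time
        ps -eo pid,ppid,user,lstart,args | grep [s]udo

        # Walk one of them up to its parent to see what launched it
        ps -o pid,ppid,args -p 3351
        ps -o pid,ppid,args -p $(ps -o ppid= -p 3351)

        # Its full command line and working directory
        tr '\0' ' ' < /proc/3351/cmdline; echo
        ls -l /proc/3351/cwd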

    Read the article

  • Gentoo box can't cURL or ping after restarting net.eth1

    - by Curlybraces
    Hi all, the following is completely baffling me. We currently have a gentoo box which acts as our LAMP, DNS, DHCP server. This is assigned a static IP on the network. This server is connected directly to the internet via a BT BusinessHub Router. The server is also connected to a patch panel/switch port which connects the remaining office (around 10 PC's) to the server. Everything has been plain sailing until the other day when the server was restarted. For some reason now only portions of network accessibility is available depending on which ethernet device was last restarted. Restarting net.eth0 allows the office server to cURL, ping, etc but stops all networked PC's from accessing the internet. Then restarting net.eth1 restores all internet to the network but stops the server from curling, pinging, etc again. However, even when the server can't ping, curl, etc, I can still remote SSH and remote MySQL connect from the server command line to other external servers that we own. Here's my route map (router is 192.168.1.254): Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1 127.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 lo 0.0.0.0 192.168.1.254 0.0.0.0 UG 0 0 0 eth1 Here's my /etc/conf.d/net: iface_eth0="192.168.1.99 broadcast 192.168.1.255 netmask 255.255.255.0" iface_eth1="dhcp" None of the above have ever been changed however. Things have just ceased to operate correctly, which makes me think it's a freshly added Iptables rule. Here's the Iptables Filter table: Chain INPUT (policy ACCEPT) target prot opt source destination DROP tcp -- ##.##.##.## anywhere tcp dpt:ssh ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED ACCEPT all -- anywhere anywhere ACCEPT tcp -- anywhere anywhere tcp dpt:2199 ACCEPT tcp -- anywhere anywhere tcp dpt:3199 ACCEPT tcp -- ##.###.###.## anywhere tcp dpt:http ACCEPT tcp -- ###.###.##.## anywhere tcp dpt:2199 ACCEPT tcp -- ##.###.###.### anywhere tcp dpt:http ACCEPT tcp -- ##.###.##.## anywhere tcp dpt:http ACCEPT tcp -- ##.###.###.### anywhere tcp dpt:3128 ACCEPT udp -- ##.###.###.### anywhere udp dpt:3128 ACCEPT tcp -- ##.###.###.### anywhere tcp dpt:http ACCEPT tcp -- ##.###.###.### anywhere tcp dpt:https Chain FORWARD (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere ##.###.###.## DROP all -- anywhere ##.###.###.## ACCEPT all -- anywhere anywhere state NEW,ESTABLISHED Chain OUTPUT (policy ACCEPT) target prot opt source destination ACCEPT udp -- anywhere anywhere udp spt:2199 ACCEPT udp -- anywhere anywhere udp spt:4817 ACCEPT udp -- anywhere anywhere udp spt:4819 ACCEPT udp -- anywhere anywhere udp spt:3199 Help gratefully appreciated.
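    One thing that stands out in the routing table above is that eth0 and eth1 both carry a route for 192.168.1.0/24, so replies can leave by whichever interface was brought up last. Purely as a way to confirm that (inspection only, not a guaranteed fix), compare what the kernel picks before and after restarting each interface:

        # Which interface and source address would be used for the router and for the outside world?
        ip route get 192.168.1.254
        ip route get 8.8.8.8

        # Full table with interfaces, for comparison after each net.eth0/net.eth1 restart
        ip route show
        netstat -rn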

    Read the article

  • How do you backup 40+ Centos5.5 servers?

    - by John Little
    We are embarrassed to ask this question. Apologies for our lack of UNIX expertise. We have inherited 40+ CentOS 5.5 servers, and don't know how to back them up. We need low-level, clone-type images so that we could restore the servers from scratch if we had to replace the HDs etc. We have used the "dd" command, but we assume this only works if you want to back up one local disk to another, not 40 servers to one server with an external USB HD attached. All 40 servers have a pair of mirrored disks (we don't know if it's HW or SW RAID). Most only have 100MB used. Servers are running Apache, Zend, Tomcat, MySQL etc. Ideally we don't want to have to shut them down to back up (but could). We assume that standard UNIX commands like tar, cpio, rsync, scp etc. are of no use as they only copy files, not partitions, all attributes, groups etc., i.e. they do not produce a result which can simply be re-imaged to a new HD to get the server back from the dead. We have a large SAN, a spare Windows box and spare UNIX boxes, but these are only visible to one layer in the network. We have an unused Dell DL2000 monster tape unit, but no software or documentation for it. We have a copy of Symantec Backup Exec, but we have no budget for UNIX client licenses. (The company has negative amounts of money.) We need to be able to initiate the backup remotely, as we can only access the servers in person in an emergency (i.e. to restore). Googling returns some applications to do this, e.g. Clonezilla, which looks difficult to install and invasive, and Mondo, which only seems to support backup if you are local to the machine. Amanda might be an option, but looks like days/weeks of work to learn and set up. Is there anything built into CentOS, or do we have to go the route of installing, learning and configuring a set of backup software? Any ideas? This must be a pretty standard problem for which googling doesn't give an obvious answer.
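    For a crude low-level clone of the kind described, dd can be streamed over ssh to the box with the USB disk attached. This is only a sketch (host, device and paths are invented), it images the whole disk including empty space, and it is only really trustworthy on an idle or read-only filesystem:

        #!/bin/sh
        # Run on each CentOS box: stream a compressed raw image of the first disk to the backup host.
        dd if=/dev/sda bs=1M conv=sync,noerror | gzip -c | \
          ssh backup@backuphost.example.com "cat > /mnt/usb/$(hostname)-sda.img.gz"

        # Restore later from a live CD on the replacement box by reversing the pipe:
        #   ssh backup@backuphost.example.com "cat /mnt/usb/HOST-sda.img.gz" | gunzip | dd of=/dev/sda bs=1M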

    Read the article

  • My VPS ubuntu server is very slow

    - by askmike
    I just installed a fresh copy of Ubuntu 12.04 on my VPS because my old installation was very slow; unfortunately this did not fix the problem. By slow I mean requests for my PHP websites take a long time: very slow (30 sec per request) to slow (3+ sec per request). When it's really bad, SSH is also laggish.
    The websites are: askmike.org (pretty standard WordPress) and mvr.me (own PHP).
    How slow? Very slow: [picture of loading a clean install of WordPress]. Slow: [picture of loading a small PHP-based website].
    The VPS: it has 256MB RAM and a 25GB HDD. Besides serving the 2 small websites it isn't doing anything AFAIK.
    What have I installed: a clean Ubuntu Server 12.04, a LAMP stack, a few things like git and nodejs (not using both), ossec (because I thought my server was getting hammered), and munin.
    What I already tried / did: I installed munin so that I could watch IO speed and such. The problem is that I don't know what to look for in the munin report. I checked logs and don't see anything strange (although I don't really know what to look for besides strange / repetitive errors and GET requests). I configured the Apache MPM to:
    <IfModule mpm_prefork_module>
        StartServers 5
        MinSpareServers 5
        MaxSpareServers 10
        MaxClients 40
        MaxRequestsPerChild 0
    </IfModule>
    (Apache is using prefork, the default.)
    Stats: I copied the munin report as it appeared at 4:50 last night to a site hosted on a shared webhost. Note that tonight my MySQL crashed somewhere after 1:00 (which is a new problem altogether), so therefore the graph for last night might look strange. Can anyone help me get my VPS up to normal speed?
    EDIT: Thanks for the replies. The VPS is 10 bucks a month and is from directvps.nl (a Dutch host, and I'm also Dutch). I did two speed tests for disk IO:
    $ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    1073741824 bytes (1.1 GB) copied, 23.1506 s, 46.4 MB/s
    $ dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    1073741824 bytes (1.1 GB) copied, 39.3796 s, 27.3 MB/s
    Anyway: how can I prove to my VPS host that it is too slow? I can understand a server being busy slowing a website down. But 5-30 sec load time for a normal PHP webpage?
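    To put numbers behind "too slow" for the host (generic measurements, nothing provider-specific): repeat the dd test above at different hours, and capture vmstat/iostat while a slow page load is in flight -- CPU steal time and I/O wait are the figures that point at a noisy neighbour rather than your own stack:

        # Disk throughput, repeated so a one-off spike doesn't skew the result
        for i in 1 2 3; do dd if=/dev/zero of=ddtest bs=64k count=16k conv=fdatasync 2>&1 | tail -n1; done; rm -f ddtest

        # CPU steal (st) and I/O wait (wa) sampled while a slow request is running
        vmstat 5 6
        iostat -x 5 6   # needs the sysstat package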

    Read the article

  • Hardware for multipurpose home server

    - by Michael Dmitry Azarkevich
    Hi guys, I'm looking to set up a multipurpose home server and hoped you could help me with the hardware selection. First of all, the services it will provide:
    Hosting a MySQL database (for training and testing purposes)
    FTP server
    Personal mail server
    Home media server
    With this in mind I've done some research and found some viable solutions:
    A standard PC with the appropriate software (either second-hand or new)
    A non-solid state mini-ITX system
    A solid state, fanless mini-ITX system
    I've also noted the pros and cons of each system: A standard second-hand PC with old hardware would be the cheapest option, but it could also have lacking processing power, not enough RAM and generally faulty hardware. Also, huge power consumption, heat generation and noise levels. A standard new PC would have top-notch hardware and will stay that way for quite some time, so it's a good investment. But again, the main problem is power consumption, heat generation and noise levels. A non-solid state mini-ITX system would have the advantages of lower power consumption, lower cost (as far as I can see) and long-lasting hardware. But it will generate noise and heat, which will be even worse because of the size. A solid state, fanless mini-ITX system would have all the advantages of a non-solid state mini-ITX but with minimal noise and heat. The main disadvantage is the read/write problems of flash memory. All in all I'm leaning towards a non-solid state mini-ITX because of the read/write issues of flash memory. So, after this overview of what I do know, my questions are: Are all these services even providable from a single server? To my best understanding they are, but then again, I might be wrong. Are any of these solutions viable? If yes, which one is the best for my purposes? If not, what would you suggest? Also, on a more software-oriented note: OS-wise, I'm planning to run Linux. I'm currently thinking of four options I've been recommended: CentOS, Gentoo, DSL (Damn Small Linux) and LFS (Linux From Scratch). Any thoughts on this? Any other distro you would recommend? Regarding FTP services, I've heard good things about FileZilla. Does anyone have experience with that? Do you recommend it? Do you recommend something else? Regarding the mail service, I know nothing about this except that it exists. Any software you recommend for this task? Home media, same as the mail service. Any recommended software? Thank you very much.

    Read the article

  • What else can I do to secure my Linux server?

    - by eric01
    I want to put a web application on my Linux server: I will first explain to you what the web app will do and then I will tell you what I did so far to secure my brand new Linux system. The app will be a classified ads website (like gumtree.co.uk) where users can sell their items, upload images, send to and receive emails from the admin. It will use SSL for some pages. I will need SSH. So far, what I did to secure my stock Ubuntu (latest version) is the following: NOTE: I probably did some things that will prevent the application from doing all its tasks, so please let me know of that. My machine's sole purpose will be hosting the website. (I put numbers as bullet points so you can refer to them more easily) 1) Firewall I installed Uncomplicated Firewall. Deny IN & OUT by default Rules: Allow IN & OUT: HTTP, IMAP, POP3, SMTP, SSH, UDP port 53 (DNS), UDP port 123 (SNTP), SSL, port 443 (the ones I didn't allow were FTP, NFS, Samba, VNC, CUPS) When I install MySQL & Apache, I will open up Port 3306 IN & OUT. 2) Secure the partition in /etc/fstab, I added the following line at the end: tmpfs /dev/shm tmpfs defaults,rw 0 0 Then in console: mount -o remount /dev/shm 3) Secure the kernel In the file /etc/sysctl.conf, there are a few different filters to uncomment. I didn't know which one was relevant to web app hosting. Which one should I activate? They are the following: A) Turn on Source Address Verification in all interfaces to prevent spoofing attacks B) Uncomment the next line to enable packet forwarding for IPv4 C) Uncomment the next line to enable packet forwarding for IPv6 D) Do no accept ICMP redirects (we are not a router) E) Accept ICMP redirects only for gateways listed in our default gateway list F) Do not send ICMP redirects G) Do not accept IP source route packets (we are not a router) H) Log Martian Packets 4) Configure the passwd file Replace "sh" by "false" for all accounts except user account and root. I also did it for the account called sshd. I am not sure whether it will prevent SSH connection (which I want to use) or if it's something else. 5) Configure the shadow file In the console: passwd -l to lock all accounts except user account. 6) Install rkhunter and chkrootkit 7) Install Bum Disabled those services: "High performance mail server", "unreadable (kerneloops)","unreadable (speech-dispatcher)","Restores DNS" (should this one stay on?) 8) Install Apparmor_profiles 9) Install clamav & freshclam (antivirus and update) What did I do wrong and what should I do more to secure this Linux machine? Thanks a lot in advance
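    For reference, the sysctl entries in that A-H list which usually matter on a single-purpose web host (A, D, E, F, G, H; B and C are only for boxes that route packets) map to keys roughly like this -- a sketch to adapt, not a complete hardening profile:

        # /etc/sysctl.conf additions (sketch) -- apply with: sysctl -p
        net.ipv4.conf.all.rp_filter = 1            # A: source address verification
        net.ipv4.ip_forward = 0                    # B: packet forwarding stays off (not a router)
        net.ipv4.conf.all.accept_redirects = 0     # D: do not accept ICMP redirects
        net.ipv4.conf.all.secure_redirects = 0     # E: nor gateway-only redirects
        net.ipv4.conf.all.send_redirects = 0       # F: do not send redirects
        net.ipv4.conf.all.accept_source_route = 0  # G: drop source-routed packets
        net.ipv4.conf.all.log_martians = 1         # H: log martian packets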

    Read the article

  • Setup Remote Access in Windows Home Server

    - by Mysticgeek
    One of the many awesome features of Windows Home Server, is the ability to access your server and other computers on your network remotely. Today we show you the steps to enable Remote Access to your home server from anywhere you have an Internet connection. Remote Access in Windows Home Server has a lot of great features like uploading and downloading files from shared folders, accessing files from machines on your network, and controling machines remotely (on supported OS versions). Here we take a look at the basics of setting it up, choosing a domain name, and verifying you can connect remotely. Setup Remote Access in Windows Home Server Open the Windows Home Server Console and click on Settings. Next select Remote Access, it is off by default, just click the button to turn it on. Wait while your router is configured for remote access, when it’s complete click Next. Notice that it will enable UPnP, if you don’t wish to have that enabled, you can manually forward the correct ports. If you have any problems with the router being automatically configured, we’ll be taking a look at a more detailed troubleshooting guide in the future. The router is successfully configured, and we can continue to the next process of configuring our domain name. The Domain Name Setup Wizard will start. Notice you will need a Windows Live ID to set it up –which is typically your hotmail address. If you don’t already have one, you can get one here. Type in your Live ID email address and password and click Next… Agree to the Home Server Privacy Statement and the Live Custom Domains Addendum. If you’re concerned about privacy and want to learn more about the domain addendum, make sure to read about it before agreeing. There is nothing abnormal to point out about either statement, but if this is your first time setting it up, it’s good to review the information.   Now choose a name for the domain. You should select something that is easy to remember and identifies your home server. The name can contain up to 63 characters, numbers, letters, and hyphens…and must begin and end with a letter or number. When you have the name figured out click the Confirm button. Note: You can only register one domain name per Live ID. If the name isn’t already taken, you’ll get a confirmation message indicating it’s god to go. The wizard is complete and you can now access the home server from the URL provided. A few other things to point out after you’ve set it up…under Domain Name click on the Details button… Which pulls up the domain detail information and you can refresh the data to verify everything is working correctly. Or you can click the Configure button and then change or release your current domain name. Under Web site settings, you can change you site page headline to whatever you want it to be. Accessing Home Server Remotely After you’ve gotten everything setup for your home server domain, you can begin to access it when you’re away from home. Simply type in the domain address you created in the previous steps. The start page is rather boring…and to start accessing your data, click the Log On button in the upper right hand corner. Then enter in your home server credentials to gain access to your files, folders, and network computers. You won’t be able to log in with your administrator user account however, to protect security of your network. Once you’re logged in, you’ll be able to access different parts of your home server shares and network computers. 
    Conclusion Now that you have Remote Access setup, you should be able to access and manage your files easily. Being able to access data from your home server remotely is great when you need to get certain files while on the road. The web UI is pretty self explanatory, works best in IE as ActiveX is required, and is smooth and easy to work with. In future articles we’ll be covering a lot more regarding remote access, including more of the available features, troubleshooting connection issues, and enabling access for other users.

    Read the article

  • The Interaction between Three-Tier Client/Server Model and Three-Tier Application Architecture Model

    The three-tier client/server model is a network architectural approach currently used in modern networking. This approach divides a network into three distinct components. Three-Tier Client/Server Model Components: Client Component, Server Component, and Database Component.

    The Client Component of the network typically represents any device on the network. A basic example of this would be a computer or another network/web-enabled device connected to a network. Network clients request resources on the network, and are usually equipped with a user interface for the presentation of the data returned from the Server Component. This is done through the use of various software clients; an example of this can be seen in the use of a web browser client. The web browser requests information from the Server Component located on the network and then renders the results for the user to process.

    The Server Components of the network return data to the requesting client based on specific client requests. Server Components also inherit the attributes of a Client Component in that they are devices on the network and can also request information from other Server Components. However, what differentiates a Client Component from a Server Component is that a Server Component responds to requests from devices on the network. An example of a Server Component can be seen in a web server. A web server listens for new requests, interprets each request, processes the web pages, and then returns the processed data back to the web browser client so that it may render the data for the user to interpret.

    The Database Component of the network returns unprocessed data from databases or other resources. This component also inherits attributes from the Server Component in that it is a device on a network, it can request information from other server components and database components, and it also listens for new requests so that it can return data when needed.

    The three-tier client/server model is very similar to the three-tier application architecture model, and in fact the layers can be mapped to one another. Three-Tier Application Architecture Model: Presentation Layer/Logic, Business Layer/Logic, and Data Layer/Logic.

    The Presentation Layer, including its underlying logic, is very similar to the Client Component of the three-tiered model. The Presentation Layer focuses on interpreting the data returned by the Business Layer as well as presenting that data back to the user. Both the Presentation Layer and the Client Component focus primarily on the user and their experience. This allows segments of the Business Layer to be distributable and interchangeable, because the Presentation Layer is not directly integrated with the Business Layer. The Presentation Layer does not care where the data comes from as long as it is in the proper format. This allows the Presentation Layer and Business Layer to be stored on one or more different servers so that they can provide higher availability to clients requesting data. A good example of this is a web site that uses load balancing. When a web site takes on the task of load balancing, it must place a network device in front of one or more machines in order to distribute requests across multiple servers. When a user comes in through the load-balanced device, they are redirected to a specific server based on a few factors.
    Common Load Balancing Factors: Current Server Availability, Current Server Response Time, and Current Server Priority.

    The Business Layer and its corresponding logic are business rules applied to data prior to it being sent to the Presentation Layer. These rules are used to manipulate the data coming from the Data Access Layer, in addition to validating any data prior to it being stored in the Data Access Layer. A good example of this would be when a user is trying to create multiple accounts under one email address. The Business Layer logic can prevent duplicate accounts by enforcing a unique email for every new account before the data is even stored in the Data Access Layer. The Server Component can be directly tied to this layer in that the server typically stores and processes the Business Layer before it is returned to the end user via the Presentation Layer. In addition, the Server Component can also run automated processes through the Business Layer on the data in the Data Access Layer so that additional business analysis can be derived from the data that has already been collected.

    The Data Layer and its logic are responsible for storing information so that it can be easily retrieved. Typically, in most modern applications, data is stored in a database management system; however, data can also be in the form of files stored on a file server. In addition, a database can take on one of several forms. Common Database Formats: XML File, Pipe Delimited File, Tab Delimited File, Comma Delimited File (CSV), Plain Text File, Microsoft Access, Microsoft SQL Server, MySQL, Oracle, and Sybase.

    The Database Component of the networking model can be directly tied to the Data Layer because this is where the Data Layer obtains the data to return back to the Business Layer. The Database Component basically provides a place on the network to store data for future use. This enables applications to save data when they can and then quickly recall the saved data as needed, so that the application does not have to retain all data in memory; this avoids the overhead created when an application must keep everything in memory.

    As you can see, the Three-Tier Client/Server Networking Model and the Three-Tiered Application Architecture Model rely very heavily on one another to function, especially if different aspects of an application are distributed across an entire network. The use of various servers and database servers is valuable when an application needs to distribute work across the network. Network Components and Application Layers Interaction: Database Components store all data needed for the Data Access Layer to manipulate and return to the Business Layer; the Server Component executes the Business Layer that manipulates data so that it can be returned to the Presentation Layer; and the Client Component hosts the Presentation Layer that interprets the data and presents it to the user.
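    To make the mapping concrete, here is a minimal, hypothetical Python sketch of the three application layers described above: a data layer that stores accounts, a business layer that enforces the unique-email rule from the example, and a presentation layer that only formats what the business layer returns. The class and function names are illustrative and not taken from the article.

        # Data Layer: responsible only for storing and retrieving data
        class AccountStore:
            def __init__(self):
                self._accounts = {}  # email -> account record

            def exists(self, email):
                return email in self._accounts

            def save(self, email, name):
                self._accounts[email] = {"email": email, "name": name}
                return self._accounts[email]

        # Business Layer: applies rules to data before it reaches the Data Layer
        class AccountService:
            def __init__(self, store):
                self._store = store

            def create_account(self, email, name):
                # Business rule: one account per email address
                if self._store.exists(email):
                    raise ValueError("An account already exists for this email address")
                return self._store.save(email, name)

        # Presentation Layer: formats the result for the user; no business rules here
        def render_account(account):
            return f"Account created for {account['name']} <{account['email']}>"

        if __name__ == "__main__":
            service = AccountService(AccountStore())
            print(render_account(service.create_account("user@example.com", "Sample User")))

    Because the presentation code never touches the store directly, either of the other layers could be swapped out (for example, replacing the in-memory store with a MySQL-backed one) without changing how results are displayed.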

    Read the article

  • SQL Server service accounts and SPNs

    - by simonsabin
    Service Principal Names (SPNs) are a must for Kerberos authentication, which is a must when using SharePoint, Reporting Services and SQL Server where you access one server that then needs to access another resource; this is called the double hop. The reason this is a complex problem is that the second hop has to be done with impersonation/delegation. For this to work, there needs to be a way for the security system to make sure that the service in the middle is allowed to impersonate you; after all, you are not giving the service your password. To do this you need to be using Kerberos.

    The following is my simple interpretation of how Kerberos works. I find the Kerberos documentation ridiculously complex, so the following might be slightly wrong, but I think it's close enough. Kerberos works on a ticketing system: the principle is that you get a security token from AD and can then pass it to the service in the middle, which can then use that token to impersonate you. For that to work, AD has to be able to identify who is allowed to use the token, in this case the service account. But how do you as a client know what service account the service in the middle is configured with? The answer is SPNs. The SPN is the mapping between your logical connection and the service account. One type of SPN is for the DNS name of the server and the port, i.e. MySQL.mydomain.com and 1433. You can see how this maps to SQL Server on that server, but how does it map to the account? It can be done in two ways: either you can have a mapping defined in AD, or AD can use a default mapping (this is something I didn't know about).

    To map the SPN in AD you have to add the SPN to the user account; this is documented in the first link below, either directly or using a tool called SetSPN. You might say that is complex; well, it is, and that's why SQL Server tries to do it for you. At startup it tries to connect to AD and set the SPN on the account it is running as. Clearly, that can only happen IF SQL is running as a domain account AND, importantly, it has permission to do so. By default a normal domain user account doesn't have the correct permission, which is why so many people have this problem. If the account is a domain admin then it will have permission, but none of us run SQL using domain admin accounts, do we? You might also note that the SPN contains the port number (this isn't a requirement now in SQL 2008, but I won't go into that), so if you set it manually and you are using dynamic ports (the default for a named instance), what do you do? Every time the port changes you need to change the SPN allocated to the account. That's why it's advised to let SQL Server register the SPN itself. You may also have wondered what happens if you change your service account: won't that lead to two accounts with the same SPN? Possibly. Having two accounts with the same SPN is definitely a problem. Why? Because if there are two accounts, Kerberos can't identify the exact account that the service is running as (it could be either account), and so your security falls back to NTLM. SETSPN is useful for finding duplicate SPNs.

    Reading this you will probably be thinking, "Oh my goodness, this is really difficult." It is. However, I've found today, while investigating something else, that there is an easy option: use Network Service as your service account. Network Service is a special account and is tied to the computer. It appears that Network Service has the rights to update AD to set an SPN mapping for the computer account.
    This then allows the SPN mapping to work. I believe this also works for the local system account. To get all the SPNs in your AD, run the following; it could produce a large file, so you might want to restrict it to a specific OU or CN:

        ldifde -d "DC=<domain>" -l servicePrincipalName -F spn.txt

    You will read in the links below that you need SQL to register the SPN and how this is done:
    How to use Kerberos authentication in SQL Server - http://support.microsoft.com/kb/319723
    Using Kerberos with SQL Server - http://blogs.msdn.com/sql_protocols/archive/2005/10/12/479871.aspx
    Understanding Kerberos and NTLM authentication in SQL Server Connections - http://blogs.msdn.com/sql_protocols/archive/2006/12/02/understanding-kerberos-and-ntlm-authentication-in-sql-server-connections.aspx

    Summary: The only reason I personally know to use a domain account is when you can't get Kerberos to work and you want to do BULK INSERT or use another network service that requires access to a remote server. In this case you have to resort to using SQL authentication, and SQL Server uses its service account to access the remote service, and thus you need a domain account. You might need this if using some forms of replication. I've always found Kerberos awkward to set up and so have fallen back to this domain account approach. So, in summary, to get Kerberos to work, try using the Network Service or local system accounts. For a great post from Adam Saxton of the SQL Server support team, go to http://blogs.msdn.com/psssql/archive/2010/03/09/what-spn-do-i-use-and-how-does-it-get-there.aspx
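    For reference, here is a hedged sketch of how the SetSPN tool mentioned above is commonly used to inspect and register SPNs by hand. The domain and account names are placeholders, MSSQLSvc is the service class SQL Server registers, and the -X duplicate search is only available in newer versions of the tool.

        REM List the SPNs currently registered on a (hypothetical) SQL Server service account
        setspn -L MYDOMAIN\sqlservice

        REM Manually register the SPN for an instance listening on port 1433
        setspn -A MSSQLSvc/MySQL.mydomain.com:1433 MYDOMAIN\sqlservice

        REM Search for duplicate SPNs (newer setspn versions)
        setspn -X

    As the post stresses, though, it is usually better to let SQL Server register the SPN itself; manual registration mainly makes sense when the service account lacks the permission to do so.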

    Read the article

  • Simple-Talk development: a quick history lesson

    - by Michael Williamson
    Up until a few months ago, Simple-Talk ran on a pure .NET stack, with IIS as the web server and SQL Server as the database. Unfortunately, the platform for the site hadn’t quite gotten the love and attention it deserved. On the one hand, in the words of our esteemed editor Tony “I’d consider the current platform to be a “success”; it cost $10K, has lasted for 6 years, was finished, end to end in 6 months, and although we moan about it has got us quite a long way.” On the other hand, it was becoming increasingly clear that it needed some serious work. Among other issues, we had authors that wouldn’t blog because our current blogging platform, Community Server, was too painful for them to use. Forgetting about Simple-Talk for a moment, if you ask somebody what blogging platform they’d choose, the odds are they’d say WordPress. Regardless of its technical merits, it’s probably the most popular blogging platform, and it certainly seemed easier to use than Community Server. The issue was that WordPress is normally hosted on a Linux stack running PHP, Apache and MySQL — quite a difference from our Microsoft technology stack. We certainly didn’t want to rewrite the entire site — we just wanted a better blogging platform, with the rest of the existing, legacy site left as is. At a very high level, Simple-Talk’s technical design was originally very straightforward: when your browser sends an HTTP request to Simple-Talk, IIS (the web server) takes the request, does some work, and sends back a response. In order to keep the legacy site running, except with WordPress running the blogs, a different design is called for. We now use nginx as a reverse-proxy, which can then delegate requests to the appropriate application: So, when your browser sends a request to Simple-Talk, nginx takes that request and checks which part of the site you’re trying to access. Most of the time, it just passes the request along to IIS, which can then respond in much the same way it always has. However, if your request is for the blogs, then nginx delegates the request to WordPress. Unfortunately, as simple as that diagram looks, it hides an awful lot of complexity. In particular, the legacy site running on IIS was made up of four .NET applications. I’ve already mentioned one of these applications, Community Server, which handled the old blogs as well as managing membership and the forums. We have a couple of other applications to manage both our newsletters and our articles, and our own custom application to do some of the rendering on the site, such as the front page and the articles. When I say that it was made up of four .NET applications, this might conjure up an image in your mind of how they fit together: You might imagine four .NET applications, each with their own database, communicating over well-defined APIs. Sadly, reality was a little disappointing: We had four .NET applications that all ran on the same database. Worse still, there were many queries that happily joined across tables from multiple applications, meaning that each application was heavily dependent on the exact data schema that each other application used. Add to this that many of the queries were at least dozens of lines long, and practically identical to other queries except in a few key spots, and we can see that attempting to replace one component of the system would be more than a little tricky. However, the problems with the old system do give us a good place to start thinking about desirable qualities from any changes to the platform. 
    Specifically:
    Maintainability: the tight coupling between each .NET application made it difficult to update any one application without also having to make changes elsewhere.
    Replaceability: the tight coupling also meant that replacing one component wouldn't be straightforward, especially if it wasn't on a similar Microsoft stack. We'd like to be able to replace different parts without having to modify the existing codebase extensively.
    Reusability: we'd like to be able to combine the different pieces of the system in different ways for different sites.
    Repeatable deployments: rather than having to deploy the site manually with a long list of instructions, we should be able to deploy the entire site with a single command, allowing you to create a new instance of the site easily, whether on production, staging servers, test servers or your own local machine.
    Testability: if we can deploy the site with a single command, and each part of the site is no longer dependent on the specifics of how every other part of the site works, we can begin to run automated tests against the site, and against individual parts, both to prevent regressions and to do a little test-driven development.
    In the next part, I'll describe the high-level architecture we now have that hopefully brings us a little closer to these five traits.
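    As an illustration of the reverse-proxy arrangement described above, here is a minimal, hypothetical nginx configuration sketch in which requests under a blog path are passed to a WordPress backend while everything else is forwarded to the legacy IIS applications. The server names, addresses and paths are assumptions for the example, not Simple-Talk's actual configuration.

        # Hypothetical upstreams; the real addresses and ports will differ
        upstream legacy_iis {
            server 10.0.0.10:80;     # the existing IIS applications
        }
        upstream wordpress {
            server 10.0.0.20:8080;   # the new WordPress blogs
        }

        server {
            listen 80;
            server_name www.example.com;

            # Blog traffic is delegated to WordPress
            location /blogs/ {
                proxy_pass http://wordpress;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }

            # Everything else continues to be served by the legacy IIS stack
            location / {
                proxy_pass http://legacy_iis;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    The point, as in the article, is that nginx only routes requests, so each backend can be changed or replaced without the other one noticing.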

    Read the article

  • Help Prevent Carpal Tunnel Problems with Workrave

    - by Matthew Guay
    Whether for work or leisure, many of us spend entirely too much time on the computer every day. This puts us at risk of developing or aggravating Carpal Tunnel problems, but thanks to Workrave you can help avert these problems. Workrave helps prevent Carpal Tunnel problems by reminding you to get away from your computer periodically. Breaking up your computer time with movement can help alleviate many computer- and office-related health problems. Workrave helps by reminding you to take short pauses after several minutes of computer use, and longer breaks after continued use. You can also use it to keep yourself from using the computer for too much time in a day. Since you can change the settings to suit you, this can be a great way to make sure you're getting the breaks you need.

    Install Workrave on Windows: If you're using Workrave on Windows, download it (link below) and install it with the default settings. One installation setting you may wish to change is the startup. By default Workrave will run automatically when you start your computer; if you don't want this, you can simply uncheck the box and proceed with the installation. Once setup is finished, you can run Workrave directly from the installer, or you can open it from your Start menu by entering "workrave" in the search box.

    Install Workrave in Ubuntu: If you wish to use it in Ubuntu, you can install it directly from the Ubuntu Software Center. Click the Applications menu and select Ubuntu Software Center. Enter "workrave" into the search box in the top right corner of the Software Center, and it will automatically find it. Click the arrow to proceed to Workrave's page. This will give you information about Workrave; simply click Install to install Workrave on your system. Enter your password when prompted. Workrave will automatically download and install. When finished, you can find Workrave in your Applications menu under Universal Access.

    Using Workrave: Workrave by default shows a small counter on your desktop, showing the length of time until your next Micro break (30 second break), Rest break (10 minute break), and the maximum amount of computer usage for the day. When it's time for a micro break, Workrave will pop up a reminder on your desktop. If you continue working, it will disappear at the end of the timer. If you stop, it will start a micro-break, which will freeze most on-screen activities until the timer is over. You can click Skip or Postpone if you do not want to take a break right then. After an hour of work, Workrave will give you a 10 minute rest break. During this it will show you some exercises that can help eliminate eyestrain, muscle tension, and other problems from prolonged computer usage. You can click through the exercises, or skip or postpone the break if you wish.

    Preferences: You can change your Workrave preferences by right-clicking on its icon in your system tray and selecting Preferences. Here you can customize the time between your breaks, and the length of your breaks. You can also change your daily computer usage limit, and can even turn off the postpone and skip buttons on notifications if you want to make sure you follow Workrave and take your rests! From the context menu, you can also choose Statistics. This gives you an overview of how many breaks, prompts, and more were shown on a given day. It also shows a total Overdue time, which is the total length of the breaks you skipped or postponed. You can view your Workrave history as well by simply selecting a date on the calendar.
    Additionally, the Activity tab in the Statistics pane shows more info about your computer usage, including total mouse movement, mouse button clicks, and keystrokes.

    Conclusion: Whether you're suffering from Carpal Tunnel or trying to prevent it, Workrave is a great solution to help remind you to get away from your computer periodically and rest. Of course, since you can simply postpone or skip the prompts, you've still got to make an effort to help your own health. But it does give you a great way to remind yourself to get away from the computer, and especially for geeks, this may be something that we really need! Download Workrave
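    If you prefer the terminal to the Software Center, Workrave is also available in the standard Ubuntu repositories; assuming the package is simply named "workrave", a minimal install looks like this:

        # Install Workrave from the Ubuntu repositories (assumes the package name "workrave")
        sudo apt-get update
        sudo apt-get install workrave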

    Read the article

< Previous Page | 555 556 557 558 559 560 561 562 563 564 565 566  | Next Page >