Search Results

Search found 9156 results on 367 pages for 'partial match'.


  • SSH from Ubuntu server to Windows 2008 repeatedly asks for password

    - by jrizos
    I am trying to set up Git using SSH mode. The central Git repository is on a NAS device running Windows 2008 Server, and the user Git repository is on Ubuntu 12.04. When I try to SSH to the Windows machine, however, I am not able to get in. SSH keys are not set up, but I think the problem is even before that, since I can't get in just by providing the correct password. The output from the SSH command is below. Any help would be appreciated.

      dba@clpserv01:~$ ssh -v -l administrator clpnas
      OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012
      debug1: Reading configuration data /etc/ssh/ssh_config
      debug1: /etc/ssh/ssh_config line 19: Applying options for *
      debug1: Connecting to clpnas [***.***.***.***] port 22.
      debug1: Connection established.
      debug1: identity file /home/dba/.ssh/id_rsa type -1
      debug1: identity file /home/dba/.ssh/id_rsa-cert type -1
      debug1: identity file /home/dba/.ssh/id_dsa type -1
      debug1: identity file /home/dba/.ssh/id_dsa-cert type -1
      debug1: identity file /home/dba/.ssh/id_ecdsa type -1
      debug1: identity file /home/dba/.ssh/id_ecdsa-cert type -1
      debug1: Remote protocol version 2.0, remote software version OpenSSH_5.5p1 Debian-6+squeeze2
      debug1: match: OpenSSH_5.5p1 Debian-6+squeeze2 pat OpenSSH*
      debug1: Enabling compatibility mode for protocol 2.0
      debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1
      debug1: SSH2_MSG_KEXINIT sent
      debug1: SSH2_MSG_KEXINIT received
      debug1: kex: server->client aes128-ctr hmac-md5 none
      debug1: kex: client->server aes128-ctr hmac-md5 none
      debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
      debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
      debug1: Server host key: RSA bd:37:d1:98:51:2a:d6:b5:f5:c7:98:d8:74:2c:4e:cd
      debug1: Host 'clpnas' is known and matches the RSA host key.
      debug1: Found key in /home/dba/.ssh/known_hosts:1
      debug1: ssh_rsa_verify: signature correct
      debug1: SSH2_MSG_NEWKEYS sent
      debug1: expecting SSH2_MSG_NEWKEYS
      debug1: SSH2_MSG_NEWKEYS received
      debug1: Roaming not allowed by server
      debug1: SSH2_MSG_SERVICE_REQUEST sent
      debug1: SSH2_MSG_SERVICE_ACCEPT received
      debug1: Authentications that can continue: publickey,password,keyboard-interactive
      debug1: Next authentication method: publickey
      debug1: Trying private key: /home/dba/.ssh/id_rsa
      debug1: Trying private key: /home/dba/.ssh/id_dsa
      debug1: Trying private key: /home/dba/.ssh/id_ecdsa
      debug1: Next authentication method: keyboard-interactive
      Password:
      debug1: Authentications that can continue: publickey,password,keyboard-interactive
      Password:
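    Since the log shows publickey being skipped (no keys exist) and the server re-prompting on keyboard-interactive, setting up key authentication is worth trying first; it sidesteps the password path entirely. A minimal sketch, assuming the Windows side runs an OpenSSH-style daemon that honours authorized_keys (true for Cygwin-based OpenSSH builds, but not guaranteed for every NAS firmware):

      # Generate a key pair on the Ubuntu box (no passphrase, for testing)
      ssh-keygen -t rsa -f ~/.ssh/id_rsa
      # Append the public key to the administrator account's authorized_keys
      # (ssh-copy-id assumes the remote side has a POSIX-ish shell)
      ssh-copy-id administrator@clpnas
      # Retry with verbose output to confirm publickey is now attempted
      ssh -v -l administrator clpnas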


  • torrent downloads not showing on Squid log

    - by noobroot
    Hello, I have been working as a sysadmin for just a few months, so I still have lots to learn. The first thing I'd like to sort out is as follows: we have an OpenBSD 4.5 box acting as firewall, DNS, cache, etc. The box has two network cards, one connected directly to the internet and the other to our switch. I used to work with sarg for log analysis but then changed to the much faster free-sa. I use a daily free-sa report to check bandwidth usage and report our top 5 bandwidth consumers (#1 three days a week and you'll be buying the pizzas :D - we are a small company, ~20 people, so we are very familiar with each other). This was working really well until recently, when one of us needed to download some stuff via torrent (~3GB). Since the pizza rule only applies to non-work-related downloads, and he told me (verified) that his download was indeed work related, I was going to dismiss that 3GB from his quota. But to my surprise, the log didn't show that 3GB: his IP's consumption was only around 290MB. More recently, since the FIFA World Cup started, we know that some of the employees are watching the matches' streams. We know it and we don't care, since, as already stated, we are a small company and don't have restrictive policies; we can all chat, watch YouTube, and download anything we want, BUT we are only allowed 300MB a day, otherwise you end up on the top-5 pizza board. Anyway, that streaming consumption is also not showing up in the free-sa reports. So my question is: why is this data being excluded from the reports? I'm thinking the free-sa reports only list certain types of traffic, but I'm also wondering whether the Squid logs themselves are not, erm... logging these connections. Any help, guide, advice or clarification is appreciated.
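    A hedged explanation first: Squid only logs traffic that actually traverses the proxy, and BitTorrent or video streams normally go straight out through pf rather than through Squid, so they never reach access.log or the free-sa reports. A minimal sketch of accounting for that traffic at the firewall instead, using pf rule labels (the interface name is a placeholder; syntax assumed for OpenBSD 4.5-era pf):

      # /etc/pf.conf - attach a byte counter to all direct outbound traffic
      ext_if = "em0"
      pass out on $ext_if proto { tcp, udp } from any to any label "direct-out"

      # Read the counters periodically (name, evaluations, packets, bytes, ...)
      # pfctl -s labels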


  • ssh - "Connection closed by xxx.xxx.xxx.xxx" - using password

    - by Michael B
    I attempted to create a new user account that I wish to use to log in using SSH. I did this (in CentOS):

      /usr/sbin/adduser -d /home/testaccount -s /bin/bash user
      passwd testaccount

    This is the error I receive when trying to log in via SSH:

      ~/.ssh$ ssh -v [email protected]
      OpenSSH_5.1p1 Debian-5ubuntu1, OpenSSL 0.9.8g 19 Oct 2007
      debug1: Reading configuration data /etc/ssh/ssh_config
      debug1: Applying options for *
      debug1: Connecting to xxx.xxx.xxx [xxx.xxx.xxx.xxx] port 22.
      debug1: Connection established.
      debug1: identity file /home/user/.ssh/identity type -1
      debug1: identity file /home/user/.ssh/id_rsa type 1
      debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048
      debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048
      debug1: identity file /home/user/.ssh/id_dsa type -1
      debug1: Remote protocol version 2.0, remote software version OpenSSH_4.3
      debug1: match: OpenSSH_4.3 pat OpenSSH_4*
      debug1: Enabling compatibility mode for protocol 2.0
      debug1: Local version string SSH-2.0-OpenSSH_5.1p1 Debian-5ubuntu1
      debug1: SSH2_MSG_KEXINIT sent
      debug1: SSH2_MSG_KEXINIT received
      debug1: kex: server->client aes128-cbc hmac-md5 none
      debug1: kex: client->server aes128-cbc hmac-md5 none
      debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
      debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
      debug1: Host 'xxx.xxx.xxx.xxx' is known and matches the RSA host key.
      debug1: Found key in /home/user/.ssh/known_hosts:8
      debug1: ssh_rsa_verify: signature correct
      debug1: SSH2_MSG_NEWKEYS sent
      debug1: expecting SSH2_MSG_NEWKEYS
      debug1: SSH2_MSG_NEWKEYS received
      debug1: SSH2_MSG_SERVICE_REQUEST sent
      debug1: SSH2_MSG_SERVICE_ACCEPT received
      debug1: Authentications that can continue: publickey,gssapi-with-mic,password
      debug1: Next authentication method: gssapi-with-mic
      debug1: Unspecified GSS failure. Minor code may provide more information
      No credentials cache found
      debug1: Unspecified GSS failure. Minor code may provide more information
      No credentials cache found
      debug1: Unspecified GSS failure. Minor code may provide more information
      debug1: Next authentication method: publickey
      debug1: Offering public key: /home/user/.ssh/id_rsa
      debug1: Authentications that can continue: publickey,gssapi-with-mic,password
      debug1: Trying private key: /home/user/.ssh/identity
      debug1: Trying private key: /home/user/.ssh/id_dsa
      debug1: Next authentication method: password
      testaccount@xxx's password:
      Connection closed by xxx.xxx.xxx.xxx

    The "connection closed" message appeared immediately after entering the password (if I enter the wrong password, it waits and then prompts for another password). I am able to log in from the same computer using other accounts that were set up previously. When logged into the remote machine, I am able to do 'su testaccount'. Thanks for your time.
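    When sshd drops the connection immediately after a correct password, the reason is usually in the server-side log: a missing or wrongly owned home directory, an invalid shell, or a PAM/SELinux denial are common. A few hedged checks to run on the CentOS box (log paths assumed for a stock CentOS install):

      # Confirm the account entry and its shell
      grep testaccount /etc/passwd
      # Confirm the home directory exists and is owned by the user
      ls -ld /home/testaccount
      # Watch sshd/PAM messages while reproducing the failure
      tail -f /var/log/secure
      # Look for SELinux denials
      grep AVC /var/log/audit/audit.log | tail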


  • Calculating memory footprints using /proc/sysvipc/shm

    - by MarkTeehan
    This is for a SLES 10 database server. One of my servers runs three databases and three app servers; I am analyzing how their shared memory segments grow and shrink to avoid intermittent out-of-memory scenarios. "top" is not helpful for this, since its calculations for RES and VIRT are inconsistent. Instead, I am matching up the contents of /proc/sysvipc/shm with the memory usage reported by the database admin console. I do this by saving the contents of /proc/sysvipc/shm and then totaling up "bytes" for all of the segments belonging to the offending userid. This is a large server with hundreds of segments and tens (or hundreds) of GB of allocated memory per userid. However, it doesn't match up - the database management software claims to be using around 25% more memory than the total I calculate. Negligible swap space is in use, so I am ignoring that. I am running as root, so I am sure I see all shared memory segments. My question is: is all (significant) allocated memory recorded in /proc/sysvipc/shm, or does this cover only shared memory (and not "un-shared" memory)? If this approach is incorrect, what is the correct way to calculate the total allocated memory for each userid? Also: I believe doing a 'cat' on this file locks server IPC. I check it every 5 seconds - is it likely that this frequency could be problematic? Thanks! Mark Teehan, Singapore
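    For reference, a minimal sketch of the totaling step, assuming the stock /proc/sysvipc/shm layout in which size is the fourth column and uid the eighth (worth checking against the header line on your kernel):

      # Sum shared memory segment sizes (bytes) for one userid
      uid=1001   # hypothetical userid
      awk -v uid="$uid" 'NR > 1 && $8 == uid { total += $4 }
          END { printf "%.2f GB\n", total / (1024^3) }' /proc/sysvipc/shm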


  • RAID5 issue after replacing motherboard and upgrading firmware

    - by 8steve8
    OK, so I've had a 4x2TB (Samsung HD204UI, with the firmware patch) RAID 5 array working normally for about a month. It was on an H57 Gigabyte motherboard using Intel RAID, with Windows 7 x64. Today I got an Intel H67 motherboard, so I upgraded the Intel RAID drivers from 9.6.0.1014 to 10.1.0.1008; I'm not sure if I checked after a reboot, but it caused no problems. I swapped in the new H67 motherboard, and my array status was "Failed". Two of the four drives listed themselves as members, while the other two listed themselves as non-members. I tried going back to the old H57 motherboard and downgrading the RAID drivers, but the issue remains. It's not port dependent: two of the drives always come up as non-members regardless of what port or motherboard they are plugged into. This screenshot should show that the serial numbers match, which raises the question of why the software doesn't realize the drive is a member of the array. I'd like to know if anyone has experienced anything similar, and what I should do - can I force the drive to be recognized as a member (without wiping data)?


  • Sharepoint: authenticating users via forms authentication

    - by sbee
    My problem is the following (SharePoint newbie): I want to change the default zone from a Windows-authenticated zone to a forms-authenticated zone, thereby forcing the site collection administrator to log in via forms authentication and not Windows; the SharePoint users will also be accessing the site internally. My goal is to effectively replace Windows authentication with forms authentication, as my company does not have Active Directory installed. So far I have created an ASP.NET application that adds the users to the database; the database was created via the .NET Framework ASP.NET SQL registration tool (aspnet_regsql). However, when I change the default zone to the AspNetSqlMembershipProvider (forms) and attempt to add my site collection administrator via Central Administration, I get the error "No exact match found", as shown on the screenshot. My inkling is that the people picker is somehow failing to read the users from the database, but research into correcting that has so far proved fruitless. I have made all the relevant changes to the config files of these sites (Central Administration site, my test site, and the Add Users site); the changes cover the membership provider, connection string, and people picker. I left out the role provider for now, as it is optional. Help with this would be highly appreciated...
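    One hedged guess, based on common forms-based-authentication setups: the people picker also needs a wildcard entry for the SQL membership provider before it will return matches. A sketch of the web.config fragment, with the provider name assumed to be the same AspNetSqlMembershipProvider used above, added to each of the three web.config files already being edited:

      <PeoplePickerWildcards>
        <clear />
        <add key="AspNetSqlMembershipProvider" value="%" />
      </PeoplePickerWildcards>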


  • Virtualhost one https site, the rest http

    - by RJP1
    I have a Linode server with Apache2 running a handful of sites with virtual hosting. All sites work fine on port 80, and the one site that has an SSL certificate also runs OK. My problem is as follows: for the non-HTTPS sites, visiting https://domain.com shows the contents of the only secure site... Is there a way of disabling the *:443 match for these non-secure sites? Thanks! EDIT (more information): Here's a typical config in sites-available for a normal insecure HTTP site:

      <VirtualHost *:80>
          ServerName www.insecure.com
          ServerAlias insecure.com
          ...
      </VirtualHost>

    The secure HTTPS site is as follows:

      <VirtualHost *:80>
          ServerName www.secure.com
          Redirect permanent / https://secure.com/
      </VirtualHost>
      <VirtualHost *:80>
          ServerName secure.com
          RedirectMatch permanent ^/(.*) https://secure.com/$1
      </VirtualHost>
      <VirtualHost *:443>
          SSLEngine on
          SSLProtocol all
          SSLCertificateChainFile ...
          SSLCertificateFile ...
          SSLCertificateKeyFile ...
          SSLCACertificateFile ...
          ServerName secure.com
          ServerAlias secure.com
          ...
      </VirtualHost>

    So, visiting:

      http://insecure.com - works
      http://www.insecure.com - works
      http://secure.com - redirects to https://secure.com - works
      http://www.secure.com - redirects to https://secure.com - works
      https://insecure.com - shows https://secure.com - WRONG!
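    One common fix, sketched under the assumption of a single IP: add a catch-all *:443 vhost ahead of the real one, so HTTPS requests for hostnames without an SSL vhost hit the default instead of secure.com. The certificate paths are placeholders - the default vhost still needs some certificate, because the TLS handshake completes before Apache sees the Host header (so browsers get a name-mismatch warning rather than secure.com's content):

      # Listed first so it becomes the default for unmatched HTTPS requests
      <VirtualHost *:443>
          ServerName default.invalid
          SSLEngine on
          SSLCertificateFile /path/to/any/cert.pem
          SSLCertificateKeyFile /path/to/any/key.pem
          # Answer with 404 instead of the secure site's content
          Redirect 404 /
      </VirtualHost>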


  • Receiving a 500 Internal Server Error after login success?

    - by Jeremy Quick
    I created my first member login form, which takes the typical username and password and then sends them to the code below, checklogin.php:

      mysql_connect($db_host, $db_username, $db_password) or die(mysql_error());
      mysql_select_db($db_database) or die(mysql_error());
      $username = $_POST['username'];
      $password = $_POST['password'];
      $username = stripslashes($username);
      $password = stripslashes($password);
      $username = mysql_real_escape_string($username);
      $password = mysql_real_escape_string($password);
      $sql = "SELECT * FROM users WHERE username='$username' and password='$password'";
      $result = mysql_query($sql);
      $count = mysql_num_rows($result);
      if($count == 1) //ERROR APPEARS TO TAKE PLACE HERE
      {
          session_start();
          $_SESSION['username'] = $username;
          $_SESSION['password'] = $password;
          header('login_success.php');
      }
      else
      {
          header("location:login_fail.php");
      }

    If I type in the wrong information, everything works properly, so I know the error is taking effect in the marked if statement. I have been searching the internet for solutions, but none seem to match mine, or I am overlooking them. I've made a few changes that brought me to this point; before, I was receiving deprecation warnings. Also, I have checked the logs and they are empty of errors relating to this.
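    Two things in the success branch would be my hedged first guesses at the 500: header() is given a bare filename instead of a Location: header, and session_start() runs only inside the branch. A sketch of a corrected branch (file names as in the question):

      <?php
      // Start the session before any output is produced
      session_start();
      // ... connection and authentication query as above ...
      if ($count == 1) {
          $_SESSION['username'] = $username;
          // A redirect needs an explicit Location header
          header('Location: login_success.php');
          exit;
      } else {
          header('Location: login_fail.php');
          exit;
      }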


  • How do I recover drivers from another hard disk

    - by Carl
    The drivers for a Cardbus (PCMCIA) card that gives me 2 USB 2.0 ports are on the hard disk from my old laptop. I have lost the driver CD. I have a way to get files from that other hard disk. Which files do I need? The drivers for the card used to be on the following website - the information is still there, except the download links don't work: http://www.ht-link.com/en/DownView.asp?ID=10 - The drivers I need are the first listing - The Win XP drivers for the HT-112NEC. My e-mails to them have not been answered. The information on this card is here http://www.ht-link.com/en/ProductView.asp?ID=106 I already tried connecting that other drive to my new laptop (via USB) and adding the drive to the search criteria when selecting update driver in the Device Manager. It says there isn't a better match, and if I select manual the matching device is not listed. (I don't think "manual" sees drivers on the external hard disk - but only ones on the main drive and/or found listed in the registry.) I would try 'have disk' if I knew exactly what file to point to on the external drive. The drivers are on that hard disk - I installed them there, and used that card on that computer. The new laptop has Windows XP Pro SP3, the old one had Pro SP2 Thanks for any help.


  • file read performance degrades as number of files increases

    - by bfallik-bamboom
    We're observing poor file-read I/O results that we'd like to better understand. We can use fio to write 100 files with a sustained aggregate throughput of ~700MB/s. When we switch the test to read instead of write, the aggregate throughput is only ~55MB/s. The drop seems related to the number of files, since read and write throughput are comparable for a single file and then diverge proportionally as we increase the number of files. The test server has 24 CPU cores, 48GB of memory, and is running CentOS 6.0. The disk hardware is a RAID 6 array with 12 disks and a Dell H800 controller. The device is partitioned with ext4 using the default settings. Increasing the readahead (using blockdev) improves the read throughput significantly, but it still doesn't match the write speed. For instance, increasing the readahead from 128KB to 1M improved the read throughput to ~145MB/s. Is this a known performance issue in our OS/disk/filesystem configuration? If so, how can we tell? If not, what tools or tests can we use to further isolate the issue? Thanks.
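    For anyone reproducing this, a hedged sketch of the kind of fio jobs that expose the gap (sizes and paths are illustrative, not the poster's exact job file):

      # Write phase: 100 concurrent sequential writers, one file each
      fio --name=multiwrite --directory=/mnt/raid --rw=write --bs=1M \
          --size=1G --numjobs=100 --ioengine=libaio --direct=1 --group_reporting

      # Read phase: identical layout, only the direction changes
      fio --name=multiread --directory=/mnt/raid --rw=read --bs=1M \
          --size=1G --numjobs=100 --ioengine=libaio --direct=1 --group_reporting

      # Raise readahead from 128KB (256 sectors) to 1MB (2048 sectors)
      blockdev --setra 2048 /dev/sdX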


  • Is there any reason this cronjob would fail in cron, but not on the command line?

    - by Treffynnon
    I have written a little one-liner that will email me when a list of files changes - I used sha512sum to generate a list of hashes and periodically check that those hashes still match:

      */5 * * * * /usr/bin/sha512sum --status -c /sha512.sumlist && echo "Success" > /dev/null || echo "Check robots.txt and index.html in /var/www as staging sites are now potentially exposed to the world and the damned googlebot" | /usr/bin/mail -s "Default staging server files have changed" [email protected]

    It works fine on the command line with:

      /usr/bin/sha512sum --status -c /sha512.sumlist && echo "Success" > /dev/null || echo "Check robots.txt and index.html in /var/www as staging sites are now potentially exposed to the world and the damned googlebot" | /usr/bin/mail -s "Default staging server files have changed" [email protected]

    As soon as I run it as a cron job, though, it emails the failure message every time it runs, instead of only when the sha512sum check fails. Is there something silly I have missed in a rush? I forgot to mention that I am running an Ubuntu machine.
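    A hedged debugging step: cron runs the job with a minimal environment and no terminal, and mails any output, so capturing exactly what the job sees usually shows the difference from the interactive shell (output paths are placeholders):

      # Temporarily log the check's real output and exit status from cron
      */5 * * * * /usr/bin/sha512sum -c /sha512.sumlist > /tmp/sha512.out 2>&1; echo "exit=$?" >> /tmp/sha512.out
      # Compare cron's environment with your login shell's
      * * * * * env > /tmp/cron-env.txt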


  • Apache with multiple domains, single IP, VirtualHost is catching the wrong traffic

    - by apuschak
    I have a SOAP web service I am providing on an Apache web server. There are 6 different clients (IPs) that request data, and 3 of them are hitting the wrong domain. I am trying to find a way to log which domain name the requests are coming from. Details: ServerA is the primary and ServerB is the backup; domain1.com is the domain the web service is on, and domain2.com is a separate domain that serves separate content on ServerB. ServerA is standalone for now, with its own IP and DNS entry for domain1.com. This works for everyone. ServerB is a backup for the web service, but it already hosts domain2.com. I added entries into the Apache configuration file like:

      <VirtualHost *:443>
          ServerName domain2.com
          DocumentRoot /var/www/html/
          CustomLog logs/access_log_domain2443 common
          ErrorLog logs/ssl_error_log_domain2443
          LogLevel debug
          SSLEngine on
          ... etc SSL directives ...
      </VirtualHost>

    I have these for both 80 and 443 for domain1 and domain2, with domain1 second. The problem is that when we switch DNS for domain1 from ServerA to ServerB, 3 out of the 6 clients show up in the debug logs as hitting domain2.com instead of domain1.com, and their web service requests fail, because domain2.com is first in the Apache configuration file and catches all requests that don't match other virtual hosts, namely domain1.com. I don't know if they are hitting www.domain1.com, domain1.com (although I added entries for both), the external IP address, or something else. Is there a way to see which URL they are hitting, not just the page request, or some other way to see why the first domain is catching traffic meant for the second listed domain? In the meantime, I've put domain1.com higher in the Apache configuration than domain2.com. Now it catches the requests for all clients and works; however, I don't know what it is catching, and I would like to make domain2.com the first entry again, with a correct entry for domain1.com covering however they are hitting it. Thank you for your help! Andrew
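    To answer the "which hostname did they actually request" question directly, the Host header each client sent can be written into the access log; a hedged sketch (the format string is illustrative):

      # Record the Host: header alongside the usual combined-log fields
      LogFormat "%h %l %u %t \"%{Host}i\" \"%r\" %>s %b" host_debug
      CustomLog logs/host_debug_log host_debug

    Clients that connect by bare IP address (no Host header, or the IP as the Host) will then stand out immediately.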


  • How can I have vhosts with Lighttpd on Windows and keep PHP through mod_cgi?

    - by Pixelastic
    Hello, I installed Lighty on Windows 7 and managed to get it to correctly serve both static and PHP files (through mod_cgi). At first I got the "No input file selected" message when requesting a .php file, so I updated the doc_root value in my php.ini to match the server.document-root defined in my Lighty config, and PHP stopped complaining. Then I defined a vhost to point all foo.com requests to a specific directory. It worked well for all static files, but when requesting a .php file, mod_cgi was still picking up files from the doc_root defined in php.ini, not from the directory I defined for server.document-root in my vhost. I know it's what's supposed to happen - PHP follows the config defined in php.ini, and I have to set this value in my php.ini, otherwise no PHP is processed at all. What I don't understand is how I'm supposed to have virtual hosts with mod_cgi enabled here. I tried adding a [HOST=foo.com] section in the php.ini without any luck. I tried mod_fastcgi but couldn't get it to work at all; I also tried mod_simple_vhost but couldn't get it to handle PHP. I managed to get it working by copying my PHP install to another directory (and changing the doc_root value) and adding a cgi.assign pointing to that install in my vhost, but this is a really hackish way: it means having one PHP install for each virtual host. Note that I'm working on a development machine running Windows; this is not a production server. I just wanted to emulate the final server config locally to test some changes. I googled this problem a lot, but all I can find are people installing Lighty on Windows with mod_cgi, or installing Lighty on Windows with virtual hosts - never anyone who managed to get both.
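    For reference, the usual way around the doc_root coupling is to serve PHP over FastCGI, where the server passes SCRIPT_FILENAME per request instead of PHP consulting doc_root. A hedged sketch of the lighttpd.conf fragment (the php-cgi path and port are placeholders; doc_root should then be removed from php.ini):

      server.modules += ( "mod_fastcgi" )
      fastcgi.server = ( ".php" => ((
          "host" => "127.0.0.1",
          "port" => 9123
      )))

    Since spawning child processes from lighttpd is unreliable on Windows, php-cgi can be started separately and left listening: php-cgi.exe -b 127.0.0.1:9123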


  • Windows 7 product key, which is the valid one - in registry or on a sticker?

    - by me how
    I am not too familiar with software licensing and how it all works, but I have a question regarding Windows 7 and partially Office - generally, Microsoft products. I have been asked to assist our IT guy, who wants to collect all the product keys for Windows 7 and Office. I wasn't given many details about how to go about it or how to collect them. After a bit of research, I decided to use a freeware tool that pulls the software licenses out of the registry; I thought that was the easiest approach and would provide the most accurate product keys. I used Belarc Advisor to obtain all the information. It turns out that about 25 machines use the same product key. I asked whether the company has bought a commercial license or something, but there isn't anyone available at the moment who could answer my question. I told the IT guy that there are 25 machines using the same product key and asked if that is alright. He told me to go around and write down the product keys from the sticker (label) on each machine. I am just not quite sure that's the right approach, especially since the numbers do not match... So, now that I see the numbers aren't matching, my question is: in terms of software licensing, which is the valid and correct product key to provide if ever questioned about a software license? Is it the number on the sticker, or the number stored in the registry?


  • Understanding where an Amazon EC2 instance runs

    - by kenzo450D
    I am currently using the AWS API from my local desktop. I can successfully take backups of my Amazon volumes and even create an AMI from them. Now, when I run an instance built from this AMI, where does the instance run - in their Elastic Compute Cloud, or on the computer from which the command was issued? And suppose I want to create the new instance in a new region (locations as defined in ec2-describe-regions) - how would I do that? It seems I have a poor grasp of the relation between Amazon volumes and instances; please explain it. I am only allowed to use the CLI tools to do all of my work. I made a new snapshot of the existing instance, made an AMI using ec2-register, made a keypair, and then followed these steps: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-an-instance.html#launching-an-instance-cli - but I got this error: Client.InvalidParameterValue: The requested instance type's architecture (i386) does not match the architecture in the manifest for aki-fc37bacc (x86_64). My local computer is 32-bit, but I do not want to run the instance on the local computer - I want it on Amazon's servers.
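    For what it's worth, instances always run in Amazon's cloud, never on the machine issuing the API calls; the error above is an architecture mismatch between the chosen instance type (i386) and the registered kernel image (x86_64), not anything to do with the local 32-bit desktop. A hedged sketch with the classic EC2 API tools (the AMI ID, region, and keypair name are placeholders; the instance type just needs to be one whose architecture is x86_64):

      # Launch in an explicit region with an x86_64-capable instance type
      ec2-run-instances ami-xxxxxxxx --region eu-west-1 -t m1.large -k my-keypair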


  • Live Messenger SimilarityTable2 file

    - by adrianbanks
    I am trying to free up some space on my laptop's hard disk and am using a tool (SpaceMonger) that will show me a treemap of the whole disk. The problem I have comes from Live Messenger's SimilarityTable2 file. I have no idea what it is for, but I know that it is a sparse file, meaning that it shows as taking up 8GB of disk space, but actually only takes up 132KB of space on disk. The problem is that because SpaceMonger thinks this file is 8GB, it swamps the other files and takes up most of the treemap, making it hard to see the other files that really are large. Is this file safe to delete? If not, how do I make its actual size on disk match its reserved size? If that's not possible, how can I make SpaceMonger (or another treemap tool) use the real size of the file and not the reserved size? EDIT: I've just realised that I have some NTFS junctions set up, meaning that the same set of directories appear twice. Is there any way to stop this happening as well?
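    To confirm what SpaceMonger is reporting, Windows can show the sparse flag and the actually allocated ranges directly; a minimal sketch from an elevated command prompt (the path to SimilarityTable2 is a placeholder, since it varies per profile):

      :: Is the file flagged sparse?
      fsutil sparse queryflag "C:\path\to\SimilarityTable2"
      :: Which byte ranges are actually allocated on disk?
      fsutil sparse queryrange "C:\path\to\SimilarityTable2"

    If SpaceMonger can't be told to use allocated size, a treemap tool that reads size-on-disk rather than logical size would avoid both the sparse-file and the junction double-counting problems.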


  • Are there tools available for trimming PDF margins?

    - by Charles Duffy
    I have an ebook I'm trying to read in PDF format on a Kindle. Unfortunately, the page headers and footers have some content (page number and copyright info, respectively) preventing the device from scaling the actual text to match its usable area viewing area, thus leaving the actual content too small to read. Various tools are available which will trim off whitespace, but the Kindle already does this; my goal, by contrast, is to remove printed matter outside of a defined bounding box, and the only tool I've found for the purpose is moderately expensive commercial software. I could probably generate a mask in Inkscape; split out the individual pages using pdftk, apply the mask to each page individually (outputting to postscript), and recombine the numerous postscript files into a single PDF. However, this decode/reencode steps would be pretty unfortunate in terms of document size; something able to operate with a bit more finesse would be ideal. I have all major operating systems handy (Windows, several modern Linux distros, a Mac, etc) so solutions don't need to be constrained by platform. Suggestions? (I've reported the issue to the author, who mentioned it to his editor, who hasn't done anything about the issue over the course of more than a month, making the zero-work approach evidently nonproductive).
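    One approach worth sketching here: Ghostscript can stamp a CropBox onto every page via pdfmark, which hides the headers and footers without rasterizing anything (it does rewrite the file through pdfwrite, so it is not strictly lossless, but text stays text). The coordinates are in points and are placeholder values to be measured from an actual page:

      # Crop all pages to the box (llx lly urx ury) = (36 54 576 738)
      gs -o trimmed.pdf -sDEVICE=pdfwrite \
         -c "[/CropBox [36 54 576 738] /PAGES pdfmark" \
         -f input.pdf

    The Kindle should then treat the cropped box as the page edge, since the page number and copyright line fall outside it.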


  • screen scraper templates for various websites

    - by intuited
    I'm looking specifically for a convenient way to locally archive posts from this and other similar sites. I'd like to separate the question itself from the answers, or maybe crop the question and store it, keeping the page title. Obviously I don't need to store the menu or the various other pieces of site interface chrome. The best way to do this would seem to be to associate an XSLT template with a match on the URL and use that template to pull out the relevant information and format it. My two-part question: Is there a tool specifically built for this task? I.e., something that takes a URL, checks it against a map of path-matching expressions to templates, and outputs the result of applying the template to that resource? xmlto seems to be most of the way there, and could probably just be called from a script that does the pattern matching, but something already integrated would be more convenient. Is such a URL_pattern-to-XSLT_template map publicly available somewhere? Question 2.5: Is it legal to do this with sites like this one that have public licenses on their content?
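    On the first part, a minimal sketch of the dispatcher described above - a bash case statement mapping URL patterns to stylesheets, with xsltproc's --html mode doing the tag-soup parsing (the stylesheet names are hypothetical):

      #!/bin/bash
      # Map a URL to an XSLT template, fetch the page, and transform it.
      url="$1"
      case "$url" in
        *stackoverflow.com/questions/*) xsl="question.xsl" ;;
        *serverfault.com/questions/*)   xsl="question.xsl" ;;
        *) echo "no template for $url" >&2; exit 1 ;;
      esac
      curl -s "$url" | xsltproc --html "$xsl" -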


  • /etc/hosts: What is loghost? (fresh install of Solaris 10 update 9)

    - by cjavapro
      #
      # Internet host table
      #
      ::1 localhost
      127.0.0.1 localhost
      XX.XX.XX.XX myserver loghost

    What is the purpose of loghost? If it were not for having loghost in there, all the /etc/hosts files on all the servers in this particular network could be identical.

    Edit: I looked at /etc/syslog.conf:

      #ident "@(#)syslog.conf 1.5 98/12/14 SMI" /* SunOS 5.0 */
      #
      # Copyright (c) 1991-1998 by Sun Microsystems, Inc.
      # All rights reserved.
      #
      # syslog configuration file.
      #
      # This file is processed by m4 so be careful to quote (`') names
      # that match m4 reserved words. Also, within ifdef's, arguments
      # containing commas must be quoted.
      #
      *.err;kern.notice;auth.notice           /dev/sysmsg
      *.err;kern.debug;daemon.notice;mail.crit  /var/adm/messages
      *.alert;kern.err;daemon.err             operator
      *.alert                                 root
      *.emerg                                 *
      # if a non-loghost machine chooses to have authentication messages
      # sent to the loghost machine, un-comment out the following line:
      #auth.notice ifdef(`LOGHOST', /var/log/authlog, @loghost)
      mail.debug ifdef(`LOGHOST', /var/log/syslog, @loghost)
      #
      # non-loghost machines will use the following lines to cause "user"
      # log messages to be logged locally.
      #
      ifdef(`LOGHOST', ,
      user.err                                /dev/sysmsg
      user.err                                /var/adm/messages
      user.alert                              `root, operator'
      user.emerg                              *
      )

    Very interesting. When shutting down, alerts go to all users, probably through "*.emerg *". Looking at ifdef, it seems that the first parameter checks whether the current machine is a loghost, the second parameter is what to do if it is, and the third parameter is what to do if it is not.

    Edit: If you want to test a logging rule, you can use "svcadm restart system-log" to restart the logging service and then logger -p notice "test" to send a test log message, where notice can be replaced with any type such as user.err, auth.notice, etc.


  • Exchange 2010 SP2 database size

    - by Chad
    I have a single Exchange 2010 SP2 environment with 3 DB stores. I am trying to reduce their sizes by moving the mailboxes to a spare DB and then deleting the empty database. I cleaned up the users' mailboxes to reduce the sizes, set the retention periods to 1 day each, and waited several days before moving mailboxes. The databases are backing up fine and clearing log files, but when I move the mailboxes I noticed they were taking a long time, even though some were less than 100MB. When I checked the new database size, it seems like the original mailbox size might be what is moving (1GB instead of 100MB). Exchange shows the expected smaller mailbox sizes when I run Get-MailboxStatistics against the DB. So if I have 5 mailboxes of 100MB each, the database is around 3GB instead of around 500MB, with no whitespace. I keep waiting, thinking maybe the retention period has not expired yet, but it is already much longer than 1 day. I am setting them both to 0 today to see if that works. What am I missing to get the combined mailbox sizes to match the DB size minus whitespace?


  • Why doesn't the value in /proc/meminfo seem to map exactly to the system RAM?

    - by Eric Asberry
    The values in /proc/meminfo for MemTotal don't make sense. Eyeballed by a human, the value roughly corresponds to the installed RAM, but for an automated utility that reports the installed RAM it is inexact and inconsistent. For a system with 1GB of RAM, I would expect the MemTotal line to have a value of 1048576 (1024*1024). Instead, I'm seeing 1029392. On another 4GB box, I'm seeing 3870172, which is not a multiple of 1024, nor even close to 1029392*4. On an 8GB box, I get 8128204, which again seems to have no correlation to the other values, nor is it a multiple of 1024. I'm trying to use this information to report the RAM on a status web page. My work-around is to just "round" it to the nearest 1GB multiple, but I'd like to understand why these values seem inconsistent and don't match my expectations. Can somebody fill me in on what I'm missing here? EDIT: To expand on the accepted answer below... the reference can be found here. Also of interest to me from that page, which explains the inconsistency, is this bit: "meminfo: Provides information about distribution and utilization of memory. This varies by architecture and compile options."
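    A hedged way to see where the "missing" memory goes, under the assumption (per the kernel docs quoted above) that MemTotal is physical RAM minus what the kernel and firmware reserve at boot:

      # MemTotal as the kernel reports it
      grep MemTotal /proc/meminfo
      # The boot log records raw vs. available memory at startup
      dmesg | grep -i "memory:"
      # The BIOS e820 map shows physical ranges reserved by firmware
      dmesg | grep -i e820 | head

    The reserved amount depends on the kernel build and the hardware, which is why it is neither a fixed offset nor a clean multiple of 1024 across boxes.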


  • Limited bandwidth and transfer rates per user.

    - by Cx03
    I searched for a while but couldn't find anything concrete; hopefully someone can help me. I'm going to be running a Debian server on a gigabit port and want to give each user his/her fair share of internet access. The first objective is easy: transfer rates (speed) per user. From what I've looked at, iptables/Shorewall could do the job easily. Is this easy to set up, or could one of you point me at a config? I was hoping to limit users to 300mbit or 650mbit each. The second objective gets complicated. Due to the usage of the boxes, most of the traffic will be internal network traffic that does NOT count toward the quota. However, I still need to limit the external traffic, and if users go over, cut off access (or throttle their traffic to a very low speed, say 10mbit). Let's say the user has a 3TB external traffic limit. The IF part is: if the hostname they are exchanging the traffic with does NOT match .ovh. or .kimsufi. (the company owns multiple TLDs), count it toward the quota. Once said quota exceeds 3TB, choke them. Where could I find a system to count that for me? It would also need to reset, or be able to be manually reset, on a monthly basis. Thanks ahead of time!
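    On the rate side, a hedged sketch of per-user shaping with tc's HTB (one class per user IP; the device name and IPs are placeholders), plus the iptables quota match for the hard cap. The hostname-based exemption would still need the OVH/Kimsufi netblocks resolved to IP ranges, since netfilter cannot match DNS names on established flows:

      # Root HTB qdisc on the egress interface
      tc qdisc add dev eth0 root handle 1: htb
      # 300mbit class for one user's IP
      tc class add dev eth0 parent 1: classid 1:10 htb rate 300mbit
      tc filter add dev eth0 parent 1: protocol ip u32 \
          match ip src 192.0.2.10/32 flowid 1:10

      # Count external traffic toward a 3TB quota, then drop what exceeds it
      iptables -A FORWARD -s 192.0.2.10 -m quota --quota 3298534883328 -j ACCEPT
      iptables -A FORWARD -s 192.0.2.10 -j DROP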


  • Subversion COPY/MOVE - File not found: transaction 'XXX-XX'

    - by theplatz
    I'm attempting to create a branch in one of my Subversion repositories and keep running into an error. No matter what I do, I keep getting the following: File not found: transaction '3062-2e6', path '/Software/XXXXXX/branches/testbranch'. I've noticed that the first part of the transaction ID in the message is the last successfully committed revision in the repository. My Apache logs don't give much more information:

      [Wed Nov 24 14:10:38 2010] [error] [client x.x.x.x] Could not MOVE/COPY /svn/p070361/!svn/bc/3049/Software/SXXXXXX/trunk. [404, #0]
      [Wed Nov 24 14:10:38 2010] [error] [client x.x.x.x] Unable to make a filesystem copy. [404, #160013]
      [Wed Nov 24 14:10:38 2010] [error] [client x.x.x.x] File not found: transaction '3059-2e2', path '/Software/XXXXXX/branches/testbranch' [404, #160013]

    This is all happening on a server with an nginx frontend that proxies to Apache for the Subversion bits. Other repositories are able to branch fine, and I was able to create the branch using file:/// from the command line on the server where this is occurring. The permissions on this repository match every other repository, and disk space is not an issue.
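    A hedged first diagnostic, given that file:/// works while DAV commits fail (the paths and the Apache user are placeholders):

      # Verify repository integrity end to end
      svnadmin verify /var/svn/p070361
      # DAV commits run as the Apache user; leftover transactions or wrong
      # ownership under db/transactions can break MOVE/COPY over HTTP
      ls -l /var/svn/p070361/db/transactions
      chown -R www-data:www-data /var/svn/p070361

    If verify is clean and ownership is consistent, the nginx proxy is the next suspect: MOVE and COPY carry a Destination header that the frontend must pass through unmodified (and rewrite if it terminates SSL or changes the host).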


  • Locate devices within a building

    - by ams0
    The situation: our company is spread between two floors of a building. Every employee has a laptop (MacBook Air or MacBook Pro) and an iPhone. We have static DHCP mappings and DNS resolution, so every phone gets a name like employeeiphone.example.com and every laptop gets an employeelaptop.example.com name on the Ethernet interface (the wifi gets a dynamic IP from a small range dedicated for the purpose). We know each and every MAC address of the phones and laptops, since we do static DHCP mapping (the ISC DHCP server runs on Linux). On each floor we have a Netgear stack of two switches, connected to each other via 10GB fiber. No VLANs so far. On every floor there are 4 AirPort Extremes making a single SSID network with WPA2 authentication. The request: our CTO wants to know who is present on which floor. My solution (so far): every switch contains a table listing MAC addresses and originating ports. On each switch stack, all the MAC addresses coming from the other floor are listed as coming from port 48 (the fiber link). So I came up with: 1) get the table from each switch via SNMP; 2) filter out the ones associated with port 48; 3) grep dhcpd.conf, removing all entries not *laptop and not *iphone; 4) match the two lists for each switch, outputting JSON or XML; 5) present the results on a dashboard for all to see. I wrote it in bash with a lot of awk and sed. It kind of works, but for some reason I always have stale entries in the switch lookup tables, making it unreliable; some people may have put their laptop to sleep, their iPhones drop connections after a while if not woken up, and so on. I searched left and right, and we are prepared to spend a little on the project too (RFIDs?). Does anybody do something similar? I can provide the script if needed (although it's really specific to our switches and naming scheme). Thanks! P.S. Perhaps this is a question for Stack Overflow? Please move it if so.
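    For reference, a hedged sketch of the SNMP step using standard BRIDGE-MIB objects (the community string and hostname are placeholders; some staleness is inherent, since forwarding tables age entries on their own schedule and sleeping devices drop out regardless):

      # dot1dTpFdbPort: maps each learned MAC address to a bridge port
      snmpwalk -v2c -c public switch-floor1 1.3.6.1.2.1.17.4.3.1.2
      # dot1dTpAgingTime: how long the switch keeps idle entries (seconds)
      snmpget -v2c -c public switch-floor1 1.3.6.1.2.1.17.4.2.0

    Shortening the aging time, where the switches allow it, makes the table track presence more closely, at the cost of more flooding.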


  • apache/httpd responds slower under EL6.1 than EL5.6 (centos)

    - by daniel
    I've read through other threads on performance differences between RHEL6 and RHEL5, but none seem a tight match to mine. My issue manifests itself in slightly slower average response time (20ms) per request. I have about 10/10 servers of the same hardware spec with Cent6.1 and Cent5.6. The issue is consistent across the group. I am running Ruby on Rails with Passenger. Apache config is identical (checked out from the same SVN repo) Ruby and Passenger are identical builds. Application is identical and being served traffic round robin. mod_worker An interesting clue from server-status: The Cent6.1 servers have a steady 20-40 threads in the "Reading Request" state while the Cent5.6 servers have around 1. I'm graphing this so I can see it trend over time. I also have a bunch of much newer machines that are significantly faster and are running Cent6.1. They dust all the older machines in response time, but I can see they also have a steady 20-40 threads in the "Reading Request" state. This makes me believe I can get their response time down, if I can figure out what is holding up these requests. My gut is telling me that I need to tune some network setting in sysctl, but I haven't figured it out yet. Help is appreciated.

