Search Results

Search found 5419 results on 217 pages for 'warning'.


  • Registry remotely hacked on Windows 7, need help tracking the perp

    - by user577229
    I was writing some .VBS code at the office that would allow certain file extensions to be downloaded without a warning dialog on a Win7 x32 system. The system I was writing this on is in a lab on a segmented subnet. All web access is via a proxy server. The only means of accessing my machine is via the internet or from within the lab's MSFT AD domain. While writing and testing my code I found a message of sorts. Upon refreshing the registry to verify my code had changed a DWORD, the message HELLO was written instead, visible in regedit where the DWORD value was expected. I took a screenshot and proceeded to edit my code. This same weird behavior occurred the last time I was writing registry code, except on another internal server. I understand that remote registry access exists for Windows systems. I will block this immediately once I return to the office. What I want to know is: can I trace who made this connection? How would I do this? I suspect the cause of this is also behind other "odd" behaviors I'm experiencing at work, such as losing control of my Input Director master control for over an hour, and unchanged code that all of a sudden fails for no logical reason. These failures occur at funny times, whenever I'm about to give a demonstration of my test code. I know this sounds crazy, but knowledge of the registry component makes this believable. Once the registry can be accessed, the entire system is compromised. Any help or sanity checking is appreciated.

    Read the article

  • NRPE: Unable to read output with check_connections plugin

    - by Wlodzimierz
    I'm using a plugin that warns or goes critical on the number of established connections. If I run it on the local machine it works:

        root@graber:/usr/lib/nagios/plugins# ./check_connections -w 1 -c 5 -C sshd
        CRITICAL Established connections: 6

    I know, I ran it as root. But the rights on the file look fine:

        root@graber:/usr/lib/nagios/plugins# ls -all check_connections
        -rwxr-xr-x 1 nagios nagios 5459 2012-07-06 10:19 check_connections

    /etc/sudoers:

        Defaults env_reset
        root    ALL=(ALL:ALL) ALL
        %admin  ALL=(ALL) ALL
        nagios  ALL=(ALL) NOPASSWD: /usr/bin/lsof
        nagios  ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/

    /etc/nagios/nrpe.cfg:

        nrpe_user=nagios
        nrpe_group=nagios
        dont_blame_nrpe=1
        command_prefix=/usr/bin/sudo
        command[check_connections]=/usr/lib/nagios/plugins/check_connections -w 1 -c 5 -C sshd

    Log from the remote host:

        2012-07-06T11:12:49+02:00 graber nrpe[25928]: Handling the connection...
        2012-07-06T11:12:49+02:00 graber nrpe[25928]: Host address is in allowed_hosts
        2012-07-06T11:12:49+02:00 graber nrpe[25928]: Host is asking for command 'check_connections' to be run...
        2012-07-06T11:12:49+02:00 graber nrpe[25928]: Running command: /usr/lib/nagios/plugins/check_connections -w 1 -c 5 -C sshd
        2012-07-06T11:19:11+02:00 graber nrpe[26100]: Return Code: 2, Output: NRPE: Unable to read output

    Why is this happening? I'm out of ideas; I've searched Google for two days now.
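    A quick way to see what NRPE actually runs is to execute the exact command line as the nagios user; this is a debugging sketch, assuming the sudo setup and plugin path shown above (a password prompt or "sudo: sorry, you must have a tty" here is a common cause of "Unable to read output"):

        # reproduce the NRPE invocation as the nagios user; if sudo
        # complains about a tty, check for "Defaults requiretty" in sudoers
        su -s /bin/sh -c "/usr/bin/sudo /usr/lib/nagios/plugins/check_connections -w 1 -c 5 -C sshd" nagios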

    Read the article

  • Arch: OpenLDAP authentication failure

    - by nonus25
    I set up OpenLDAP and everything looks fine, but I can't get authentication working.

        # getent shadow | grep user
        user:*:::::::
        tuser:*:::::::
        tuser2:*:::::::
        # getent passwd | grep user
        git:!:999:999:git daemon user:/:/bin/bash
        user:x:10000:2000:Test User:/home/user/:/bin/zsh
        tuser:x:10000:2000:Test User:/home/user/:/bin/zsh
        tuser2:x:10002:2000:Test User:/home/tuser2/:/bin/zsh

    From root I can log in as one of these users:

        # su - tuser2
        su: warning: cannot change directory to /home/tuser2/: No such file or directory
        10:24 tuser2@juliet:/root

    But I can't log in via ssh, and passwd is not working either:

        # ldapwhoami -h 10.121.3.10 -D "uid=user,ou=People,dc=xcl,dc=ie"
        ldap_bind: Server is unwilling to perform (53)
                additional info: unauthenticated bind (DN with no password) disallowed
        10:30 root@juliet:~
        # ldapwhoami -h 10.121.3.10 -D "uid=user,ou=People,dc=xcl,dc=ie" -W
        Enter LDAP Password:
        ldap_bind: Invalid credentials (49)

    The password I typed is correct. These are some parts of /etc/openldap/slapd.conf:

        access to dn.base="" by * read
        access to dn.base="cn=Subschema" by * read
        access to * by self write by users read by anonymous read
        access to * by dn="uid=root,ou=Roles,dc=xcl,dc=ie" write by users read by anonymous auth
        access to attrs=userPassword,gecos,description,loginShell by self write
        access to attrs="userPassword" by dn="uid=root,ou=Roles,dc=xcl,dc=ie" write by anonymous auth by self write by * none
        access to * by dn="uid=root,ou=Roles,dc=xcl,dc=ie" write by dn="uid=achmiel,ou=People,dc=xcl,dc=ie" write by * search
        access to attrs=userPassword by self =w by anonymous auth
        access to * by self write by users read

        database hdb
        suffix  "dc=xcl,dc=ie"
        rootdn  "cn=root,dc=xcl,dc=ie"
        rootpw  "{SSHA}AM14+..."

    And /etc/openldap/ldap.conf looks like:

        BASE        dc=xcl,dc=ie
        URI         ldap://192.168.10.156/
        TLS_REQCERT allow
        TIMELIMIT   2

    So my question is: what am I missing that stops LDAP from letting me log in with a password?
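    One way to narrow this down is to confirm that the entry's userPassword attribute is actually populated; if it is missing, every password bind will fail with "Invalid credentials" no matter what is typed. A diagnostic sketch, assuming the DNs shown above:

        # bind as the rootdn and inspect the user's entry; an empty
        # userPassword here means passwd/ssh binds can never succeed
        ldapsearch -x -H ldap://10.121.3.10 -D "cn=root,dc=xcl,dc=ie" -W \
            -b "ou=People,dc=xcl,dc=ie" "(uid=user)" userPassword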

    Read the article

  • Get-EventLog issue

    - by Jim B
    I wanted a quick report of some log entries I saw on a server, so I ran:

        Get-EventLog -LogName system -Newest 10 -ComputerName fs1 | fl

    I got events back, however the descriptions were all wrong. Here's an example:

        Index              : 1260055
        EntryType          : Warning
        InstanceId         : 2186936367
        Message            : The description for Event ID '-2108030929' in Source 'W32Time' cannot be found. The local computer may not have the necessary registry information or message DLL files to display the message, or you may not have permission to access them. The following information is part of the event: 'time.windows.com,0x1'
        Category           : (0)
        CategoryNumber     : 0
        ReplacementStrings : {time.windows.com,0x1}
        Source             : W32Time
        TimeGenerated      : 1/25/2010 10:43:31 AM
        TimeWritten        : 1/25/2010 10:43:31 AM
        UserName           :

    Note that if I pull the event ID property it's correct (in this case 38). Is this a known issue, or is something wrong? The messages resolve fine via Event Viewer, both locally and remotely. Here is the PowerShell host version info:

        Name             : ConsoleHost
        Version          : 2.0
        InstanceId       : bc58fcf8-bba3-4ca8-8972-17dbd5d9ff08
        UI               : System.Management.Automation.Internal.Host.InternalHostUserInterface
        CurrentCulture   : en-US
        CurrentUICulture : en-US
        PrivateData      : Microsoft.PowerShell.ConsoleHost+ConsoleColorProxy
        IsRunspacePushed : False
        Runspace         : System.Management.Automation.Runspaces.LocalRunspace

    Here is the revised version info:

        Name                      Value
        ----                      -----
        CLRVersion                2.0.50727.3603
        BuildVersion              6.0.6002.18111
        PSVersion                 2.0
        WSManStackVersion         2.0
        PSCompatibleVersions      {1.0, 2.0}
        SerializationVersion      1.1.0.1
        PSRemotingProtocolVersion 2.1

    Read the article

  • Setting up a Network Bridge on Linux VM (Windows 7 Host)

    - by GrandAdmiral
    I would like to use NetEm to simulate a low-bandwidth environment while testing an Internet-connected device. My plan is to set up a bridge in a Linux VM (Linux Mint 13) on a Windows 7 host, then use NetEm in the Linux VM to limit the bandwidth to an external device. Unfortunately I'm having trouble setting up the bridge. I went with the following script:

        ifconfig eth0 0.0.0.0 promisc up
        ifconfig eth1 0.0.0.0 promisc up

    Then create the bridge and bring it up:

        brctl addbr br0
        brctl setfd br0 0
        brctl addif br0 eth0
        brctl addif br0 eth1
        dhclient br0
        ifconfig br0 up

    When I run that script, I see the following warning:

        Rather than invoking init scripts through /etc/init.d, use the service(8)
        utility, e.g. service smbd reload
        Since the script you are attempting to invoke has been converted to an
        Upstart job, you may also use the reload(8) utility, e.g. reload smbd

    The device connecting to the bridge is able to obtain an IP address, but it can only ping the IP address of the bridge (both are 10.2.32.xx). Then, after a few minutes, other parts of our network go down. I'm not sure why, but once I kill the bridge the network is fine. Is it possible to set up a network bridge in a Linux VM? Do I need to do something else with the dhclient br0 part of the script? By the way, I'm using VirtualBox. The wired connection is eth0 and the wireless connection is eth1. The wired connection goes to the device and the wireless connection goes to the network. Both adapters are set up as bridged adapters with promiscuous mode set to "Allow All".
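    Once the bridge itself is stable, the NetEm part is a separate step; the lines below are a sketch of the rate-control arrangement from the NetEm documentation, applied to the bridge interface (the delay and rate numbers are illustrative):

        # emulate a slow link on traffic leaving br0: netem adds latency,
        # and a chained tbf qdisc caps the bandwidth
        tc qdisc add dev br0 root handle 1:0 netem delay 100ms
        tc qdisc add dev br0 parent 1:1 handle 10: tbf rate 256kbit buffer 1600 limit 3000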

    Read the article

  • Production monitoring for EC2 instances

    - by Janine
    I'm setting up my first production instance on EC2 and want to make sure I have all the necessary monitoring in place. There are three different types of things I want to monitor:

    1. Is the instance running? EC2 instances can be terminated without warning if the underlying hardware fails, and as far as I know they aren't automatically restarted. So if not, start it back up.
    2. Is UNIX running properly? This is the usual stuff about CPU load, disk space, etc.
    3. Is the website responding? If not, restart it.

    I initially set up Nagios on a physical server outside the cloud, but it is really only helpful for item 2. It can tell me if the instance is gone or if the website is not responding, but as far as I can tell it can't execute any commands to fix the situation. My Googling on this subject has yielded a plethora of options - Cacti, Monit, God, Ganglia, and probably more I'm forgetting now. I don't have time to research them all. I am aware of Amazon's CloudWatch but it doesn't seem to do anything that my Nagios installation doesn't already do. If you already have something like this in place, can you please share what has worked well for you?
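    For the "restart it" half of item 3, even a cron-driven watchdog on the instance itself covers a lot of ground; the script below is a minimal sketch, assuming an Apache site and a plain HTTP health check (URL and service name are illustrative):

        #!/bin/sh
        # restart the web server if the site stops answering; run from cron
        if ! curl -fsS --max-time 10 http://localhost/ >/dev/null 2>&1; then
            logger "watchdog: site unresponsive, restarting apache"
            /etc/init.d/apache2 restart
        fi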

    Read the article

  • What can I do to prevent system power downs?

    - by Joe King
    Yesterday I was given my brother's old laptop: a Sony Vaio Z11, approx 18 months old, with a Core i7 at 2.67GHz, 8GB RAM, a 128GB SSD, and Win7 64-bit. When running something computationally intensive, the fan starts up and after about 30 seconds the machine just powers itself down with no warning. I guess it is overheating. There is nothing in the event logs to suggest what is causing it; the only thing I see is "the last system shutdown was unexpected" or something similar. This is a problem for me because I use a lot of number-crunching apps, which pretty much makes the machine useless to me. I would like to know if there is anything I can do, other than the obvious things I've done already: opening it up to clean out the dust, and re-installing the OS. According to my brother, this problem started about 6 months ago, when it was already outside warranty. If it's just used for simple things (web browsing, word processing, etc.) the problem does not occur. Any ideas for what I can do to fix this?

    Update: I found that the laptop has two hardware settings for graphics, Speed and Stamina: the Speed setting uses an nVidia GeForce GT 330M, while the Stamina setting uses an Intel chipset. With the setting on Speed, I can hear the fan the whole time, and the system powers down after a short while (5-10 mins) even just doing basic tasks (browsing this site, for example), but it doesn't shut down if I just leave it switched on. In this mode it also sometimes just freezes the screen and I have to power off myself. On the Stamina setting, however, it only powers down when doing number crunching and never freezes the screen.

    Read the article

  • PostgreSQL pg_hba.conf with "password" auth wouldn't work with PHP pg_connect?

    - by tftd
    I've recently experimented with the settings in pg_hba.conf. I read the PostgreSQL documentation and I thought that the "password" auth method was what I wanted. There are many people with access to the server PostgreSQL runs on, so I don't want the "trust" method. So I changed it. But then PHP stopped working with the database. The message I get is:

        Warning: pg_connect(): Unable to connect to PostgreSQL server: FATAL: password authentication failed for user "myuser" in /my/path/to/connection/class.php on line 35

    It is kind of strange, because I can connect via phpPgAdmin without any problems, and I can also connect from my home computer with psql, again without any problems. This is my pg_hba.conf:

        # TYPE  DATABASE  USER  CIDR-ADDRESS  METHOD
        # "local" is for Unix domain socket connections only
        local   all       all                 password
        # IPv4 local connections:
        host    all       all   127.0.0.1/32  password
        # IPv6 local connections:
        host    all       all   ::1/128       password

    The connection string I'm using with pg_connect is:

        $connection_string = "host=localhost port=5432 dbname=mydbname user=auser password=apassword";
        $dbConnection = pg_connect($connection_string);

    Does anybody know why this is happening? Did I misconfigure something?
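    One thing worth testing is the "md5" method in place of "password": "password" requires the cleartext password to reach the server, and some client stacks only offer the MD5 challenge-response. This is a hedged variant of the same pg_hba.conf lines, not a confirmed fix:

        # md5 challenge-response instead of cleartext; reload PostgreSQL
        # afterwards (e.g. pg_ctl reload) for the change to take effect
        local   all   all                 md5
        host    all   all   127.0.0.1/32  md5
        host    all   all   ::1/128       md5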

    Read the article

  • Squid: The request or reply is too large

    - by Ueli
    I have set up a reverse proxy with an Apache in the background (on the same server). Everything works great, but I can't open one page. I get the error "The request or reply is too large." My cache.log contains:

        2010/12/09 15:28:29| WARNING: http.c:971: HTTP header too large
        2010/12/09 15:29:03| ctx: enter level 0: 'http://server/admin/cms/nav'
        2010/12/09 15:29:03| httpProcessReplyHeader: Too large reply header
        2010/12/09 15:29:03| ctx: exit level 0

    In my squid.conf I disabled the limits on request and reply sizes, without success:

        reply_body_max_size 0 allow all
        request_body_max_size 0

    Does someone know why that doesn't work? Thank you very much. Squid version:

        Squid Cache: Version 2.7.STABLE3
        configure options: '--prefix=/usr' '--exec_prefix=/usr' '--bindir=/usr/sbin' '--sbindir=/usr/sbin' '--libexecdir=/usr/lib/squid' '--sysconfdir=/etc/squid' '--localstatedir=/var/spool/squid' '--datadir=/usr/share/squid' '--enable-async-io' '--with-pthreads' '--enable-storeio=ufs,aufs,coss,diskd,null' '--enable-linux-netfilter' '--enable-arp-acl' '--enable-epoll' '--enable-removal-policies=lru,heap' '--enable-snmp' '--enable-delay-pools' '--enable-htcp' '--enable-cache-digests' '--enable-underscores' '--enable-referer-log' '--enable-useragent-log' '--enable-auth=basic,digest,ntlm,negotiate' '--enable-negotiate-auth-helpers=squid_kerb_auth' '--enable-carp' '--enable-follow-x-forwarded-for' '--with-large-files' '--with-maxfd=65536' 'amd64-debian-linux' 'build_alias=amd64-debian-linux' 'host_alias=amd64-debian-linux' 'target_alias=amd64-debian-linux' 'CFLAGS=-Wall -g -O2' 'LDFLAGS=' 'CPPFLAGS='
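    Note that the log complains about a header, while the two directives above only raise the body limits. A hedged squid.conf change to test (directive names taken from the Squid 2.x documentation; verify them with squid -k parse on your build):

        # raise the header limits that trigger "HTTP header too large"
        request_header_max_size 64 KB
        reply_header_max_size 64 KB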

    Read the article

  • rsnapshot not correctly archiving MySQL databases

    - by Tiffany Walker
    My rsnapshot configuration:

        snapshot_root   /.snapshots/
        backup          /home/user                      localhost/
        backup_script   /usr/local/backup_mysql.sh      localhost/mysql/

    Using this script:

        NOW=$(date +"%m-%d-%Y") # mm-dd-yyyy format
        FILE="" # used in a loop
        ### Server Setup ###
        #* MySQL login user name *#
        MUSER="root"
        #* MySQL login PASSWORD name *#
        MPASS="YOUR-PASSWORD"
        #* MySQL login HOST name *#
        MHOST="127.0.0.1"
        #* MySQL binaries *#
        MYSQL="$(which mysql)"
        MYSQLDUMP="$(which mysqldump)"
        GZIP="$(which gzip)"
        # get all database listing
        DBS="$($MYSQL -u $MUSER -h $MHOST -p$MPASS -Bse 'show databases')"
        # start to dump database one by one
        for db in $DBS
        do
            FILE=$BAK/mysql-$db.$NOW-$(date +"%T").gz
            # gzip compression for each backup file
            $MYSQLDUMP --single-transaction -u $MUSER -h $MHOST -p$MPASS $db | $GZIP -9 > $FILE
        done

    It dumps the databases under /. I then tried the script from http://bash.cyberciti.biz/backup/rsnapshot-remote-mysql-backup-shell-script/ and got:

        rsnapshot hourly
        ----------------------------------------------------------------------------
        rsnapshot encountered an error! The program was invoked with these options:
        /usr/bin/rsnapshot hourly
        ----------------------------------------------------------------------------
        ERROR: backup_script /usr/local/backup_mysql.sh returned 1
        WARNING: Rolling back "localhost/mysql/"

        ls -la /.snapshots/hourly.0/localhost/mysql
        total 8
        drwxr-xr-x 2 root root 4096 Nov 23 17:43 ./
        drwxr-xr-x 4 root root 4096 Nov 23 18:20 ../

    What exactly am I doing wrong?

    EDIT:

        # /usr/local/backup_mysql.sh
        *** Dumping MySQL Database ***
        Database> information_schema..cphulkd..eximstats..horde..leechprotect..logaholicDB_ns1..modsec..mysql..performance_schema..roundcube..test..
        *** Backup done [ files wrote to /.snapshots/tmp/mysql] ***
        root@ns1 [~]# ls -la /.snapshots/tmp/mysql
        total 8040
        drwxr-xr-x 2 root root   4096 Nov 23 18:41 ./
        drwxr-xr-x 3 root root   4096 Nov 23 18:41 ../
        -rw-r--r-- 1 root root   1409 Nov 23 18:41 cphulkd.18_41_45pm.gz
        -rw-r--r-- 1 root root 113522 Nov 23 18:41 eximstats.18_41_45pm.gz
        -rw-r--r-- 1 root root   4583 Nov 23 18:41 horde.18_41_45pm.gz
        -rw-r--r-- 1 root root  71757 Nov 23 18:41 information_schema.18_41_45pm.gz
        -rw-r--r-- 1 root root    692 Nov 23 18:41 leechprotect.18_41_45pm.gz
        -rw-r--r-- 1 root root   2603 Nov 23 18:41 logaholicDB_ns1.18_41_45pm.gz
        -rw-r--r-- 1 root root    745 Nov 23 18:41 modsec.18_41_45pm.gz
        -rw-r--r-- 1 root root 138928 Nov 23 18:41 mysql.18_41_45pm.gz
        -rw-r--r-- 1 root root   1831 Nov 23 18:41 performance_schema.18_41_45pm.gz
        -rw-r--r-- 1 root root   3610 Nov 23 18:41 roundcube.18_41_45pm.gz
        -rw-r--r-- 1 root root    436 Nov 23 18:41 test.18_41_47pm.gz

    The MySQL backup itself seems fine.
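    One detail worth checking in the first script: $BAK is never defined, which is why the dumps land under / instead of in rsnapshot's snapshot tree. rsnapshot's documented behavior is to run backup_script inside a scratch working directory and then snapshot whatever the script leaves there, so a hedged fix is to write relative to the current directory:

        # write dumps into the working directory rsnapshot provides;
        # on success it copies the contents into localhost/mysql/
        BAK="."
        FILE=$BAK/mysql-$db.$NOW-$(date +"%T").gz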

    Read the article

  • Installing mysqlnd for PHP 5.4.9 on CentOS 6.3

    - by kira423
    Okay, let me get straight to the point: I am a complete noob and have never done stuff like this at all. I have read tutorial after tutorial but I can't get anything to work. When I tried to install the RPM file I got this error:

        rpm -Uvh ftp://ftp.pbone.net/mirror/rpms.famillecollet.com/enterprise/6/test/x86_64/php-mysqlnd-5.4.9-1.el6.remi.x86_64.rpm
        Retrieving ftp://ftp.pbone.net/mirror/rpms.famillecollet.com/enterprise/6/test/x86_64/php-mysqlnd-5.4.9-1.el6.remi.x86_64.rpm
        warning: /var/tmp/rpm-tmp.ez4vvd: Header V3 DSA/SHA1 Signature, key ID 00f97f56: NOKEY
        error: Failed dependencies:
            php-pdo(x86-64) = 5.4.9-1.el6.remi is needed by php-mysqlnd-5.4.9-1.el6.remi.x86_64

    So I tried installing that RPM and got this error:

        rpm -ivh ftp://ftp.pbone.net/mirror/rrpms.famillecollet.com/enterprise/6/test/x86_64/php-pdo-5.4.6-1.el6.remi.x86_64.rpm
        Retrieving ftp://ftp.pbone.net/mirror/rrpms.famillecollet.com/enterprise/6/test/x86_64/php-pdo-5.4.6-1.el6.remi.x86_64.rpm
        curl: (9) Server denied you to change to the given directory
        error: skipping ftp://ftp.pbone.net/mirror/rrpms.famillecollet.com/enterprise/6/test/x86_64/php-pdo-5.4.6-1.el6.remi.x86_64.rpm - transfer failed

    I used the FTP links because I have no idea how else to get the packages onto the server. I think I am getting overly frustrated with this, but I have to get this driver installed for any of my scripts to function correctly. Any help would be greatly appreciated!
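    Installing individual RPMs by hand means chasing every dependency yourself; the usual route on CentOS 6 is to enable the Remi repository and let yum resolve the chain. A sketch, assuming the standard remi-release package for EL6 (the URL reflects where the repo was historically published; verify it before running):

        # enable the Remi repo, then pull php-mysqlnd with deps resolved
        rpm -Uvh http://rpms.famillecollet.com/enterprise/remi-release-6.rpm
        yum --enablerepo=remi,remi-test install php-mysqlnd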

    Read the article

  • Problems installing Git on Ubuntu through SSH

    - by jamadri
    I'm having trouble installing git using this command:

        sudo apt-get install git-core

    It's giving me the problems below and I'm not quite sure how to get this to work correctly. I tried running sudo apt-get update, and afterwards it just gives me the same problems. If anyone knows how to solve this, or another way of getting git onto the machine, it would be a great help. I've never had a problem with using apt-get before.

        Do you want to continue [Y/n]? y
        WARNING: The following packages cannot be authenticated!
            liberror-perl git-core patch
        Install these packages without verification [y/N]? y
        Err http://us.archive.ubuntu.com jaunty/main git-core 1:1.6.0.4-1ubuntu2
            404 Not Found [IP: 91.189.92.183 80]
        Err http://us.archive.ubuntu.com jaunty/main patch 2.5.9-5
            404 Not Found [IP: 91.189.92.183 80]
        Failed to fetch http://us.archive.ubuntu.com/ubuntu/pool/main/g/git-core/git-core_1.6.0.4-1ubuntu2_amd64.deb 404 Not Found [IP: 91.189.92.183 80]
        Failed to fetch http://us.archive.ubuntu.com/ubuntu/pool/main/p/patch/patch_2.5.9-5_amd64.deb 404 Not Found [IP: 91.189.92.183 80]
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    Any reply that can help fix this would be helpful. I'm not sure if it's the git servers or my connection that might be the problem. I've used apt-get to pull other things; it's just failing with git.
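    The 404s point at the mirror rather than at git: jaunty (9.04) left the main archive when it reached end of life, and its packages now live on old-releases.ubuntu.com. A hedged fix (back up sources.list first):

        # repoint the EOL'd release at the old-releases archive, then retry
        sudo sed -i 's/us\.archive\.ubuntu\.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get install git-core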

    Read the article

  • How do I create a read only MySQL user for backup purposes with mysqldump?

    - by stickmangumby
    I'm using the automysqlbackup script to dump my MySQL databases, but I want a read-only user to do this with so that I'm not storing my root database password in a plaintext file. I've created a user like so:

        GRANT SELECT, LOCK TABLES ON *.* TO 'username'@'localhost' IDENTIFIED BY 'password';

    When I run mysqldump (either through automysqlbackup or directly) I get the following warning:

        mysqldump: Got error: 1044: Access denied for user 'username'@'localhost' to database 'information_schema' when using LOCK TABLES

    Am I doing it wrong? Do I need additional grants for my read-only user? Or can only root lock the information_schema table? What's going on?

    Edit: GAH, and now it works. I may not have run FLUSH PRIVILEGES previously. As an aside, how often does this occur automatically?

    Edit: No, it doesn't work. Running mysqldump -u username -p --all-databases > dump.sql manually doesn't generate an error, but doesn't dump information_schema. automysqlbackup does raise an error.
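    information_schema is a virtual database that LOCK TABLES can't operate on, so the usual workaround is to skip locking for the dump rather than add grants. A hedged invocation to test:

        # --skip-lock-tables avoids LOCK TABLES on schemas that don't
        # support it; --single-transaction keeps InnoDB dumps consistent
        mysqldump -u username -p --skip-lock-tables --single-transaction \
            --all-databases > dump.sql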

    Read the article

  • Change the PowerShell $profile directory

    - by Swoogan
    I would like to know how to change the location my $profile variable points to.

        PS H:\> $profile
        H:\WindowsPowerShell\Microsoft.PowerShell_profile.ps1

    H:\ is a network share, so when I create my profile file and load PowerShell I get the following:

        Security Warning
        Run only scripts that you trust. While scripts from the Internet can be useful, this script can potentially harm your computer. Do you want to run H:\WindowsPowerShell\Microsoft.PowerShell_profile.ps1?
        [D] Do not run  [R] Run once  [S] Suspend  [?] Help (default is "D"):

    According to Microsoft, the location of $profile is determined by the %USERPROFILE% environment variable. This is not true:

        PS H:\> $env:userprofile
        C:\Users\username

    For example, I have an XP machine working how I want:

        PS H:\> $profile
        C:\Documents and Settings\username\My Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1
        PS H:\> $env:userprofile
        C:\Documents and Settings\username
        PS H:\> $env:homedrive
        H:
        PS H:\> $env:homepath
        \

    Here's the same output from the Vista machine where $profile points to the wrong place:

        PS H:\> $profile
        H:\WindowsPowerShell\Microsoft.PowerShell_profile.ps1
        PS H:\> $env:userprofile
        C:\Users\username
        PS H:\> $env:homedrive
        H:
        PS H:\> $env:homepath
        \

    Since $profile isn't actually determined by %USERPROFILE%, how do I change it? Clearly anything that involves changing the homedrive or homepath is not the solution I'm looking for.

    Read the article

  • Postfix: enabling SSL on port 465 fails

    - by user221290
    I have installed Postfix and enabled SSL/TLS. I just tested it: I can send email on ports 25 and 587, but I cannot send email on port 465. The log is:

        May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept:SSLv3 write server hello A
        May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept:SSLv3 write certificate A
        May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept:SSLv3 write server done A
        May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept:SSLv3 flush data
        May 26 17:24:06 mail postfix/smtpd[28721]: SSL3 alert read:fatal:certificate unknown
        May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept:failed in SSLv3 read client certificate A
        May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept error from unknown[10.155.36.240]: 0
        May 26 17:24:06 mail postfix/smtpd[28721]: warning: TLS library problem: 28721:error:14094416:SSL routines:SSL3_READ_BYTES:sslv3 alert certificate unknown:s3_pkt.c:1197:SSL alert number 46:
        May 26 17:24:06 mail postfix/smtpd[28721]: lost connection after CONNECT from unknown[10.155.36.240]
        May 26 17:24:06 mail postfix/smtpd[28721]: disconnect from unknown[10.155.36.240]

    My mail server is 10.155.34.117 and the mail client is 10.155.36.240. The client error is:

        Could not connect to SMTP host: 10.155.34.117, port: 465.

    My master.cf:

        smtps     inet  n       -       n       -       -       smtpd
            -o smtpd_tls_wrappermode=yes

    My main.cf:

        smtpd_use_tls = yes
        smtpd_tls_auth_only = no
        smtpd_tls_key_file = /etc/pki/myca/mail.key
        smtpd_tls_cert_file = /etc/pki/myca/mail.crt
        smtpd_tls_CAfile = /etc/pki/myca/cacert_new.pem
        smtpd_tls_loglevel = 2
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_timeout = 3600s
        smtpd_tls_session_cache_database = btree:/etc/postfix/smtpd_scache

    It seems to be a certificate issue, but I have checked the permissions on the files many times. I have no idea on this; please help!
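    The "certificate unknown" alert is sent by the client when it rejects the server's certificate, so reproducing the handshake outside the mail client is a quick check. A diagnostic sketch, using the CA file already named in main.cf:

        # watch the full TLS handshake against the smtps port; the
        # "verify return" lines show whether the CA file validates the chain
        openssl s_client -connect 10.155.34.117:465 -CAfile /etc/pki/myca/cacert_new.pem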

    Read the article

  • NAS is intermittently inaccessible

    - by Natalie
    Model: QNAP TS-410 Turbo NAS. Firmware version: 3.2.5 Build 0409T.

    Issue: Each day, users connect to share folders on the NAS and have read/write permissions for the share folders they need access to. However, it often asks them for their log-in details and, when provided with the right (or wrong) credentials for a user with read/write permissions, it denies them access. I've checked the logs and I keep seeing the following warnings:

        2011-11-23 16:26:29 System 127.0.0.1 localhost Re-launch process [rpc.mountd].
        2011-11-23 16:26:16 System 127.0.0.1 localhost Re-launch process [proftpd].
        2011-11-23 16:25:30 System 127.0.0.1 localhost Re-launch process [rpc.mountd].
        2011-11-23 16:25:15 System 127.0.0.1 localhost Re-launch process [proftpd].
        2011-11-23 16:24:33 System 127.0.0.1 localhost Re-launch process [rpc.mountd].
        2011-11-23 16:24:21 System 127.0.0.1 localhost Re-launch process [proftpd].
        2011-11-23 16:23:37 System 127.0.0.1 localhost Re-launch process [rpc.mountd].
        2011-11-23 16:23:25 System 127.0.0.1 localhost Re-launch process [proftpd].

    They seem to occur every minute, but I am uncertain whether they are relevant to this issue. The "Login Fail" warning also appears in the system connection logs, which tells me when and which user was unable to log in, as shown below:

        2011-11-22 16:11:07 Administrator 192.168.0.xx computer-01 SAMBA --- Login Fail
        2011-11-22 16:11:07 Administrator 192.168.0.xx computer-01 SAMBA --- Login Fail
        2011-11-22 16:11:06 Administrator 192.168.0.xx computer-01 SAMBA --- Login Fail
        2011-11-22 13:46:14 administrator 192.168.0.yy --- HTTP Administration Login Fail
        2011-11-22 13:46:09 administrator 192.168.0.yy --- HTTP Administration Login Fail
        2011-11-21 15:17:22 user 192.168.0.zz computer-02 SAMBA --- Login Fail
        2011-11-21 15:17:18 user 192.168.0.zz computer-02 SAMBA --- Login Fail
        2011-11-21 15:17:17 user 192.168.0.zz computer-02 SAMBA --- Login Fail

    I've researched this on Google and the QNAP forums and have not come up with a resolution as yet.

    Read the article

  • Nagios check_host_alive and check_ping not showing host as down

    - by Kyle
    I am using the check_host_alive command to send 5 packets every minute to all my routers at remote locations. I noticed today that I received a notification from the AT&T Global Client Support Center that a router was down (they can take 5-30 minutes to send these notices out) but never received a notice from Nagios. I went into Nagios and it was showing the host as alive with a latency of 0 ms. This tells me it is treating the automated "TTL expired in transit" reply from my router in the data center as a reply from the remote router. Is there any way to tell Nagios to check where the reply is coming from? I feel like other people have to have had this issue... I tested it with the check_ping command and it produced the same results. I have the command defined with %hostname% and the proper IP in the host definition, and it works fine for telling me when the latency is high. Any ideas are welcome; I have already exercised my Google skills with no results.

    EDIT:

        root@IM-UBTU:/# /usr/local/nagios/libexec/check_ping -H 192.168.250.1 -w 100.0,10% -c 200.0,20% -vvv
        CMD: /bin/ping -n -U -w 10 -c 5 192.168.250.1
        Output: PING 192.168.250.1 (192.168.250.1) 56(84) bytes of data.
        Output: From 10.69.10.2 icmp_seq=1 Time to live exceeded

    It knows something is wrong; why doesn't it give me a warning?
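    One workaround is a thin wrapper plugin that only counts echo replies whose source address matches the target, so "Time to live exceeded" from an intermediate hop counts as down. This is a sketch rather than a drop-in replacement (thresholds and output format are simplified; exit codes follow the Nagios plugin convention):

        #!/bin/sh
        # check_ping_strict: OK only if the target itself answered
        TARGET="$1"
        if ping -n -c 5 -w 10 "$TARGET" | grep -q "bytes from $TARGET"; then
            echo "PING OK - direct reply from $TARGET"
            exit 0
        else
            echo "PING CRITICAL - no direct reply from $TARGET"
            exit 2
        fi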

    Read the article

  • PHP cannot connect to MySQL

    - by yogal
    Hello, I recently installed Apache 2 + PHP 5.3.1 + MySQL 5.1.44 on my Windows 7 64-bit machine following this guide: http://sleeplessgeek.blogspot.com/2010/01/setting-up-apache-php-mysql-phpmyadmin.html

    It all went fine and PHP is working great (even with XDebug), but I cannot connect to the MySQL server. A simple script I wrote to test the connection (yes, root has no password):

        $username = "root";
        $password = "";
        $database = "test";
        $hostname = "localhost";
        $conn = mysql_connect($hostname, $username, $password)
            or die("Unable to connect to MySQL Database!!");

    It prints this error after a 60-second timeout:

        Warning: mysql_connect() [function.mysql-connect]: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.

    I can connect to MySQL from the command line with mysql -h localhost -u root. The services are working properly. There also seems to be a problem with phpMyAdmin (using 3.2.5): as soon as I type user and pass, the page loads and turns blank (Content-Length in the headers is 0, but the status code is 302 Found). Looks like something wrong with cookies (my auth method). I hope someone has a clue; it has to be something dumb simple I missed. Thanks in advance.
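    The 60-second hang is a classic symptom of "localhost" resolving to the IPv6 loopback (::1) on Windows 7 while MySQL listens only on IPv4; testing the numeric address bypasses name resolution entirely, so this is worth ruling out first:

        # if this connects instantly while "-h localhost" hangs, change the
        # PHP script's $hostname to 127.0.0.1
        mysql -h 127.0.0.1 -u root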

    Read the article

  • How to handle sh: fetch: command not found

    - by Tyler Johnson
    Okay, I'm a noobie. I know how to build and compose a website, but I have no idea what I'm doing when it comes to servers, server commands, etc. I've recently had a problem where all of my sites on our servers go down at once, and then I have to go in and reboot the server for them to come up again. At first this was annoying, but now it is becoming agonizing, as it now takes 3-4 reboots for the websites to come back up. I contacted support for my hosting, but they are not being very helpful. They just keep telling me what the issue might be and basically saying that I'm going to have to look into it and figure it out, which really isn't possible since I know nothing. Anyway, here are the things they said might be the cause:

    1. I have "strange logs" in my Apache webserver log: error: sh: fetch: command not found.
    2. My php.ini memory limit is 256M, which is very high. It should be 32M or 64M.
    3. The server is reaching MaxClients, meaning we have more than 150 visitors at a time. (They supposedly "fixed" this, but the sites/server still go down.)
    4. I have some WordPress sites with plugins getting errors like:

        PHP Warning: pack(): Type H: illegal hex digit G in...
        PHP Fatal error: Cannot use object of type stdClass as array in...
        PHP Fatal error: Maximum execution time of 30 seconds exceeded in...
        PHP Fatal error: Call to undefined function file_exists() in...
        PHP Parse error: syntax error, unexpected '<'

    I know that's a lot, but I really am at wits' end and have no idea what to do now. If anyone could give me some advice or point me in the right direction I would greatly appreciate it! Thanks! Oh, and here are the specs for my server:

        RAM: 2048MB
        CPU Shares: 40
        Primary Disk: 50GB
        Data Transfer: 75GB
        Port Speed: 5Mbps

    Read the article

  • Error 0x80073cf9 when installing or updating apps from windows store

    - by cmorse
    On my Windows 8 desktop I keep getting error 0x80073cf9 when I try to install or update an app from the Windows Store. In the installing apps pane it just says "This app wasn't installed -- view details", and when I select that it says "Something happened and this app couldn't be installed. Please try again. Error code: 0x80073cf9". I am using the built-in Windows firewall and antivirus, and my laptop is able to install updates when it is on the same network. This is what winstore.log shows when I try to update the Maps app:

        2012-10-18 15:31:47.328, _Info_ WS [00015160:00011628] ***********************************************************************
        2012-10-18 15:31:47.328, _Info_ WS [00015160:00011628] Process name: C:\Windows\system32\taskhost.exe
        2012-10-18 15:31:47.328, _Info_ WS [00015160:00011628] User name: Desktop\User
        2012-10-18 15:31:47.328, _Info_ WS [00015160:00011628] Computer name: desktop
        2012-10-18 15:31:47.328, _Info_ WS [00015160:00011628] Windows build: 9200.16424.amd64fre.win8_gdr.120926-1855
        2012-10-18 15:31:47.328, _Info_ WS [00015160:00011628] Client version: 615
        2012-10-18 15:31:47.328, _Info_ WS [00015160:00011428] CWSTileUpdateHandler::Worker: Broker is handling badge updates.
        2012-10-18 15:31:47.554, _Info_ WS [00002572:00008200] CProgressDispatcher::OnProgress: AppId = 97a2179c-38be-45a3-933e-0d2dbf14a142, PFN = Microsoft.BingMaps_8wekyb3d8bbwe, InstallPhase = 1, PhasePercent = 0, TotalPercent = 0
        2012-10-18 15:31:47.558, _Warning_ WS [00002572:00008200] CDownloadProgress::IDownloadCompletedCallback::Invoke: Download complete result 0x80073cf9 for Microsoft.BingMaps_8wekyb3d8bbwe
        2012-10-18 15:31:47.559, _Error_ WS [00002572:00008200] CActionItem::_DoDownload: Download failed for 97a2179c-38be-45a3-933e-0d2dbf14a142, hr=0x80073cf9
        2012-10-18 15:31:47.560, _Info_ WS [00002572:00008200] CActionItem::_DoDownload: Notifying progress handlers of download failure for 97a2179c-38be-45a3-933e-0d2dbf14a142, hr=0x80073cf9
        2012-10-18 15:31:47.560, _Error_ WS [00002572:00008200] CProgressDispatcher::OnError: PFN = Microsoft.BingMaps_8wekyb3d8bbwe, InstallPhase = 1, hrError = 0x80073cf9

    Read the article

  • Hadoop is not able to find JAVA_HOME properly

    - by Shekhar
    I am trying to run Hadoop on my Ubuntu OS. I have set the JAVA_HOME variable in my ~/.bashrc file to /usr/lib/jvm/jdk1.7.0_01/, but when I run the hadoop namenode -format command it fails with the following errors:

        shekhar@ubuntu:/usr$ hadoop namenode -format
        Warning: $HADOOP_HOME is deprecated.

        /host/Shekhar/Softwares/hadoop-1.0.0/bin/hadoop: line 321: /usr/jdk1.7.0_01/bin/java: No such file or directory
        /host/Shekhar/Softwares/hadoop-1.0.0/bin/hadoop: line 387: /usr/jdk1.7.0_01/bin/java: No such file or directory

    Hadoop tries to locate the java command at /usr/jdk1.7.0_01/bin/; somehow it has lost the /lib/jvm part of the path. I am not able to understand why or how this is happening. My echo $PATH command gives the following output:

        shekhar@ubuntu:/usr$ echo $PATH
        /usr/lib/jvm/jdk1.7.0_01/bin:/usr/lib/lightdm/lightdm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/lib/jvm/jdk1.7.0_01/bin:/host/Shekhar/Softwares/hadoop-1.0.0/bin

    If I run which java I get:

        shekhar@ubuntu:/usr$ which java
        /usr/lib/jvm/jdk1.7.0_01/bin/java

    And echo $JAVA_HOME returns:

        shekhar@ubuntu:/usr$ echo $JAVA_HOME
        /usr/lib/jvm/jdk1.7.0_01

    I would like to know why Hadoop is picking up the wrong JAVA_HOME path. Please help...
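    Hadoop's launcher scripts source conf/hadoop-env.sh, and a JAVA_HOME line in that file silently overrides the shell environment; a stale value there would produce exactly the /usr/jdk1.7.0_01 path in the errors above. A hedged fix, using the install path shown in the error output:

        # in /host/Shekhar/Softwares/hadoop-1.0.0/conf/hadoop-env.sh,
        # point JAVA_HOME at the real JDK instead of the stale path
        export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_01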

    Read the article

  • How does one skip "Windows did not shut down successfully" in Win7-64?

    - by XenonofArcticus
    Migrating an app from an expensive and unreliable dedicated embedded x86 box running WinXP Embedded to COTS hardware (a Dell E6410 laptop) running normal Win7-64. At this time it's not feasible to deploy using Windows 7 Embedded. The problem is that the system is still sort of "embedded": the power could be cut at virtually any time without prior warning. We've stripped the OS down and removed the battery capability so that it powers down as desired. The app never writes to the disk, so it's not as if we're going to corrupt anything terribly. The system is essentially idle after our app is up and running (with the exception of some computation, graphics, and TCP/IP and serial communications), so the OS enters a fairly stable state rather quickly. After a power loss, however, it rightly complains that Windows did not shut down successfully and presents the user with the Windows Error Recovery text screen. If left alone it does eventually boot just fine, but we'd like to skip that step if possible. WinXP Embedded is designed to do this automatically, so I know it's possible. I've looked at the kernel switches but I didn't see anything documented for "skip Windows Error Recovery". I've also read extensively on the startup process: http://homepage.ntlworld.com./jonathan.deboynepollard/FGA/windows-nt-6-boot-process.html I know I can disable the automatic chkdsk in the registry, but that's not the same thing either. So, how do I streamline the boot process so it doesn't hassle the user about a situation that will be the regular, normal situation?
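    There is a boot-manager policy that matches this description; the command below is the commonly cited one for Windows 7 (verify against bcdedit /? on your build, and test on one machine before deploying):

        rem tell the boot manager to ignore boot/shutdown failures
        rem instead of showing the Windows Error Recovery menu
        bcdedit /set {current} bootstatuspolicy ignoreallfailures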

    Read the article

  • Windows 7 notebook turns off by itself; how to check if the CPU is too hot?

    - by Jian Lin
    I have a Dell Studio 15 notebook, and it just started turning off by itself yesterday. Could it be that the CPU is too hot? I have had several notebooks before, and I could put every one of them on the bed without any problem. This Dell Studio notebook, however, seems to have the air/fan outlet pointed outward from the bottom back of the notebook, so I suspect the airflow is partially blocked when it is on the bed. Are there Win 7 tools that can monitor the CPU temperature, or will some third-party tool be needed? (I try to stick to official tools nowadays.) Also, it is running Win 7 Ultimate: is there really no utility or background service, from Win 7 or from Dell, that detects when the temperature is too hot (or 95% of the max), pops up a message box giving a warning, and says the computer will go into sleep mode in 1 minute, instead of just turning the computer off by brute force (cutting the power) right then and there?

    Update: it turned off right in front of my eyes. It was not doing any Windows update or anything, just normal use, and jooooop, it turned off.

    Read the article

  • Saving a file in a CSV type in Excel always removes the BOM

    - by rickp
    I've been trying (unsuccessfully) to find a reasonable solution/explanation for why Excel defaults to removing the BOM when saving a file to the CSV type. Please forgive me if you find this a duplicate of this question; that one handles reading CSV files with non-ASCII encoding, but it doesn't cover saving the file back out (which is where the biggest issue lies). Here is my current situation (which I gather is common among localized software dealing with Unicode characters and a CSV format):

    1. We export data to a CSV format using UTF-16LE, ensuring the BOM is set (0xFFFE). We validate after the file is generated with a hex editor to ensure it was set correctly.
    2. We open the file in Excel (for this example we're exporting Japanese characters) and witness that Excel loads the file with the correct encoding.
    3. Attempting to save this file prompts a warning message indicating that the file may contain features that are not compatible with Unicode encoding, but asks if you'd like to save anyway.
    4. If you open the Save As dialog, it immediately asks you to save the file as "Unicode Text" rather than CSV. If you select the "CSV" extension and save the file, it removes the BOM (obviously along with all the Japanese characters).

    Why would this happen? Is there a solution to this problem, or is this a known 'bug'/limitation of Excel? Additionally (as a side issue) it appears that Excel, when loading UTF-16LE encoded CSV files, only uses TAB delimiters. Again, is this another known 'bug'/limitation of Excel?

    Read the article

  • Set up multiple websites on a local web server

    - by mickburkejnr
    I have spent the last few days setting up a CentOS 6 server on my local network so that I can host multiple projects I'm currently working on. Everything has been set up so that I access the server by typing 192.168.1.10 and the Apache test page comes up. What I'm aiming for is to access different projects by typing 192.168.1.10/project, and then view each project as if it were on its own standalone server. I have thought about just sticking these sites inside folders on the server and accessing them that way, but a lot of my projects use CakePHP, so this isn't feasible. So what I need to do is create VirtualHosts in Apache to allow this, but without using a domain name. I want to stick to using the (static) IP address of the machine. Any ideas?

    EDIT: I've followed Peter's suggestion, but now I have a new problem. In the httpd.conf file I have entered the following:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /www/html/project1
            ServerName local.project1.com
            ErrorLog logs/local.project1.com-error_log
            CustomLog logs/local.project1.com-access_log common
        </VirtualHost>

    And now Apache is saying:

        Starting httpd: Warning: DocumentRoot [/www/html/project1] does not exist

    When it clearly does exist. I've disabled SELinux and can confirm it isn't turned on. I've also checked the ownership of the folder, and it's owned by root. I can also save files to these folders using a guest FTP account (which isn't associated with root), so the folders are being listed and can be written to. But when I try the folder in a web browser it doesn't seem to work either. I've also rebooted the server and the problem persists. What should I change in order to resolve this?
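    Name-based virtual hosts can't be distinguished when everything is reached by one IP, so path-based access usually comes down to Alias blocks in the single default host instead of a VirtualHost per project. The snippet below is a sketch for Apache 2.2 (paths are illustrative; note the stock CentOS docroot is /var/www/html, not /www/html, which may also explain the warning above):

        # serve each project under its own URL path on the same IP;
        # AllowOverride All lets CakePHP's .htaccess rewrites work
        Alias /project1 /var/www/html/project1
        <Directory /var/www/html/project1>
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>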

    Read the article
