Search Results

Search found 23783 results on 952 pages for 'edit and continue'.

Page 688/952 | < Previous Page | 684 685 686 687 688 689 690 691 692 693 694 695  | Next Page >

  • Cannot run logwatch due to Date::Manip issue

    - by Quintin Par
    I tried to run logwatch as follows [root@machine cron.daily]# ./0logwatch ERROR: Date::Manip unable to determine TimeZone. Execute the following command in a shell prompt: perldoc Date::Manip The section titled TIMEZONES describes valid TimeZones and where they can be defined. My date is as follows root@machine cron.daily]# date Thu Aug 23 06:25:21 GMT 2012 Now, based on details in various forums, I tried to fix this by setting /etc/timezone to “+0800”, but it didn’t work. My /etc/localtime points to /usr/share/zoneinfo/GMT and is managed by Puppet. How do I go about fixing this? I still want all my machines to be in the GMT timezone. EDIT: Sadly, neither of the changes is working: [root@machine cron.daily]# cat /etc/TIMEZONE UTC Quanta’s [root@machine cron.daily]# cat ~/.bash_profile # .bash_profile # Get the aliases and functions if [ -f ~/.bashrc ]; then . ~/.bashrc fi # User specific environment and startup programs PATH=$PATH:$HOME/bin export TZ=GMT export PATH [root@machine cron.daily]# source ~/.bash_profile [root@machine cron.daily]# ./0logwatch ERROR: Date::Manip unable to determine TimeZone. Execute the following command in a shell prompt: perldoc Date::Manip The section titled TIMEZONES describes valid TimeZones and where they can be defined.
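    A minimal sketch of the usual workaround (paths and the exact zone name are assumptions, not taken from the question): declare the zone where Date::Manip and cron will actually see it, rather than only in an interactive shell profile:

        # set TZ explicitly for the cron job itself (older Date::Manip checks $TZ first)
        # e.g. near the top of /etc/cron.daily/0logwatch
        TZ=GMT
        export TZ

        # and/or declare the zone system-wide (RHEL/CentOS style)
        echo 'ZONE="GMT"' >> /etc/sysconfig/clock
        ln -sf /usr/share/zoneinfo/GMT /etc/localtime

    Since cron does not source ~/.bash_profile, exporting TZ there has no effect on the daily logwatch run.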

    Read the article

  • MediaWiki Math Extension Out of Sync

    - by kfmfe04
    I am trying to run MediaWiki 1.19.2 on Ubuntu Linux (3.0.0-26-generic #42-Ubuntu AMD 64). It installs fine via tar.gz, however, I run into problems when trying to install the Math extension (via git clone from its repository). I get the following error in the Apache logs: PHP Fatal error: Call to undefined method FSFileBackend::getRootStoragePath() in /var/www/abc.com/html/mediawiki-1.19.2/extensions/Math/Math.body.php on line 361, referer: http://intra.abc.com/wiki/index.php?title=Quality&action=submit so apparently, the latest Math extension is out-of-sync with MediaWiki 1.19.2; the latest Math extension is apparently using post-1.19.2 methods. I tried looking through the git logs for some indication (a tag, perhaps?) of which revision of Math worked with 1.19.2, but no luck. Anyone have any suggestions? Thanks in advance. Edit: Just my luck - after posting this message, I found this link. I think this version should do the job. Consider this question closed/answered.
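    For anyone hitting the same mismatch, the usual fix is to check out the extension branch that matches the MediaWiki release rather than master; a sketch, assuming the repository follows the standard REL branch naming:

        cd extensions/Math
        git fetch origin
        # use the branch cut for the 1.19 release series, if it exists
        git checkout REL1_19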

    Read the article

  • Installing MySQL on Ubuntu Natty with Shell Script

    - by Obi Hill
    I'm trying to install MySQL on Ubuntu Natty from a shell script. However, I keep running into one major issue when I try to define the password outside of the shell script. Below is the code of my shell script (which I have saved in /etc/init.d/install_mysql): export DEBIAN_FRONTEND=noninteractive echo mysql-server-5.1 mysql-server/root_password password $dbpass | debconf-set-selections echo mysql-server-5.1 mysql-server/root_password_again password $dbpass | debconf-set-selections apt-get -y install mysql-server So what I enter in the terminal is: dbpass="mysqlpass" chmod +x /etc/init.d/install_mysql /etc/init.d/install_mysql MySQL installs, but it installs without a password, so I can just do something like mysql -uroot to access mysql (which I don't want). The funny thing is that if I put the password in the shell script as regular text, it works OK. So if my install script is as follows, everything works (i.e. I must specify a password to access mysql): export DEBIAN_FRONTEND=noninteractive echo mysql-server-5.1 mysql-server/root_password password mysqlpass | debconf-set-selections echo mysql-server-5.1 mysql-server/root_password_again password mysqlpass | debconf-set-selections apt-get -y install mysql-server Is there a way I can use a shell script variable to define my password in the shell script, instead of entering the password literally? Thanks in advance. EDIT I've found the answer to this. The following is what I should have entered in the terminal: dbpass="mysqlpass" export dbpass chmod +x /etc/init.d/install_mysql /etc/init.d/install_mysql It works like a charm now.
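    A variant sketch that avoids relying on an exported variable by passing the password as an argument instead (illustrative only; quoting the debconf lines also protects passwords containing spaces):

        #!/bin/sh
        # usage: ./install_mysql 'mysqlpass'
        dbpass="$1"
        export DEBIAN_FRONTEND=noninteractive
        echo "mysql-server-5.1 mysql-server/root_password password $dbpass" | debconf-set-selections
        echo "mysql-server-5.1 mysql-server/root_password_again password $dbpass" | debconf-set-selections
        apt-get -y install mysql-server

    The original behaviour is explained by the fact that an unexported shell variable is not visible to a child process, so $dbpass expanded to an empty string inside the script.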

    Read the article

  • Ripping a home video VCD on Linux or Windows with VLC or otherwise

    - by user259774
    I have a VCD with 22 minutes of video on it. I would like to retain this footage and throw away the VCD. I can play the whole thing with VLC ("open disc - vcd - /dev/sr0 - play"): all 22 minutes of the main track. I don't believe there's any other content aside from the main track. I can seek to anywhere I want to within the 22 minute track. If I mount /dev/sr0 /media/vcd and then try to copy the only file from the MPEGAV folder, I get an I/O error, with an empty destination file. VLC has a "convert" option in addition to "play". When I use this I actually get a good OGG file back, after it runs through the video in painful real time. I guess it dubs it frame by frame. But the file is only 10 minutes long, leaving 12 minutes off of the track. Handbrake doesn't detect its track titles, unfortunately. I don't know if I should start getting involved with GNU ddrescue or if it's because VCDs somehow encode their data sectors differently. Anyway, I'm in way over my head and if anyone knows how I could get that video track off the thing, feel free to share! Edit: I should note that I also have access to a Windows computer
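    One commonly suggested approach (assuming MPlayer is installed; the track number and device are illustrative) is to dump the stream straight from the disc rather than copying the file, since VCD video tracks use a sector format (Mode 2 Form 2) that defeats a normal filesystem copy:

        # track 2 is usually the main video track on a VCD; adjust as needed
        mplayer vcd://2 -cdrom-device /dev/sr0 -dumpstream -dumpfile video.mpg

    Unlike a real-time convert, this copies the MPEG-1 stream as-is, so the full 22 minutes should end up in video.mpg if the disc is readable.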

    Read the article

  • What's the difference between local and remote addresses in 2008 firewall address

    - by Ian
    In the firewall advanced security manager/Inbound rules/rule property/scope tab you have two sections to specify local IP addresses and remote IP addresses. What makes an address qualify as a local or remote address, and what difference does it make? This question is pretty obvious with a normal setup, but now that I'm setting up a remote virtualized server I'm not quite sure. What I've got is a physical host with two interfaces. The physical host uses interface 1 with a public IP. The virtualized machine is connected to interface 2 with a public IP. I have a virtual subnet between the two - 192.168.123.0 When editing the firewall rule, if I place 192.168.123.0/24 in the local IP address area or the remote IP address area, what does Windows do differently? Does it do anything differently? The reason I ask this is that I'm having problems getting the domain communication working between the two with the firewall active. I have plenty of experience with firewalls, so I know what I want to do, but the logic of what is going on here escapes me and these rules are tedious to have to edit one by one. Ian

    Read the article

  • How to redirect all Internet traffic to OpenVPN Server

    - by JuliaS
    I have seen working solutions around the issue of forcing Internet traffic to go through the OpenVPN server but they are all done in Linux, all I want to know is how to add an entry to the route table in windows to make this happen. connectivity between the client and server is fine, my Windows 7 client can establish a connection to the Windows 2008 Server, but when established Internet traffic is still going from the local Windows 7 machine. Here are the details: Server: Windows 2008 Server with one NIC OpenVPN IP Address: 192.168.0.1 Local NIC IP Address (connects the server to the Internet): 10.242.69.107 Client: Windows 7 with one NIC OpenVPN IP Address: 192.168.0.2 ISP allocated IP Address: 10.0.8.2 (gateway 10.0.8.1) Server OpenVPN Config: dev tun ifconfig 192.168.0.1 192.168.0.2 secret static.key push "redirect-gateway def1" Client OpenVPN Config: remote xxx.xxx.com dev tun ifconfig 192.168.0.2 192.168.0.1 secret static.key I'm not an expert with adding routes...etc. I would be grateful if someone could let me know how to add this entry in my server/client route table. EDIT: Output from the client's netstat -rnv IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 10.0.8.1 10.0.8.2 20 10.0.8.0 255.255.255.252 On-link 10.0.8.2 276 10.0.8.2 255.255.255.255 On-link 10.0.8.2 276 10.0.8.3 255.255.255.255 On-link 10.0.8.2 276 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 192.168.0.0 255.255.255.252 On-link 192.168.0.2 286 192.168.0.2 255.255.255.255 On-link 192.168.0.2 286 192.168.0.3 255.255.255.255 On-link 192.168.0.2 286 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 10.0.8.2 276 224.0.0.0 240.0.0.0 On-link 192.168.0.2 286 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 10.0.8.2 276 255.255.255.255 255.255.255.255 On-link 192.168.0.2 286 ===========================================================================
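    For what it's worth, push is a server-mode directive, so with a static-key point-to-point setup like this one it is normally ignored; a common approach (a sketch under that assumption, not a verified config) is to put the redirect directly into the client config instead:

        # client side (Windows 7) - static key setup, so no "push" from the server
        remote xxx.xxx.com
        dev tun
        ifconfig 192.168.0.2 192.168.0.1
        secret static.key
        # route all traffic through the tunnel; def1 adds 0.0.0.0/1 and 128.0.0.0/1
        # instead of replacing the default route, and OpenVPN first adds a host
        # route to the VPN server via the original gateway
        redirect-gateway def1

    The server additionally needs to NAT or route the client's traffic out of its Internet-facing interface (10.242.69.107) for browsing through the tunnel to actually work.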

    Read the article

  • Authenticate users with Zimbra LDAP Server from other CentOS clients

    - by efesaid
    I'm wondering how I can integrate my database, web, backup, etc. CentOS servers with the Zimbra LDAP server. Does it require more advanced configuration than standard LDAP authentication? My Zimbra server version is [zimbra@zimbra ~]$ zmcontrol -v Release 8.0.5_GA_5839.RHEL6_64_20130910123908 RHEL6_64 FOSS edition. My LDAP server status is [zimbra@ldap ~]$ zmcontrol status Host ldap.domain.com ldap Running snmp Running stats Running zmconfigd Running I have already installed the nss-pam-ldapd packages on my servers. [root@www]# rpm -qa | grep ldap nss-pam-ldapd-0.7.5-18.2.el6_4.x86_64 apr-util-ldap-1.3.9-3.el6_0.1.x86_64 pam_ldap-185-11.el6.x86_64 openldap-2.4.23-32.el6_4.1.x86_64 My /etc/nslcd.conf is [root@www]# tail -n 7 /etc/nslcd.conf uid nslcd gid ldap # This comment prevents repeated auto-migration of settings. uri ldap://ldap.domain.com base dc=domain,dc=com binddn uid=zimbra,cn=admins,cn=zimbra bindpw **pass** ssl no tls_cacertdir /etc/openldap/cacerts When I run [root@www ~]# id username id: username: No such user But I am sure that the username user exists on the LDAP server. EDIT: When I run an ldapsearch command I get all the results, with credentials and DNs. [root@www ~]# ldapsearch -H ldap://ldap.domain.com:389 -w **pass** -D uid=zimbra,cn=admins,cn=zimbra -x 'objectclass=*' # extended LDIF # # LDAPv3 # base <dc=domain,dc=com> (default) with scope subtree # filter: objectclass=* # requesting: ALL # # domain.com dn: dc=domain,dc=com zimbraDomainType: local zimbraDomainStatus: active . . .
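    One thing worth checking (an assumption about the likely cause, not a confirmed diagnosis): nss-pam-ldapd can only resolve accounts that carry posixAccount attributes such as uidNumber, gidNumber and homeDirectory, and stock Zimbra accounts do not necessarily have them; a quick query along these lines shows whether they are present for the user in question:

        ldapsearch -H ldap://ldap.domain.com:389 -x \
          -D uid=zimbra,cn=admins,cn=zimbra -w '**pass**' \
          -b dc=domain,dc=com '(uid=username)' objectClass uidNumber gidNumber homeDirectory

    If those attributes are missing, id will keep returning "No such user" no matter how nslcd.conf is configured.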

    Read the article

  • Blank desktop when logging into a Virtualized Windows 2008 Terminal Server?

    - by Rachel
    We have a virtualized Terminal Server running Windows Server 2008. When the admin user logs in, everything is fine. When anyone else logs in, their desktop and start menu are blank (they have the taskbar, start button, and quick launch links though). If I go into Windows Explorer, I can see icons in their desktop folder (although the icon image is missing and it is just displaying the generic icon), but can't run any of them. If I log in with a user that is part of the Administrator group in Active Directory, I get the same behavior except I can launch the programs found in the Desktop folder of Windows Explorer. I cannot drag these items out onto the desktop though - the cursor doesn't allow me to drop them. From Task Manager I can see that explorer.exe and dwm.exe are both running. The Authenticated Users and Interactive groups are both under the Users group, along with our network's Domain Users group. Does anyone know why this is happening and how I can fix it? Also, not sure if it's related, but about 1 in every 3 logins just hangs at a completely blank blue screen (no start button, taskbar, or quick launch buttons) and needs to be disconnected / reset by an admin. Edit I just noticed that the desktop itself doesn't even respond to click events. It's almost like the entire desktop is missing. At first I thought it didn't respond to right-click events because of an AD policy, but then I noticed that if you open the Start Menu and click the desktop, the start menu doesn't close like it should.

    Read the article

  • Requiring SSH-key Login From Specific IP Ranges

    - by Sean M
    I need to be able to access my server (Ubuntu 8.04 LTS) from remote sites, but I'd like to worry a bit less about password complexity. Thus, I'd like to require that SSH keys be used for login instead of name/password. However, I still have a lot to learn about security, and having already badly broken a test box when I was trying to set this up, I'm acutely aware of the chance of screwing myself while trying to accomplish this. So I have a second goal: I'd like to require that certain IP ranges (e.g. 10.0.0.0/8) may log in with name/password, but everyone else must use an SSH key to log in. How can I satisfy both of these goals? There already exists a very similar question here, but I can't quite figure out how to get to what I want from that information. Current tactic: reading through the PAM documentation (pam_access looks promising) and looking at /etc/ssh/sshd_config. Edit: Alternatively, is there a way to specify that certain users must authenticate with SSH keys, and others may authenticate with name/password? Solution that's currently working: # Globally deny logon via password, only allow SSH-key login. PasswordAuthentication no # But allow connections from the LAN to use passwords. Match Address 192.168.*.* PasswordAuthentication yes The Match Address block can also usefully be a Match User block, answering my secondary question. For now I'm just chalking the failure to parse CIDR addresses up to a quirk of my install, and resolving to try again when I go to Ubuntu 10.04 not too long from now. PAM turns out not to be necessary.
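    On OpenSSH releases newer than the one shipped with 8.04, the same idea also works with CIDR notation, and a per-user exception can sit alongside it; a sketch (the addresses and username are examples only):

        # /etc/ssh/sshd_config
        PasswordAuthentication no            # keys only by default

        Match Address 10.0.0.0/8,192.168.0.0/16
            PasswordAuthentication yes       # trusted ranges may use passwords

        Match User emergency-admin
            PasswordAuthentication yes       # a specific account may use a password

    Directives after a Match line apply until the next Match block, so the order of the blocks matters.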

    Read the article

  • Free or Open Solution for Storing and Charting CSV data

    - by rrrfusco
    I'm presently storing CSV files, combining them, opening them in OpenOffice, creating pivot tables and then generating charts from the spreadsheet. I've looked at OOBase, but appending CSV files to Base is clunky for some reason. SQLite seems like a good database solution, but I haven't found a good charting program that connects to it with ease. Although OpenOffice (or LibreOffice) maintains the references and allows you to update the information, this process is far from efficient. There are too many steps and it seems one program should handle all of these tasks. A better program would be more intuitive, allow you to simply add inserts into a database, and include an interface for standard charting settings. EDIT Simplest Automated Analysis and Chart Generation Tool? The above answer references Spotfire and Tableau, which have free 14- and 30-day trials respectively. Each program is nicely streamlined and designed. I'm looking for a program between this quality and LibreOffice. Can you recommend a better open or free desktop solution for Windows?
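    If staying with free tools, one lightweight pipeline (a sketch only; file, table and column names are invented for illustration) is to load the CSVs into SQLite and chart query output with gnuplot, both of which run on Windows:

        # load a CSV file into SQLite (creates data.db if it does not exist)
        printf '.mode csv\n.import sales.csv sales\n' | sqlite3 data.db

        # export an aggregate query and chart it
        sqlite3 -csv data.db "SELECT month, SUM(total) FROM sales GROUP BY month;" > monthly.csv
        gnuplot -e "set datafile separator ','; set terminal png; set output 'chart.png'; plot 'monthly.csv' using 2:xtic(1) with boxes"

    This keeps the data in one database that new CSVs can be appended to, with the charts regenerated by a single script instead of a manual pivot-table step.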

    Read the article

  • Send mail on event log error trigger safe check frequency

    - by Zeb Rawnsley
    I want to use powershell to alert me when an error occurs in the event viewer on my new Win2k12 Standard Server, I was thinking I could have the script execute every 10mins but don't want to put any strain on the server just for event log checking, here is the powershell script I want to use: $SystemErrors = Get-EventLog System | Where-Object { $_.EntryType -eq "Error" } If ($SystemErrors.Length -gt 0) { Send-MailMessage -To "[email protected]" -From $env:COMPUTERNAME + @company.co.nz" -Subject $env:COMPUTERNAME + " System Errors" -SmtpServer "smtp.company.co.nz" -Priority High } What is a safe frequency I can run this script at without hurting my server? Hardware: Intel Xeon E5410 @ 2.33GHz x2 32GB RAM 3x 7200RPM S-ATA 1TB (2x RAID1) Edit: With the help of Mathias R. Jessen's answer, I ended up attaching an event to the application & system log with the following script: Param( [string]$LogName ) $ComputerName = $env:COMPUTERNAME; $To = "[email protected]" $From = $ComputerName + "@company.co.nz"; $Subject = $ComputerName + " " + $LogName + " Error"; $SmtpServer = "smtp.company.co.nz"; $AppErrorEvent = Get-EventLog $LogName -Newest 1 | Where-Object { $_.EntryType -eq "Error" }; If ($AppErrorEvent.Length -eq 1) { $AppErrorEventString = $AppErrorEvent | Format-List | Out-String; Send-MailMessage -To $To -From $From -Subject $Subject -Body $AppErrorEventString -SmtpServer $SmtpServer -Priority High; };

    Read the article

  • SOGo installation on Mail Server

    - by i.h4d35
    We run a normal mail server on cPanel for web-based email. We've just got a request to add calendar, address book and task functions; mobile capabilities (I'm guessing access via a mobile client/app); public folders, etc. On the client side, we have some people using webmail, some use MS Outlook, and some others use Mozilla Thunderbird. Having looked around, I zeroed in on SOGo, Citadel and Kolab as options for this. I read through SOGo's official install guide and also checked here and here. However, I see most of the HowTos call for installation of MySQL/PgSQL, LDAP, Samba, etc. While I can manage the installation of Samba (if required), I have no idea if installing LDAP, MySQL, etc. is really required. Also, any guidance as to how to install on a regular mail server would be appreciated. Sorry if this sounds vague. If any more information is required, I'll be happy to give it. Thanks in advance. Edit: The server in question has always been managed via cPanel (to install PHP, MySQL, configure DNS, etc.), so I am confused about whether I really need LDAP.

    Read the article

  • How to setup Calendar permissions for group to group

    - by Sorean
    I've been scouring the internet and so far have only been able to find examples of how to grant calendar permissions from one user to another using the Add-MailboxFolderPermission command. This is great and it was okay for when they only had a handful of users. But going forward it's not realistic to have to set individual calendar permissions for all calendars for each new user. Layout of security groups already created. Each group has a few people assigned to it. Techs Managers Admin What I am trying to accomplish is set it up so that anyone that belongs to the Managers group can view the calendars of the Tech group. Admins can view and edit the Tech group. I've found an example of adding just the security group name but I get an error of: [PS] C:\Windows\system32add-MailboxFolderPermission -Identity Techs:\Calendar -User "Admin" -AccessRights Owner The user "Admin" is either not valid SMTP address, or there is no matching information. + CategoryInfo : NotSpecified: (0:Int32) [Add-MailboxFolderPermission], InvalidExternalUserIdException + FullyQualifiedErrorId : 39352699,Microsoft.Exchange.Management.StoreTasks.AddMailboxFolderPermission Am I creating groups wrong? Am I using the wrong commands? Any guidance would be greatly appreciated.

    Read the article

  • Why is my RapidSSL certificate chain not trusted on Ubuntu?

    - by olouv
    I have a website that works perfectly with Chrome and other browsers, but I get some errors with PHP in CLI mode, so I'm investigating it by running this: openssl s_client -showcerts -verify 32 -connect dev.carlipa-online.com:443 Quite surprisingly, my HTTPS appears untrusted with a Verify return code: 27 (certificate not trusted). Here is the raw output: verify depth is 32 CONNECTED(00000003) depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Global CA verify error:num=20:unable to get local issuer certificate verify return:1 depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Global CA verify error:num=27:certificate not trusted verify return:1 depth=1 C = US, O = "GeoTrust, Inc.", CN = RapidSSL CA verify return:1 depth=0 serialNumber = khKDXfnS0WtB8DgV0CAdsmWrXl-Ia9wZ, C = FR, O = *.carlipa-online.com, OU = GT44535187, OU = See www.rapidssl.com/resources/cps (c)12, OU = Domain Control Validated - RapidSSL(R), CN = *.carlipa-online.com verify return:1 So GeoTrust Global CA appears to be not trusted on the system (Ubuntu 11.10). I added Equifax_Secure_CA to try to solve this... but in this case I get Verify return code: 19 (self signed certificate in certificate chain)! Raw output: verify depth is 32 CONNECTED(00000003) depth=3 C = US, O = Equifax, OU = Equifax Secure Certificate Authority verify error:num=19:self signed certificate in certificate chain verify return:1 depth=3 C = US, O = Equifax, OU = Equifax Secure Certificate Authority verify return:1 depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Global CA verify return:1 depth=1 C = US, O = "GeoTrust, Inc.", CN = RapidSSL CA verify return:1 depth=0 serialNumber = khKDXfnS0WtB8DgV0CAdsmWrXl-Ia9wZ, C = FR, O = *.carlipa-online.com, OU = GT44535187, OU = See www.rapidssl.com/resources/cps (c)12, OU = Domain Control Validated - RapidSSL(R), CN = *.carlipa-online.com verify return:1 Edit Looks like my server does not trust/provide the Equifax Root CA; however, I do correctly have the file in /usr/share/ca-certificates/mozilla/Equifax...
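    Two things commonly explain this picture (stated as assumptions, not a diagnosis of this exact box): openssl s_client does not necessarily use the system CA bundle unless told where it is, and the web server itself has to send the RapidSSL/GeoTrust intermediate certificate. Re-running the check with an explicit CA path separates the two cases:

        # point s_client at the system CA store
        openssl s_client -connect dev.carlipa-online.com:443 -CApath /etc/ssl/certs

        # or at the bundle file Ubuntu builds from the ca-certificates package
        openssl s_client -connect dev.carlipa-online.com:443 -CAfile /etc/ssl/certs/ca-certificates.crt

    If the verify code drops to 0 with these options, the trust store is fine and it is the PHP/CLI context that needs pointing at the CA bundle; if it stays non-zero, the server is probably not sending the full chain.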

    Read the article

  • stunnel client uses improper SNI when talking to Apache

    - by Huckle
    I have stunnel listening on port 80 and acting as a client connecting to Apache listening on port 443. Configuration is below. What I'm finding is that if I attempt to connect to localhost:80 the connection is fine, but not if I connect to 127.0.0.1:80. When I check Apache's logs, they indicate that stunnel is using localhost as the SNI both times, but the HTTP request lists localhost in one case and 127.0.0.1 in the other. Is it possible to tell stunnel to either use whatever is in the HTTP request, or to somehow configure two clients, each with a different SNI value? stunnel.conf: debug = 7 options = NO_SSLv2 [xmlrpc-httpd] client = yes accept = 80 connect = 443 Apache error.log: [error] Hostname localhost provided via SNI and hostname 127.0.0.1 provided via HTTP are different Apache access.log: "GET / HTTP/1.1" 200 2138 "-" "Wget/1.13.4 (linux-gnu)" "GET / HTTP/1.1" 400 743 "-" "Wget/1.13.4 (linux-gnu)" wget: $wget -d localhost ---request begin--- GET / HTTP/1.1 User-Agent: Wget/1.13.4 (linux-gnu) Accept: */* Host: localhost Connection: Keep-Alive ---request end--- $wget -d 127.0.0.1 ---request begin--- GET / HTTP/1.1 User-Agent: Wget/1.13.4 (linux-gnu) Accept: */* Host: 127.0.0.1 Connection: Keep-Alive ---request end--- edit: Apache config - nothing out of the ordinary, it's just a virtual host listening on 443 <VirtualHost *:443>
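    stunnel cannot vary SNI per request - the SNI name is fixed in the TLS ClientHello, and stunnel does not parse the HTTP Host header - but in client mode each service section can carry its own sni value, so a hedged workaround is two listeners, one per hostname; a sketch (ports and names are illustrative, and the sni option needs a reasonably recent stunnel):

        [xmlrpc-localhost]
        client = yes
        accept = 127.0.0.1:8080
        connect = 443
        sni = localhost

        [xmlrpc-althost]
        client = yes
        accept = 127.0.0.1:8081
        connect = 443
        sni = althost.example.com

    Clients then pick the local port that matches the Host header they intend to send, so Apache's SNI/Host consistency check passes.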

    Read the article

  • Setting up dnsmasq for a local network

    - by WishCow
    A small group of developers and I have just moved to a new office, and I'd like to set up dnsmasq on our development server, so that when we deploy web apps there we don't have to edit our own hosts files. We have a router at 192.168.3.1 which we don't have access to. I figured I'd install a DNS server on the development box, and we would all record its IP as a secondary DNS server. Unfortunately I'm struggling to make this work. The name of the devel server is devbox, its IP is 192.168.3.99, and it's running the latest Ubuntu Server (Karmic). My computer is running Ubuntu Desktop (Karmic). What I'd like to achieve: let's say I have three websites, website1, website2, website3, running on the development box. I'd like to access them by the URLs: http://website1.devbox http://website2.devbox http://website3.devbox So I have configured Apache on the devel box, installed dnsmasq, and put the following lines into its hosts file: 192.168.3.99 website1.devbox 192.168.3.99 website2.devbox 192.168.3.99 website3.devbox and edited my own resolv.conf file to include the devel box as a nameserver: nameserver 192.168.3.99 It's working fine; I can access the sites. The problem is that it doesn't scale well. I'd like all the domains ending with .devbox forwarded to the development box, and this is what I'm struggling with. I have tried putting 192.168.3.99 devbox into the hosts file, and editing the line in dnsmasq.conf: # Add local-only domains here, queries in these domains are answered # from /etc/hosts or DHCP only. local=/devbox/ But I cannot get it working. If I try any URL that is not explicitly present in the development box's hosts file, the DNS lookup fails. Is the local directive for something else? Am I looking in the wrong place?
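    The directive that maps a whole domain to one address is address, not local (local only marks a domain as one that should never be forwarded upstream); a minimal dnsmasq.conf sketch for this setup:

        # answer every *.devbox query with the development server's address
        address=/devbox/192.168.3.99
        # and never forward .devbox queries to the upstream resolver
        local=/devbox/

    With that in place the per-site /etc/hosts entries on the devel box are no longer needed.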

    Read the article

  • WS2008 NTP - Using time.windows.com,0x9 - Time always skewed forwards

    - by David
    I have a domain controller configured to use time.windows.com (with 0x09 flags set). I've noticed that frequently the systems' clock is fast - it varies from 10 minutes to even 45 minutes. I always have to keep resetting the system date/time back to what it should be. When I run "w32tm /query /source" it tells me it's using time.windows.com, and obviously I trust Microsoft not to serve incorrect times, but why is my server's clock fast? EDIT: There are a few Time-Service events in the System log: Event ID: 142 Message: The time service has stopped advertising as a time source because the local clock is not synchronized. Event ID: 139 Message: The time service has started advertising as a time source. These two messages appear in pairs every hour or so. Event 142 appears 14 to 16 minutes after 139 appears. Going back a few months, these events appear: Event ID: 35 Message: The time service is now synchronizing the system time with the time source time.windows.com,0x9 (ntp.m|0x9|0.0.0.0:123-65.55.21.21:123). Event ID: 37 Message: The time provider NtpClient is currently receiving valid time data from time.windows.com,0x9 (ntp.m|0x9|0.0.0.0:123-65.55.21.21:123). Event ID: 47 Message: Time Provider NtpClient: No valid response has been received from manually configured peer time.windows.com,0x9 after 8 attempts to contact it. This peer will be discarded as a time source and NtpClient will attempt to discover a new peer with this DNS name. The error was: The time sample was rejected because: The peer is not synchronized, or it has been too long since the peer's last synchronization. These three events only appear once in the log, back in October.

    Read the article

  • Arch linux - strange behaviour after installing fglrx

    - by kosto
    I have a problem with drivers on Arch Linux. I installed Catalyst through the unofficial catalyst repo, as the wiki says: pacman -S catalyst catalyst-utils aticonfig --initial After this operation I rebooted the system. KDM loaded successfully, but when I tried to switch to a console (ctrl+alt+1/2/3) I saw only some strange dots, as if the pixels of the text were scattered across the whole screen. I was able to go back to KDM and enter the account details, though. This gave me a hang just before KDE loaded. Here's a video where I show the above actions. Does anybody know what caused the problem? I can still chroot to fix some issues. Thanks for your interest. http://glothriel.org/arch/arch_problem.ogg The same thing happens on GNOME/GDM; that's my second try at installing Catalyst on Arch. The open drivers drain the battery twice as fast. ___________EDIT_____________ OK, I found a solution, so I'm posting it in case someone else shares my problem. Catalyst does not support KMS, so you need to disable it from GRUB. You must know where your /etc and /boot partitions are mounted. If you have only one partition for / it's even simpler. Mount / on /mnt: mount /dev/sdaX /mnt where X is the number of the partition where your / is installed arch-chroot /mnt nano /etc/default/grub and add the line: GRUB_CMDLINE_LINUX="nomodeset" Save and quit, then run (this will delete your Windows GRUB configuration): grub-mkconfig -o /boot/grub/grub.cfg exit umount /mnt reboot

    Read the article

  • How to collect the performance data of a server during an unreachable/down period using Nagios?

    - by gsc-frank
    Sometimes services and hosts stop responding due to poor server performance. I mean, if for some reason (it could be a lot of concurrent access to services, an expensive backup running on the server, or whatever else consumes tons of server resources) a server's performance is badly degraded, the server may not be able to establish any "normal network communication" (without triggering whatever standard timeouts are defined for such communication). Knowing the host's performance data (CPU, memory, ...) if it is available during that period (the host is not down and, despite its performance degradation, still allows plugins to collect performance data) could be very useful for a sysadmin trying to determine what caused the problem, or at least whether the host's performance was good and played no part in the host/service going down. This problem could be solved using remote active checks (NRPE) or remote passive checks (NSCA) if such remote solutions could store (buffer) perf data to be sent to the central Nagios server when host performance or the network outage allows it. I have read the docs of both solutions and can't find any reference to such a buffering mechanism, nor to what happens in case NSCA can't reach the Nagios server. Any idea how to fill this gap in the data, which would be so useful for forensic analysis? EDIT: My question isn't about which tools I can use to debug perf problems or gather perf data for analysis; it is about how to collect (using Nagios) host perf data even during a network outage, for later analysis (a kind of forensic analysis). The idea is to integrate such data with Nagios graphers like pnp4nagios and NagiosGrapher. I know that I could install tools like Cacti on each of my hosts and have a kind of redundant performance data collection, but I really want to avoid that and try to cover all perf analysis requirements with one tool: Nagios

    Read the article

  • Ubuntu RAID 1 write errors

    - by Micah
    I have an Ubuntu server set up with two SATA drives in a RAID 1 configuration with mdadm. The machine is used to record raw video, which involves a lot of writing to the disk. Sometimes during video recording the computer will crash, with the following errors in kern.log: Mar 15 10:39:41 video kernel: [414501.629864] ata2.00: exception Emask 0x10 SAct 0x0 SErr 0x400100 action 0x6 Mar 15 10:39:41 video kernel: [414501.629870] ata2.00: BMDMA stat 0x26 Mar 15 10:39:41 video kernel: [414501.629875] ata2.00: SError: { UnrecovData Handshk } Mar 15 10:39:41 video kernel: [414501.629880] ata2.00: failed command: WRITE DMA EXT Mar 15 10:39:41 video kernel: [414501.629889] ata2.00: cmd 35/00:00:28:6d:f6/00:04:06:00:00/e0 tag 0 dma 524288 out Mar 15 10:39:41 video kernel: [414501.629891] res 51/84:b1:77:6e:f6/84:02:06:00:00/e0 Emask 0x30 (host bus error) Mar 15 10:39:41 video kernel: [414501.629896] ata2.00: status: { DRDY ERR } Mar 15 10:39:41 video kernel: [414501.629899] ata2.00: error: { ICRC ABRT } Mar 15 10:39:41 video kernel: [414501.629910] ata2.00: hard resetting link Mar 15 10:39:41 video kernel: [414501.973009] ata2.01: hard resetting link Mar 15 10:39:41 video kernel: [414502.482642] ata2.00: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Mar 15 10:39:41 video kernel: [414502.482658] ata2.01: SATA link down (SStatus 0 SControl 300) Mar 15 10:39:41 video kernel: [414502.546160] ata2.00: configured for UDMA/133 Mar 15 10:39:41 video kernel: [414502.546203] ata2: EH complete Is this the result of faulty drives? Is software RAID just not performant enough for data rates of ~15 MB/s, even with a quad-core i7? Thanks for your help. Edit: cat /proc/mdstat returns this: Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid1 sdb1[1] sda1[0] 976760768 blocks [2/2] [UU] unused devices: <none>
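    The ICRC / UnrecovData / Handshk errors usually point at the SATA link (cable, connector, port or power) rather than at the md layer; a quick hedged check is to look at each drive's CRC error counter and run a short self-test (assuming the smartmontools package is installed):

        # UDMA_CRC_Error_Count increments when data is corrupted on the cable
        smartctl -A /dev/sda | grep -i crc
        smartctl -A /dev/sdb | grep -i crc

        # short self-test on the drive behind ata2, then read the result log
        smartctl -t short /dev/sdb
        smartctl -l selftest /dev/sdb

    If the CRC counter keeps rising during recording, reseating or replacing the cable is a cheaper first step than blaming software RAID throughput (15 MB/s is far below what md RAID 1 can sustain).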

    Read the article

  • Add bookmarks to Delicious and Google Bookmarks at the same time

    - by BrianH
    I have used delicious.com (or back then, del.icio.us) to store my bookmarks for a long time now, and I love it. I was looking through some of my Google services, and realized they have a bookmarking service that integrates with your Google searches (I thought they had a bookmarking service before, but it went away? Maybe not). I like delicious just fine - I'm not interested in leaving. But I also like how my Google bookmarks are highlighted (and I'm guessing, brought to the top) in my search results so I can easily tell if I've bookmarked a site (kind of like the "promote up" feature). I can't even count the number of times I search for a site only to find I've been there months or years ago. If sites I've bookmarked in the past are highlighted in my search results, it makes it easier to pick which search result to go to. My question is around bookmarking tools: Is there a bookmarklet or Firefox addon that will let me save a bookmark to multiple services at the same time, in this case, Google and Delicious? Or maybe a service to sync my delicious bookmarks to Google bookmarks on a regular basis? I have used the Delicious addon since the beginning - it would just be nice to add a bookmark to multiple services with 1 addon. For that matter, it would be nice to add Evernote into the mix - click 1 button to save the page to Evernote, and bookmark the page in Google and delicious. EDIT on 7/30/2009 - Summary: A proposed solution is to use the Delicious addon and the GMarks addon to keep the 2 services in sync. I was not able to get the 2 addons to keep everything in sync, so it was also suggest to use the Google Toolbar with the Delicious addon to keep everything in sync. I personally have reservations with letting Google know about every single site I visit, I believe this solution will work, so I am accepting it as the answer. I still wish there was a solution that would let you post a bookmark/page to multiple services at the same time (delicious, google, evernote, digg, diigo, etc.). Thanks!

    Read the article

  • Acer Aspire One getting extremely hot

    - by ascom
    I have an Acer Aspire One D250-1197. I really better type fast before it overheats again... For some reason, I'm having a problem with heat on my netbook only when I run Joli OS (Ubuntu 9.10 LTS?). When I leave it idle, with nothing running (other than the regular Joli OS desktop and a couple of doing-nothing terminals), heat slowly builds up to the point where the netbook is burning hot to the touch. I have never had this problem when running Windows 7 Starter (even though it gives me plenty of other headaches). It seems that the fan is spinning, but not fast enough to keep up with the heat buildup. Is there something wrong with the fan drivers? The computer doesn't seem to recognize that it is overheating. What can I do to solve this problem (other than shut it off or use Windows)? I'm currently on the wrong side of Earth (I mean, on vacation), so I just need a temporary fix, such as a driver I can install. Also, I have to use Linux, because I have to share out the wired connection in hotels wirelessly to the iPhones. EDIT: I'm switching from Joli OS to a more "proper" and up to date distribution (Xubuntu 13.04). I'll see if it still has the heat problem and try @nod's cpufreq idea.
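    As a stop-gap, pinning a conservative CPU frequency governor sometimes keeps the heat down; a sketch of the cpufreq idea mentioned above (assuming the cpufrequtils package and working frequency scaling on the Atom):

        sudo apt-get install cpufrequtils
        # show supported governors and the current frequency
        cpufreq-info
        # put every logical core on the ondemand (or powersave) governor
        for c in 0 1; do sudo cpufreq-set -c "$c" -g ondemand; done

    Checking whether a fan speed is being read at all (sensors, from the lm-sensors package) is worth doing at the same time, since a fan that never ramps up points at a driver rather than a load problem.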

    Read the article

  • Apache+PHP on Windows Server 2008

    - by Álvaro G. Vicario
    I've installed Apache/2.2 and PHP/5.3 lots of times under Windows XP, Windows Vista and Windows Server 2003. The official *.msi installers work fine and configure everything. Now I need to install them into a Windows Server 2008 R2 Standard 64-bit box and I'm facing nothing but problems: There are no official 64 bit binaries for Apache and no binaries at all for PHP (official or third-party). It's alright, I'll do with good 32 bits, but it's kind of surprising. Official documentation is vague, generic and completely unaware of UAC or any recent Windows security feature. The PHP installer is unable to configure mod_php and the Apache installer is unable to configure... well, Apache. After three hours I've finally reached the point where I'm installing everything in the root folder and assigning full control access to all users in all files and directories and all I've got is a PHP-less Apache server that's able to serve static pages. So I guess it's time to stop and think. My question is: Has anyone installed an Apache+PHP production server under Windows Server 2008 in a serious, secure and reliable way and documented the whole process? Or should I just find a bundle like XAMPP and the like that requires no installation? === EDIT === I've installed Xampp Lite 1.7.3 and everything was working in 5 minutes. I'd still like to find some documentation about installing the original packages: XAMPP installs tons of stuff I don't need and offers no tool to enable and disable PHP extensions.

    Read the article

  • Serving images from another hostname vs Apache overload for the rewrites

    - by luison
    We are trying to further improve the speed of some sites with older HTML, partly in order to obtain better SEO results. We have now applied some minification measures, combined HTML, CSS, etc. We use a small virtualized infrastructure and we've always wanted to use a light + standard HTTP server configuration, so the first one can serve images and static content while the other one handles PHP, rewrites, etc. We can easily do that now with a VM using the same files and vhost configs (bind mounts) as Apache, but with hardly any modules loaded. This means the light httpd will have a smaller footprint, which would allow us to serve more and quicker, have more MinSpareServers running, etc. So, as browsers also benefit from loading static content from different hostnames, we've thought about building a rewrite rule on our main server (main.com) to "redirect" all images and CSS (*.jpg, *.gif, *.css, etc.) to the same paths at, say, cdn.main.com, thus allowing the browser to open more connections. The question is, assuming we already have a very complex rewrite ruleset (we manually manipulate many old URLs for SEO), will it be worth it? I mean, will the additional load on main's Apache from having to redirect main.com/image.jpg (I understand we'll have to do a 301) to cdn.main.com/image.jpg, plus cdn.main.com then having to serve it, be larger than the gain we would be achieving in the browser? Could the excess of 301s for all the images on a page be penalized by Google? How do large companies work this out - does the original code already include images linked from the CDN with absolute paths? EDIT Just to clarify, our concern does not have much to do with server performance or bandwidth. We could obviously employ an external CDN server, but we have plenty of CPU and bandwidth. Our concern is how to have "old" sites with plenty of semi-static HTML content benefit from splitting connections for images and static content via Apache, without having to change the HTML to absolute paths (i.e. image.jpg to cdn.main.com/image.jpg happening on the server, not in the code).
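    A minimal sketch of the kind of rule being discussed (hostnames are placeholders, and note that every image then costs a 301 round trip plus the real fetch, which is exactly the overhead in question):

        # main.com vhost: send static files to the cdn hostname
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^main\.com$ [NC]
        RewriteRule ^/?(.+\.(?:jpe?g|gif|png|css|js))$ http://cdn.main.com/$1 [R=301,L]

    The common large-site answer to the closing question is to rewrite the HTML itself, either at publish time or with an output filter such as mod_substitute or mod_pagespeed, so pages reference cdn.main.com directly and no per-image redirect is needed.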

    Read the article
