Search Results

Search found 24931 results on 998 pages for 'information visualization'.

  • BIOS upgrade lowers CPU temperature

    - by N.N.
    Setup: I've got a system with an Asus P8Z68-V PRO motherboard and an Intel Core i7-2600K CPU running at stock speed (no overclocking), which I cool with a Noctua NH-U12P. On the heatsink I've got the two included fans connected via the included Low-Noise Adapters (L.N.A.), 1100 RPM, 16.9 dB(A). In the BIOS settings I've set the CPU and chassis fan profile to silent.

    Issue: Yesterday I upgraded from BIOS version 0501 to 0606. After the upgrade I checked the temperatures in the BIOS monitor and was surprised to see that the CPU temperature was only ~30°C. Before the upgrade the CPU temperature was ~50°C with the same BIOS settings (see the following heading for details on temperatures). How can this be? It seems a bit odd that a BIOS upgrade can lower the CPU temperature by 20°C, and it also seems odd that the CPU temperature is lower than the chassis temperature.

    Temperatures: When I've checked temperatures the room temperature has been ~23°C. I haven't changed the placement of the computer, the hardware, or the cooling setup between BIOS versions.

    BIOS version 0501 (BIOS monitor): CPU ~50°C, chassis ~33°C. I haven't got any temperature measurements from lm-sensors or the like for version 0501 because I only discovered the issue after upgrading to version 0606, and the BIOS updater utility won't let me downgrade to version 0501 (it says "outdated image" when I try to load version 0501).

    BIOS version 0606 (BIOS monitor): CPU ~30°C, chassis ~33°C. lm-sensors in Ubuntu 11.04 Desktop 64-bit (sudo sensors after an uptime of 4 h 52 min and a load average of 0.22, 0.18, 0.15):

      coretemp-isa-0000  Adapter: ISA adapter  Core 0: +32.0°C (high = +80.0°C, crit = +98.0°C)
      coretemp-isa-0001  Adapter: ISA adapter  Core 1: +35.0°C (high = +80.0°C, crit = +98.0°C)
      coretemp-isa-0002  Adapter: ISA adapter  Core 2: +29.0°C (high = +80.0°C, crit = +98.0°C)
      coretemp-isa-0003  Adapter: ISA adapter  Core 3: +36.0°C (high = +80.0°C, crit = +98.0°C)

    The BIOS monitor temperatures were checked directly after the lm-sensors temperatures were checked.

    BIOS versions 0706, 0801, 1101 and 3203: I get the same kind of temperatures, both in the BIOS monitor and with lm-sensors, in BIOS versions 0706, 0801, 1101 and 3203 as in 0606.

    Information from Asus: The 0606 changelog mentions nothing explicitly about CPU temperature (but item 3, as indicated by sidran32, might affect temperatures):

      P8Z68-V PRO 0606 BIOS with IRST 10.6.0.1002
      1. Enable the support of Intel Rapid Storage Technology version 10.6.0.1002 Release
      2. Improve DRAM compatibility
      3. Improve System stability
      4. Improve compatibility with some Raid card model
      5. Increase IGD share memory size to 512MB

    However the following FAQ might give a hint:

      FAQ: I find that the CPU temperature reading in BIOS is about 10~20 degrees centigrade hotter than the reading in OS. Is it normal?
      Solution: That is normal, as BIOS does not send the idle command to the CPU, making most of the power saving features useless. You should be getting a similar reading if you disable EIST/C1E/CPU C3 Report/CPU C6 Report in BIOS.
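
    One way to sanity-check the FAQ's explanation from the OS side -- a sketch, assuming a recent Linux kernel with the cpuidle and coretemp drivers loaded -- is to compare which C-states the kernel can actually enter against the idle temperatures, before and after toggling EIST/C1E/C3/C6 in the BIOS:

      # list the C-states available to the kernel; if only POLL/C1 show up,
      # the BIOS power-saving options are effectively disabled
      cat /sys/devices/system/cpu/cpu0/cpuidle/state*/name
      # watch core temperatures settle while the system idles
      watch -n 1 sensors

    If deeper C-states appear only on the newer BIOS, that alone would account for the ~20°C idle difference.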

    Read the article

  • Gnome Install Error (1) [closed]

    - by Guy1984
    I'm trying to install Gnome on my Ubuntu 12.04 P.Pangolin and getting the following errors:

      root@***:~# sudo apt-get install gnome-core gnome-session-fallback
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      gnome-core is already the newest version.
      gnome-session-fallback is already the newest version.
      0 upgraded, 0 newly installed, 0 to remove and 4 not upgraded.
      5 not fully installed or removed.
      After this operation, 0 B of additional disk space will be used.
      Do you want to continue [Y/n]? y
      Setting up bluez (4.98-2ubuntu7) ...
      start: Job failed to start
      invoke-rc.d: initscript bluetooth, action "start" failed.
      dpkg: error processing bluez (--configure):
       subprocess installed post-installation script returned error exit status 1
      dpkg: dependency problems prevent configuration of gnome-bluetooth:
       gnome-bluetooth depends on bluez (>= 4.36); however:
        Package bluez is not configured yet.
      dpkg: error processing gnome-bluetooth (--configure):
       dependency problems - leaving unconfigured
      dpkg: dependency problems prevent configuration of gnome-shell:
       gnome-shell depends on gnome-bluetooth (>= 3.0.0); however:
        Package gnome-bluetooth is not configured yet.
      dpkg: error processing gnome-shell (--configure):
       dependency problems - leaving unconfigured
      dpkg: dependency problems prevent configuration of gnome-user-share:
       gnome-user-share depends on gnome-bluetooth; however:
        Package gnome-bluetooth is not configured yet.
      dpkg: error processing gnome-user-share (--configure):
       dependency problems - leaving unconfigured
      dpkg: dependency problems prevent configuration of gnome-core:
       gnome-core depends on gnome-bluetooth (>= 3.0); however:
        Package gnome-bluetooth is not configured yet.
       gnome-core depends on gnome-shell (>= 3.0); however:
        Package gnome-shell is not configured yet.
       gnome-core depends on gnome-user-share (>= 3.0); however:
        Package gnome-user-share is not configured yet.
      dpkg: error processing gnome-core (--configure):
       dependency problems - leaving unconfigured
      No apport report written because the error message indicates its a followup error from a previous failure.
      No apport report written because the error message indicates its a followup error from a previous failure.
      No apport report written because MaxReports is reached already
      No apport report written because MaxReports is reached already
      Errors were encountered while processing:
       bluez
       gnome-bluetooth
       gnome-shell
       gnome-user-share
       gnome-core
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    (The apport messages were interleaved mid-line in the console output; they have been untangled above.) Any thoughts?
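
    Everything downstream of the first error here hinges on the bluez post-install script failing to start the bluetooth service; the GNOME packages are only unconfigured because of that. A sketch of how one might debug and retry it on 12.04 (upstart job and package names taken from the transcript above):

      # see why the upstart bluetooth job won't start
      sudo start bluetooth
      tail -n 50 /var/log/syslog
      # once bluetooth starts (or its init job is fixed), let dpkg finish
      # configuring the half-installed packages
      sudo dpkg --configure -a
      sudo apt-get install -f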

    Read the article

  • Migrate from MySQL to PostgreSQL on Linux (Kubuntu)

    - by Dave Jarvis
    Storyline: Trying to migrate a database from MySQL to PostgreSQL. All the documentation I have read covers, in great detail, how to migrate the structure. I have found very little documentation on migrating the data. The schema has 13 tables (which have been migrated successfully) and 9 GB of data. MySQL version: 5.1.x. PostgreSQL version: 8.4.x. I want to use the R programming language to analyze the data using SQL select statements; PostgreSQL has PL/R, but MySQL has nothing (as far as I can tell).

    A long time ago in a galaxy far, far away... Create the database location (/var has insufficient space; I also dislike having the PostgreSQL version number everywhere -- upgrading would break scripts!):

      sudo mkdir -p /home/postgres/main
      sudo cp -Rp /var/lib/postgresql/8.4/main /home/postgres
      sudo chown -R postgres.postgres /home/postgres
      sudo chmod -R 700 /home/postgres
      sudo usermod -d /home/postgres/ postgres

    All good to here. Next, restart the server and configure the database using these installation instructions:

      sudo apt-get install postgresql pgadmin3
      sudo /etc/init.d/postgresql-8.4 stop
      sudo vi /etc/postgresql/8.4/main/postgresql.conf   # change data_directory to /home/postgres/main
      sudo /etc/init.d/postgresql-8.4 start
      sudo -u postgres psql postgres
      \password postgres
      sudo -u postgres createdb climate
      pgadmin3

    Use pgadmin3 to configure the database and create a schema.

    A New Hope: The episode began in a remote shell known as bash, with both databases running, and the installation of a command with a most unusual logo: SQL Fairy.

      perl Makefile.PL
      sudo make install
      sudo apt-get install perl-doc   # strangely, it is not called perldoc
      perldoc SQL::Translator::Manual

    Extract a PostgreSQL-friendly DDL and all the MySQL data:

      sqlt -f DBI --dsn dbi:mysql:climate --db-user user --db-password password -t PostgreSQL > climate-pg-ddl.sql
      mysqldump --skip-add-locks --complete-insert --no-create-db --no-create-info --quick --result-file="climate-my.sql" --databases climate --skip-comments -u root -p

    The Database Strikes Back: Recreate the structure in PostgreSQL as follows: switch to pgadmin3, click the Execute arbitrary SQL queries icon, open climate-pg-ddl.sql, search for TABLE " and replace it with TABLE climate." (inserting the schema name climate), search for on " and replace it with on climate." (inserting the schema name climate), then press F5 to execute. This results in: Query returned successfully with no result in 122 ms.

    Replies of the Jedi: At this point I am stumped. Where do I go from here (what are the steps) to convert climate-my.sql to climate-pg.sql so that it can be executed against PostgreSQL? How do I make sure the indexes are copied over correctly (to maintain referential integrity; I don't have constraints at the moment, to ease the transition)? How do I ensure that adding new rows in PostgreSQL will start enumerating from the index of the last row inserted (and not conflict with an existing primary key from the sequence)?

    Resources: A fair bit of information was needed to get this far:
      https://help.ubuntu.com/community/PostgreSQL
      http://articles.sitepoint.com/article/site-mysql-postgresql-1
      http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL#MySQL
      http://pgfoundry.org/frs/shownotes.php?release_id=810
      http://sqlfairy.sourceforge.net/

    Thank you!
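
    For the data step, one common approach -- a sketch only, not tested against this schema, and the table/column names below are invented for illustration -- is to re-dump the data in mysqldump's PostgreSQL-compatible dialect, load it with psql, and then bump each sequence past the imported keys so new inserts don't collide:

      # re-dump only the data, in a more PostgreSQL-friendly dialect
      mysqldump --compatible=postgresql --no-create-info --complete-insert \
        --skip-add-locks --skip-comments -u root -p climate > climate-pg-data.sql
      # load it (expect to hand-fix escaping and type quirks)
      psql -d climate -f climate-pg-data.sql
      # for each table with a serial key, advance the sequence past MAX(id)
      psql -d climate -c "SELECT setval(pg_get_serial_sequence('climate.stations','id'), (SELECT MAX(id) FROM climate.stations));"

    The setval() call is the standard answer to the third question above: once run, the sequence resumes from the highest imported primary key.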

    Read the article

  • SharePoint threw "Unknown SQL Exception 206 occured." Anyone familiar with this?

    - by dalehhirt
    Our SharePoint instance threw the following errors when attempting to access data through a Content Query Tool (the ULS continuation lines, marked with *, have been rejoined to their parent entries):

      04/02/2010 10:45:06.12 w3wp.exe (0x062C) 0x1734 Windows SharePoint Services Database 5586 Critical
      Unknown SQL Exception 206 occured. Additional error information from SQL Server is included below.
      Operand type clash: uniqueidentifier is incompatible with datetime

      04/02/2010 10:45:06.25 w3wp.exe (0x062C) 0x1734 Office Server Office Server General 900n Critical
      A runtime exception was detected. Details follow.
      Message: Operand type clash: uniqueidentifier is incompatible with datetime
      Techinal Details: System.Data.SqlClient.SqlException: Operand type clash: uniqueidentifier is incompatible with datetime
        at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
        at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
        at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
        at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
        at System.Data.SqlClient.SqlDataReader.ConsumeMetaData(...)
        at System.Data.SqlClient.SqlDataReader.get_MetaData()
        at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
        at System.Data.SqlClient.SqlC...

      04/02/2010 10:45:06.25 w3wp.exe (0x062C) 0x1734 CMS Publishing 8vyd Exception
      (Watson Reporting Cancelled) System.Data.SqlClient.SqlException: Operand type clash: uniqueidentifier is incompatible with datetime
        at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
        at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
        at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
        at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
        at System.Data.SqlClient.SqlDataReader.ConsumeMetaData()
        at System.Data.SqlClient.SqlDataReader.get_MetaData()
        at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
        at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)
        at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)
        at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method)
        at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method)
        at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior)
        at Microsoft.SharePoint.Utilities.SqlSession.ExecuteReader(SqlCommand command, CommandBehavior behavior)
        at Microsoft.SharePoint.SPSqlClient.ExecuteQuery(Boolean& bSucceed)
        at Microsoft.SharePoint.Library.SPRequestInternalClass.CrossListQuery(String bstrUrl, String bstrXmlWebs, String bstrXmlLists, String bstrXmlQuery, ISP2DSafeArrayWriter pCallback, Object& pvarColumns)
        at Microsoft.SharePoint.Library.SPRequest.CrossListQuery(String bstrUrl, String bstrXmlWebs, String bstrXmlLists, String bstrXmlQuery, ISP2DSafeArrayWriter pCallback, Object& pvarColumns)
        at Microsoft.SharePoint.SPWeb.GetSiteData(SPSiteDataQuery query)
        at Microsoft.SharePoint.Publishing.CachedArea.GetCrossListQuery(SPSiteDataQuery query, SPWeb currentContext)
        at Microsoft.SharePoint.Publishing.CrossListQueryCache.GetSiteData(CachedArea cachedArea, SPWeb web, SPSiteDataQuery query)

      04/02/2010 10:45:06.25 w3wp.exe (0x062C) 0x1734 CMS Publishing 78ed Warning
      Error occured while processing a Content Query Web Part. Performing the following query '...ue" Type="Number"/ (query truncated in the log)

    The farm is MOSS 2007 with a SQL Server 2005 backend. Any ideas are welcomed. Dale

    Read the article

  • Lion built-in VPN client times out connecting to Windows 2003 PPTP server

    - by beporter
    I have a new iMac with OS X 10.7 (Lion) on it that refuses to connect to a PPTP-based VPN server (running Windows 2003 SBS). To shortcut past a lot of questions: there is a Dell workstation running Windows 7 on the same LAN as the Mac that is able to establish a PPTP connection to the same VPN server using the same credentials. That would seem to rule out any possible problems with the server, the port forwards on the server's firewall, the internet connection between the two, and the router local to the Dell and iMac.

    Here's a "verbose" dump of the PPP log from the iMac:

      Tue Sep 6 10:13:11 2011 : using link 0
      Tue Sep 6 10:13:11 2011 : Using interface ppp0
      Tue Sep 6 10:13:11 2011 : Connect: ppp0 socket[34:17]
      Tue Sep 6 10:13:11 2011 : sent [LCP ConfReq id=0x1 ]
      Tue Sep 6 10:13:11 2011 : PPTP port-mapping for en0, interfaceIndex: 0, Protocol: None, Private Port: 0, Public Address: 45f6f181, Public Port: 0, TTL: 0.
      Tue Sep 6 10:13:11 2011 : PPTP port-mapping for en0 inconsistent. is Connected: 1, Previous interface: 4, Current interface 0
      Tue Sep 6 10:13:11 2011 : PPTP port-mapping for en0 initialized. is Connected: 1, Previous publicAddress: (0), Current publicAddress 45f6f181
      Tue Sep 6 10:13:11 2011 : PPTP port-mapping for en0 fully initialized. Flagging up
      Tue Sep 6 10:13:14 2011 : sent [LCP ConfReq id=0x1 ]
      Tue Sep 6 10:13:17 2011 : sent [LCP ConfReq id=0x1 ]
      Tue Sep 6 10:13:20 2011 : sent [LCP ConfReq id=0x1 ]
      Tue Sep 6 10:13:23 2011 : sent [LCP ConfReq id=0x1 ]
      Tue Sep 6 10:13:26 2011 : sent [LCP ConfReq id=0x1 ]
      Tue Sep 6 10:13:29 2011 : sent [LCP ConfReq id=0x1 ]
      Tue Sep 6 10:13:32 2011 : sent [LCP ConfReq id=0x1 ]
      Tue Sep 6 10:13:35 2011 : sent [LCP ConfReq id=0x1 ]
      Tue Sep 6 10:13:38 2011 : sent [LCP ConfReq id=0x1 ]
      Tue Sep 6 10:13:41 2011 : LCP: timeout sending Config-Requests
      Tue Sep 6 10:13:41 2011 : Connection terminated.
      Tue Sep 6 10:13:41 2011 : PPTP disconnecting...
      Tue Sep 6 10:13:41 2011 : PPTP clearing port-mapping for en0
      Tue Sep 6 10:13:41 2011 : PPTP disconnected

    The error seems to centre on the line "LCP: timeout sending Config-Requests", but I haven't had any luck finding troubleshooting information for this. I've tried completely deleting the entire VPN "connection" from the Network prefpane and recreating it from scratch. I am certain the connection details are correct because they exactly match what successfully connects from the Win7 machine sitting next to the iMac. Any suggestions?
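
    An LCP Config-Request timeout with PPTP usually means the TCP 1723 control channel came up but the GRE-tunnelled data packets are not making the round trip. A sketch of how to verify that from the Mac while the client retries (the hostname is a placeholder):

      # control channel: should connect if port 1723 is reachable
      nc -vz vpn.example.com 1723
      # data channel: watch for GRE (IP protocol 47) packets in BOTH directions
      # during a connection attempt
      sudo tcpdump -n -i en0 'ip proto 47'

    Outbound-only GRE would point at something between the iMac and the server handling GRE differently for this host than for the Dell.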

    Read the article

  • netsh wlan add profile not importing encrypted passphrase

    - by sirlancelot
    I exported a wireless network connection profile from a Windows 7 machine correctly connected to a WiFi network with a WPA-TKIP passphrase. The exported xml file shows the correct settings and a keyMaterial node, which I can only guess is the encrypted passphrase. When I take the xml to another Windows 7 computer and import it using netsh wlan add profile filename="WiFi.xml", it correctly adds the profile's SSID and encryption type, but a balloon pops up saying that I need to enter the passphrase. Is there a way to import the passphrase along with all the other settings, or am I missing something about adding profiles? Here is the exported xml with personal information removed:

      <?xml version="1.0"?>
      <WLANProfile xmlns="http://www.microsoft.com/networking/WLAN/profile/v1">
        <name>[removed]</name>
        <SSIDConfig>
          <SSID>
            <hex>[removed]</hex>
            <name>[removed]</name>
          </SSID>
          <nonBroadcast>false</nonBroadcast>
        </SSIDConfig>
        <connectionType>ESS</connectionType>
        <connectionMode>auto</connectionMode>
        <autoSwitch>false</autoSwitch>
        <MSM>
          <security>
            <authEncryption>
              <authentication>WPAPSK</authentication>
              <encryption>TKIP</encryption>
              <useOneX>false</useOneX>
            </authEncryption>
            <sharedKey>
              <keyType>passPhrase</keyType>
              <protected>true</protected>
              <keyMaterial>[removed]</keyMaterial>
            </sharedKey>
          </security>
        </MSM>
      </WLANProfile>

    Any help or advice is appreciated. Thanks.

    Update: It seems if I export the settings using key=clear, the passphrase is stored in the file unprotected and I can import the file on another computer without issue. I've updated my question to reflect my findings.
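
    The behaviour makes sense once you know that a protected keyMaterial is encrypted with a machine-specific key (Windows DPAPI), so only the exporting PC can decrypt it. The key=clear workaround from the update, end to end (profile name and paths are placeholders, and the exported file name may include the interface name):

      rem on the source machine: export with the key in the clear
      netsh wlan export profile name="MyNetwork" key=clear folder=C:\temp
      rem on the target machine: import for all users of the PC
      netsh wlan add profile filename="C:\temp\MyNetwork.xml" user=all

    Since the passphrase is then plaintext in the file, it is worth deleting the xml after the import.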

    Read the article

  • My current iptable configuration doesn't work [on hold]

    - by Brad
    sudo chkconfig iptables off
    /etc/init.d/iptables on

      ### Clear/flush iptables
      sudo iptables -F
      sudo iptables -P INPUT ACCEPT
      sudo iptables -P OUTPUT ACCEPT
      sudo iptables -P FORWARD ACCEPT

      ### Allow SSH
      iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
      iptables -A OUTPUT -o eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT

      ### Allow YUM updates
      sudo iptables -A OUTPUT -o eth0 -p tcp --dport 80 --match owner --uid-owner 0 --state NEW,ESTABLISHED -j ACCEPT
      sudo iptables -A OUTPUT -o eth0 -p tcp --dport 443 --match owner --uid-owner 0 --state NEW,ESTABLISHED -j ACCEPT

      ### Add your rules from the link above, here
      # ftp,smtp,imap,http,https,pop3,imaps,pop3s
      sudo iptables -A INPUT -i eth0 -p tcp -m multiport --dports 21,25,143,80,443,110,993,995 -m state --state NEW,ESTABLISHED -j ACCEPT
      sudo iptables -A OUTPUT -o eth0 -p tcp -m multiport --sports 21,25,143,80,110,443,993,995 -m state --state NEW,ESTABLISHED -j ACCEPT

      ## allow dns
      sudo iptables -A OUTPUT -p udp -o eth0 --dport 53 -j ACCEPT && sudo iptables -A INPUT -p udp -i eth0 --sport 53 -j ACCEPT

      # handling pings
      sudo iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT && sudo iptables -A OUTPUT -p icmp --icmp-type echo-reply -j ACCEPT
      sudo iptables -A OUTPUT -p icmp --icmp-type echo-request -j ACCEPT && sudo iptables -A INPUT -p icmp --icmp-type echo-reply -j ACCEPT

      # manage ddos attacks
      sudo iptables -A INPUT -p tcp --dport 80 -m limit --limit 25/minute --limit-burst 100 -j ACCEPT

      ## Implement some logging so that we know what's getting dropped
      sudo iptables -N LOGGING
      sudo iptables -A INPUT -j LOGGING
      sudo iptables -A LOGGING -m limit --limit 2/min -j LOG --log-prefix "IPTables Packet Dropped: " --log-level 7
      sudo iptables -A LOGGING -j DROP

      # once a rule affects traffic then it is no longer managed
      # so if the traffic has not been accepted, block it
      sudo iptables -A INPUT -j DROP
      sudo iptables -I INPUT 1 -i lo -j ACCEPT
      sudo iptables -A OUTPUT -j DROP

      # allow only internal port forwarding
      sudo iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
      sudo iptables -P FORWARD DROP

      # create an iptables config file
      sudo iptables-save > /root/dsl.fw

      ### Append the following to the rc.local file
      sudo nano /etc/rc.local
      ####--- /sbin/iptables-restore < sudo /root/dsl.fw
      ####--- /etc/init.d/iptables save

      ## check to see if this setting is working great.
      sudo service iptables restart

      ## log out/in testing
      sudo chkconfig iptables on

    What is the problem with this setup? If I restart the server it doesn't allow me back in over SSH, and there may be a problem with yum. Original source of information: https://gist.github.com/Jonathonbyrd/1274837#file-instructions
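
    Three details in this script look like likely culprits: the YUM rules use --state without loading the state module (-m state), so those commands fail outright; "sudo iptables-save > /root/dsl.fw" runs the redirection as the unprivileged user, not root; and the rc.local line has a stray "sudo" inside the redirection. Also note that on CentOS/RHEL, "service iptables restart" reloads /etc/sysconfig/iptables, not /root/dsl.fw, so any rules never written there with "service iptables save" are lost on restart -- which would explain losing SSH after a reboot. A corrected sketch of just those lines:

      # --state is only valid once the state module is loaded with -m state
      sudo iptables -A OUTPUT -o eth0 -p tcp --dport 80 -m owner --uid-owner 0 \
        -m state --state NEW,ESTABLISHED -j ACCEPT
      # sudo does not apply to a shell redirection; run the whole save as root
      sudo sh -c 'iptables-save > /root/dsl.fw'
      # rc.local: plain restore, with no "sudo" inside the redirection
      /sbin/iptables-restore < /root/dsl.fw
      # or persist the rules where the init script expects them
      sudo service iptables save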

    Read the article

  • How to calculate CPU % based on raw CPU ticks in SNMP

    - by bjeanes
    According to http://net-snmp.sourceforge.net/docs/mibs/ucdavis.html#scalar_notcurrent, ssCpuUser, ssCpuSystem, ssCpuIdle, etc. are deprecated in favor of the raw variants (ssCpuRawUser, etc). The former values (which don't cover things like nice, wait, kernel, interrupt, etc.) returned a percentage value: "The percentage of CPU time spent processing user-level code, calculated over the last minute. This object has been deprecated in favour of 'ssCpuRawUser(50)', which can be used to calculate the same metric, but over any desired time period." The raw values return the "raw" number of ticks the CPU spent: "The number of 'ticks' (typically 1/100s) spent processing user-level code. On a multi-processor system, the 'ssCpuRaw*' counters are cumulative over all CPUs, so their sum will typically be N*100 (for N processors)."

    My question is: how do you turn the number of ticks into a percentage? That is, how do you know how many ticks there are per second (it's "typically" -- which implies not always -- 1/100s, which either means 1 tick every 100 seconds or that a tick represents 1/100th of a second)?

    I imagine you also need to know how many CPUs there are, or you need to fetch all the CPU values and add them together. I can't seem to find a MIB that gives an integer value for the number of CPUs, which makes the former route awkward. The latter route seems unreliable because some of the numbers overlap (sometimes). For example, ssCpuRawWait has the following warning: "This object will not be implemented on hosts where the underlying operating system does not measure this particular CPU metric. This time may also be included within the 'ssCpuRawSystem(52)' counter."

    Some help would be appreciated. Everywhere seems to just say that the % objects are deprecated because the value can be derived, but I haven't found anywhere that shows the official, standard way to perform this derivation. The second component is that these "ticks" seem to be cumulative instead of over some time period. How do I sample values over some time period?

    The ultimate information I want is: % of user, system, idle, nice (and ideally steal, though there doesn't seem to be a standard MIB for this) "currently" (over the last 1-60 s would probably be sufficient, with a preference for smaller time spans).
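
    The derivation everyone alludes to is simply: sample each raw counter twice, diff the samples, and divide each delta by the sum of all the deltas over the interval. Both the ticks-per-second rate and the CPU count cancel out of that ratio, so neither needs to be known. A sketch using the Net-SNMP command-line tools (host and community string are placeholders; the nice/wait/interrupt counters are omitted for brevity, so the percentages are approximate):

      #!/bin/sh
      HOST="-v2c -c public 192.0.2.1"
      get() { snmpget -Oqv $HOST UCD-SNMP-MIB::$1.0; }
      u1=$(get ssCpuRawUser); s1=$(get ssCpuRawSystem); i1=$(get ssCpuRawIdle)
      sleep 10
      u2=$(get ssCpuRawUser); s2=$(get ssCpuRawSystem); i2=$(get ssCpuRawIdle)
      du=$((u2 - u1)); ds=$((s2 - s1)); di=$((i2 - i1))
      total=$((du + ds + di))
      echo "user $((100 * du / total))%, system $((100 * ds / total))%, idle $((100 * di / total))%"

    For production use, add the remaining ssCpuRaw* counters that the agent implements to the total, and handle counter wrap on 32-bit Counter values.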

    Read the article

  • Installing gnome on Linode with Ubuntu 9.10 x64 - remote VNC/RDP

    - by Kieran Benton
    Hi, I'm a self-confessed Linux newbie, having lived and worked mostly within the Windows world for most of my life. I'm making the effort to try moving my virtual host from a Windows box to a Linode instance to try and better learn Linux, and one of the uses I occasionally have with my current Windows VPS is to RDP into it and browse the internet. I'm aware that this is probably not best practice (for either performance or security), and most of the time I will be learning from the shell, but I do occasionally need to boot into a GUI. Because of this, I'd like the ability within my Ubuntu installation on Linode to start/stop X Windows and Gnome at will after SSHing in (startx? gdm?), so I've tried:

      apt-get install ubuntu-desktop
      (reboot)
      startx

    But I've got an error that no amount of googling has helped me with so far, which I'm assuming is something to do with the fact the box is headless and X needs some more configuration that is beyond me at the moment:

      root@local:~# startx
      hostname: Unknown host
      xauth: creating new authority file /root/.Xauthority
      xauth: creating new authority file /root/.Xauthority
      xauth: (argv):1: bad display name "local.kieranbenton.com:0" in "list" command
      xauth: (stdin):1: bad display name "local.kieranbenton.com:0" in "add" command

      X.Org X Server 1.6.4
      Release Date: 2009-9-27
      X Protocol Version 11, Revision 0
      Build Operating System: Linux 2.6.24-23-server x86_64 Ubuntu
      Current Operating System: Linux local.kieranbenton.com 2.6.31.5-x86_64-linode9 #1 SMP Mon Oct 26 19:35:25 UTC 2009 x86_64
      Kernel command line: root=/dev/xvda xencons=tty console=tty1 console=hvc0 nosep nodevfs ramdisk_size=32768 ro
      Build Date: 26 October 2009 05:19:56PM
      xorg-server 2:1.6.4-2ubuntu4 (buildd@)
      Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.
      Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
      (==) Log file: "/var/log/Xorg.0.log", Time: Wed Dec 2 15:50:23 2009
      Primary device is not PCI
      (==) Using default built-in configuration (21 lines)
      (EE) open /dev/fb0: No such file or directory
      (EE) No devices detected.

      Fatal server error: no screens found

      Please consult the The X.Org Foundation support at http://wiki.x.org for help.
      Please also check the log file at "/var/log/Xorg.0.log" for additional information.
      ddxSigGiveUp: Closing log

    Can anyone give me any pointers as to how to go from here and get VNC/RDP set up? (RDP would be preferred.) Thanks.
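
    On a headless Xen VM there is no graphics device or framebuffer for Xorg to drive, which is exactly what the "(EE) open /dev/fb0" and "no screens found" lines are saying. One way around it -- a sketch, with package names as they were in Ubuntu 9.10 -- is to run a virtual X server inside a VNC session instead of startx:

      sudo apt-get install vnc4server
      vncserver :1 -geometry 1024x768 -depth 16   # prompts for a VNC password on first run
      # from the desktop machine, tunnel the VNC port over SSH, then
      # point a VNC client at localhost:5901
      ssh -L 5901:localhost:5901 root@local.kieranbenton.com

    To get a Gnome session rather than a bare window manager, ~/.vnc/xstartup can be edited to launch gnome-session; the VNC server can then be started and stopped at will (vncserver -kill :1), which matches the "GUI only when I need it" goal better than gdm would.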

    Read the article

  • Windows 7 is shutting down unexpectedly, according to the logs.

    - by dlamblin
    Here's a message from my eventvwr EventLog (Windows Logs, System):

      The previous system shutdown at 11:51:15 AM on 7/29/2009 was unexpected.

    This is funny, because I was wondering why the system shut down while I was playing Civilization IV full screen. Now I know. It was unexpected. Has anyone encountered and resolved this?

    A little background: I am running Windows 7 RC inside VMware Fusion 2 (updated a few months back) on a MacBook (bitterly, not Pro) aluminum-body. Windows 7 occasionally will shut down. This isn't a quick power-off; it's a shutdown where all the programs are exited, the system waits until they quit (and Civ4 doesn't prompt me to save), and it even installed Windows Updates before restarting. And yes, it is restarting right after the shutdown. Because I run a game in full-screen mode, I do not notice any dialog with a countdown timer or anything like that that might be a warning.

    As I have iStat on my dashboard widgets, I can see about 8 temperature monitors. I have seen the CPU get up to 74C before, but during the shutdown, though it seemed hot to the touch (always is), it read 61C for the CPU, 60C for heatsink A, 50C for heatsink B, and the 30s-40s for the enclosure and hard drives. As I type this now, the temps are actually higher, so I don't think temperature caused it.

    I have at least six such events, dating first from 5/17, which was a week after installing Windows 7. I did find one information-level warning from USER32 in the system log that says:

      The process C:\Windows\system32\svchost.exe (DLAMBLIN-WIN7) has initiated the restart of computer DLAMBLIN-WIN7 on behalf of user NT AUTHORITY\SYSTEM for the following reason: Operating System: Recovery (Planned)
      Reason Code: 0x80020002
      Shutdown Type: restart
      Comment:

    And another, 15 minutes before that, from Windows Update:

      Restart Required: To complete the installation of the following updates, the computer will be restarted within 15 minutes:
      - Cumulative Security Update for Internet Explorer 8 for Windows 7 Release Candidate for x64-based Systems (KB972260)

    Which I think kind of explains it. Though I don't know why restarting after an update would create a "shutdown was unexpected" error event -- isn't that pretty odd? Now, how do I set it to never restart after an update unless I click something?

    Application of solution: As fretje reminded me, there are a couple of configurable settings for this; in Windows 7 they're in much the same place as in Windows 2000 SP3 and XP SP1. Running gpedit.msc pops up a window with these options (Windows 7 has changed the order and added a couple of newer options I've italicized):
    - Do not display 'Install Updates and Shut Down' in Shut Down Windows dialog box
    - Do not adjust default option to 'Install Updates and Shut Down' in Shut Down Windows dialog box
    - Enabling Windows Power Management to automatically wake up the system to install scheduled updates
    - Configure Automatic Updates
    - Specify intranet Microsoft update service location
    - Automatic Updates detection frequency
    - Allow non-administrators to receive update notifications
    - Turn on Software Notifications
    - Allow Automatic Updates immediate installation
    - Turn on recommended updates via Automatic Updates
    - No auto-restart with logged-on users for scheduled Automatic Updates
    - Re-prompt for restart with scheduled installations
    - Delay Restart for scheduled installations
    - Reschedule Automatic Updates schedule
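
    The policy that prevents the forced restart is "No auto-restart with logged-on users for scheduled Automatic Updates". For editions without gpedit.msc, the same setting can be made directly in the registry -- a sketch, run from an elevated prompt (takes effect after a gpupdate or reboot):

      reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" ^
        /v NoAutoRebootWithLoggedOnUsers /t REG_DWORD /d 1 /f

    With this set, Windows Update waits for the logged-on user to restart instead of rebooting on its own 15-minute timer.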

    Read the article

  • Why can't I play DVDs on Windows 8 Pro with Media Center Pack?

    - by ligos
    I have a laptop with Windows 8 Pro with Media Center (64-bit), but neither Media Player nor Media Center can play DVDs. Have I done something wrong? Did the Feature Pack not install correctly? Should this work? Can I somehow uninstall and reinstall the Media Pack?

    Details: I upgraded my Windows 7 Home Premium laptop to Windows 8 Pro based on Microsoft's low pricing. I also grabbed my free upgrade to the Media Center Pack and followed the instructions on that page to add my feature pack. Alas! I still cannot play DVDs via either Media Center or Media Player.

    Various context:
    - Thinking I might need to re-install the pack, I found that I could no longer add any more feature packs (searching for "add features" settings only shows Turn Windows Features On and Off).
    - Media Center and Media Player are both enabled in Windows Features.
    - I cannot see any way to remove or downgrade from the Media Pack, nor to add any more feature packs.
    - I installed a codec pack (32-bit) from Shark007, which has not allowed me to play DVDs (although it did allow me to play various other media files).
    - Media Player can play DTV recorded on another Windows 7 box, but Media Center cannot.
    - VLC plays DVDs OK, but I'd prefer to figure out the root cause of this problem.
    - There were no errors or other indications that the Media Pack failed to install; the installation itself was quite smooth, although I have not checked my event log in detail.
    - Before the upgrade, on Windows 7, I could play DVDs OK.

    Screenshots:
    - System Information, showing I have Windows 8 Pro with Media Center.
    - When playing a DVD, Media Player gives an error: "The selected file has an extension that is not recognised by Windows..." When you click Yes, it fails, saying: "Windows Media Player cannot find the file..."
    - Media Center says: "The file type is not recognised and cannot be played", along with some codec-related stuff.
    - I can browse the files OK via My Computer on any video DVD.
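
    One low-level sanity check is whether Windows itself reports the Media Center edition as active; a sketch from an elevated command prompt (the exact edition string is from memory, so treat it as approximate):

      dism /online /get-currentedition
      rem a correctly applied pack should report something like: Current Edition : ProfessionalWMC

    If the edition still reads plain Professional, the Feature Pack never fully applied, which would explain the missing MPEG-2 DVD decoder despite System Information naming the edition correctly elsewhere.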

    Read the article

  • Sendmail - Multiple Domains, One Box - Blocking One Or Two Domains

    - by TangoOversway
    I have a number of domains hosted at a web hosting service. They use sendmail to handle incoming email. I have six domains on this service (which we can call aaa.com, bbb.com and so on). Each email account has the same name and one email box. In other words, tango@aaa.com, tango@bbb.com, tango@ccc.com and all the others go into one box, /var/spool/mail/tango, where my email program on my desktop picks it up.

    I have done very little work in sendmail. I haven't had to, and I've been warned it's a steep learning curve. But now I'm running into an issue. I was in a business situation where, for years, my email address was on the website for aaa.com. (We won't go into why this was necessary -- it wasn't my preference and it's in the past.) Now I'm using tango@bbb.com instead of tango@aaa.com. I was getting about 1,000 or more pieces of spam a day, but SpamAssassin and my own email program caught about 75% of that. (Which still left stuff to delete.) Now, after checking, I see that 90% or more goes to tango@aaa.com, the one that was on the web for years.

    I'd like to deactivate tango@aaa.com, and possibly tango@ccc.com and tango@ddd.com, but want to keep using tango@bbb.com. Remember, email to tango at any of these domains will go into one email box. I've had people tell me that sendmail can be configured so I can deactivate tango@aaa.com (and other domains) and still use tango@bbb.com (and others, if I want to). In other words, I can configure sendmail to use this account on some domains and not others. One of the people who was telling me this was in tech support at the hosting service. But I wrote to tech support with a work order to do this, and now I'm told it can't be done.

    I can modify config files myself on this account if needed, but I was hoping to just let them do it. (I love delegation -- it means I spend more time doing my stuff.) Is it possible to keep an email account active on one domain and not others with sendmail, when all domains are hosted on the same server? Is there a name for this process or setting? Any information would be helpful -- either pointers to instructions so I can do it, or enough info so I can tell tech support, "This is where to look, and it can be done, so please pass my request on to someone who works with sendmail and knows how to do it." Is this something sendmail can do?
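
    What's being described is exactly what sendmail's virtusertable is for: per-domain routing of the same local part. A sketch, assuming the host already lists all six domains in /etc/mail/local-host-names and has FEATURE(virtusertable) enabled in its .mc file (domain names follow the placeholders above):

      # /etc/mail/virtusertable
      # retire the address on the spam-magnet domain, keep the others
      tango@aaa.com    error:5.1.1:550 Address no longer in use
      tango@bbb.com    tango

      # rebuild the map, then restart sendmail
      makemap hash /etc/mail/virtusertable < /etc/mail/virtusertable

    So "virtual user table" (or "virtual hosting") is the name to give tech support; unmapped domains keep their default delivery, and the error: right-hand side bounces mail for just the one address.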

    Read the article

  • Disk is spinning down each minute, unable to disable it

    - by lzap
    I played with the spindown and APM settings of my Samsung disks and now they spin down every minute. I want to disable this, but the drive does not seem to accept any of the spindown-time or APM values I set -- nothing works, it's all the same. Please help me work out what the proper values should be; I do not want the disk to spin down at all.

      /dev/sda:
      ATA device, with non-removable media
        Model Number:       SAMSUNG HD154UI
        Serial Number:      S1Y6J1KZ206527
        Firmware Revision:  1AG01118
      Standards:
        Used: ATA-8-ACS revision 3b
        Supported: 7 6 5 4
      Configuration:
        Logical         max     current
        cylinders       16383   16383
        heads           16      16
        sectors/track   63      63
        --
        CHS current addressable sectors:   16514064
        LBA    user addressable sectors:  268435455
        LBA48  user addressable sectors: 2930277168
        Logical/Physical Sector size: 512 bytes
        device size with M = 1024*1024: 1430799 MBytes
        device size with M = 1000*1000: 1500301 MBytes (1500 GB)
        cache/buffer size = unknown
      Capabilities:
        LBA, IORDY(can be disabled)
        Queue depth: 32
        Standby timer values: spec'd by Standard, no device specific minimum
        R/W multiple sector transfer: Max = 16 Current = 16
        Advanced power management level: 60
        Recommended acoustic management value: 254, current value: 0
        DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6 udma7
             Cycle time: min=120ns recommended=120ns
        PIO: pio0 pio1 pio2 pio3 pio4
             Cycle time: no flow control=120ns  IORDY flow control=120ns
      Commands/features:
        Enabled Supported:
           *    SMART feature set
                Security Mode feature set
           *    Power Management feature set
           *    Write cache
           *    Look-ahead
           *    Host Protected Area feature set
           *    WRITE_BUFFER command
           *    READ_BUFFER command
           *    NOP cmd
           *    DOWNLOAD_MICROCODE
           *    Advanced Power Management feature set
                Power-Up In Standby feature set
           *    SET_FEATURES required to spinup after power up
                SET_MAX security extension
                Automatic Acoustic Management feature set
           *    48-bit Address feature set
           *    Device Configuration Overlay feature set
           *    Mandatory FLUSH_CACHE
           *    FLUSH_CACHE_EXT
           *    SMART error logging
           *    SMART self-test
                Media Card Pass-Through
           *    General Purpose Logging feature set
           *    64-bit World wide name
           *    WRITE_UNCORRECTABLE_EXT command
           *    {READ,WRITE}_DMA_EXT_GPL commands
           *    Segmented DOWNLOAD_MICROCODE
           *    Gen1 signaling speed (1.5Gb/s)
           *    Gen2 signaling speed (3.0Gb/s)
           *    Native Command Queueing (NCQ)
           *    Host-initiated interface power management
           *    Phy event counters
           *    NCQ priority information
                DMA Setup Auto-Activate optimization
                Device-initiated interface power management
           *    Software settings preservation
           *    SMART Command Transport (SCT) feature set
           *    SCT Long Sector Access (AC1)
           *    SCT LBA Segment Access (AC2)
           *    SCT Error Recovery Control (AC3)
           *    SCT Features Control (AC4)
           *    SCT Data Tables (AC5)
      Security:
        Master password revision code = 65534
        supported
        not enabled
        not locked
        frozen
        not expired: security count
        supported: enhanced erase
        326min for SECURITY ERASE UNIT. 326min for ENHANCED SECURITY ERASE UNIT.
      Logical Unit WWN Device Identifier: 50024e900300cca3
        NAA              : 5
        IEEE OUI         : 0024e9
        Unique ID        : 00300cca3
      Checksum: correct

    I have the very same disk which I did not "tune" and it does not spin down, but I do not know where to read its settings from. hdparm only shows this:

      Advanced power management level: 60
      Recommended acoustic management value: 254, current value: 0

    Edit: It seems the issue was the tuned daemon in RHEL 6. It was too aggressive; I turned off disk tuning and it seems they are no longer spinning down.
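
    For reference, the two hdparm knobs involved and how one would pin them -- a sketch: -B 254 keeps APM at its maximum-performance setting so the drive never parks on its own, and -S 0 disables the standby (spindown) timer. Settings made this way can be lost on a power cycle unless also placed in /etc/hdparm.conf or the active tuned profile:

      sudo hdparm -B 254 -S 0 /dev/sda
      # confirm what the drive now reports
      sudo hdparm -I /dev/sda | grep -i 'power management'
      # on RHEL 6, make sure tuned isn't re-applying its own disk settings
      tuned-adm active
      tuned-adm off

    This matches the edit above: with tuned re-tuning the disk every so often, hand-set hdparm values appear to "not stick" no matter what is chosen.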

    Read the article

  • How to set up a mini call center?

    - by Ralph
    I'm trying to figure out how to set up a miniature call center for a small business: 1-4 people taking calls, but hopefully expandable to more. We want to accept nation-wide calls and then distribute the calls among the available agents. If no one is available, it should either put the caller in a queue and play some annoying music for them, or forward the call to the agent who has least recently taken a call, who can then quickly answer and say "please hold" until they're done with their current call. We want to have one phone number that customers can call.

    I guess we then need some kind of ACD system which would take each call and forward it to an agent based on some algorithm? Then we would need to purchase a separate phone line for each agent, plus one just for the distributor? Or do we need several "extra" lines to maintain a queue (one for each customer waiting, too)? This "ACD" thing -- is it just a device that you would plug a phone line into (or several?), and maybe connect to a computer, aided by some software? Or is it a subscription I would need from my local telephone provider?

    Next, the business we're running: the callers will be repeat customers, so it would be helpful to automatically pull up their profile based on the incoming number. The "software" our agents will be running will just be a website where they can log in (preferably from home) and then enter some information they obtain through the call. So the system would have to somehow interface with the website if possible. If not, we'll just have to ask each customer for an identifier (phone number, username, customer number, or something). Is this possible? I guess each computer would need a device that the call would pass through, and then if I can somehow hook into that, I can write some software that will interface with the site. (A sketch of one way to do this follows below.)

    So, where do I start? What hardware do we need to buy? What subscriptions do we need? We were thinking this magicJack might help us in accepting long-distance calls for cheaper, but my understanding is that they provide you with some weird-looking number; is there a way we could "mask" it with our toll-free number? And then pass the incoming calls through the distributor system, which would then get passed to the call-accepting device, which would both allow an agent to answer the call and have a software hook?

    (I realize this might be partially out of the scope of SU, but I wasn't sure where else to ask. It is about computer(-aided) hardware and software anyway.)

    P.S.: I don't need any of that "press 1 to talk to..." or "say xyz to..." junk. Just a straightforward, connect-to-next-available-agent system.
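
    Most of what's described here (one inbound number, a hold queue with music, least-recently-called agent selection, screen-pop from caller ID) is the standard feature set of a software PBX rather than dedicated hardware; Asterisk is the usual free option. The question doesn't name it, so this is purely an illustrative sketch of its queue configuration:

      ; queues.conf
      [support]
      strategy = leastrecent      ; ring the agent who has been idle longest
      musicclass = default        ; hold music for waiting callers
      member => SIP/agent1
      member => SIP/agent2

      ; extensions.conf
      [incoming]
      exten => s,1,Answer()
       same => n,Queue(support)

    Agents then just need SIP softphones (usable from home), and a small AGI script or the manager interface can hand the incoming caller ID to the agent website for the profile lookup -- no per-agent analog phone lines or per-caller queue lines are required, since everything rides on one SIP/VoIP trunk.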

    Read the article

  • APC UPS replace battery light and apcupsd reporting "replace battery"

    - by mgjk
    We have an APC Smart-UPS 1500. The "Replace Battery" light is on, and apcupsd reports:

      Emergency! Batteries have failed on UPS xxxx. Change them NOW

    However, from this article, http://sturgeon.apcc.com/kbasewb2.nsf/for+external/f39c4312fcaf7b948525679a005ebb78?OpenDocument, it seems that it's not so clear that the UPS battery needs to be replaced. Stranger, according to the information on the UPS, an 11-minute runtime at 42.9% load running at 27.7 V isn't so bad. Any thoughts about what to try next? We're a non-profit, money is an object. It would be a shame to replace a battery with a year or so left in it.

      # apcaccess status
      APC      : 001,041,1017
      DATE     : Thu Mar 29 13:01:41 EDT 2012
      HOSTNAME : oreilly2
      VERSION  : 3.14.6 (16 May 2009) debian
      UPSNAME  : xxxx
      CABLE    : Custom Cable Smart
      MODEL    : Smart-UPS 1500
      UPSMODE  : Stand Alone
      STARTTIME: Thu Mar 29 12:57:30 EDT 2012
      STATUS   : ONLINE
      LINEV    : 112.3 Volts
      LOADPCT  : 42.9 Percent Load Capacity
      BCHARGE  : 100.0 Percent
      TIMELEFT : 11.0 Minutes
      MBATTCHG : 5 Percent
      MINTIMEL : 3 Minutes
      MAXTIME  : 0 Seconds
      OUTPUTV  : 112.3 Volts
      SENSE    : High
      DWAKE    : -01 Seconds
      DSHUTD   : 090 Seconds
      LOTRANS  : 106.0 Volts
      HITRANS  : 127.0 Volts
      RETPCT   : 000.0 Percent
      ITEMP    : 23.8 C Internal
      ALARMDEL : Always
      BATTV    : 27.7 Volts
      LINEFREQ : 60.0 Hz
      LASTXFER : No transfers since turnon
      NUMXFERS : 0
      TONBATT  : 0 seconds
      CUMONBATT: 0 seconds
      XOFFBATT : N/A
      SELFTEST : NO
      STATFLAG : 0x07000008 Status Flag
      SERIALNO : AS0603298896
      BATTDATE : 2006-01-14
      NOMOUTV  : 120 Volts
      NOMBATTV : 24.0 Volts
      FIRMWARE : 601.3.D USB FW:1.5
      APCMODEL : Smart-UPS 1500
      END APC  : Thu Mar 29 13:02:12 EDT 2012

    Error when running apctest:

      You are using a SMART cable type, so I'm entering SMART test mode
      mode.type = USB_UPS
      Setting up the port ...
      Hello, this is the apcupsd Cable Test program.
      This part of apctest is for testing Smart UPSes.
      Please select the function you want to perform.
      1) Query the UPS for all known values
      2) Perform a Battery Runtime Calibration
      3) Abort Battery Calibration
      4) Monitor Battery Calibration progress
      5) Program EEPROM
      6) Enter TTY mode communicating with UPS
      7) Quit
      Select function number: 2
      First ensure that we have a good link and that the UPS is functionning normally.
      Simulating UPSlinkCheck ...
      YWrote: Y Got:
      getline failed. Apparently the link is not up. Giving up.

    Read the article

  • Confusion on networking service start/stop in Ubuntu

    - by Daniel Ball
    I'm preparing to move and took down two of my servers, leaving only one with some essential services running. What I neglected to consider was that one of them was the DHCP server (which I realized when somebody contacted me saying they couldn't connect. Whups). So, because I only have a few hosts on this small network, I opted to just statically configure them for now. One of these is a new Ubuntu 11.04 server, where I have very little experience.

    I edited /etc/network/interfaces and /etc/hosts to reflect my changes, then ran:

      $ sudo /etc/init.d/networking stop
       * deconfiguring network interfaces...

    So yay. Then I try to start, and it gives me the mumbo jumbo about using services instead (why didn't it do that for the stop?). So instead I run:

      $ sudo service networking start
      networking stop/waiting

    Now, to me that says the status of the service is stopped. But when I ping another computer, I get a successful reply. So is it not actually stopped? More importantly, am I doing something wrong?

    Edit:

      daniel@FOOBAR:~$ sudo service networking status
      networking stop/waiting
      daniel@FOOBAR:~$ sudo service networking stop
      stop: Unknown instance:
      daniel@FOOBAR:~$ sudo service networking status
      networking stop/waiting
      daniel@FOOBAR:~$ sudo service networking start
      networking stop/waiting
      daniel@FOOBAR:~$ sudo service networking status
      networking stop/waiting

    So you can see why I ran /etc/init.d/networking stop instead. For some reason upstart (that is what "service" uses, right?) isn't working with stop.

      $ cat /etc/hosts
      127.0.0.1 localhost
      127.0.1.1 FOOBAR
      198.3.9.2 FOOBAR    #Added entry July 19 2011

      # The following lines are desirable for IPv6 capable hosts
      ::1     ip6-localhost ip6-loopback
      fe00::0 ip6-localnet
      ff00::0 ip6-mcastprefix
      ff02::1 ip6-allnodes
      ff02::2 ip6-allrouters

      $ cat /etc/network/interfaces
      # This file describes the network interfaces available on your system
      # and how to activate them. For more information, see interfaces(5).

      # The loopback network interface
      auto lo
      iface lo inet loopback

      # The primary network interface
      #auto eth0
      #iface eth0 inet dhcp
      # hostname FOOBAR

      auto eth0
      iface eth0 inet static
        address 198.3.9.2
        netmask 255.255.255.0
        network 198.3.9.0
        broadcast 198.3.9.255
        gateway 198.3.9.15

    No, I didn't save backups; it was just a minor change, so I just commented out the old DHCP setting.

    Edit: I set everything back to the original settings and set up a DHCP server. "Starting" networking does the same thing. I can only assume this is normal; I just don't know WHY. It can't be anything to do with the configuration files, since they've been restored.
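
    For what it's worth, on Ubuntu 11.04 "networking" is an upstart task rather than a long-running service: it fires the interface configuration and exits, so "stop/waiting" is its normal resting state, not a sign the network is down -- which matches the pings succeeding. The usual way to bounce a single statically-configured interface is per-interface (a sketch):

      sudo ifdown eth0 && sudo ifup eth0
      # verify the result
      ip addr show eth0
      ip route

    That also explains "stop: Unknown instance:" -- there is no running instance of a task to stop.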

    Read the article

  • Real server, Multiple IP Addresses, HyperV Virtual Server, How to partition IPs across real and Virtual NICs

    - by Steven_W
    This is a slightly difficult problem to explain without some basic background information -- I'll try and refine the question later as necessary.

    Originally, I have a single hosted server (Win 2008 R2) with the following range of 8 IP addresses:
    - Single NIC
    - IP: x.x.128.72 -> x.x.128.79
    - Subnet: x.x.255.192
    - GW: x.x.128.65

    After installing Hyper-V and setting up a single virtual server on the same box, I then wanted to assign one of the IP addresses to the virtual server, leaving everything else running normally.

    Firstly, I tried using the "External" network, but (even after setting IPs on the "Virtual Adapter" similar to Here) I struggled to get networking running at all. I needed to keep the server running (otherwise I would have spent more time pursuing this approach).

    Q1: Was this a sensible thing to do? Should I have carried on down this route?

    I then decided to try a different approach: set the Hyper-V network to "Internal" (visible to Management OS), with the following addressing:
    - Physical NIC: IP x.x.128.72 -> x.x.128.75, subnet x.x.255.192, GW x.x.128.65
    - Virtual NIC: IP x.x.128.78, subnet x.x.255.252, GW x.x.128.72 (the same as the IP of the physical NIC)
    - Virtual OS-NIC: IP x.x.128.77, subnet x.x.255.252, GW x.x.128.78 (the same as the IP of the host virtual-NIC)

    Surprisingly enough, this approach actually worked, and I was able to connect along all of the following paths:
    - Internet to/from the physical NIC (x.x.128.72)
    - physical NIC (x.x.128.72) to virtual-OS-NIC (x.x.128.77), e.g. testing via ping + FTP
    - Internet to/from the virtual-OS-NIC (x.x.128.77)

    The problem I have is that this approach only seems to last for a short while (a few hours). After this time, it seems that I lose the ability to connect from the virtual-OS-NIC to/from the internet (but I can still connect from the host OS to the virtual OS, and from the host OS to the internet). I have re-tested this a couple of times with the same results: I leave the server on for a few hours (e.g. overnight), and when I come back in the morning, the virtual OS has lost the ability to route to the internet.

    I'm not quite sure what to look at next (or whether I'm going about this completely the wrong way). One possibly relevant item is that the host OS is also running RRAS (Routing and Remote Access), but this is only to run a simple VPN.

    Q2: What should I be looking at next? (Any good references / recommendations of what to try?) I would appreciate any thoughts or comments (even if you tell me I'm going about this the wrong way).
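
    Since the guest's default gateway is the host's virtual NIC, the host has to route (or NAT) on the guest's behalf; with RRAS already running, RRAS owns that routing function, and guest connectivity can drop whenever it re-reads its configuration. A quick sketch of checking and enabling plain IP forwarding on the two host interfaces (the interface names here are placeholders -- use the names shown by the first command):

      netsh interface ipv4 show interfaces
      netsh interface ipv4 set interface "Local Area Connection" forwarding=enabled
      netsh interface ipv4 set interface "Internal Virtual Network" forwarding=enabled

    If forwarding shows as enabled but the path still dies after a few hours, the RRAS configuration (static routes / NAT interface bindings) would be the next place to look.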

    Read the article

  • How to transition to Comcast with static IP address

    - by steveha
    I have my own email server in my house, on a static IP address. I have had business DSL for over a decade, but I also now have Comcast business Internet. I want to transition from the DSL to the Comcast, and I have some questions.

    I have a domain name, my own mail server, and a firewall (a PC with two network interfaces, running Devil-Linux). I need to make sure I understand how to set up the Comcast cable box, and how to set up my firewall.

    First, do I need to change any settings in the cable box? Currently I have only used the cable box by plugging in a laptop, with the laptop doing DHCP. I think I can leave the box alone, but I would like to make sure.

    Second, I'm not sure I understand the instructions Comcast gave me for setting up the firewall. My DSL provider gave me the following information: static IP address, net mask, gateway, and two DNS servers. Comcast gave me: static IP address, routable static IP address, net mask, and two DNS servers, and told me to put the "static IP address" as the "gateway" on the firewall. Is this just Comcast-speak here? Does "routable static IP address" mean the same thing as "static IP address" in my DSL setup -- the end-point address that I should publish in the DNS MX records for my email server? Or should I publish the "static IP address", and Comcast will then route all its traffic over the cable box?

    My plan is: first, I'm going to configure another firewall, so I have one firewall for the DSL and one for the Comcast (rather than madly editing settings to switch back and forth). Then I will publish the new Comcast static IP address as a backup email server address in the DNS MX records, wait a while to let it propagate, and then switch my home over from the DSL to the Comcast. Then I'll change DNS to make that the primary mail address and the DSL the secondary, let that go a while, and make sure it seems reliable. Then I'll remove the DSL from the DNS MX records completely, and finally shut down the DSL service. (I thought about keeping the DSL as a backup, but the reason I'm leaving DSL is that it has become unreliable; and I have heard that Comcast business Internet is reliable.)

    Final question: any advice for me? Anything you think might be useful, helpful, or educational. Thanks.
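
    The MX juggling in that plan looks like this in zone-file terms -- a sketch, with placeholder names, where a lower preference value is tried first:

      ; phase 1: add Comcast as the backup MX
      example.com.   IN MX 10 mail-dsl.example.com.
      example.com.   IN MX 20 mail-comcast.example.com.
      ; phase 2: swap the preference values and watch it for a while
      ; phase 3: delete the DSL MX entirely, then cancel the DSL line

    Keeping the zone's TTL low (for example 300 seconds) during the transition makes each phase take effect quickly and makes rollback painless.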

    Read the article

  • Disk performance below expectations

    - by paulH
    This is a follow-up to a previous question that I asked (Two servers with inconsistent disk speed). I have a PowerEdge R510 server with a PERC H700 integrated RAID controller (call this Server B) that was built using eight disks with 3 Gb/s bandwidth, which I was comparing with an almost identical server (call this Server A) that was built using four disks with 6 Gb/s bandwidth. Server A had much better I/O rates than Server B.

    Once I discovered the difference with the disks, I had Server A rebuilt with faster 6 Gbps disks. Unfortunately this resulted in no increase in the performance of the disks. Expecting that there must be some other configuration difference between the servers, we took the 6 Gbps disks out of Server A and put them in Server B. This also resulted in no increase in the performance of the disks.

    We now have two identical servers built, with the exception that one is built with six 6 Gbps disks and the other with eight 3 Gbps disks, and the I/O rates of the disks are pretty much identical. This suggests that there is some bottleneck other than the disks, but I cannot understand how Server B originally had better I/O that has subsequently been 'lost'.

    Comparative I/O information below, as measured by SQLIO. The same parameters were used for each test. It's not the actual numbers that are significant, but rather the variations between systems. In each case D: is a 2-disk RAID 1 volume, and E: is a 4-disk RAID 10 volume (apart from the original Server A, where E: was a 2-disk RAID 0 volume).

                                          D: read   D: write   E: read   E: write
      Server A (original, 6 Gbps disks)   63 MB/s   170 MB/s    68 MB/s   320 MB/s
      Server B (original, 3 Gbps disks)   52 MB/s    88 MB/s   112 MB/s   130 MB/s
      Server A (new, 3 Gbps disks)        55 MB/s    85 MB/s    67 MB/s   180 MB/s
      Server B (new, 6 Gbps disks)        61 MB/s    95 MB/s    69 MB/s   180 MB/s

    Can anybody suggest any ideas as to what is going on here? The drives in use are as follows:
    - Dell Seagate F617N ST3300657SS 300GB 15K RPM SAS
    - Dell Hitachi HUS156030VLS600 300GB 3.5 inch 15000rpm 6GB SAS
    - Hitachi Hus153030vls300 300GB Server SAS
    - Dell ST3146855SS Seagate 3.5 inch 146GB 15K SAS
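
    For anyone reproducing these numbers, a typical SQLIO invocation looks like the sketch below; the parameters are illustrative, not the exact ones used in the tests above, and a test file several times larger than the controller cache is advisable so the PERC H700's cache doesn't flatter the results:

      rem 60s of sequential 64KB writes, 8 outstanding I/Os, no buffering
      sqlio -kW -s60 -fsequential -b64 -o8 -LS -BN E:\testfile.dat
      rem and the matching random-read pass
      sqlio -kR -s60 -frandom -b64 -o8 -LS -BN E:\testfile.dat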

    Read the article

  • uploading via http post (multipart/form-data) silently fails with big files

    - by matteo
    When uploading multipart/form-data forms via an HTTP POST request to my Apache web server, very big files (i.e. 30MB) are silently discarded. On the server side, all looks as if the attached file was received with a size of 0 bytes. On the client side, all looks like it had been uploaded successfully (it takes the expected long time to upload and the browser gives no error message). On the server, nothing is logged into the error log. An entry is logged into the access log as if everything was OK (a POST request and a 200 OK response). These uploads are being posted to a PHP script. In the PHP script, if I print_r $_FILES, I see the following information for the relevant file:

    [file5] => Array
    (
        [name] => MOV023.3gp
        [type] => video/3gpp
        [tmp_name] => /tmp/phpgOdvYQ
        [error] => 0
        [size] => 0
    )

    Note both [error] = 0 (which should mean no error) and [size] = 0 (as if the file was empty). My PHP script runs fine and receives all the rest of the data except these files. move_uploaded_file succeeds on these files and actually copies them as 0-byte files.

    I've already changed the PHP directives upload_max_filesize to 50M and post_max_size to 200M, so neither the single file nor the request exceeds any size limit. max_execution_time is not relevant, because the time to transfer the data does not count; and I've increased max_input_time to 1000 seconds, though this shouldn't be necessary, since this is the time taken to parse the input data, not the time taken to upload it.

    Is there any Apache configuration, prior to PHP, that could be causing these files to be discarded even before PHP execution? Some limit on size or upload time? I've read about a default 300-second timeout limit, but this should apply to the time the connection is idle, not the time spent actually transferring data, right? Needless to say, uploads under all exactly identical conditions (including file format, client, and everything else) except a smaller file size work seamlessly, so the issue is clearly related to the file or request size, or to the time it takes to send it.
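
    One thing I plan to rule out next (a sketch; the paths assume a Debian- or RedHat-style layout, and the 50 MB value is only an example): Apache-level request-body caps. mod_security in particular defaults SecRequestBodyLimit to roughly 12.5 MB, which is in the right range to explain a ~30 MB upload arriving as an empty file:

        # Look for request-body limits that could silently truncate large POSTs
        grep -Rni 'LimitRequestBody\|SecRequestBodyLimit' /etc/apache2 /etc/httpd 2>/dev/null
        # Check whether mod_security is loaded at all
        apachectl -M 2>/dev/null | grep -i security
        # If it is, raising the limit (in bytes) in its config and reloading should help:
        #   SecRequestBodyLimit 52428800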

    Read the article

  • backup and restoration of a freeipa infrastructure

    - by Sirex
    I'm finding the documentation on IPA server backup and restoration sadly lacking, and it being so centrally critical, it's not something I'm really happy about shooting in the dark with. Could some kind soul more knowledgeable in the matter please attempt to provide an idiot-proof guide to backing up and restoring IPA server(s)? Particularly the main server (the cert-signing one).

    We're looking towards rolling out IPA in a two-server setup (1 master, 1 replica). I'm using DNS SRV records to handle failover, hence a loss of the replica isn't a big deal, as I could make a new one and force a resync to happen; it's losing the master that bothers me. The thing I'm really struggling with is locating a step-by-step procedure for backing up and restoring the master server. I'm aware that a whole-VM snapshot is the recommended way of doing IPA server backup, but that isn't an option at this time for us. I'm also aware that FreeIPA 3.2.0 includes some sort of built-in backup command, but that isn't in the IPA version shipped with CentOS, and I don't expect it will be for some time yet.

    I've been trying many different methods, but none of them seem to restore cleanly. Amongst others, I've tried:

    - a command similar to: db2ldif.pl -D "cn=directory manager" -w - -n userroot -a /root/userroot.ldif
    - the script from here to produce three LDIF files: one for the domain ({domain}-userroot), and two for the IPA server (ipa-ipaca and ipa-userroot)

    Most of the restores I've tried have been of the form:

    ldif2db.pl -D "cn=directory manager" -w - -n userroot -i userroot.ldif

    which seems to work and reports no errors, but totally borks the IPA install on the machine, and I can no longer log in with either the admin password from the backed-up server or the one I set on installation before attempting the ldif2db command (I'm installing ipa-server and running ipa-server-install, then attempting the restore).

    I'm not overly bothered about losing the CA, having to rejoin the domain, losing replication, etc. (although it'd be awesome if that could be avoided), but if the main server drops I'd really like to avoid having to re-enter all the user/group information. I guess if the main server is lost I could promote the other one and replicate in the other direction, but I've not tried that either. Has anyone done that?

    tl;dr: Can someone provide an idiot's guide to backing up and restoring an IPA server (preferably on CentOS 6), in a clear enough way that'd make me feel confident it'll actually work when the dreaded time comes? Crayons are optional, but appreciated ;-) I can't be the only person struggling with this, seeing how widely used IPA is, surely?
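
    For completeness, the closest I've come to a self-contained flat-file backup is exporting all the 389-ds backends together with the config and certificate databases; a minimal sketch, where the instance names EXAMPLE-COM and PKI-IPA and the backend names are assumptions (check /etc/dirsrv for the real slapd-* directories on your system):

        # Export each backend to LDIF (one db2ldif.pl per backend/instance)
        /usr/lib64/dirsrv/slapd-EXAMPLE-COM/db2ldif.pl -D "cn=directory manager" -w - -n userroot -a /root/userroot.ldif
        /usr/lib64/dirsrv/slapd-PKI-IPA/db2ldif.pl -D "cn=directory manager" -w - -n ipaca -a /root/ipa-ipaca.ldif
        # Capture the pieces ldif2db.pl cannot recreate: instance configs and NSS cert databases
        tar czf /root/ipa-flat-backup.tar.gz /etc/dirsrv /etc/httpd/alias /var/lib/pki-ca 2>/dev/null

    Even so, restores from exactly this kind of dump are what borked my installs above, so treat it as a starting point rather than a procedure.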

    Read the article

  • Automating the choice between JPEG and PNG with a script

    - by MHC
    Choosing the right format to save your images in is crucial for preserving image quality and reducing artifacts. Different formats follow different compression methods and come with their own sets of advantages and disadvantages. JPG, for instance, is suited to real-life photographs that are rich in color gradients. The lossless PNG, on the other hand, is far superior when it comes to schematic figures. Picking the right format can be a chore when working with a large number of files, which is why I would love to find a way to automate it.

    A little background on my particular use case: I am working on a number of handouts for a series of lectures at my university. The handouts are rich in figures, which I have to extract from PDF-formatted slides. Extracting these images gives me lossless PNGs, which are needlessly large at times. Converting these particular files to JPEG can reduce their size to less than 20% of the original, while maintaining the same quality. This is important, as working with hundreds of large images in word processors is pretty crash-prone. Batch-converting all extracted PNGs to JPEGs is not an option I am willing to follow, as many if not most images are better suited to PNG; converting those would result in insignificant size reductions, and sometimes even increases in file size. That's at least what my test runs showed.

    What we can take from this is that file size after compression can serve as an indicator of which format suits a particular image best. It's not a particularly accurate predictor, but it works well enough. So why not use it in the form of a script? I included inotifywait because I would prefer the script to be executed automatically as soon as I drag an extracted image into a folder. This is the simpler version of the script that I've been using for the last couple of weeks:

    #!/bin/bash
    inotifywait -m --format "%w%f" --exclude '.jpg' -r -e create -e moved_to --fromfile '/home/MHC/.scripts/Workflow/Conversion/include_inotifywait' | while read file; do
        mogrify -format jpg -quality 92 "$file"
    done

    The advanced version of the script would have to be able to:

    - handle spaces in file names and directory names
    - preserve the original file names
    - flatten PNG images if an alpha value is set
    - compare the file size between the temporary converted image and its original
    - determine whether the difference is greater than a given percentage
    - act accordingly

    The actual conversion could be done with ImageMagick tools:

    convert -quality 92 -flatten -background white file.png file.jpg

    Unfortunately, my bash skills aren't even close to advanced enough to turn the scheme above into an actual script, but I am sure many of you can. My reputation points on here are pretty low, but I will gladly award the most helpful answer with the highest bounty I can set.

    References: http://www.formortals.com/introducing-cnb-imageguide/, http://www.turnkeylinux.org/blog/png-vs-jpg

    Edit: Also see my comments below for some more information on why I think this script would be the best solution to the problem I am facing.
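
    In case it helps whoever picks this up: a minimal sketch of what the advanced version might look like, untested, with the watch directory and the 80% threshold as placeholder assumptions (requires inotify-tools and ImageMagick; file names containing newlines are not handled):

        #!/bin/bash
        # Keep each incoming PNG, or replace it with a JPEG, whichever is smaller.
        WATCH_DIR="$HOME/conversion-inbox"   # placeholder: folder the images are dragged into
        THRESHOLD=80                         # keep the JPEG only if it is <= 80% of the PNG's size

        inotifywait -m -r -e create -e moved_to --format '%w%f' "$WATCH_DIR" |
        while IFS= read -r file; do
            case "$file" in
                *.png|*.PNG) ;;              # only consider PNGs
                *) continue ;;
            esac
            jpg="${file%.*}.jpg"             # preserve the original name, new extension
            # Flatten any alpha channel onto white while producing a trial JPEG
            convert "$file" -background white -flatten -quality 92 "$jpg" || continue
            png_size=$(stat -c%s "$file")
            jpg_size=$(stat -c%s "$jpg")
            # Keep whichever file justifies its format by size
            if [ $((jpg_size * 100)) -le $((png_size * THRESHOLD)) ]; then
                rm -- "$file"                # JPEG wins: drop the PNG
            else
                rm -- "$jpg"                 # savings too small: keep the lossless PNG
            fi
        done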

    Read the article

  • Error Installing ruby with RVM Single User mode on Arch Linux

    - by ChrisBurnor
    I've just installed RVM on Arch Linux x64 in single-user mode via the recommended install script:

    curl -L https://get.rvm.io | bash -s stable

    I've also installed all the requirements listed in rvm requirements. However, I'm having trouble actually installing any version of Ruby, and I'm getting the following error:

    arch:~ % rvm install 1.9.3
    No binary rubies available for: ///ruby-1.9.3-p194.
    Continuing with compilation. Please read 'rvm mount' to get more information on binary rubies.
    Fetching yaml-0.1.4.tar.gz to /home/christopher/.rvm/archives
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100  460k  100  460k    0     0   702k      0 --:--:-- --:--:-- --:--:--  767k
    Extracting yaml-0.1.4.tar.gz to /home/christopher/.rvm/src
    Prepare yaml in /home/christopher/.rvm/src/yaml-0.1.4.
    Configuring yaml in /home/christopher/.rvm/src/yaml-0.1.4.
    Error running ' ./configure --prefix=/home/christopher/.rvm/usr ', please read /home/christopher/.rvm/log/ruby-1.9.3-p194/yaml/configure.log
    Compiling yaml in /home/christopher/.rvm/src/yaml-0.1.4.
    Error running 'make', please read /home/christopher/.rvm/log/ruby-1.9.3-p194/yaml/make.log
    Please note that it's required to reinstall all rubies: rvm reinstall all --force
    Installing Ruby from source to: /home/christopher/.rvm/rubies/ruby-1.9.3-p194, this may take a while depending on your cpu(s)...
    ruby-1.9.3-p194 - #downloading ruby-1.9.3-p194, this may take a while depending on your connection...
    ruby-1.9.3-p194 - #extracting ruby-1.9.3-p194 to /home/christopher/.rvm/src/ruby-1.9.3-p194
    ruby-1.9.3-p194 - #extracted to /home/christopher/.rvm/src/ruby-1.9.3-p194
    Skipping configure step, 'configure' does not exist, did autoreconf not run successfully?
    ruby-1.9.3-p194 - #compiling
    Error running 'make', please read /home/christopher/.rvm/log/ruby-1.9.3-p194/make.log
    There has been an error while running make. Halting the installation.

    The log files are as follows:

    arch:~ % cat ~/.rvm/log/ruby-1.9.3-p194/yaml/configure.log
    __rvm_log_command:32: permission denied:
    arch:~ % cat ~/.rvm/log/ruby-1.9.3-p194/yaml/make.log
    make: *** No targets specified and no makefile found.  Stop.
    arch:~ % cat ~/.rvm/log/ruby-1.9.3-p194/make.log
    make: *** No targets specified and no makefile found.  Stop.
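
    For anyone else who hits this: the bare "permission denied" in configure.log (with no makefile ever generated, hence the make errors) looks like what a noexec mount produces when RVM tries to run ./configure from under $HOME. A sketch of what I'd check first (the remount example assumes /home is its own partition, which may not match your layout):

        # See whether the filesystem holding ~/.rvm is mounted noexec
        mount | grep noexec
        # If /home is noexec, temporarily allow execution and retry
        sudo mount -o remount,exec /home
        rvm cleanup all && rvm install 1.9.3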

    Read the article

< Previous Page | 885 886 887 888 889 890 891 892 893 894 895 896  | Next Page >