Search Results

Search found 22756 results on 911 pages for 'power query'.


  • MYSQL - Multiple set values in one update statement [migrated]

    - by Maurzank
    MySQL - multiple SET values in one UPDATE statement, using two tables as reference and storing values in one of those tables with a specific logic.

    Hello people, a problem came up while writing an UPDATE. The example issue is as follows:

        CURRENUSRTABLE
        +------------+-------+
        | ID         | STATE |
        +------------+-------+
        | 123        | 3     |
        | 456        | 3     |
        | 789        | 3     |
        +------------+-------+

        HISTORYTABLE
        +------------+------------+-----+
        | ID         | TRDATE     | ACT |
        +------------+------------+-----+
        | 123        | 2013-11-01 | 5   |
        | 456        | 2013-11-01 | 5   |
        | 789        | 2013-11-01 | 5   |
        | 123        | 2013-11-02 | 4   |
        | 456        | 2013-11-02 | 4   |
        | 789        | 2013-11-02 | 4   |
        | 123        | 2013-11-03 | 3   |
        | 456        | 2013-11-03 | 3   |
        | 789        | 2013-11-03 | 3   |
        +------------+------------+-----+

    I'm using these variables: @BA=3, @DE=5, @BL=4. What I'm trying to do is an update on CURRENUSRTABLE.STATE using HISTORYTABLE.ACT with the following logic: the STATE value will be updated to the ACT value, except when the STATE value is 4 and ACT is 3; then STATE will be 5. I made this statement:

        UPDATE CURRENUSRTABLE
        RIGHT OUTER JOIN HISTORYTABLE ON HISTORYTABLE.ID = CURRENUSRTABLE.ID
        SET CURRENUSRTABLE.STATE =
        (
            SELECT CASE HISTORYTABLE.ACT
                WHEN @DE THEN @DE
                WHEN @BL THEN @BL
                WHEN @BA THEN
                    CASE CURRENUSRTABLE.STATE
                        WHEN @BL THEN @DE
                        ELSE @BA
                    END
            END
            ORDER BY HISTORYTABLE.TRDATE, FIELD(HISTORYTABLE.ACT, @DE, @BL, @BA)
        )
        WHERE HISTORYTABLE.TRDATE BETWEEN '2013-11-01' AND '2013-11-01'

    I'm intentionally using "RIGHT OUTER JOIN" and "HISTORYTABLE.TRDATE BETWEEN" because I'd like to change the values in CURRENUSRTABLE using a timeframe of more than one day. If I execute this statement many times using only one day (i.e. "BETWEEN '2013-11-01' AND '2013-11-01'", then "BETWEEN '2013-11-02' AND '2013-11-02'", etc.) it works perfectly, but if it is executed using the dates "BETWEEN '2013-11-01' AND '2013-11-03'" the result in CURRENUSRTABLE.STATE is 3, which is wrong; it should be 5.

    I think the problem lies in "CASE CURRENUSRTABLE.STATE" when the statement uses "HISTORYTABLE.TRDATE BETWEEN '2013-11-01' AND '2013-11-03'", because it reads STATE nine times, and the new value is not visible until the statement ends:

        Query OK, 9 rows affected (0.00 sec)
        Rows matched: 9  Changed: 9  Warnings: 0

    Maybe the solution is very simple, but unfortunately I don't have much practice with MySQL, since I've worked with it for less than 2 months :) Are there any suggestions to solve this issue? PS: the MySQL version is 4.1.22; I know it is very old and EOL, but unfortunately I have to run these statements on this version. Thanks!
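
    Since the single-day version is reported to work, one possible workaround is simply to drive that same statement once per day, in date order, from outside MySQL, so each day's result is committed before the next day runs. A rough, untested sketch, assuming a bash shell and the standard mysql command-line client; myuser, mypass, mydb and the date list are placeholders:

        #!/bin/bash
        # Untested sketch: re-run the known-good single-day UPDATE once per day,
        # oldest day first, so each day's change is visible to the next day's run.
        for DAY in 2013-11-01 2013-11-02 2013-11-03; do
            mysql -u myuser -pmypass mydb -e "
                SET @BA = 3; SET @DE = 5; SET @BL = 4;
                UPDATE CURRENUSRTABLE
                RIGHT OUTER JOIN HISTORYTABLE ON HISTORYTABLE.ID = CURRENUSRTABLE.ID
                SET CURRENUSRTABLE.STATE =
                  (SELECT CASE HISTORYTABLE.ACT
                            WHEN @DE THEN @DE
                            WHEN @BL THEN @BL
                            WHEN @BA THEN CASE CURRENUSRTABLE.STATE WHEN @BL THEN @DE ELSE @BA END
                          END
                   ORDER BY HISTORYTABLE.TRDATE, FIELD(HISTORYTABLE.ACT, @DE, @BL, @BA))
                WHERE HISTORYTABLE.TRDATE BETWEEN '$DAY' AND '$DAY';"
        done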

  • Help setting up a secondary authoritative DNS server.

    - by GLB03
    We have three authoritative DNS servers and three recursive/caching DNS servers on my campus.

    Authoritative servers:
        DNS1 - Windows 2003
        DNS2 - Old Red Hat ----- replacing w/ newer version
        DNS3 - Windows 2008 (I installed)

    Caching and recursive resolver servers:
        Server1 - Windows 2003
        Server2 - CentOS 5.2 (I installed)
        Server3 - CentOS 5.3 (I installed)

    I am replacing DNS2 with a newer Red Hat version, but have no documentation on how it was implemented. I have set up caching and Windows authoritative servers, but not a Linux secondary authoritative server. I have a Perl script from the original server that pulls data from our DNS1 server. We use djbdns and tinydns on our Linux servers. Our network engineer says the DNS2 server I am replacing is an authoritative server that doesn't need to be caching, but the only instructions I can find are for an authoritative server that does caching as well. Can someone point me in the right direction? I thought I was on the right track using those instructions, but when I query my new DNS server I get "No response from server". I have temporarily disabled iptables to eliminate it as an issue.

        ps -aux | grep dns
        avahi    3493 0.0 0.2 2600 1272 ?     Ss Apr24 0:05 avahi-daemon: running [newdns2.local]
        root     5254 0.0 0.1 3920  680 pts/0 R+ 09:56 0:00 grep dns
        root     6451 0.0 0.0 1528  308 ?     S  Apr29 0:00 supervise tinydns
        dnslog   6454 0.0 0.0 1540  308 ?     S  Apr29 0:00 multilog t ./main
        tinydns  9269 0.0 0.0 1652  308 ?     S  Apr29 0:00 /usr/local/bin/tinydns
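
    For reference, a tinydns instance is usually made a "secondary" not by configuring transfers in the daemon, but by periodically pulling AXFR zone transfers from the master and rebuilding data.cdb. A rough, untested sketch follows; it assumes DNS1 allows zone transfers to this host, that ucspi-tcp (tcpclient) and djbdns (axfr-get, tinydns-data) are installed, and that all records come from the transferred zones. Zone name, master IP, and the /etc/tinydns path are placeholders to adapt:

        #!/bin/sh
        # Untested sketch: pull the zone from the master via AXFR and rebuild
        # tinydns' data.cdb; typically run from cron.
        ZONE=example.edu
        MASTER=192.0.2.10
        cd /etc/tinydns/root || exit 1
        tcpclient "$MASTER" 53 axfr-get "$ZONE" "zone-$ZONE" "zone-$ZONE.tmp" || exit 1
        cat zone-* > data
        make    # runs tinydns-data to rebuild data.cdb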

  • Set scheduled task last result to 0x0 manually

    - by Rogier
    Every night a task runs that checks whether any scheduled task has a Last Result not equal to 0x0. If a scheduled task has an error like 0x1, an e-mail is automatically sent to me. As some tasks run only weekly, and sometimes an error occurs that leaves the result not equal to 0x0, an e-mail with the error message is sent every night, because the Last Result column still shows the last result of 0x1. I would like to set the Last Result column to 0x0 manually once I have solved a problem, so I don't get an e-mail with the error message every night. So, is it possible to set a scheduled task's Last Result to 0x0 manually (or by a script)?

    @harrymc: see the script below that sends the e-mail. I could easily add a criterion to ignore result 0x1 (or another code), but that is not a solution, because most of the time this result is a real error and has to be e-mailed.

        set [email protected]
        set SMTPServer=SMTPserver
        set PathToScript=c:\scripts
        set [email protected]

        for /F "delims=" %%a in ('schtasks /query /v /fo:list ^| findstr /i "Taskname Result"') do call :Sub %%a
        goto :eof

        :Sub
        set Line=%*
        set BOL=%Line:~0,4%
        set MOL=%Line:~38%
        if /i %BOL%==Task (
            set name=%MOL%
            goto :eof
        )
        set result=%MOL%
        echo Task Name=%name%, Task Result=%result%
        if not %result%==0 (
            echo Task %name% failed with result %result% > %PathToScript%\taskcheckerlog.txt
            bmail %PathToScript%\taskcheckerlog.txt -t %YourEmailAddress% -a "Warning! Failed %name% Scheduled Task on %computername%" -s %SMTPServer% -f %FromAddress% -b "Task %name% failed with result %result% on CorVu scheduler %computername%"
        )
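
    There is no documented way to write the Last Result value directly, but re-running the (now fixed) task normally overwrites it with the new exit code. A minimal, untested sketch, assuming the task is known by name and now exits with 0:

        rem Untested sketch: re-run the repaired task so its Last Result is
        rem overwritten with the new (hopefully 0x0) exit code.
        schtasks /run /tn "My Weekly Task"

        rem Check the stored result afterwards:
        schtasks /query /v /fo:list /tn "My Weekly Task" | findstr /i "Result"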

  • Output Caching with IIS7 - How-To for a dynamic aspx page?

    - by Lieven Cardoen
    I have a RetrieveBlob.aspx that takes some query string variables and returns an asset. Each URL corresponds to a unique asset. In RetrieveBlob.aspx a cache profile is set. In Web.config the profile looks like this (under the system.web tag):

        <caching>
          <outputCache enableOutputCache="true" />
          <outputCacheSettings>
            <outputCacheProfiles>
              <add duration="14800" enabled="true" varyByParam="*" name="AssetCacheProfile" />
            </outputCacheProfiles>
          </outputCacheSettings>
        </caching>

    OK, this works fine. When I put a breakpoint in the code-behind of RetrieveBlob.aspx, it is hit the first time and not on subsequent requests. Now I throw away the cache profile and instead put this in my Web.config under system.webServer:

        <caching>
          <profiles>
            <add extension=".swf" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".flv" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".gif" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".png" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".mp3" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".jpeg" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".jpg" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
          </profiles>
        </caching>

    Now the caching doesn't work anymore. What am I doing wrong? Is it possible to configure a caching profile for a dynamic .aspx page under the caching tag of system.webServer?
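
    If the goal is IIS-level output caching of the dynamic page itself, the system.webServer caching profiles can also be keyed on the .aspx extension and varied by query string. A hedged sketch, not verified against this setup; varyByQueryString is what should keep different assets from sharing one cache entry, and the kernel cache is set to DontCache because it cannot vary by query string:

        <caching>
          <profiles>
            <!-- Sketch: cache RetrieveBlob.aspx responses for 8 minutes,
                 with a separate cache entry per query string. -->
            <add extension=".aspx"
                 policy="CacheForTimePeriod"
                 kernelCachePolicy="DontCache"
                 duration="00:08:00"
                 varyByQueryString="*" />
          </profiles>
        </caching>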

  • Object Not found - Apache Rewrite issue

    - by Chris J. Lee
    I'm pretty new to setting up Apache locally with XAMPP. I'm trying to develop locally with XAMPP for Linux 1.7.4 (Ubuntu 11.04) for a Drupal site. I've actually git-pulled an exact copy of this Drupal site from another testing server hosted at MediaTemple.

    Issue: I visit my local development environment virtual host (http://bbk.loc) and the front page renders correctly, with no errors from Drupal or Apache. The issue is that subsequent pages return an "Object not found" error from Apache. What is more bizarre is that when I add certain query strings, the pages are found (like http://bbk.loc?p=user).

    VHost file:

        NameVirtualHost bbk.loc:*
        <Directory "/home/chris/workspace/bbk/html">
            Options Indexes Includes execCGI
            AllowOverride None
            Order Allow,Deny
            Allow From All
        </Directory>
        <VirtualHost bbk.loc>
            DocumentRoot /home/chris/workspace/bbk/html
            ServerName bbk.loc
            ErrorLog logs/bbk.error
        </VirtualHost>

    bbk.error log file:

        [Mon Jun 27 10:08:58 2011] [error] [client 127.0.0.1] File does not exist: /home/chris/workspace/bbk/html/node, referer: http://bbk.loc/
        [Mon Jun 27 10:21:48 2011] [error] [client 127.0.0.1] File does not exist: /home/chris/workspace/bbk/html/sites/all/themes/bbk/logo.png, referer: http://bbk.$
        [Mon Jun 27 10:21:51 2011] [error] [client 127.0.0.1] File does not exist: /home/chris/workspace/bbk/html/node, referer: http://bbk.loc/

    Actions I've taken:
        - Moved the rewrite module loading to load before the cache module (http://drupal.org/node/43545)
        - Verified mod_rewrite works with an .htaccess file

    Any ideas why mod_rewrite might not be working?
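
    One thing worth checking, based only on the vhost shown above: Drupal's clean URLs depend on the RewriteRules in its .htaccess, and AllowOverride None stops Apache from reading that file, so paths like /node fall through to the filesystem. A hedged sketch of the same directory block with overrides enabled (FollowSymLinks is included because mod_rewrite in .htaccess context needs it):

        <Directory "/home/chris/workspace/bbk/html">
            Options Indexes Includes ExecCGI FollowSymLinks
            # AllowOverride All lets Drupal's .htaccess (RewriteEngine/RewriteRule) take effect
            AllowOverride All
            Order Allow,Deny
            Allow From All
        </Directory>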

  • How do I make XTerm not use bold?

    - by mike
    I like using XTerm, I like its default "fixed" font, and I like using terminal colors rather than having a monochromatic terminal. However, XTerm seems to insist on using a bold version of the font whenever it's displaying a bright color. I hate hate hate the bold version of the font, but I like the brightness. The man page seems to suggest that adding "XTerm.VT100.boldMode:false" to my ~/.Xresources would disable this "feature", but it doesn't seem to have any effect. I've had it in there for months, so it's not a rebooting issue.

    How can I force XTerm to always use the standard, non-bold version of the fixed font, even when it's displaying bright text?

    Edit: Some have suggested putting "XTerm*boldMode: false" in my ~/.Xresources. That didn't help either. I've confirmed that the changes have taken effect with xrdb, though:

        $ xrdb -query | grep boldMode
        XTerm*boldMode: false

    And if I run xprop and click an xterm, I get 'WM_CLASS(STRING) = "xterm", "XTerm"', so I'm definitely running real xterms. BTW, this is just a plain-vanilla Ubuntu Intrepid box. If anyone else here is running the same, can you try running:

        echo -e '#\e[1m#'

    ...and let me know whether the # on the right has a black pixel in the middle like the one on the left does?
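
    One workaround that is sometimes suggested (untested here, and not guaranteed to address this exact setup) is to stop fighting boldMode and instead tell XTerm to use the ordinary fixed font as its "bold" font, so bright colors keep their brightness but lose the bold glyphs:

        ! Untested sketch for ~/.Xresources: make the "bold" font identical to the
        ! normal one so bright colors no longer switch to bold glyphs.
        XTerm*VT100.font:     fixed
        XTerm*VT100.boldFont: fixed

        ! Reload the resources and start a fresh xterm afterwards:
        !   xrdb -merge ~/.Xresources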

  • All PHP sites stopped working on IIS7, internal server error 500

    - by TimothyP
    I installed multiple Drupal 7 sites using the Web Platform Installer on Windows Server 2008. Until now they worked without any problems, but recently internal server error 500 started to show up (once every so many requests); now it happens for all requests to any of the PHP sites. There's not much detail to go on, and nothing changed between the time when it was working and now (well, nothing I know of anyway). The log file is flooded with messages such as:

        [09-Aug-2011 09:08:04] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:08:16] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:08:16] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:08:20] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:08:22] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:08:51] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:09:56] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:09:57] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:12:13] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:15:09] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:15:09] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:21:28] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:21:28] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0

    I have tried increasing the memory limit in php.ini as such:

        memory_limit = 512MB

    But that doesn't seem to solve the problem either. This is in the global PHP configuration in IIS. When I looked at the sites one by one, I noticed that PHP seemed to have been disabled:

        PHP is not enabled. Register new PHP version to enable PHP via FastCGI

    So I tried to register the PHP version again (C:\Program Files\PHP\v5.3\php-cgi.exe), but when I try to apply the changes I get:

        There was an error while performing this operation
        Details: Operation is not valid due to the current state of the object

    There doesn't seem to be any other information than that. I have no idea why, all of a sudden, PHP isn't available for the sites anymore. PS: I have rebooted IIS, the server, etc. This server is hosted on Amazon, so I gave the server some more power.

    Update: these seem to be two different issues.

        - I used memory_limit=128MB instead of memory_limit=128M. Notice the "MB" instead of "M".
        - A memory_limit of 128M was not enough; I had to increase it to 512M.

    The first issue caused internal server errors for every request. Increasing to 512M seemed to have solved the problem for a little while, but after a while the server errors returned. Note that the PHP Manager inside IIS still shows there is no PHP available for the sites (the global config does see it as available), so the problem remains unsolved.
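
    For reference, PHP's shorthand byte notation only understands single-letter suffixes (K, M, G), so "512MB" is not a valid size. A minimal sketch of the intended php.ini setting (path and the need to restart the site/FastCGI pool afterwards are assumptions about this setup):

        ; php.ini - shorthand sizes use K, M or G (no "MB")
        memory_limit = 512M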

  • Compile latest bnx2 driver on Debian Squeeze

    - by markus
    I want to upgrade the bnx2 network card driver in a Dell Power Edge R410. I downloaded the latest driver version from the broadcom website. If I want to compile the driver it fails with the following errors: make make -C bnx2/src KVER=2.6.32-5-amd64 PREFIX= make[1]: Entering directory `/tmp/netxtreme2-6.2.23/bnx2-2.0.23b/src' make -C /lib/modules/2.6.32-5-amd64/build SUBDIRS=/tmp/netxtreme2-6.2.23/bnx2-2.0.23b/src modules make[2]: Entering directory `/usr/src/linux-headers-2.6.32-5-amd64' Building modules, stage 2. MODPOST 2 modules make[2]: Leaving directory `/usr/src/linux-headers-2.6.32-5-amd64' make[1]: Leaving directory `/tmp/netxtreme2-6.2.23/bnx2-2.0.23b/src' make -C bnx2x/src KVER=2.6.32-5-amd64 PREFIX= make[1]: Entering directory `/tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src' make -C /lib/modules/2.6.32-5-amd64/build M=`pwd` modules make[2]: Entering directory `/usr/src/linux-headers-2.6.32-5-amd64' CC [M] /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.o In file included from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x.h:68, from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.c:80: /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_compat.h:1009:1: error: "PCI_VPD_LRDT_ID_STRING" redefined In file included from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.c:34: /usr/src/linux-headers-2.6.32-5-common/include/linux/pci.h:1327:1: error: this is the location of the previous definition In file included from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x.h:68, from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.c:80: /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_compat.h:1011:1: error: "PCI_VPD_LRDT_RO_DATA" redefined In file included from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.c:34: /usr/src/linux-headers-2.6.32-5-common/include/linux/pci.h:1328:1: error: this is the location of the previous definition In file included from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x.h:68, from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.c:80: /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_compat.h:1013:1: error: "PCI_VPD_LRDT_RW_DATA" redefined In file included from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.c:34: /usr/src/linux-headers-2.6.32-5-common/include/linux/pci.h:1329:1: error: this is the location of the previous definition In file included from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x.h:68, from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.c:80: /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_compat.h:1019:1: error: "PCI_VPD_SRDT_END" redefined In file included from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.c:34: /usr/src/linux-headers-2.6.32-5-common/include/linux/pci.h:1334:1: error: this is the location of the previous definition In file included from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x.h:68, from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.c:80: /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_compat.h:1032: error: conflicting types for ‘pci_vpd_lrdt_size’ /usr/src/linux-headers-2.6.32-5-common/include/linux/pci.h:1355: error: previous definition of ‘pci_vpd_lrdt_size’ was here /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_compat.h:1037: error: conflicting types for ‘pci_vpd_srdt_size’ /usr/src/linux-headers-2.6.32-5-common/include/linux/pci.h:1366: error: previous definition of ‘pci_vpd_srdt_size’ was here /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_compat.h:1042: error: conflicting types for ‘pci_vpd_find_tag’ /usr/src/linux-headers-2.6.32-5-common/include/linux/pci.h:1391: error: 
previous declaration of ‘pci_vpd_find_tag’ was here /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_compat.h:1077: error: conflicting types for ‘pci_vpd_info_field_size’ /usr/src/linux-headers-2.6.32-5-common/include/linux/pci.h:1377: error: previous definition of ‘pci_vpd_info_field_size’ was here /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_compat.h:1082: error: conflicting types for ‘pci_vpd_find_info_keyword’ /usr/src/linux-headers-2.6.32-5-common/include/linux/pci.h:1403: error: previous declaration of ‘pci_vpd_find_info_keyword’ was here make[5]: *** [/tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.o] Fehler 1 make[4]: *** [_module_/tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src] Fehler 2 make[3]: *** [sub-make] Fehler 2 make[2]: *** [all] Fehler 2 make[2]: Leaving directory `/usr/src/linux-headers-2.6.32-5-amd64' make[1]: *** [bnx2x.o] Fehler 2 make[1]: Leaving directory `/tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src' make: *** [l2build] Fehler 2

  • Listing group members using ldapsearch

    - by colemanm
    Our corporate LDAP directory is housed on a Snow Leopard Server Open Directory setup. I'm trying to use the ldapsearch tool to export an .ldif file to import into another, external LDAP server to authenticate against externally; basically I'm trying to be able to use the same credentials internally and externally.

    I've got ldapsearch working and giving me the contents and attributes of everything in the "Users" OU, and even filtering down to only the attributes I need:

        ldapsearch -xLLL -H ldap://server.domain.net \
          -b "cn=users,dc=server,dc=domain,dc=net" objectClass \
          uid uidNumber cn userPassword > directorycontents.ldif

    That gives me a list of users and properties that I can import into my remote OpenLDAP server:

        dn: uid=username1,cn=users,dc=server,dc=domain,dc=net
        objectClass: inetOrgPerson
        objectClass: posixAccount
        objectClass: organizationalPerson
        uidNumber: 1000
        uid: username1
        userPassword:: (hashedpassword)
        cn: username1

    However, when I try the same query on an OD "group" instead of a "container," the results are something like this:

        dn: cn=groupname,cn=groups,dc=server,dc=domain,dc=net
        objectClass: posixGroup
        objectClass: apple-group
        objectClass: extensibleObject
        objectClass: top
        gidNumber: 1032
        cn: groupname
        memberUid: username1
        memberUid: username2
        memberUid: username3

    What I really want is a list of users from the top example, filtered based on their group memberships, but it looks like membership is set from the group side rather than the user account side. There must be a way to filter this down and only export what I need, right?
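
    Since posixGroup membership lives on the group entry as memberUid values, one workable pattern is to read those values first and then build a user filter from them. A rough sketch, untested against Open Directory, reusing the server and base DNs from the question:

        #!/bin/sh
        # Untested sketch: export only the users listed as memberUid on one group.
        GROUP=groupname
        BASE="dc=server,dc=domain,dc=net"
        URI="ldap://server.domain.net"

        # 1. Collect the group's memberUid values.
        MEMBERS=$(ldapsearch -xLLL -H "$URI" -b "cn=groups,$BASE" "(cn=$GROUP)" memberUid \
                  | awk '/^memberUid:/ {print $2}')

        # 2. Build an OR filter like (|(uid=u1)(uid=u2)...) and export those users.
        FILTER="(|"; for u in $MEMBERS; do FILTER="$FILTER(uid=$u)"; done; FILTER="$FILTER)"
        ldapsearch -xLLL -H "$URI" -b "cn=users,$BASE" "$FILTER" \
          objectClass uid uidNumber cn userPassword > directorycontents.ldif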

  • JVM system time runs faster than HP UNIX OS system time

    - by winston
    Hello, I have the following output from a simple debug JSP:

        Weblogic Startup Since: Friday, October 19, 2012, 08:36:12 AM
        Database Current Time: Wednesday, December 12, 2012, 11:43:44 AM
        Weblogic JVM Current Time: Wednesday, December 12, 2012, 11:45:38 AM

    Line 1 is a variable recorded during WebLogic webapp startup. Line 2 is the output of the database query "select sysdate from dual;". Line 3 is the output of the Java code "new Date()". I have checked with the shell date command that line 2 agrees with the OS time. The output of line 3 is mysterious; I don't know how it comes out of the Java VM. On another machine with the same setup, the same JSP outputs this:

        Weblogic Startup Since: Tuesday, December 11, 2012, 02:29:06 PM
        Database Current Time: Wednesday, December 12, 2012, 11:51:48 AM
        Weblogic JVM Current Time: Wednesday, December 12, 2012, 11:51:50 AM

    Another machine:

        Weblogic Startup Since: Monday, December 10, 2012, 05:00:34 PM
        Database Current Time: Wednesday, December 12, 2012, 11:52:03 AM
        Weblogic JVM Current Time: Wednesday, December 12, 2012, 11:52:07 AM

    Findings: the pattern shows that the longer WebLogic has been up, the larger the discrepancy between OS time and JVM time. Can anybody help with the HP JVM? On HP-UX, NTP is synced daily. Anyway, here are the server versions:

        HP-UX machinex B.11.31 U ia64 2426956366 unlimited-user license

        java version "1.6.0.04"
        Java(TM) SE Runtime Environment (build 1.6.0.04-jinteg_28_apr_2009_04_46-b00)
        Java HotSpot(TM) Server VM (build 11.3-b02-jre1.6.0.04-rc2, mixed mode)

        WebLogic Server Version: 10.3.2.0

    Java properties:

        java.runtime.name=Java(TM) SE Runtime Environment
        java.runtime.version=1.6.0.04-jinteg_28_apr_2009_04_46-b00
        java.vendor=Hewlett-Packard Co.
        java.vendor.url=http\://www.hp.com/go/Java
        java.version=1.6.0.04
        java.vm.name=Java HotSpot(TM) 64-Bit Server VM
        java.vm.info=mixed mode
        java.vm.specification.vendor=Sun Microsystems Inc.
        java.vm.vendor="Hewlett-Packard Company"
        sun.arch.data.model=64
        sun.cpu.endian=big
        sun.cpu.isalist=ia64r0
        sun.io.unicode.encoding=UnicodeBig
        sun.java.launcher=SUN_STANDARD
        sun.jnu.encoding=8859_1
        sun.management.compiler=HotSpot 64-Bit Server Compiler
        sun.os.patch.level=unknown
        os.name=HP-UX
        os.version=B.11.31

  • SharePoint Business Connectivity Services (BCS) Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'

    - by g18c
    I am running SharePoint 2010 with SQL 2012 and I am trying to get Business Connectivity Services (BCS) running, but I am facing a double-hop authentication issue. Every time I try to connect to the external BCS list created in SharePoint Designer, I get the error "Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'". In the event viewer on the SQL server I see a login failure for an anonymous user from the SharePoint server's IP address. Background information below.

    I have enabled Kerberos under SharePoint Central Administration. I have the following AD domain accounts:

        SP_Farm     - main website pool
        SP_Services - for SharePoint services (including BCS)
        SQL_Engine  - SQL database engine

    I then created the following with SetSPN:

        SetSPN -S http/intranet mydomain\SP_Farm
        SetSPN -S http/intranet.mydomain.local mydomain\SP_Farm
        SetSPN -S SPSvc/SPS mydomain\SP_Farm
        SetSPN -S MSSQLSvc/SQL1 mydomain\SQL_DatabaseEngine
        SetSPN -S MSSQLSvc/SQL1.mydomain.local mydomain\SQL_DatabaseEngine
        SetSPN -S MSSQLSvc/SQL1:1433 mydomain\SQL_DatabaseEngine
        SetSPN -S MSSQLSvc/SQL1.mydomain.local:1433 mydomain\SQL_DatabaseEngine

    I then delegated the AD accounts for any authentication protocol to the following:

        SP_Farm    - SP_Farm (http service type, intranet)
        SP_Farm    - SQL_DatabaseEngine (MSSQLSvc, sql1)
        SP_Service - SP_Service (SPSvc)
        SP_Service - SQL_DatabaseEngine (MSSQLSvc, sql1)

    I have also checked that the WFE is being logged on to with Kerberos: the WFE server event log shows event ID 4624 with Kerberos authentication, which is OK. SQL is also showing connections authenticated as Kerberos from the WFE, per the following query:

        select s.session_id, s.login_name, s.host_name, c.auth_scheme
        from sys.dm_exec_connections c
        inner join sys.dm_exec_sessions s on c.session_id = s.session_id

    Despite the above, credentials are not passed from the client through the SharePoint server to the SQL server; only the anonymous account is used. I get the following error on the WFE server for 'BusinessData', ID 8080:

        Could not open connection using 'data source=sql1.mydomain.local;initial catalog=MSCRM;integrated security=SSPI;pooling=true;persist security info=false' in App Domain '/LM/W3SVC/1848937658/ROOT-1-129922939694071446'. The full exception text is: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.

    If I set a username and password with the Secure Store Service and set the external list to use the impersonated credentials, the list works. Any ideas what I have missed and what can be tried next?

  • How can I parse/transform text log data before it gets captured in SCOM 2007 R2?

    - by Abs
    I'm pretty much a noob with System Center Operations Manager 2007, and I'm probably missing something pretty basic, but I'm stumped anyway. We're setting up monitoring on some of our servers, and we'd like to capture data from some plain text log files (e.g. DNS debug logs, DHCP logs). It looks to me like I can set up a generic text file monitoring rule and get events captured into the main Ops Manager database, but my understanding is that the whole line of text from the plain text log gets captured as one field. In an ideal world, we'd be able to parse or transform that log file data to make it easier to query later. Is this possible? Is it easy? Do I have to buy expensive 3rd-party software to do it? One more thing: it would be even better if there was a way to stuff this data into the Audit Collection Services (ACS) database instead of the main one, but I'll take what I can get. Any help would be greatly appreciated.

  • MySQL Master-Master w/ multiple read slave cost effective setup in AWS

    - by Ross
    I've been evaluating Amazon Web Services RDS for MySQL and costing out potential scenarios involving a simple multi-AZ deployment read/write setup vs. a multi-AZ deployment MySQL master (hot standby) with additional read-only slaves. The issue I'm trying to cost-optimize involves their reserved instances vs. on-demand instances.

    Situation 1: purchase a reserved multi-AZ setup for an extra-large high-memory (17 GB RAM) instance for $5200/yr and have my application query the master all the time. The problem is, if I don't need all the resources of the 17 GB RAM all the time, and therefore especially not a hot standby, what alternatives for savings can a better topology create - like, potentially, situation 2 below?

    Situation 2: purchase a reserved multi-AZ setup using smaller master instances than above for the master-master hot standby to receive the writes only. Then create and load-balance several read-only slaves off the master, and add/remove and/or scale the read slaves up/down based on demand. This might only cost $1000 plus the on-demand usage of the read slaves.

    My thinking is, if I have a variable read-intensive application load with a low write load, the single-level topology in situation 1 means I'm paying for a lot of resources at the write level of the topology when I don't need them there. My hope is that situation 2 can yield cost savings from smaller reserved instances at the master-master resource level, allowing me to scale up/down and/or out at the read level according to demand as needed. Does anyone see a downside to doing this, or know of some reason this isn't possible with RDS? Any other thoughts or advice are always welcome of course. Thanks in advance, R

  • Why do we need different CPU architectures for server & mini/mainframe & mixed-core?

    - by claws
    Hello, I was just wondering what other CPU architectures are available besides Intel & AMD, and I found the List of CPU architectures on Wikipedia. It categorizes notable CPU architectures into the following groups:

        - Embedded CPU architectures
        - Microcomputer CPU architectures
        - Workstation/server CPU architectures
        - Mini/mainframe CPU architectures
        - Mixed-core CPU architectures

    I was analyzing the purposes and have a few doubts. I'm taking the microcomputer CPU (PC) architecture as a reference and comparing the others to it.

    Embedded CPU architectures: they are a completely new world. Embedded systems are small and do very specific tasks, mostly real-time and low-power-consuming, so we do not need as many, or such wide, registers as are available in a microcomputer CPU (a typical PC). In other words we do need a new, small and tiny architecture - hence a new architecture and a new instruction set, RISC. The above point also clarifies why we need a separate operating system (an RTOS).

    Workstation/server CPU architectures: I don't know what a workstation is - someone please clarify regarding workstations. As for the server: it is dedicated to running specific software (server software like httpd, mysql, etc.). Even if other processes run, we need to give the server process priority, so there is a need for a new scheduling scheme, and thus we need an operating system different from a general-purpose one. If you have any more points for the need for a server OS, please mention them. But I don't get why we need a new CPU architecture. Why can't the microcomputer CPU architecture do the job? Can someone please clarify?

    Mini/mainframe CPU architectures: again, I don't know what these are, or what minis or mainframes are used for. I just know they are very big and occupy a complete floor. But I never read about real-world problems they are trying to solve. If anyone is working on one of these, share your knowledge. Can someone clarify their purpose, and why the microcomputer CPU architecture is not suitable for them? Is there a new kind of operating system for these too? Why?

    Mixed-core CPU architectures: never heard of these.

    If possible, please keep your answer in this format:

        XYZ CPU architectures
        - Purpose of XYZ
        - Need for a new architecture. Why can't current microcomputer CPU architectures work? They go up to 3 GHz and have up to 8 cores.
        - Need for a new operating system. Why do we need a new kind of operating system for this kind of architecture?

  • Firefox 29 - how do I delete history entries visited fewer than x times

    - by lousyuser
    Context: I've been using my Firefox profile for a couple of years now, and my history file has naturally become huge. I have Firefox Sync set up between my main desktop PC and my laptop.

    Hardware configs:
        PC: i5-3450, 8 GB DDR3 RAM, Crucial M4 128 GB SSD
        laptop: Pentium SU4100, 4 GB DDR3 RAM, WD 5400 rpm HDD

    Accessing history entries when typing into the Awesome Bar on my desktop takes quite a long time despite the decent config; the laptop is even slower. The experience is quite unresponsive. I figured that if I cleared the history up a little, I might avoid creating a new profile to speed things up.

    The question itself, to illustrate: is there a way to delete all history entries that have been visited fewer than x (let's say 5) times and, at the same time, whose most recent visit is fewer than y (let's say 120) days old?

    As far as I know the history file is some kind of SQL database, but I'm not really sure how the data is stored, whether there's a "safe" way to edit it, or what the query to do what I need would look like. Thanks in advance for any help.

    I kept browsing through previous SuperUser questions to see if I could find relevant information:

        "In my Firefox profile directory, there is a file named places.sqlite. Opening it with sqlite reveals (amongst others) the tables moz_places and moz_historyvisits. It seems that moz_historyvisits uses the primary key of moz_places to refer to the URLs."

    As I'm unfamiliar with databases, I don't really understand the way the two tables mentioned in the quote are related. [screenshot of a part of the tables] I've noticed the visit_count is in a standard format, making it easy to work with. The last_visit_date looks encrypted to my naked eye, but I can't see in which way. Hope that helps; I'm at my wits' end.
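
    For what it's worth, a hedged sketch of what such a cleanup could look like against places.sqlite (Firefox stores last_visit_date as microseconds since the Unix epoch). This is untested: back up the file, run it with Firefox closed, and note that it assumes the intent is to drop rarely visited entries whose last visit is older than 120 days - flip the comparison if the opposite is wanted:

        -- Run with:  sqlite3 places.sqlite < cleanup.sql   (Firefox closed, file backed up)
        -- Delete places visited fewer than 5 times whose last visit is older than
        -- 120 days; last_visit_date is in microseconds since the epoch.
        DELETE FROM moz_places
        WHERE visit_count < 5
          AND (last_visit_date IS NULL
               OR last_visit_date < strftime('%s', 'now', '-120 days') * 1000000);

        -- Remove visit rows that now point at deleted places.
        DELETE FROM moz_historyvisits
        WHERE place_id NOT IN (SELECT id FROM moz_places);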

  • Question about Displaying Documents and the CQWP in MOSS 2007

    - by Psycho Bob
    My organization is in the process of converting our intranet over to a SharePoint solution. Part of this intranet will be the movement and organization of all our internal documents. Currently, we have 11 pages of document links, each with its own subheadings.

    So far I have it set up so that each document has a custom field called "Page" with a check-box list of all the document pages on the intranet site. On each individual page, I have set up a Content Query Web Part that displays the documents that have the corresponding Page value set (i.e. if a document's Page value has been checked for "HR", it will appear on the HR page). The goal of this setup is to allow the nontechnical personnel who will be responsible for maintaining the documents to upload new documents to the document list and note on which pages they should appear, without having to manually update the pages themselves.

    The problem I am having is that I cannot seem to find a good way to sort the documents into their subheadings once they are on the appropriate page. I could create individual check boxes for each page/subheading combination, but this would create a list of approximately 50-75 items. Does anyone have any ideas as to how I could accomplish this, either via CQWP or by different means?

    Goals/requirements of the installation:
        - Allow intranet documents to be maintained by nontechnical personnel
        - Display documents on the appropriate pages without the user having to edit the actual page or web part
        - Denote document page location using user-settable document attributes (if possible)
        - Maintain current intranet organization and workflow
        - Use only one document list without subdirectories

    NOTE: I am aware that this is not the most efficient or elegant way to do things, but these are the requirements I have been given for the project.

  • How do I keep a table in sync across multiple SQL Databases?

    - by Refracted Paladin
    I have a WinForms data-entry application that uses 4 separate databases. It is an occasionally connected app that uses merge replication (SQL 2005) to stay in sync. This is working just fine.

    The next hurdle I am trying to tackle is adding filters to my publications. Right now we are replicating 70 MB, compressed, to each of our 150 subscribers when, truthfully, they only need a tiny fraction of that. Using filters I am able to accomplish this (see code below), but I had to make a mapping table in order to do so. This mapping table consists of 3 columns: a PrimaryID (Guid), WorkerName (varchar), and ClientID (int). The problem is I need this table present in all FOUR databases in order to use it for the filter since, to my knowledge, views or cross-database queries are not allowed in a filter statement.

    What are my options? It seems like I would set it up to be maintained in one database and then use triggers to keep it updated in the other 3 databases. In order to be part of the filter I have to include that table in the replication set, so how do I flag it appropriately? Is there a better way altogether?

        SELECT <published_columns> FROM [dbo].[tblPlan]
        WHERE [ClientID] IN (
            select ClientID from [dbo].[tblWorkerOwnership]
            where WorkerID = SUSER_SNAME())

    This allows you to chain filters together; the next one sits below the first one, so it only pulls from the first's filtered set:

        SELECT <published_columns> FROM [dbo].[tblPlan]
        INNER JOIN [dbo].[tblHealthAssessmentReview]
            ON [tblPlan].[PlanID] = [tblHealthAssessmentReview].[PlanID]

    P.S. - I know how illogical the DB structure sounds. I didn't make it; I inherited it and was then told to make it a "disconnected app." Go figure!
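
    On the "maintain it in one database and mirror it with triggers" idea, here is a rough, untested sketch of what one such trigger could look like. The table and column names are taken from the question's description of the mapping table, OtherDB is a placeholder, and a real version would also cover UPDATE/DELETE and the remaining databases:

        -- Untested sketch: mirror new mapping rows from the master copy into one
        -- of the other databases; repeat for each target database.
        CREATE TRIGGER dbo.trg_tblWorkerOwnership_Insert
        ON dbo.tblWorkerOwnership
        AFTER INSERT
        AS
        BEGIN
            SET NOCOUNT ON;
            INSERT INTO OtherDB.dbo.tblWorkerOwnership (PrimaryID, WorkerName, ClientID)
            SELECT PrimaryID, WorkerName, ClientID
            FROM inserted;
        END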

  • MSSQLSERVER Will Not Start - Event ID 913 and 1814

    - by ThaKidd
    Hello ServerFault! I need some serious help. I have a major database server down and am scratching my head at how to fix it. The server was hit by rolling blackouts last week in Dallas, and since then Microsoft SQL 2005 SP2 will not start up. I am getting the following errors (both when starting the service and when trying to execute sqlservr.exe -c -f -m):

        Event Type: Error
        Event Source: MSSQLSERVER
        Event ID: 913
        Could not find database ID 3. Database may not be activated yet or may be in transition. Reissue the query once the database is available. If you do not think this error is due to a database that is transitioning its state and this error continues to occur, contact your primary support provider. Please have available for review the Microsoft SQL Server error log and any additional information relevant to the circumstances when the error occurred.

    and...

        Event Type: Information
        Event Source: MSSQLSERVER
        Event ID: 1814
        Could not create tempdb. You may not have enough disk space available. Free additional disk space by deleting other files on the tempdb drive and then restart SQL Server. Check for additional errors in the event log that may indicate why the tempdb files could not be initialized.

    I have tried renaming tempdb.mdf to tempdb.old with no success. I have checked and have 193 GB of free hard drive space. What else might cause this problem? Could the server need a chkdsk run on it, or do I need to be looking at some other area of the database server? Any help is greatly appreciated. Thank you in advance.
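
    If it helps with interpretation: database ID 3 is model, and tempdb is created at startup as a copy of model, which would explain both events appearing together. One commonly described recovery path - offered here only as a hedged sketch, not as specific advice for this box - is to start the instance with only master recovered and then restore model from a backup (the backup path is a placeholder):

        rem Untested sketch: start SQL Server single-user, minimal configuration,
        rem recovering only master (trace flag 3608), then restore model.
        sqlservr.exe -c -f -m -T3608

        rem From another window, once the instance accepts connections:
        sqlcmd -E -S . -Q "RESTORE DATABASE model FROM DISK='D:\backups\model.bak' WITH REPLACE"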

  • SQL Server rolling forward lots of transactions, what should I look at?

    - by Anthony D
    I am running SQL Server Express on a Windows XP Embedded box. It runs for a day or two, doing some transactional processing for a POS-type system, with another system pulling data out to an OLAP DB for processing. After a while, I see in the event viewer the sequence SQL Server puts out when it restarts: copyrights, command-line parameters, and so on. That seems to coincide with our OLAP process crashing. I then see that when it restarts our transaction DB, it does a recovery, pulling in 10K or so transactions that need to be rolled forward.

    Does this mean SQL has crashed? I don't really see much to indicate what happened.

    Update 1: I noticed I have my memory limit set to 1 MB per query and 2 TB for the server. These are the defaults. I only have one GB in the box. We have seen SQL crash a whole box just by using all the system memory. In this case, though, the whole box is up when we get to it.
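
    On the memory-limit observation: the 2 TB figure is just the unbounded default for "max server memory", and on a 1 GB box it is common to cap it so the OS and other processes keep some headroom. A hedged sketch; the 512 MB value is an assumption to tune, not a recommendation for this workload:

        -- Untested sketch: cap SQL Server at ~512 MB so the rest of the 1 GB box
        -- keeps breathing room.
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 512;
        RECONFIGURE;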

  • Shadow copy referencing invalid volume from symboliclink

    - by ccook
    I recently replaced my motherboard after the last one failed (it was shorting and causing random reboots). I'm sure this was not healthy for the machine, and a clean install would do wonders, but I'd like to fix the current install. That aside, I've been tracking down a pair of errors in the application log:

        Volume Shadow Copy Service error: Error calling a routine on a Shadow Copy Provider {b5946137-7b9f-4925-af80-51abd60b20d5}. Routine details IVssSnapshotProvider::QueryVolumesSupportedForSnapshots(ProviderId,29,...) [hr = 0x80042302, A Volume Shadow Copy Service component encountered an unexpected error. Check the Application event log for more information. ].
        Operation: Query volumes supported by this provider
        Context: Provider ID: {b5946137-7b9f-4925-af80-51abd60b20d5}
        Snapshot Context: 29

    Followed by:

        Volume Shadow Copy Service error: Unexpected error calling routine Error calling CreateFile on volume '\\?\Volume{f4bda86e-049d-11e1-9255-bcaec56690a1}\'. hr = 0x80070020, The process cannot access the file because it is being used by another process.

    This error is reproducible at the command line, creating the two event log entries:

        C:\Windows\system32>vssadmin list volumes
        vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
        (C) Copyright 2001-2005 Microsoft Corp.

        Error: The shadow copy provider had an unexpected error while trying to process the specified command.

    Using WinObj from Sysinternals, I have tracked down the global object:

        '\\?\Volume{f4bda86e-049d-11e1-9255-bcaec56690a1}\' - SymbolicLink - '\Device\HarddiskVolume8'

    Running DISKPART and executing "list volume" within it lists volumes 0 through 6; there is no HarddiskVolume8. How can I remove this reference to HarddiskVolume8 and get shadow copy up and running?

  • Chrome caching 302 redirects

    - by Thermionix
    I have a PHP script which is used to rotate banner images on a site. Under Firefox/IE, page refreshes will make another request and a different image will be returned. Under Chrome, the request seems to be cached and only opening the page in a new tab will cause it to actually query the script. I believe this used to work in older versions of Chrome. I've tried a few different types of redirect codes, all with the same result. Any tips?

        <img class="banner" src="/inc/banner.php" alt="">

        ~$ cat /var/www/inc/banner.php
        <?php
        header("HTTP/1.1 302 Redirect");
        header("Cache-Control: max-age=0, no-cache, no-store, must-revalidate");
        //header('HTTP/1.1 307 Temporary Redirect');
        //header("expires: none");
        //header("expires: max");
        //header("Cache-Control: public");

        $folder = '../img/banner/';
        $exts = 'jpg jpeg png gif';
        $files = array();
        $i = -1;
        if ('' == $folder) $folder = './';

        $handle = opendir($folder);
        $exts = explode(' ', $exts);
        while (false !== ($file = readdir($handle))) {
            foreach ($exts as $ext) { // for each extension check the extension
                if (preg_match('/\.'.$ext.'$/i', $file, $test)) { // faster than ereg, case insensitive
                    $files[] = $file; // it's good
                    ++$i;
                }
            }
        }
        closedir($handle); // We're not using it anymore

        mt_srand((double)microtime()*1000000); // seed for PHP < 4.2
        $rand = mt_rand(0, $i); // $i was incremented as we went along

        header('Location: '.$folder.$files[$rand]);
        flush();
        ?>

    curl output:

        ~$ curl -I -k https://example.net/inc/banner.php
        HTTP/1.1 302 Redirect
        Server: nginx/1.1.14
        Date: Fri, 24 Feb 2012 03:23:46 GMT
        Content-Type: text/html
        Connection: keep-alive
        X-Powered-By: PHP/5.3.10-1ubuntu1
        Cache-Control: max-age=0, no-cache, no-store, must-revalidate
        Location: ../img/banner/2.jpg
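
    One low-tech workaround, if header tweaking keeps failing: make each request URL unique so Chrome has nothing to reuse. A sketch that assumes the page embedding the banner is itself PHP; the parameter name is arbitrary:

        <!-- Untested sketch: append a throwaway query string so every page load
             requests a distinct URL and bypasses Chrome's cached redirect. -->
        <img class="banner" src="/inc/banner.php?r=<?php echo mt_rand(); ?>" alt="">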

  • Google responds differently to two identical nginx setups and 200 codes; any ideas?

    - by Yuji Tomita
    I'm rather confused... I have a linode.com VPS which was cloned recently, so the settings are the same between the nginx servers. One lives on a dev subdomain, one on the www. I'm trying to run a Google experiment on my live server, which claims:

        Web server rejects utm_expid. Your server doesn't support added query arguments in URLs.

    My logs on the dev server, where it works, show:

        74.125.186.32 - - [13/Sep/2012:13:33:45 -0700] "GET /product/iphone-case/?utm_expid=25706866-0 HTTP/1.1" 200 12521 "-" "Google_Analytics_Content_Experiments
        74.125.186.32 - - [13/Sep/2012:13:33:45 -0700] "GET /product/iphone-case/?ab_reviews=True&utm_expid=25706866-0 HTTP/1.1" 200 14679 "-" "Google_Analytics_Content_Experiments

    My production server shows Google making a second request:

        74.125.186.41 - - [13/Sep/2012:13:34:49 -0700] "GET /product/iphone-case/?ab_reviews=on&utm_expid=25706866-1 HTTP/1.1" 200 12104 "-" "Google_Analytics_Content_Experiments
        74.125.186.41 - - [13/Sep/2012:13:34:49 -0700] "GET /product/iphone-case/?utm_expid=25706866-1 HTTP/1.1" 200 12122 "-" "Google_Analytics_Content_Experiments
        74.125.186.41 - - [13/Sep/2012:13:34:49 -0700] "GET /product/iphone-case/ HTTP/1.1" 200 12522 "-" "Google_Analytics_Content_Experiments    <--- a second request, without the query string, for some reason

    I'm not sure how Google decides it needs to send a second request without the query string. The original request clearly got a 200 OK status response. Does anybody have any suggestions on where to look next? The HTML on the two pages (compared by diff) is exactly the same.

  • PXE Boot not working

    - by Nishant
    Please explain the error in this screenshot. DHCP setting: this screenshot was taken after powering off the old computer, hence the server interface is shown as the wireless card - it becomes 192.168.0.1 when I connect the wires and power up the old laptop to boot via PXE.

    My scenario is simple: an old laptop, a new laptop, and a crossover cable (which I made myself from CAT 6 cable by cutting it and connecting 4 wires as mentioned in some doc). The new laptop (the TFTP server) has a wireless card (with which I am browsing and writing this), and the cable is connected between the laptops.

    TFTP server (new laptop) details:

        Windows IP Configuration

        Ethernet adapter Local Area Connection:
           Connection-specific DNS Suffix  . :
           Link-local IPv6 Address . . . . . : fe80::f511:3d4a:ca01:122e%16
           IPv4 Address. . . . . . . . . . . : 192.168.0.1
           Subnet Mask . . . . . . . . . . . : 255.255.255.0
           Default Gateway . . . . . . . . . : 192.168.0.2

        Wireless LAN adapter Wireless Network Connection:
           Connection-specific DNS Suffix  . : Achilles
           Link-local IPv6 Address . . . . . : fe80::99b1:8ae0:9e6c:f300%11
           IPv4 Address. . . . . . . . . . . : 192.168.2.3
           Subnet Mask . . . . . . . . . . . : 255.255.255.0
           Default Gateway . . . . . . . . . : 192.168.2.1

  • Can't sync filesystem without reboot

    - by Fabio
    I'm having an issue with a linux server. Once a week the running mysql instance hangs and there is no way to fully stop it. If I kill it, it remains in zombie status and init does not reap its pid. The server is used for staging deployments and some internal tools, so it's not under heavy load. The only process constantly used id mysql and for this I think that it's the only process which suffer of this issue. I've searched system logs for errors and the only thing I found is this error (repeated a couple of times) in dmesg output: [706560.640085] INFO: task mysqld:31965 blocked for more than 120 seconds. [706560.640198] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [706560.640312] mysqld D ffff88032fd93f40 0 31965 1 0x00000000 [706560.640317] ffff880242a27d18 0000000000000086 ffff88031a50dd00 ffff880242a27fd8 [706560.640321] ffff880242a27fd8 ffff880242a27fd8 ffff88031e549740 ffff88031a50dd00 [706560.640325] ffff88031a50dd00 ffff88032fd947f8 0000000000000002 ffffffff8112f250 [706560.640328] Call Trace: [706560.640338] [<ffffffff8112f250>] ? __lock_page+0x70/0x70 [706560.640344] [<ffffffff816cb1b9>] schedule+0x29/0x70 [706560.640347] [<ffffffff816cb28f>] io_schedule+0x8f/0xd0 [706560.640350] [<ffffffff8112f25e>] sleep_on_page+0xe/0x20 [706560.640353] [<ffffffff816c9900>] __wait_on_bit+0x60/0x90 [706560.640356] [<ffffffff8112f390>] wait_on_page_bit+0x80/0x90 [706560.640360] [<ffffffff8107dce0>] ? autoremove_wake_function+0x40/0x40 [706560.640363] [<ffffffff8112f891>] filemap_fdatawait_range+0x101/0x190 [706560.640366] [<ffffffff81130975>] filemap_write_and_wait_range+0x65/0x70 [706560.640371] [<ffffffff8122e441>] ext4_sync_file+0x71/0x320 [706560.640376] [<ffffffff811c3e6d>] do_fsync+0x5d/0x90 [706560.640379] [<ffffffff811c40d0>] sys_fsync+0x10/0x20 [706560.640383] [<ffffffff816d495d>] system_call_fastpath+0x1a/0x1f When this happens the only way to make everything working again is a full reboot, but in order to do that I'm forced to use this command after I've manually stopped all running processes echo b > /proc/sysrq-trigger otherwise normal reboot process hangs forever. I've tracked reboots script and I've found out that also the reboot process hangs on a sync call, this one in /etc/init.d/sendsigs (I'm on ubuntu) # Flush the kernel I/O buffer before we start to kill # processes, to make sure the IO of already stopped services to # not slow down the remaining processes to a point where they # are accidentily killed with SIGKILL because they did not # manage to shut down in time. sync I'm almost sure that the cause of this is an hardware issue (the RAID controller???) also because I've other two machines with the same hardware and software configuration and they don't suffer of this, but I can't find any hint in syslog or dmesg. I've also installed smartmontools and mcelog packages but none of them did report any issue. What can I do to track the cause of this issue? Today is happened again, here is the status of system after triggering a reboot init---console-kit-dae---64*[{console-kit-dae}] +-dbus-daemon +-mcelog +-mysqld---{mysqld} +-newrelic-daemon---newrelic-daemon---11*[{newrelic-daemon}] +-ntpd +-polkitd---{polkitd} +-python3 +-rpc.idmapd +-rpc.statd +-rpcbind +-sh---rc---S20sendsigs---sync +-smartd +-snmpd +-sshd---sshd---zsh---sudo---zsh---pstree +-sshd---sshd---zsh---sudo---zsh And here is the status of sync process # ps aux | grep sync root 3637 0.1 0.0 4352 372 ? D 05:53 0:00 sync i.e. Uninterruptible sleep... 
Hardware specs as reported by lshw I think the raid controller is a fake raid. I usually don't deal with hardware (and for the record I don't have physical access to it) description: Computer product: X7DBP () vendor: Supermicro version: 0123456789 serial: 0123456789 width: 64 bits capabilities: smbios-2.4 dmi-2.4 vsyscall32 configuration: administrator_password=disabled boot=normal frontpanel_password=unknown keyboard_password=unknown power-on_password=disabled uuid=53D19F64-D663-A017-8922-0030487C1FEE *-core description: Motherboard product: X7DBP vendor: Supermicro physical id: 0 version: PCB Version serial: 0123456789 *-firmware description: BIOS vendor: Phoenix Technologies LTD physical id: 0 version: 6.00 date: 05/29/2007 size: 106KiB capacity: 960KiB capabilities: pci pnp upgrade shadowing escd cdboot bootselect edd int13floppy2880 acpi usb ls120boot zipboot biosbootspecification *-storage description: RAID bus controller product: 631xESB/632xESB SATA RAID Controller vendor: Intel Corporation physical id: 1f.2 bus info: pci@0000:00:1f.2 version: 09 width: 32 bits clock: 66MHz capabilities: storage pm bus_master cap_list configuration: driver=ahci latency=0 resources: irq:19 ioport:18a0(size=8) ioport:1874(size=4) ioport:1878(size=8) ioport:1870(size=4) ioport:1880(size=32) memory:d8500400-d85007ff
