Search Results

Search found 10488 results on 420 pages for 'rewrite module'.


  • Sign an OpenSSL .CSR with Microsoft Certificate Authority

    - by kce
    I'm in the process of building a Debian FreeRADIUS server that does 802.1X authentication for domain members. I would like to sign my RADIUS server's SSL certificate (used for EAP-TLS) and leverage the domain's existing PKI. The RADIUS server is joined to the domain via Samba and has a machine account, as displayed in Active Directory Users and Computers. The domain controller I'm trying to sign my RADIUS server's key against does not have IIS installed, so I can't use the preferred Certsrv web page to generate the certificate. The MMC tools won't work either, since they can't access the certificate stores on the RADIUS server (they don't exist). This leaves the certreq.exe utility. I'm generating my .CSR with the following command:

        openssl req -nodes -newkey rsa:1024 -keyout server.key -out server.csr

    The resulting .CSR:

        ******@mis-ke-lnx:~/G$ openssl req -text -noout -in mis-radius-lnx.csr
        Certificate Request:
            Data:
                Version: 0 (0x0)
                Subject: C=US, ST=Alaska, L=CITY, O=ORG, OU=DEPT, CN=ME/emailAddress=MYEMAIL
                Subject Public Key Info:
                    Public Key Algorithm: rsaEncryption
                    RSA Public Key: (1024 bit)
                        Modulus (1024 bit):
                            00:a8:b3:0d:4b:3f:fa:a4:5f:78:0c:24:24:23:ac:
                            cf:c5:28:af:af:a2:9b:07:23:67:4c:77:b5:e8:8a:
                            08:2e:c5:a3:37:e1:05:53:41:f3:4b:e1:56:44:d2:
                            27:c6:90:df:ae:3b:79:e4:20:c2:e4:d1:3e:22:df:
                            03:60:08:b7:f0:6b:39:4d:b4:5e:15:f7:1d:90:e8:
                            46:10:28:38:6a:62:c2:39:80:5a:92:73:37:85:37:
                            d3:3e:57:55:b8:93:a3:43:ac:2b:de:0f:f8:ab:44:
                            13:8e:48:29:d7:8d:ce:e2:1d:2a:b7:2b:9d:88:ea:
                            79:64:3f:9a:7b:90:13:87:63
                        Exponent: 65537 (0x10001)
                Attributes:
                    a0:00
            Signature Algorithm: sha1WithRSAEncryption
                35:57:3a:ec:82:fc:0a:8b:90:9a:11:6b:56:e7:a8:e4:91:df:
                73:1a:59:d6:5f:90:07:83:46:aa:55:54:1c:f9:28:3e:a6:42:
                48:0d:6b:da:58:e4:f5:7f:81:ee:e2:66:71:78:85:bd:7f:6d:
                02:b6:9c:32:ad:fa:1f:53:0a:b4:38:25:65:c2:e4:37:00:16:
                53:d2:da:f2:ad:cb:92:2b:58:15:f4:ea:02:1c:a3:1c:1f:59:
                4b:0f:6c:53:70:ef:47:60:b6:87:c7:2c:39:85:d8:54:84:a1:
                b4:67:f0:d3:32:f4:8e:b3:76:04:a8:65:48:58:ad:3a:d2:c9:
                3d:63

    I'm trying to submit the request using the following certreq.exe command:

        certreq -submit -attrib "CertificateTemplate:Machine" server.csr

    I receive the following error upon doing so:

        RequestId: 601
        Certificate not issued (Denied) Denied by Policy Module
        The DNS name is unavailable and cannot be added to the Subject Alternate name. 0x8009480f (-2146875377)
        Certificate Request Processor: The DNS name is unavailable and cannot be added to the Subject Alternate name. 0x8009480f (-2146875377)
        Denied by Policy Module

    My certificate authority has the following certificate templates available. If I try to submit via certreq.exe using "CertificateTemplate:Computer" instead of "CertificateTemplate:Machine", I get an error reporting that "the requested certificate template is not supported by this CA." My google-foo has failed me so far in trying to understand this error... I feel like this should be a relatively simple task, as X.509 is X.509 and OpenSSL generates the .CSRs in the required PKCS#10 format. I can't be the only one out there trying to sign an OpenSSL-generated key on a Linux box with a Windows Certificate Authority, so how do I do this (preferably using the offline certreq.exe tool)?
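
    A common cause of this particular denial is that the Machine template builds its subject and SAN from the requester's Active Directory object, so a request submitted on behalf of another host can't be mapped to a DNS name. A hedged sketch of one widely used workaround, run on the CA from an elevated prompt: allow SAN request attributes, then resubmit with the SAN supplied explicitly. The FQDN below is hypothetical, and note that enabling EDITF_ATTRIBUTESUBJECTALTNAME2 has security implications, since any requester can then assert any SAN:

        rem allow the CA to honor a SAN passed as a request attribute:
        certutil -setreg policy\EditFlags +EDITF_ATTRIBUTESUBJECTALTNAME2
        net stop certsvc
        net start certsvc
        rem resubmit, naming the radius host explicitly (hypothetical FQDN):
        certreq -submit -attrib "CertificateTemplate:Machine\nSAN:dns=mis-radius-lnx.example.com" server.csr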


  • Cannot get libcurl-devl on OpenSUSE 11.3

    - by Dai
    I have a server running openSUSE 11.3 that I can't really upgrade to a newer version of openSUSE (it's a managed appliance). I have some PHP shell scripts that need to run on the server and that depend on both cURL and OpenSSL. I discovered that the PHP 5.3.3 binaries on the server included cURL but not OpenSSL, so I downloaded the latest PHP sources, extracted them, and ran:

        ./configure --with-openssl --with-zlib --with-bcmath --with-curl --with-readline --with-libxml --enable-sockets

    This failed: the configure script complained that it couldn't find cURL:

        checking for cURL support... yes
        checking for cURL in default path... not found
        configure: error: Please reinstall the libcurl distribution - easy.h should be in <curl-dir>/include/curl/

    I tried to install libcurl by running:

        zypper install libcurl-devl

    This failed too:

        doom:~/phpworksite/php-5.5.15 # zypper install libcurl-devl
        Loading repository data...
        Warning: Repository 'Updates for openSUSE 11.3 11.3-1.82' appears to outdated. Consider using a different mirror or server.
        Warning: Repository 'openSUSE_11.3_Updates' appears to outdated. Consider using a different mirror or server.
        Reading installed packages...
        'libcurl-devl' not found in package names. Trying capabilities.
        No provider of 'libcurl-devl' found.
        Resolving package dependencies...
        Nothing to do.

    However, libcurl-devl is listed when I run zypper search curl:

        doom:~/phpworksite/php-5.5.15 # zypper search curl
        Loading repository data...
        Warning: Repository 'Updates for openSUSE 11.3 11.3-1.82' appears to outdated. Consider using a different mirror or server.
        Warning: Repository 'openSUSE_11.3_Updates' appears to outdated. Consider using a different mirror or server.
        Reading installed packages...

        S | Name                        | Summary                                                   | Type
        --+-----------------------------+-----------------------------------------------------------+--------
        i | curl                        | A Tool for Transferring Data from URLs                    | package
          | curlftpfs                   | Filesystem for mounting FTP hosts using FUSE and libcurl  | package
          | libcurl-devel               | A Tool for Transferring Data from URLs                    | package
        i | libcurl4                    | cURL shared library version 4                             | package
        i | perl-WWW-Curl               | Perl extension interface for libcurl                      | package
        i | php5-curl                   | PHP5 Extension Module                                     | package
          | python-curl                 | Python module interface to the cURL library               | package
          | python-curl-doc             | Documentation for python-curl                             | package
          | xmms2-plugin-curl           | Curl Support for xmms2                                    | package
          | xmms2-plugin-curl-debuginfo | Debug information for package xmms2-plugin-curl           | package

    Here are the current repositories:

        doom:~/phpworksite/php-5.5.15 # zypper repos
        #  | Alias                                        | Name                                         | Enabled | Refresh
        ---+----------------------------------------------+----------------------------------------------+---------+--------
        1  | PHP_extensions_(openSUSE_11.3)               | PHP_extensions_(openSUSE_11.3)               | No      | Yes
        2  | Packman_11.3                                 | Packman_11.3                                 | Yes     | Yes
        3  | Updates for openSUSE 11.3 11.3-1.82          | Updates for openSUSE 11.3 11.3-1.82          | Yes     | Yes
        4  | openSUSE_11.3_OSS                            | openSUSE_11.3_OSS                            | Yes     | Yes
        5  | openSUSE_11.3_Updates                        | openSUSE_11.3_Updates                        | Yes     | Yes
        6  | openSUSE_BuildService_-_devel:languages:perl | openSUSE_BuildService_-_devel:languages:perl | No      | Yes
        7  | repo-debug                                   | openSUSE-11.3-Debug                          | No      | Yes
        8  | repo-non-oss                                 | openSUSE-11.3-Non-Oss                        | Yes     | Yes
        9  | repo-oss                                     | openSUSE-11.3-Oss                            | Yes     | Yes
        10 | repo-source                                  | openSUSE-11.3-Source                         | No      | Yes

    BTW, I did try building PHP without cURL; however, it broke a lot of things, so apparently I really need cURL.
    My question: how can I install libcurl-devl (or just install cURL) so that I can build PHP?
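
    Worth noting: the zypper search output above lists the development package as libcurl-devel, not libcurl-devl. A minimal sketch of the likely fix, assuming the openSUSE 11.3 OSS repository still resolves:

        zypper install libcurl-devel        # note the spelling: -devel, not -devl
        curl-config --version               # confirms the headers and libcurl are now visible
        ./configure --with-openssl --with-zlib --with-bcmath --with-curl --with-readline --with-libxml --enable-sockets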


  • RHEL - NFS4: Mounted/Exported as rw, user write permission denied

    - by brendanmac
    Hello, I have NFSv4 configured between a RHEL 5.3 server (charlie) and a RHEL 5.4 client (simcom1). The machines are configured to authenticate users via Kerberos against a Windows Server 2008 Active Directory machine called "alpha." Alpha also serves as a DNS and DHCP machine for the local network. I notice that when a user logs in to a RHEL machine for the first time, they are issued a UID unique to that machine; the first user to log on gets 10001. So, what I see is that users have different UIDs on simcom1 and charlie. When a user runs 'ls -la' from within an NFSv4 mount, I would have thought the usernames in the owner column would show 'nobody', or at least the wrong user name, since UIDs differ between the machines for each user and not all users have logged into each machine. However, simcom1 is able to correctly resolve usernames in an 'ls -la' executed on files residing on charlie via NFSv4. Most troubling is that users are unable to write to files across the NFS mount. The server, charlie, has the root directory exported as rw. The client, simcom1, mounts the export as rw. My configurations are shown below. My question is: how do I configure the RHEL machines to allow users to write files across NFSv4 when it is already mounted as read/write?

        [root@charlie ~]# more /etc/exports
        / 10.100.0.0/16(rw,no_root_squash,fsid=0)

        [root@charlie ~]# cat /etc/sysconfig/nfs
        #
        # Define which protocol versions mountd
        # will advertise. The values are "no" or "yes"
        # with yes being the default
        #MOUNTD_NFS_V1="no"
        #MOUNTD_NFS_V2="no"
        #MOUNTD_NFS_V3="no"
        #
        #
        # Path to remote quota server. See rquotad(8)
        #RQUOTAD="/usr/sbin/rpc.rquotad"
        # Port rquotad should listen on.
        #RQUOTAD_PORT=875
        # Optinal options passed to rquotad
        #RPCRQUOTADOPTS=""
        #
        #
        # TCP port rpc.lockd should listen on.
        #LOCKD_TCPPORT=32803
        # UDP port rpc.lockd should listen on.
        #LOCKD_UDPPORT=32769
        #
        #
        # Optional arguments passed to rpc.nfsd. See rpc.nfsd(8)
        # Turn off v2 and v3 protocol support
        #RPCNFSDARGS="-N 2 -N 3"
        # Turn off v4 protocol support
        #RPCNFSDARGS="-N 4"
        # Number of nfs server processes to be started.
        # The default is 8.
        RPCNFSDCOUNT=8
        # Stop the nfsd module from being pre-loaded
        #NFSD_MODULE="noload"
        #
        #
        # Optional arguments passed to rpc.mountd. See rpc.mountd(8)
        #STATDARG=""
        #RPCMOUNTDOPTS=""
        # Port rpc.mountd should listen on.
        #MOUNTD_PORT=892
        #
        #
        # Optional arguments passed to rpc.statd. See rpc.statd(8)
        #RPCIDMAPDARGS=""
        #
        # Set to turn on Secure NFS mounts.
        SECURE_NFS="no"
        # Optional arguments passed to rpc.gssd. See rpc.gssd(8)
        #RPCGSSDARGS="-vvv"
        # Optional arguments passed to rpc.svcgssd. See rpc.svcgssd(8)
        #RPCSVCGSSDARGS="-vvv"
        # Don't load security modules in to the kernel
        #SECURE_NFS_MODS="noload"
        #
        # Don't load sunrpc module.
        #RPCMTAB="noload"
        #

        [root@simcom1 ~]# cat /etc/fstab
        --start snip--
        charlie:/home /usr/local/dev/charlie nfs4 rw,nosuid, 0 0
        --end snip--

        [brendanmac@simcom1 /usr/local/dev/charlie/brendanmac]# touch file
        touch: cannot touch 'file': Permission denied
        [brendanmac@simcom1 /usr/local/dev/charlie/brendanmac]# su
        Password:
        [root@simcom1 /usr/local/dev/charlie/brendanmac]# touch file
        [root@simcom1 /usr/local/dev/charlie/brendanmac]# ls -la file
        -rw------- 1 root root 0 May 26 10:43 file

    Thank you for your assistance, Brendan
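
    Since names resolve but writes fail as a normal user, one hedged thing to check (not a confirmed fix) is that NFSv4 ID mapping is consistent on both ends: rpc.idmapd must run on client and server with the same Domain setting, and the fstab options string above has a trailing comma ("rw,nosuid,") that is worth removing. A sketch, with example.com as a hypothetical domain:

        # on both charlie and simcom1 -- the Domain lines must match, e.g. "Domain = example.com":
        grep '^Domain' /etc/idmapd.conf
        service rpcidmapd restart
        # on simcom1, after fixing the options string in /etc/fstab to plain "rw,nosuid":
        mount -o remount /usr/local/dev/charlie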


  • Windows 7 BSOD - ntoskrnl?

    - by Ken Mason
    Two new HP Pavilion notebooks with Windows 7 Home Premium, pre-loaded with Norton. My first act was to use the Norton Removal Tool and load ZoneAlarm Free and AVG Free. Frequent random BSODs ever since... I found my way into the debugger and have had various reports regarding ntoskrnl, depending on the status of symbols. It's been many years since I played with (DOS 3.x) debug, so this has been a considerable fumble. Excerpts follow, and any insights would be greatly appreciated, as I am not a developer:

        ADDITIONAL_DEBUG_TEXT:
        Use '!findthebuild' command to search for the target build information.
        If the build information is available, run '!findthebuild -s ; .reload' to set symbol path and load symbols.

        MODULE_NAME: nt
        FAULTING_MODULE: fffff8000305d000 nt
        DEBUG_FLR_IMAGE_TIMESTAMP: 4b88cfeb
        BUGCHECK_STR: 0x7f_8
        CUSTOMER_CRASH_COUNT: 1
        DEFAULT_BUCKET_ID: VISTA_DRIVER_FAULT
        CURRENT_IRQL: 0
        LAST_CONTROL_TRANSFER: from fffff800030ccb69 to fffff800030cd600

        STACK_TEXT:
        fffff80004d6fd28 fffff800030ccb69 : 000000000000007f 0000000000000008 0000000080050033 00000000000006f8 : nt+0x70600
        fffff80004d6fd30 000000000000007f : 0000000000000008 0000000080050033 00000000000006f8 fffff80003095e58 : nt+0x6fb69
        fffff80004d6fd38 0000000000000008 : 0000000080050033 00000000000006f8 fffff80003095e58 0000000000000000 : 0x7f
        fffff80004d6fd40 0000000080050033 : 00000000000006f8 fffff80003095e58 0000000000000000 0000000000000000 : 0x8
        fffff80004d6fd48 00000000000006f8 : fffff80003095e58 0000000000000000 0000000000000000 0000000000000000 : 0x80050033
        fffff80004d6fd50 fffff80003095e58 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000 : 0x6f8
        fffff80004d6fd58 0000000000000000 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000 : nt+0x38e58

        STACK_COMMAND: kb
        FOLLOWUP_IP:
        nt+70600
        fffff800`030cd600 48894c2408 mov qword ptr [rsp+8],rcx
        SYMBOL_STACK_INDEX: 0
        SYMBOL_NAME: nt+70600
        FOLLOWUP_NAME: MachineOwner
        IMAGE_NAME: ntoskrnl.exe
        BUCKET_ID: WRONG_SYMBOLS
        Followup: MachineOwner

    After pointing the debugger at symbols:

        0: kd> !lmi nt
        Loaded Module Info: [nt]
        Module: ntkrnlmp
        Base Address: fffff8000305d000
        Image Name: ntkrnlmp.exe
        Machine Type: 34404 (X64)
        Time Stamp: 4b88cfeb Sat Feb 27 00:55:23 2010
        Size: 5dc000
        CheckSum: 545094
        Characteristics: 22 perf
        Debug Data Dirs: Type Size VA Pointer
        CODEVIEW 25, 19c65c, 19bc5c RSDS - GUID: {7E9A3CAB-6268-45DE-8E10-816E3080A3B7}
        Age: 2, Pdb: ntkrnlmp.pdb
        CLSID 4, 19c658, 19bc58 [Data not mapped]
        Image Type: FILE - Image read successfully from debugger. ntkrnlmp.exe
        Symbol Type: PDB - Symbols loaded successfully from symbol server. d:\debugsymbols\ntkrnlmp.pdb\7E9A3CAB626845DE8E10816E3080A3B72\ntkrnlmp.pdb
        Load Report: public symbols, not source indexed d:\debugsymbols\ntkrnlmp.pdb\7E9A3CAB626845DE8E10816E3080A3B72\ntkrnlmp.pdb

        0: kd> !analyze -v
        * Bugcheck Analysis *
        UNEXPECTED_KERNEL_MODE_TRAP (7f)
        This means a trap occurred in kernel mode, and it's a trap of a kind that the kernel isn't allowed to have/catch (bound trap) or that is always instant death (double fault). The first number in the bugcheck params is the number of the trap (8 = double fault, etc). Consult an Intel x86 family manual to learn more about what these traps are. Here is a portion of those codes:
        If kv shows a taskGate use .tss on the part before the colon, then kv.
        Else if kv shows a trapframe use .trap on that value
        Else .trap on the appropriate frame will show where the trap was taken (on x86, this will be the ebp that goes with the procedure KiTrap)
        Endif
        kb will then show the corrected stack.
        Arguments:
        Arg1: 0000000000000008, EXCEPTION_DOUBLE_FAULT
        Arg2: 0000000080050033
        Arg3: 00000000000006f8
        Arg4: fffff80003095e58

        Debugging Details:
        BUGCHECK_STR: 0x7f_8
        CUSTOMER_CRASH_COUNT: 1
        DEFAULT_BUCKET_ID: VISTA_DRIVER_FAULT
        PROCESS_NAME: System
        CURRENT_IRQL: 2
        LAST_CONTROL_TRANSFER: from fffff800030ccb69 to fffff800030cd600

        STACK_TEXT:
        fffff80004d6fd28 fffff800030ccb69 : 000000000000007f 0000000000000008 0000000080050033 00000000000006f8 : nt!KeBugCheckEx
        fffff80004d6fd30 fffff800030cb032 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000 : nt!KiBugCheckDispatch+0x69
        fffff80004d6fe70 fffff80003095e58 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000 : nt!KiDoubleFaultAbort+0xb2
        fffff880089efc60 0000000000000000 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000 : nt!SeAccessCheckFromState+0x58

        STACK_COMMAND: kb
        FOLLOWUP_IP:
        nt!KiDoubleFaultAbort+b2
        fffff800`030cb032 90 nop
        SYMBOL_STACK_INDEX: 2
        SYMBOL_NAME: nt!KiDoubleFaultAbort+b2
        FOLLOWUP_NAME: MachineOwner
        MODULE_NAME: nt
        IMAGE_NAME: ntkrnlmp.exe
        DEBUG_FLR_IMAGE_TIMESTAMP: 4b88cfeb
        FAILURE_BUCKET_ID: X64_0x7f_8_nt!KiDoubleFaultAbort+b2
        BUCKET_ID: X64_0x7f_8_nt!KiDoubleFaultAbort+b2
        Followup: MachineOwner

    I tried running RootkitRevealer, but I don't think it works on x64 systems. Similarly, Blacklight seems to have aged off. I'm running Sophos Anti-Rootkit now. So far so good...
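
    For what it's worth, the first analysis was bucketed as WRONG_SYMBOLS while the second resolved nt!KiDoubleFaultAbort, which suggests the symbol fix-up worked. A minimal sketch of the usual kd/WinDbg incantation for that step -- .symfix points at the Microsoft public symbol server with a local cache (the cache path here is hypothetical), .reload re-reads symbols for loaded modules, and !analyze -v re-runs the bugcheck analysis:

        .symfix d:\debugsymbols
        .reload
        !analyze -v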


  • BSOD & System Failure after trying to install a new RAM module

    - by Praveen Kumar
    I have updated the question with sections, so that people won't find it difficult to read.

    Basic System Information
    Let me give a basic introduction to my system. I have a system of the following configuration:

        Processor: Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz
        RAM: Corsair Vengeance - 4GB Single Module DDR3 Memory Kit (CMZ4GX3M1A1600C9) x 2
        OS: Windows 7 Ultimate, SP1 Build 7601
        HDD: 1 TB Seagate 7200 RPM

    The Problem
    It was working fine for about a year. Yesterday I planned to increase my RAM to 16 GB by putting in another set of two Corsair Vengeance 4GB Single Module DDR3 Memory Kits (CMZ4GX3M1A1600C9). I got it from an authorized reseller, and the RAM was fitted by a service engineer. After the RAM was fitted (all four modules), the system failed to start, with an error code of 0x000000f4. The complete information:

        Problem signature:
        Problem Event Name: BlueScreen
        OS Version: 6.1.7601.2.1.0.256.1
        Locale ID: 16393
        Additional information about the problem:
        BCCode: f4
        BCP1: 0000000000000003
        BCP2: FFFFFA8008A39060
        BCP3: FFFFFA8008A39340
        BCP4: FFFFF800037C8510
        OS Version: 6_1_7601
        Service Pack: 1_0
        Product: 256_1
        Files that help describe the problem:
        C:\Windows\Minidump\093012-13041-01.dmp
        C:\Users\Praveen Kumar\AppData\Local\Temp\WER-30716-0.sysdata.xml
        Read our privacy statement online: http://go.microsoft.com/fwlink/?linkid=104288&clcid=0x0409
        If the online privacy statement is not available, please read our privacy statement offline: C:\Windows\system32\en-US\erofflps.txt

    Another Problem
    We first thought that it was the RAM that caused the issue, so I returned the RAM modules; now my computer configuration is exactly how it was the previous day. But following the removal of the RAM, I also had several crashes. One suspicious crash came with error code c0000134:

        STOP: c0000135 The program can't start because %hs is missing from your computer. Try reinstalling the program to fix this problem.

    After reading contents from this, this and this, which were never my case, they didn't help me. I didn't receive any more STOP c0000134 messages, but the 0x000000f4 keeps coming back. I am writing from the same system, and it allows me to work for, say, half an hour at most. Then I hear a device-disconnect sound, the one you hear in Windows 7 when a USB mass storage device is plugged out. Immediately following that, my screen goes blank and I get the 0x000000f4 blue screen. Now I am really concerned about my hard disk data, but I have no clue whether there is a problem with the HDD.

    My Question
    What files do I need to submit for your reference? Can this issue be fixed? I get more uptime if I remove my RAM, clean it, and put it back. Weird! Hope I have given the necessary information to help you guys. Thanks in advance.

    Minidumps
    I have uploaded all the minidump DMP files from the C:\Windows\Minidump folder here: http://www.praveen-kumar.com/Minidumps.zip -- let me know if you face any issues accessing it; I will be able to share it elsewhere.

    Updates
    30-Sep-2012 10:15 AM IST: When I keep the system cover opened and press the HDD cable in firmly, it allows me to stay on for about half an hour. Also, I feel that the CPU fan speed is kind of slow: it rotates at around 900 RPM, but the CPU temperature is not more than 70° C.
    30-Sep-2012 10:30 AM IST: My modem (Beetel 220BX ADSL2+ Router) failed. I have no idea how it is related to this issue, but I thought I should document it too. I am really having a bad day here.
    30-Sep-2012 11:00 AM IST: System still running fine, with the cabinet cover open, now for about an hour.
    30-Sep-2012 12:00 PM IST: I shut down the system and closed the cabinet. I started the system, and it hung after I entered the password. After a few minutes, I got the same 0x000000f4 error. So, with the case in the upright position, I fixed the hard disk cable, and now it is booting fine. Waiting for more observations and answers.
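
    Given that the crashes correlate with reseating the HDD cable and a device-disconnect sound precedes each 0x000000F4 (CRITICAL_OBJECT_TERMINATION, which is commonly storage-related), a quick, hedged health check of the drive from an elevated command prompt may be worthwhile before anything else:

        rem SMART status summary -- anything other than "OK" is a red flag:
        wmic diskdrive get model,status
        rem full surface scan; it will be scheduled for the next reboot since C: is in use:
        chkdsk C: /r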


  • Memcache on ubuntu server lucid and ruby 1.9.1

    - by Thiago
    Hi there, I'm trying to set up a memcache server on the above setup. I'm getting the following error:

        /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:443:in `load_missing_constant': uninitialized constant MemCache (NameError)
            from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:80:in `const_missing_with_dependencies'
            from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:92:in `const_missing'
            from /root/voicegateway/vendor/plugins/workling/lib/workling/clients/memcache_queue_client.rb:18:in `<class:MemcacheQueueClient>'
            from /root/voicegateway/vendor/plugins/workling/lib/workling/clients/memcache_queue_client.rb:14:in `<module:Clients>'
            from /root/voicegateway/vendor/plugins/workling/lib/workling/clients/memcache_queue_client.rb:13:in `<module:Workling>'
            from /root/voicegateway/vendor/plugins/workling/lib/workling/clients/memcache_queue_client.rb:12:in `<top (required)>'
            from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `require'
            from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `block in require'
            from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:521:in `new_constants_in'
            from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `require'
            from /root/voicegateway/vendor/plugins/workling/lib/workling/remote/runners/client_runner.rb:2:in `<top (required)>'
            from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `require'
            from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `block in require'
            from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:521:in `new_constants_in'
            from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `require'
            from /root/voicegateway/vendor/plugins/workling/lib/workling/remote/runners/starling_runner.rb:1:in `<top (required)>'
            from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `require'
            from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `block in require'
            from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:521:in `new_constants_in'
            from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `require'
            from /root/voicegateway/vendor/plugins/workling/lib/workling/remote.rb:3:in `<top (required)>'
            from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:380:in `load'
            from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:380:in `block in load_file'
            from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:521:in `new_constants_in'
            from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:379:in `load_file'
            from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:259:in `require_or_load'
            from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:425:in `load_missing_constant'
            from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:80:in `const_missing_with_dependencies'
            from /root/voicegateway/config/environments/development.rb:20:in `block in load_environment'
            from /var/lib/gems/1.9.1/gems/rails-2.3.8/lib/initializer.rb:386:in `eval'
            from /var/lib/gems/1.9.1/gems/rails-2.3.8/lib/initializer.rb:386:in `block in load_environment'
            from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/core_ext/kernel/reporting.rb:11:in `silence_warnings'
            from /var/lib/gems/1.9.1/gems/rails-2.3.8/lib/initializer.rb:379:in `load_environment'
            from /var/lib/gems/1.9.1/gems/rails-2.3.8/lib/initializer.rb:137:in `process'
            from /var/lib/gems/1.9.1/gems/rails-2.3.8/lib/initializer.rb:113:in `run'
            from /root/voicegateway/config/environment.rb:9:in `<top (required)>'
            from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `require'
            from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `block in require'
            from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:521:in `new_constants_in'
            from /var/lib/gems/1.9.1/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `require'
            from /var/lib/gems/1.9.1/gems/rails-2.3.8/lib/commands/server.rb:84:in `<top (required)>'
            from ./server:3:in `require'
            from ./server:3:in `<main>'

    But memcache-client 1.8.3 is on the gem list. What's the problem?
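
    "uninitialized constant MemCache" usually just means nothing has required the gem before Workling references it -- the memcache-client gem is required under the name "memcache", not its gem name. A hedged sketch of how to verify, plus the usual Rails 2.3 declaration (shown as a comment, since exact placement in config/environment.rb depends on the app):

        gem list memcache-client                       # confirm the gem is visible to ruby 1.9.1
        ruby -e "require 'memcache'; puts MemCache"    # should print "MemCache" if the library loads
        # then, inside the Rails::Initializer block in config/environment.rb (Rails 2.3 syntax):
        #   config.gem 'memcache-client', :lib => 'memcache'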


  • How can I work around problems with certificate configuration in Remote Desktop Services?

    - by Michael Steele
    I am setting up a Remote Desktop Services farm and am having trouble configuring certificates for it to use. A demonstration of the problem I'm seeing can be found in Step #4. At this point I am convinced that there are problems with the user interface, and am looking for ways around them. Is there any way to configure certificates in Remote Desktop Services so that the settings hold and are reflected in the GUI? If not, is there any way for me to verify that the settings are correct?

    Step #1 - Create the certificate to be used. I've configured a certificate to use with RD Web Access. The certificate is stored in the Certificates MMC on my RD Connection Broker, and I am configuring the farm from that computer. By letting RD Web Access generate its own certificate, I found that the following properties are required (the self-signed certificate includes them all, though some may not be strictly necessary):

        Enhanced Key Usage: Server Authentication, Client Authentication
        Key Usage: Digital Signature, Key Agreement
        Subject Alternative Name: DNS Name=domain.com

    Detour about self-signed certificate generation: as a quick detour, I was able to work around a problem with creating self-signed certificates using PowerShell. The documentation for the New-RDCertificate cmdlet gives the following example:

        PS C:\> $password = ConvertTo-SecureString -string "password" -asplaintext -force
        PS C:\> New-RDCertificate -Role RDWebAccess -DnsName "test-rdwa.contoso.com" -Password $password -ConnectionBroker rdcb.contoso.com -ExportPath "c:\test-rdwa.pfx"

    Typing this into the shell results in an error message claiming that a function, Get-Server, cannot be found. Prior to using New-RDCertificate, you must import the RemoteDesktop module with Import-Module RemoteDesktop.

    Step #2 - Observe out-of-box behavior. The first time you visit the Deployment Properties dialog box (navigate to Server Manager, Remote Desktop Services, Collections, and select "Edit Deployment Properties" from the "TASKS" dropdown list in the "COLLECTIONS" grouping), the window is misleading because the Level field is listed as "Not Configured". If I understand correctly, all three role services are actually using a self-signed certificate. For the RD Web Access role this can be verified by visiting the website; the certificate being used also appears in the Certificates MMC.

    Step #3 - Assign the new certificate. The Deployment Properties dialog box allows me to select my existing certificate. The certificate must be placed in the local computer's Certificates MMC in the "Personal" certificate store. The private key needs to be exportable, and you will need to provide the password. I temporarily exported my certificate to a file named temp.pfx with a password, and then imported it into Remote Desktop Services from there. Once this is done the GUI indicates that it is ready to accept the new configuration, and once I click the "Apply" button, the GUI indicates success. This can be verified by visiting the RD Web Access web site a second time: there is no certificate error.

    Step #4 - The GUI fails to maintain its state. If the GUI is closed and reopened, all of these settings appear to be lost. Actually, the certificate I configured is still being used: I am able to continue accessing the RD Web Access site without any certificate errors. Oddly, if I use the "Create new certificate..." button to generate a self-signed certificate, the window updates to an "Untrusted" level, and that setting is then maintained through the opening and closing of the Deployment Properties dialog box. Is there anything I can do to make my settings stick? I feel like something is wrong when the GUI claims I haven't fully configured certificates.
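
    As a workaround for the flaky dialog, the same RemoteDesktop module that provides New-RDCertificate can also apply and inspect certificates from PowerShell, sidestepping the GUI state entirely. A hedged sketch (paths and broker name are hypothetical, mirroring the documentation example above):

        Import-Module RemoteDesktop
        $password = ConvertTo-SecureString -String "password" -AsPlainText -Force
        Set-RDCertificate -Role RDWebAccess -ImportPath "C:\temp.pfx" -Password $password -ConnectionBroker rdcb.contoso.com
        # verify what is actually configured, independent of the dialog:
        Get-RDCertificate -ConnectionBroker rdcb.contoso.com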


  • Application Does Not Start in Windows 7

    - by Jim Fell
    I recently installed a new 60GB SSD as my primary hard drive and re-installed Windows 7 Professional 64-bit. I then installed SSD Fresh from Abelssoft to optimize Windows to run on the SSD. It seemed to install okay, but when I try to run the utility, its splash screen appears briefly before it quietly closes. No errors are displayed; the utility just fails to launch. I have run SSD Fresh on another SSD-equipped Windows 7 Pro x64 computer in the past without any problems. Does anyone know what might be preventing the program from running? I tried running sfc /scannow from the command line (with administrator privileges), shutting down the Spybot resident, and disabling the firewall and virus scanner. I also tried running the tool as administrator; I even tried reinstalling it, running the installer as administrator. No luck. Every time I try to launch the program, the Event Viewer logs this same set of errors:

        Error 4/2/2012 11:35:44 PM Application Error 1000 (100)
        Faulting application name: SSDFresh.exe, version: 1.0.0.0, time stamp: 0x4f2a45d8
        Faulting module name: unknown, version: 0.0.0.0, time stamp: 0x00000000
        Exception code: 0xc0000005
        Fault offset: 0x000007ff0016dbba
        Faulting process id: 0x994
        Faulting application start time: 0x01cd11fd9fe978df
        Faulting application path: C:\Program Files (x86)\SSD Fresh\SSDFresh.exe
        Faulting module path: unknown
        Report Id: dfeed551-7df0-11e1-a2c7-002522c47ec0

        Error 4/2/2012 11:35:43 PM .NET Runtime 1026 None
        Application: SSDFresh.exe
        Framework Version: v4.0.30319
        Description: The process was terminated due to an unhandled exception.
        Exception Info: System.NullReferenceException
        Stack:
          at AbBugReporter.BugForm.InitLanguage()
          at AbBugReporter.BugForm..ctor(AbFlexTrans.LanguageInfo, AbBugReporter.BugReportManager, Boolean)
          at AbBugReporter.BugReportManager.Show(System.Exception)
          at SSDFresh.App.App_DispatcherUnhandledException(System.Object, System.Windows.Threading.DispatcherUnhandledExceptionEventArgs)
          at System.Windows.Threading.Dispatcher.CatchException(System.Exception)
          at MS.Internal.Threading.ExceptionFilterHelper.TryCatchWhen(System.Object, System.Delegate, System.Object, Int32, System.Delegate)
          at System.Windows.Threading.Dispatcher.WrappedInvoke(System.Delegate, System.Object, Int32, System.Delegate)
          at System.Windows.Threading.Dispatcher.InvokeImpl(System.Windows.Threading.DispatcherPriority, System.TimeSpan, System.Delegate, System.Object, Int32)
          at MS.Win32.HwndSubclass.SubclassWndProc(IntPtr, Int32, IntPtr, IntPtr)
          at MS.Win32.UnsafeNativeMethods.DispatchMessage(System.Windows.Interop.MSG ByRef)
          at System.Windows.Threading.Dispatcher.PushFrameImpl(System.Windows.Threading.DispatcherFrame)
          at System.Windows.Application.RunInternal(System.Windows.Window)
          at System.Windows.Application.Run()
          at SSDFresh.App.Main()

        Error 4/2/2012 11:35:39 PM SideBySide 59 None
        Activation context generation failed for "C:\Windows\Microsoft.NET\Framework64\v4.0.30319\csc.exe". Error in manifest or policy file "C:\Windows\Microsoft.NET\Framework64\v4.0.30319\csc.exe.Config" on line 0. Invalid Xml syntax.

        (the same "Error 4/2/2012 11:35:39 PM SideBySide 59 None" entry is logged 27 more times)

    For those who are interested, here is my system configuration:

        ASRock M3A770DE AM3 AMD 770 ATX AMD Motherboard
        AMD Athlon II X3 455 Rana 3.3GHz Socket AM3 95W Triple-Core Desktop Processor ADX455WFGMBOX
        G.SKILL Value Series 8GB (2 x 4GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10600) Desktop Memory Model F3-10600CL9D-8GBNT
        Mushkin Enhanced Chronos Deluxe MKNSSDCR60GB-DX 2.5" 60GB SATA III Synchronous MLC Internal Solid State Drive (SSD) (Primary/Boot HD)
        Western Digital Caviar Blue RFHWD1600AAJS 160GB 7200 RPM SATA 3.0Gb/s 3.5" Internal Hard Drive - Bare Drive (Secondary HD)
        Sony Optiarc CD/DVD Burner Black SATA Model AD-7261S-0B LightScribe Support
        RAIDMAX RX-850AE 850W ATX12V v2.3 / EPS12V SLI Certified CrossFire Ready 80 PLUS GOLD Certified Modular Active PFC Power Supply
        ASUS HD7850-DC2-2GD5 Radeon HD 7850 2GB 256-bit GDDR5 PCI Express 3.0 x16 HDCP Ready CrossFireX Support Video Card
        Asus ML228H 21.5" Full HD LED BackLight LED Monitor Slim Design (x3)
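
    The SideBySide errors all point at the same file -- csc.exe.Config, "Invalid Xml syntax" -- so a hedged first step is to confirm that complaint and then repair or replace the file (a malformed machine-wide .NET config can take down WPF apps like this one at startup). A quick check from PowerShell, using the exact path from the event log:

        # throws a parse error naming the offending line if the XML really is malformed:
        [xml](Get-Content 'C:\Windows\Microsoft.NET\Framework64\v4.0.30319\csc.exe.Config')
        # if it fails, back up the file and restore a copy from a healthy Windows 7 x64 machine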


  • Windows 7 / Ubuntu Dualboot GRUB Problem.

    - by Tek
    I'd like to first say ahead of time that I'm running a RAID-0 setup.

    First of all, I'm glad Ubuntu 9.10 installed flawlessly and detected my RAID-0 setup just fine. The issue I'm having now is that I already had Windows 7 installed and made a small 12GB partition for Linux/swap. I grabbed EasyBCD 2.0 to edit the W7 bootloader and configured it to use dual-boot GRUB2, because before it didn't even show the option for Ubuntu. The bootloader points to a file made in the Windows directory by EasyBCD called "C:\NST\AutoNeoGrub0.mbr", which is what I'm guessing GRUB is booting from. After that I got the option for booting Ubuntu. The problem is that it's sending me to the GRUB prompt (probably because it's pointing to \NST\AutoNeoGrub0.mbr?). At first I didn't know what to do, but I researched and found that I have to type GRUB commands to manually boot into Ubuntu. For example:

        grub> root (hd0,4)
        grub> kernel /boot/vmlinuz-2.6... root=/dev/disk/by-uuid/24624-2424...
        grub> initrd boot/initrd.img-2.6...
        grub> boot

    After all that, Ubuntu boots just fine, but how do I fix it permanently? Do I need to edit the bootloader manually (since EasyBCD "autoconfigures")? Some insight on this would rock! Also, it sucks to type the actual UUID since it's REALLY long. I tried getting the name of the drive via fdisk -l, but since it's RAID 0 I'm guessing I can't do that. How can I get a shorter name for the drive, like /dev/sda, /dev/sdb, etc.? I've also tried to update to the latest GRUB and I got this:

        Creating config file /etc/default/grub with new version
        Generating core.img
        error: cannot seek `/dev/sdc'
        error: cannot seek `/dev/sdc'
        grub-probe: error: no mapping exists for `nvidia_dbedfcca5'
        Auto-detection of a filesystem module failed.
        Please specify the module with the option `--modules' explicitly.
        dpkg: error processing grub-pc (--configure):
        subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of grub2:
        grub2 depends on grub-pc; however: Package grub-pc is not configured yet.
        dpkg: error processing grub2 (--configure):
        dependency problems - leaving unconfigured
        No apport report written because the error message indicates it's a followup error from a previous failure.
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I've also tried:

        b@dnb:~$ sudo update-grub
        error: cannot seek `/dev/sdc'
        error: cannot seek `/dev/sdc'
        Generating grub.cfg ...
        Found linux image: /boot/vmlinuz-2.6.31-14-generic
        Found initrd image: /boot/initrd.img-2.6.31-14-generic
        error: cannot seek `/dev/sdc'
        grub-probe: error: no mapping exists for `nvidia_dbedfcca5'
        error: cannot seek `/dev/sdc'
        grub-probe: error: no mapping exists for `nvidia_dbedfcca5'
        Found memtest86+ image: /boot/memtest86+.bin
        Found Windows 7 (loader) on /dev/mapper/nvidia_dbedfcca1
        error: cannot seek `/dev/sdc'
        grub-probe: error: no mapping exists for `nvidia_dbedfcca1'
        done

    To no avail. Any idea what I can do to fix this mess? :(

    Edit: This is my disk configuration.

        b@dnb:~$ sudo df -l
        Filesystem                   1K-blocks    Used Available Use% Mounted on
        /dev/mapper/nvidia_dbedfcca5  12302232 2744788   8932520  24% /
        udev                           1030288     268   1030020   1% /dev
        none                           1030288     964   1029324   1% /dev/shm
        none                           1030288      92   1030196   1% /var/run
        none                           1030288       0   1030288   0% /var/lock
        none                           1030288       0   1030288   0% /lib/init/rw
        /dev/sr0                        706532  706532         0 100% /media/cdrom0

    Note: /dev/mapper/nvidia_dbedfcca5 is my Linux boot partition
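
    Not a confirmed fix, but the "no mapping exists for `nvidia_dbedfcca5'" errors usually mean GRUB2's device map doesn't know about the dmraid (fakeRAID) array and is probing the raw member disks instead. A heavily hedged sketch of what I would try, assuming the array lives under /dev/mapper as the df output suggests:

        ls /dev/mapper/                     # expect nvidia_dbedfcca plus numbered partitions
        # tell grub about the array rather than the raw /dev/sdc member disk:
        echo '(hd0) /dev/mapper/nvidia_dbedfcca' | sudo tee /boot/grub/device.map
        sudo grub-install /dev/mapper/nvidia_dbedfcca
        sudo update-grub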


  • apache2.2 + php5: processes never die and stay blocked on LOCK_SH

    - by Givre
    Server details:

        Server version: Apache/2.2.22 (Unix)
        Server built: Mar 28 2012 16:31:45
        Server's Module Magic Number: 20051115:30
        Server loaded: APR 1.4.6, APR-Util 1.4.1
        Compiled using: APR 1.4.6, APR-Util 1.4.1
        Architecture: 64-bit
        Server MPM: Prefork
          threaded: no
          forked: yes (variable process count)
        Server compiled with....
         -D APACHE_MPM_DIR="server/mpm/prefork"
         -D APR_HAS_SENDFILE
         -D APR_HAS_MMAP
         -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
         -D APR_USE_SYSVSEM_SERIALIZE
         -D APR_USE_PTHREAD_SERIALIZE
         -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
         -D APR_HAS_OTHER_CHILD
         -D AP_HAVE_RELIABLE_PIPED_LOGS
         -D DYNAMIC_MODULE_LIMIT=128
         -D HTTPD_ROOT="/opt/apache2"
         -D SUEXEC_BIN="/opt/apache2/bin/suexec"
         -D DEFAULT_PIDLOG="logs/httpd.pid"
         -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
         -D DEFAULT_LOCKFILE="logs/accept.lock"
         -D DEFAULT_ERRORLOG="logs/error_log"
         -D AP_TYPES_CONFIG_FILE="conf/mime.types"
         -D SERVER_CONFIG_FILE="conf/httpd.conf"

    PHP 5.2.17, compiled as a DSO module (mod_php5).

    Problem: on shared web hosting, a lot of apache2 processes never stop or die; they stay stuck until apache2 is restarted. strace of one of these processes:

        access("tmp/meta_cache.txt", F_OK) = 0
        getcwd("/home/exemple.com/htdocs"..., 4096) = 34
        lstat("/var", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
        lstat("/var/www", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
        lstat("/home", {st_mode=S_IFDIR|0755, st_size=1715, ...}) = 0
        lstat("/home/exemple.com", {st_mode=S_IFDIR|0755, st_size=16, ...}) = 0
        lstat("/home/exemple.com/htdocs", {st_mode=S_IFDIR|0770, st_size=51, ...}) = 0
        lstat("/home/exemple.com/htdocs/tmp", {st_mode=S_IFDIR|0777, st_size=51, ...}) = 0
        lstat("/home/exemple.com/htdocs/tmp/meta_cache.txt", {st_mode=S_IFREG|0666, st_size=8901, ...}) = 0
        lstat("/var", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
        lstat("/var/www", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
        lstat("/home", {st_mode=S_IFDIR|0755, st_size=1715, ...}) = 0
        lstat("/home/exemple.com", {st_mode=S_IFDIR|0755, st_size=16, ...}) = 0
        lstat("/home/exemple.com/htdocs", {st_mode=S_IFDIR|0770, st_size=51, ...}) = 0
        lstat("/home/exemple.com/htdocs/tmp", {st_mode=S_IFDIR|0777, st_size=51, ...}) = 0
        lstat("/home/exemple.com/htdocs/tmp/meta_cache.txt", {st_mode=S_IFREG|0666, st_size=8901, ...}) = 0
        lstat("/var", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
        lstat("/var/www", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
        lstat("/home", {st_mode=S_IFDIR|0755, st_size=1715, ...}) = 0
        lstat("/home/exemple.com", {st_mode=S_IFDIR|0755, st_size=16, ...}) = 0
        getcwd("/home/exemple.com/htdocs"..., 4096) = 34
        lstat("/var", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
        lstat("/var/www", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
        lstat("/home", {st_mode=S_IFDIR|0755, st_size=1715, ...}) = 0
        lstat("/home/exemple.com", {st_mode=S_IFDIR|0755, st_size=16, ...}) = 0
        lstat("/home/exemple.com/htdocs", {st_mode=S_IFDIR|0770, st_size=51, ...}) = 0
        lstat("/home/exemple.com/htdocs/tmp", {st_mode=S_IFDIR|0777, st_size=51, ...}) = 0
        lstat("/home/exemple.com/htdocs/tmp/meta_cache.txt", {st_mode=S_IFREG|0666, st_size=8901, ...}) = 0
        open("/home/exemple.com/htdocs/tmp/meta_cache.txt", O_RDONLY) = 10905
        fstat(10905, {st_mode=S_IFREG|0666, st_size=8901, ...}) = 0
        lseek(10905, 0, SEEK_CUR) = 0
        flock(10905, LOCK_SH) =

    The process never dies and stays like this. All files are on NFSv3. I don't know how to solve this problem or find more information. The effect is that eventually all apache2 processes end up stuck this way and apache2 goes down completely. Thanks for your help.
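
    Since the processes block inside flock() on a file that lives on NFSv3, where flock is serviced by the NLM/NSM lock daemons, a hedged pair of checks: confirm the lock services are reachable on the NFS server, and see who actually holds or waits on the lock (the server hostname below is a placeholder):

        # flock on NFSv3 goes through the network lock manager -- both services should be registered:
        rpcinfo -p nfsserver.example.com | grep -E 'nlockmgr|status'
        # list processes holding or waiting on the contended file:
        fuser -v /home/exemple.com/htdocs/tmp/meta_cache.txt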


  • PHP.ini does not load

    - by Jonathan Park
    Ok, this is probably just me not knowing enough about PHP, but here it goes. I'm on Ubuntu Hardy. I have a custom-compiled version of PHP which I have built with these parameters:

        ./configure --enable-soap --with-zlib --with-mysql --with-apxs2=[correct path] --with-config-file-path=[correct path] --with-mysqli --with-curlwrappers --with-curl --with-mcrypt

    I used the command pecl install pecl_http to install the http.so extension. It is in the correct module directory for my php.ini. My php.ini was loading, and I could change things in the ini and affect PHP. I included the extension=http.so line in my php.ini. That worked fine -- until I added these compilation options in order to add IMAP:

        --with-openssl --with-kerberos --with-imap --with-imap-ssl

    This failed because I needed the c-client library, which I fixed with apt-get install libc-client-dev. After that, PHP compiles fine and I have working IMAP support, woo. HOWEVER, now all my calls to HttpRequest (which is part of the pecl_http extension in http.so) result in "Fatal error: Class 'HttpRequest' not found" errors. I figure the http.so module is no longer loading for one reason or another, but I cannot find any errors showing the reason. You might say, "Have you tried undoing the new IMAP setup?" To which I will answer: yes, I have. I directly undid all my config changes and uninstalled the c-client library, and I still can't get it to work. I thought that was weird: I had made no changes that should have resulted in this issue. Looking further, I discovered that not only is the http extension no longer loading, but none of the extensions loaded via php.ini are loading. Can someone at least give me some further debugging steps? So far I have tried enabling all errors, including startup errors, in my php.ini -- which works for other errors -- but I'm not seeing any startup errors either on the command line or via Apache. And yet if I run phpinfo() I get settings that are in the php.ini.

    Edit: It appears that only some of the php.ini settings are being honored. Is there a way to test my php.ini?

    Edit 2: It appears I am mistaken again, and the php.ini is not being loaded at all any longer. However, if I run phpinfo() it shows that it's looking for my php.ini in the correct location.

    Edit 3: My config is at the config-file-path location below, but it says no config file is loaded. WTF. Permission issue? It is currently 644, so everyone should be able to read it, if not write it. I tried making it 777 and that didn't work either.

        Configuration File (php.ini) Path   /etc/php.ini
        Loaded Configuration File           (none)

    Edit 4: By loading the ini on the command line using the -c option I am able to run my files, and -m shows that my modules load. So nothing is wrong with the php.ini itself.
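
    A hedged debugging sketch for exactly this symptom: after a recompile, the CLI binary and the Apache SAPI can end up with different compiled-in config paths, so compare where each one actually looks for php.ini:

        php --ini              # CLI: shows the path searched and the file actually loaded
        php -c /etc/php.ini -m # already known to work, per Edit 4
        # for the Apache SAPI, serve a page containing phpinfo() and compare its
        # "Configuration File (php.ini) Path" and "Loaded Configuration File" rows
        # against the CLI output; if they differ, recompile with the matching
        # --with-config-file-path or place a copy of php.ini at both locations.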


  • Bugzilla: No SASL mechanism found

    - by niteshsinha
    I am using Bugzilla on Windows 7, installed with the unofficial Bugzilla installer. I followed the steps accordingly and gave valid credentials wherever required. I open Bugzilla and try to create a new account, but I get the following error:

        Software error:
        No SASL mechanism found
         at C:/Program Files/Bugzilla/perl/perl/site/lib/Authen/SASL.pm line 77
         at C:/Program Files/Bugzilla/perl/perl/lib/Net/SMTP.pm line 143

    I ran checksetup.pl and found that Authen::SASL and SMTP are both available on my machine. The output of checksetup.pl is as follows:

        * This is Bugzilla 3.6.3 on perl 5.10.1
        * Running on Win7 Build 7600
        Checking perl modules...
        Checking for CGI.pm (v3.33) ok: found v3.49
        Checking for Digest-SHA (any) ok: found v5.48
        Checking for TimeDate (v2.21) ok: found v2.24
        Checking for DateTime (v0.28) ok: found v0.53
        Checking for DateTime-TimeZone (v0.79) ok: found v1.10
        Checking for DBI (v1.41) ok: found v1.609
        Checking for Template-Toolkit (v2.22) ok: found v2.22
        Checking for Email-Send (v2.16) ok: found v2.198
        Checking for Email-MIME (v1.861) ok: found v1.903
        Checking for Email-MIME-Encodings (v1.313) ok: found v1.313
        Checking for Email-MIME-Modifier (v1.442) ok: found v1.903
        Checking for URI (any) ok: found v1.52
        Checking available perl DBD modules...
        Checking for DBD-Pg (v1.45) ok: found v2.16.1
        Checking for DBD-mysql (v4.00) ok: found v4.012
        Checking for DBD-Oracle (v1.19) not found
        The following Perl modules are optional:
        Checking for GD (v1.20) ok: found v2.44
        Checking for Chart (v2.1) ok: found v2.4.1
        Checking for Template-GD (any) ok: found v1.56
        Checking for GDTextUtil (any) ok: found v0.86
        Checking for GDGraph (any) ok: found v1.44
        Checking for XML-Twig (any) ok: found v3.34
        Checking for MIME-tools (v5.406) ok: found v5.427
        Checking for libwww-perl (any) ok: found v5.834
        Checking for PatchReader (v0.9.4) ok: found v0.9.5
        Checking for perl-ldap (any) ok: found v0.39
        Checking for Authen-SASL (any) ok: found v2.15
        Checking for RadiusPerl (any) ok: found v0.17
        Checking for SOAP-Lite (v0.710.06) ok: found v0.710.10
        Checking for JSON-RPC (any) ok: found v0.95
        Checking for Test-Taint (any) ok: found v1.04
        Checking for HTML-Parser (v3.40) ok: found v3.64
        Checking for HTML-Scrubber (any) ok: found v0.08
        Checking for Email-MIME-Attachment-Stripper (any) ok: found v1.316
        Checking for Email-Reply (any) ok: found v1.202
        Checking for TheSchwartz (any) not found
        Checking for Daemon-Generic (any) not found
        Checking for mod_perl (v1.999022) not found
        ***********************************************************************
        * OPTIONAL MODULES                                                    *
        ***********************************************************************
        * Certain Perl modules are not required by Bugzilla, but by
        * installing the latest version you gain access to additional
        * features.
        *
        * The optional modules you do not have installed are listed below,
        * with the name of the feature they enable. Below that table are the
        * commands to install each module.
        ***********************************************************************
        * MODULE NAME     * ENABLES FEATURE(S)
        ***********************************************************************
        * TheSchwartz     * Mail Queueing
        * Daemon-Generic  * Mail Queueing
        * mod_perl        * mod_perl
        ***********************************************************************
        * Note For Windows Users                                              *
        ***********************************************************************
        * In order to install the modules listed below, you first have to run
        * the following command as an Administrator:
        *
        *   ppm repo add theory58S http://cpan.uwinnipeg.ca/PPMPackages/10xx/
        *
        * Then you have to do (also as an Administrator):
        *
        *   ppm repo up theory58S
        *
        * Do that last command over and over until you see "theory58S" at the
        * top of the displayed list.
        ***********************************************************************
        COMMANDS TO INSTALL OPTIONAL MODULES:
        TheSchwartz: ppm install TheSchwartz
        Daemon-Generic: ppm install Daemon-Generic
        mod_perl: ppm install mod_perl
        Reading ./localconfig...
        Checking for DBD-mysql (v4.00) ok: found v4.012
        Checking for MySQL (v4.1.2) ok: found v5.1.44-community-log
        Removing existing compiled templates...
        Precompiling templates...done.
        Now that you have installed Bugzilla, you should visit the 'Parameters' page (linked in the footer of the Administrator account) to ensure it is set up as you wish - this includes setting the 'urlbase' option to the correct URL.
        Press any key to continue . . .

    Please tell me what I should do. Please note: I am running behind a corporate proxy; SSL/TLS is not used internally, but I am providing smtpUser and smtpPass as well.
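
    One hedged check: "No SASL mechanism found" comes from the Authen::SASL dispatcher failing to load a concrete backend, so the dispatcher being installed (v2.15, per checksetup.pl) is not enough -- Authen::SASL::Perl must also be loadable by the same Perl that serves Bugzilla. A quick verification from a command prompt, using Bugzilla's bundled Perl:

        perl -MAuthen::SASL -e "print $Authen::SASL::VERSION"
        perl -MAuthen::SASL::Perl -e "print $Authen::SASL::Perl::VERSION"
        rem if the second command fails, reinstall the module via ActivePerl's ppm:
        ppm install Authen-SASL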


  • Setup access to SAS RAID drives with NTFS partitions on CentOS Machine

    - by Quanano
    We have a Dell PowerEdge 2900 system with an Adaptec 39320A SCSI controller card and 4 SAS hard drives attached, with NTFS partitions on them. We installed CentOS on the other RAID array with a different controller, and it is working fine. We are now trying to access the drives mentioned above, and they are not being shown in /dev as sdb, etc. sda is the drive that we installed CentOS on, and it has sda1, sda2, sda3, etc. The CD-ROM has been picked up as well. If I scan for SCSI devices, the PERC and Adaptec controllers are both found. sg0 is the CD-ROM and sg2 is the CentOS install; I think sg1 is the other drive, but I cannot see any way to mount the partitions, as only the drive is listed in /dev. Thanks.

    EXTRA INFO

        fdisk -l

        Disk /dev/sda: 72.7 GB, 72746008576 bytes
        255 heads, 63 sectors/track, 8844 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x11e3119f

        Device Boot Start End Blocks Id System
        /dev/sda1 * 1 64 512000 83 Linux
        Partition 1 does not end on cylinder boundary.
        /dev/sda2 64 8845 70528000 8e Linux LVM

        Disk /dev/mapper/vg_lal2server-lv_root: 34.4 GB, 34431041536 bytes
        255 heads, 63 sectors/track, 4186 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Disk /dev/mapper/vg_lal2server-lv_root doesn't contain a valid partition table

        Disk /dev/mapper/vg_lal2server-lv_swap: 21.1 GB, 21139292160 bytes
        255 heads, 63 sectors/track, 2570 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Disk /dev/mapper/vg_lal2server-lv_swap doesn't contain a valid partition table

        Disk /dev/mapper/vg_lal2server-lv_home: 16.6 GB, 16647192576 bytes
        255 heads, 63 sectors/track, 2023 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Disk /dev/mapper/vg_lal2server-lv_home doesn't contain a valid partition table

    These are all from the install HDD, not the additional hard drives.

        modprobe a320raid
        FATAL: Module a320raid not found.

        lsscsi -v
        [0:0:0:0] cd/dvd TSSTcorp CDRWDVD TS-H492C DE02 /dev/sr0
          dir: /sys/bus/scsi/devices/0:0:0:0 [/sys/devices/pci0000:00/0000:00:1f.1/host0/target0:0:0/0:0:0:0]
        [4:0:10:0] enclosu DP BACKPLANE 1.05 -
          dir: /sys/bus/scsi/devices/4:0:10:0 [/sys/devices/pci0000:00/0000:00:05.0/0000:01:00.0/0000:02:0e.0/host4/target4:0:10/4:0:10:0]
        [4:2:0:0] disk DELL PERC 5/i 1.03 /dev/sda
          dir: /sys/bus/scsi/devices/4:2:0:0 [/sys/devices/pci0000:00/0000:00:05.0/0000:01:00.0/0000:02:0e.0/host4/target4:2:0/4:2:0:0]

        lsmod
        Module Size Used by
        fuse 66285 0
        des_generic 16604 0
        ecb 2209 0
        md4 3461 0
        nls_utf8 1455 0
        cifs 278370 0
        autofs4 26888 4
        ipt_REJECT 2383 0
        ip6t_REJECT 4628 2
        nf_conntrack_ipv6 8748 2
        nf_defrag_ipv6 12182 1 nf_conntrack_ipv6
        xt_state 1492 2
        nf_conntrack 79453 2 nf_conntrack_ipv6,xt_state
        ip6table_filter 2889 1
        ip6_tables 19458 1 ip6table_filter
        ipv6 322029 31 ip6t_REJECT,nf_conntrack_ipv6,nf_defrag_ipv6
        bnx2 79618 0
        ses 6859 0
        enclosure 8395 1 ses
        dcdbas 9219 0
        serio_raw 4818 0
        sg 30124 0
        iTCO_wdt 13662 0
        iTCO_vendor_support 3088 1 iTCO_wdt
        i5000_edac 8867 0
        edac_core 46773 3 i5000_edac
        i5k_amb 5105 0
        shpchp 33482 0
        ext4 364410 3
        mbcache 8144 1 ext4
        jbd2 88738 1 ext4
        sd_mod 39488 3
        crc_t10dif 1541 1 sd_mod
        sr_mod 16228 0
        cdrom 39771 1 sr_mod
        megaraid_sas 77090 2
        aic79xx 129492 0
        scsi_transport_spi 26151 1 aic79xx
        pata_acpi 3701 0
        ata_generic 3837 0
        ata_piix 22846 0
        radeon 1023359 1
        ttm 70328 1 radeon
        drm_kms_helper 33236 1 radeon
        drm 230675 3 radeon,ttm,drm_kms_helper
        i2c_algo_bit 5762 1 radeon
        i2c_core 31276 4 radeon,drm_kms_helper,drm,i2c_algo_bit
        dm_mirror 14101 0
        dm_region_hash 12170 1 dm_mirror
        dm_log 10122 2 dm_mirror,dm_region_hash
        dm_mod 81500 11 dm_mirror,dm_log
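
    Since the aic79xx driver is loaded but no sdb/sdc devices appear, a hedged sketch of two next steps: force a rescan of the Adaptec host adapter, and remember that even once the disks appear, NTFS needs the ntfs-3g driver, which stock CentOS does not ship (host number and partition name below are assumptions taken from the lsscsi output):

        # rescan host4 (the Adaptec/PERC host number from lsscsi; adjust if different):
        echo "- - -" > /sys/class/scsi_host/host4/scan
        dmesg | tail                    # watch for newly attached sdX devices
        # NTFS support comes from ntfs-3g (available from a third-party repo such as EPEL/RPMforge):
        yum install ntfs-3g
        mount -t ntfs-3g /dev/sdb1 /mnt/win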

    Read the article

  • Windows Service SearchIndexer.exe Crashes on Indexing

    - by Josh Jay
    Relevant Specs: Windows 7 Professional 64-bit SP1; Outlook 2010 version 14.0.7116.5000 (32-bit).
Original symptom: In Outlook, I attempted to search for an email but nothing was ever returned and the indicator kept going as if it were still searching.
Attempted resolutions: I investigated the search options and with some research noticed the Windows service "Windows Search" (SearchIndexer.exe) was not running. I attempted to start it but I receive this error message: "Windows could not start the Windows Search service on Local Computer. Error 1067: The process terminated unexpectedly." The Event Viewer gives this error entry:
Log Name: Application
Source: Application Error
Date: 6/3/2014 11:02:05 AM
Event ID: 1000
Task Category: (100)
Level: Error
Keywords: Classic
User: N/A
Computer: ***REMOVED FOR POST***
Description:
Faulting application name: SearchIndexer.exe, version: 7.0.7601.17610, time stamp: 0x4dc0d019
Faulting module name: KERNELBASE.dll, version: 6.1.7601.18229, time stamp: 0x51fb1677
Exception code: 0xc0000005
Fault offset: 0x000000000000940d
Faulting process id: 0x6a0
Faulting application start time: 0x01cf7f3cc83757c6
Faulting application path: C:\Windows\system32\SearchIndexer.exe
Faulting module path: C:\Windows\system32\KERNELBASE.dll
Report Id: 06424160-eb30-11e3-9555-843a4b07b336
Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="Application Error" />
    <EventID Qualifiers="0">1000</EventID>
    <Level>2</Level>
    <Task>100</Task>
    <Keywords>0x80000000000000</Keywords>
    <TimeCreated SystemTime="2014-06-03T15:02:05.000000000Z" />
    <EventRecordID>602923</EventRecordID>
    <Channel>Application</Channel>
    <Computer>M6700-12011.ncaa.org</Computer>
    <Security />
  </System>
  <EventData>
    <Data>SearchIndexer.exe</Data>
    <Data>7.0.7601.17610</Data>
    <Data>4dc0d019</Data>
    <Data>KERNELBASE.dll</Data>
    <Data>6.1.7601.18229</Data>
    <Data>51fb1677</Data>
    <Data>c0000005</Data>
    <Data>000000000000940d</Data>
    <Data>6a0</Data>
    <Data>01cf7f3cc83757c6</Data>
    <Data>C:\Windows\system32\SearchIndexer.exe</Data>
    <Data>C:\Windows\system32\KERNELBASE.dll</Data>
    <Data>06424160-eb30-11e3-9555-843a4b07b336</Data>
  </EventData>
</Event>
The regular Windows search (from the Start menu) works fine, and if I reboot the machine the service starts up OK, but as soon as indexing kicks off after the machine has idled long enough, it crashes (same Event Viewer entry). We also tried the Microsoft troubleshooting utility to no avail. Has anyone seen this issue before?
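    One commonly suggested recovery path, offered here as a hedged sketch rather than a confirmed fix for this particular crash, is to make Windows Search discard its possibly corrupt index and rebuild it from scratch. The file and registry paths below are the Windows 7 defaults; run them from an elevated command prompt:

rem Stop the Windows Search service
net stop wsearch
rem Delete the existing index database
del /q "%ProgramData%\Microsoft\Search\Data\Applications\Windows\Windows.edb"
rem Flag the index for a full rebuild on the next service start
reg add "HKLM\SOFTWARE\Microsoft\Windows Search" /v SetupCompletedSuccessfully /t REG_DWORD /d 0 /f
rem Restart the service; rebuilding resumes when the machine idles
net start wsearch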

    Read the article

  • PHP, Apache and curl: Differences between Windows and Linux?

    - by beginner_
    I'm trying to run my PHP app on Ubuntu Server 11.10. This app works fine under Apache + PHP on Windows. I have other applications that I can simply copy between the two operating systems and they work on both (these don't use cURL). However, this one uses the PHP library Tonic (RESTful web services) and makes use of the PHP cURL module. The issue is that I'm not getting an error message, which makes it impossible to find the problem. I (must) use NTLM authentication, and this is done with the AuthenNTLM Apache module:
Order allow,deny
Allow from all
PerlAuthenHandler Apache2::AuthenNTLM
AuthType ntlm
AuthName "Protected Access"
require valid-user
PerlAddVar ntdomain "domainName server"
PerlSetVar defaultdomain domainName
PerlSetVar ntlmsemtimeout 2
PerlSetVar ntlmdebug 1
PerlSetVar splitdomainprefix 0
All files that cURL needs to fetch override AuthenNTLM authentication:
order deny,allow
deny from all
allow from 127.0.0.1
Satisfy any
Since these files are only fetched by cURL from the same server, access can be limited to localhost. Possible issues are:
1. NTLM auth isn't overridden for files requested through cURL (even though AllowOverride All is set)
2. cURL works differently on Linux; this is the call in question:
$ch = curl_init();
curl_setopt($ch, CURLOPT_COOKIE, $strCookie);
curl_setopt($ch, CURLOPT_URL, $baseUrl . $queryString);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$html = curl_exec($ch);
curl_close($ch);
3. other?
The Apache log says:
[error] Bad/Missing NTLM/Basic Authorization Header for /myApp/webservice/local/viewList.php
But this directory should override NTLM authentication. Using the curl command line from Windows to access the same resource, I get:
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html>
<head>
<title>406 Not Acceptable</title>
</head>
<body>
<h1>Not Acceptable</h1>
<p>An appropriate representation of the requested resource /myApp/webservice/myResource could not be found on this server.</p>
Available variants:
<ul>
<li><a href="myResource.php">myResource.php</a> , type application/x-httpd-php</li>
</ul>
<hr>
<address>Apache/2.2.20 (Ubuntu) Server at localhost Port 80</address>
</body>
</html>
Note: This is a duplicate of http://stackoverflow.com/questions/9821979/php-curl-on-linux-what-is-the-difference-to-curl-on-windows - as it was suggested there, I post it here.
EDIT: Please see Ubuntu Server: Apache2 seems to attach .php to URI, as I discovered why it does not work but need help so the issue does not occur anymore.
ANSWER: The issue is the default Apache configuration on Ubuntu:
Options Indexes FollowSymLinks MultiViews
MultiViews changes the request_uri from myResource to myResource.php. Solutions:
1. disable MultiViews in .htaccess: Options -MultiViews
2. remove MultiViews from the default config
3. rename the file, for example to myResourceClass
I chose the last option because it should work regardless of configuration, and I only have 3 such files, so the change took about 30 seconds...
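    Since the core complaint is the absence of any error output, it may also help to instrument the cURL call itself. A minimal sketch (the variable names come from the question; where the messages end up depends on your PHP error_log configuration):

$ch = curl_init();
curl_setopt($ch, CURLOPT_COOKIE, $strCookie);
curl_setopt($ch, CURLOPT_URL, $baseUrl . $queryString);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$html = curl_exec($ch);
if ($html === false) {
    // Transport-level failures (DNS, connect, timeout, auth) surface here
    error_log('cURL error ' . curl_errno($ch) . ': ' . curl_error($ch));
} else {
    // HTTP-level failures (such as the 406 produced by MultiViews) surface here
    $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    if ($status >= 400) {
        error_log('HTTP ' . $status . ' returned for ' . $baseUrl . $queryString);
    }
}
curl_close($ch);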

    Read the article

  • I have finally traded my Blackberry in for a Droid!

    - by Bob Porter
    Over the years I have used a number of different types of phones: Windows Mobile, Blackberry, Nokia, and now Android. Until the Blackberry, which was my last phone (and I still have one issued from my office), I had never found a phone that "just worked", especially with email and messaging. The Blackberry did, and does, excel at those functions. My last personal phone was a Storm 1, which was Blackberry's first touch screen phone. The Storm 2 was an improved version that fixed some screen press detection issues from the first model, and it added Wifi. Over the last few years I have watched others acquire and fall in love with their 'Droids, including a number of iPhone users, which surprised me. Our office has until recently only supported Blackberry phones, adding iPhones within the last year or so. When I spoke with our internal telecom folks they confirmed they were evaluating Android phones, but felt they still were not secure enough out of the box for corporate use and SOX compliance. That being said, as a personal phone, the Droid rocks! I am impressed with its speed, the number of apps available, and the overall design. It is not as "flashy" as an iPhone but it does everything that I care about and more. The model I bought is the Motorola Droid 2 Global from Verizon. It is currently running Android 2.2 as its OS; 2.3 is just around the corner. It has 8 gigs of internal flash memory and can handle up to a 32 gig SDCard. (I currently have 2 8 gig cards, one for backups, and have ordered a 16 gig card!) Being a geek at heart, I "rooted" the phone, which means I gained superuser access to the OS on the phone, and that opens a number of doors for further modifications down the road. Also being a geek meant I have already set up a development environment and built and deployed the obligatory "Hello Droid" application. I will be writing about my development experiences with this new platform here often; to start off I thought I would share my current application list to give you an idea of what I am using.
Zedge: http://market.android.com/details?id=net.zedge.android XDA: http://market.android.com/details?id=com.quoord.tapatalkxda.activity WRAL.com: http://market.android.com/details?id=com.mylocaltv.wral Wireless Tether: http://market.android.com/details?id=android.tether Winamp: http://market.android.com/details?id=com.nullsoft.winamp Win7 Clock: http://market.android.com/details?id=com.androidapps.widget.toggles.win7 Wifi Analyzer: http://market.android.com/details?id=com.farproc.wifi.analyzer WeatherBug: http://market.android.com/details?id=com.aws.android Weather Widget Forecast Addon: http://market.android.com/details?id=com.androidapps.weather.forecastaddon Weather & Toggle Widgets: http://market.android.com/details?id=com.androidapps.widget.weather2 Vlingo: http://market.android.com/details?id=com.vlingo.client VirtualTENHO-G: http://market.android.com/details?id=jp.bustercurry.virtualtenho_g Twitter: http://market.android.com/details?id=com.twitter.android TweetDeck: http://market.android.com/details?id=com.thedeck.android.app Tricorder: http://market.android.com/details?id=org.hermit.tricorder Titanium Backup PRO: http://market.android.com/details?id=com.keramidas.TitaniumBackupPro Titanium Backup: http://market.android.com/details?id=com.keramidas.TitaniumBackup Terminal Emulator: http://market.android.com/details?id=jackpal.androidterm Talking Tom Free: http://market.android.com/details?id=com.outfit7.talkingtom Stock Blue: http://market.android.com/details?id=org.adw.theme.stockblue ST: Red Alert Free: http://market.android.com/details?id=com.oldplanets.redalertwallpaper ST: Red Alert: http://market.android.com/details?id=com.oldplanets.redalertwallpaperplus Solitaire: http://market.android.com/details?id=com.kmagic.solitaire Skype: http://market.android.com/details?id=com.skype.raider Silent Time Lite: http://market.android.com/details?id=com.QuiteHypnotic.SilentTime ShopSavvy: http://market.android.com/details?id=com.biggu.shopsavvy Shopper: http://market.android.com/details?id=com.google.android.apps.shopper Shiny clock: http://market.android.com/details?id=com.androidapps.clock.shiny ShareMyApps: http://market.android.com/details?id=com.mattlary.shareMyApps Sense Glass ADW Theme: http://market.android.com/details?id=com.dtanquary.senseglassadwtheme ROM Manager: http://market.android.com/details?id=com.koushikdutta.rommanager Roboform Bookmarklet Installer: http://market.android.com/details?id=roboformBookmarkletInstaller.android.com RealCalc: http://market.android.com/details?id=uk.co.nickfines.RealCalc Package Buddy: http://market.android.com/details?id=com.psyrus.packagebuddy Overstock: http://market.android.com/details?id=com.overstock OMGPOP Toggle: http://market.android.com/details?id=com.androidapps.widget.toggle.omgpop OI File Manager: http://market.android.com/details?id=org.openintents.filemanager nook: http://market.android.com/details?id=bn.ereader MyAtlas-Google Maps Navigation ext: http://market.android.com/details?id=com.adaptdroid.navbookfree3 MSN Droid: http://market.android.com/details?id=msn.droid.im Matrix Live Wallpaper: http://market.android.com/details?id=com.jarodyv.livewallpaper.matrix LogMeIn: http://market.android.com/details?id=com.logmein.ignitionpro.android Liveshare: http://market.android.com/details?id=com.cooliris.app.liveshare Kobo: http://market.android.com/details?id=com.kobobooks.android Instant Heart Rate: http://market.android.com/details?id=si.modula.android.instantheartrate IMDb: http://market.android.com/details?id=com.imdb.mobile Home 
Plus Weather: http://market.android.com/details?id=com.androidapps.widget.skin.weather.homeplus Handcent SMS: http://market.android.com/details?id=com.handcent.nextsms H7C Clock: http://market.android.com/details?id=com.androidapps.widget.clock.skin.h7c GTasks: http://market.android.com/details?id=org.dayup.gtask GPS Status: http://market.android.com/details?id=com.eclipsim.gpsstatus2 Google Voice: http://market.android.com/details?id=com.google.android.apps.googlevoice Google Sky Map: http://market.android.com/details?id=com.google.android.stardroid Google Reader: http://market.android.com/details?id=com.google.android.apps.reader GoMarks: http://market.android.com/details?id=com.androappsdev.gomarks Goggles: http://market.android.com/details?id=com.google.android.apps.unveil Glossy Black Weather: http://market.android.com/details?id=com.androidapps.widget.weather.skin.glossyblack Fox News: http://market.android.com/details?id=com.foxnews.android Foursquare: http://market.android.com/details?id=com.joelapenna.foursquared FBReader: http://market.android.com/details?id=org.geometerplus.zlibrary.ui.android Fandango: http://market.android.com/details?id=com.fandango Facebook: http://market.android.com/details?id=com.facebook.katana Extensive Notes Pro: http://market.android.com/details?id=com.flufflydelusions.app.extensive_notes_donate Expense Manager: http://market.android.com/details?id=com.expensemanager Espresso UI (LightShow w/ Slide): http://market.android.com/details?id=com.jaguirre.slide.lightshow Engadget: http://market.android.com/details?id=com.aol.mobile.engadget Earth: http://market.android.com/details?id=com.google.earth Drudge: http://market.android.com/details?id=com.iavian.dreport Dropbox: http://market.android.com/details?id=com.dropbox.android DroidForums: http://market.android.com/details?id=com.quoord.tapatalkdrodiforums.activity DroidArmor ADW: http://market.android.com/details?id=mobi.addesigns.droidarmorADW Droid Weather Icons: http://market.android.com/details?id=com.androidapps.widget.weather.skins.white Droid 2 Bootstrapper: http://market.android.com/details?id=com.koushikdutta.droid2.bootstrap doubleTwist: http://market.android.com/details?id=com.doubleTwist.androidPlayer Documents To Go: http://market.android.com/details?id=com.dataviz.docstogo Digital Clock Widget: http://market.android.com/details?id=com.maize.digitalClock Desk Home: http://market.android.com/details?id=com.cowbellsoftware.deskdock Default Clock: http://market.android.com/details?id=com.androidapps.widget.clock.skins.defaultclock Daily Expense Manager: http://market.android.com/details?id=com.techahead.ExpenseManager ConnectBot: http://market.android.com/details?id=org.connectbot Colorized Weather Icons: http://market.android.com/details?id=com.androidapps.widget.weather.colorized Chrome to Phone: http://market.android.com/details?id=com.google.android.apps.chrometophone CardStar: http://market.android.com/details?id=com.cardstar.android Books: http://market.android.com/details?id=com.google.android.apps.books Black Ipad Toggle: http://market.android.com/details?id=com.androidapps.toggle.widget.skin.blackipad Black Glass ADW Theme: http://market.android.com/details?id=com.dtanquary.blackglassadwtheme Bing: http://market.android.com/details?id=com.microsoft.mobileexperiences.bing BeyondPod Unlock Key: http://market.android.com/details?id=mobi.beyondpod.unlockkey BeyondPod: http://market.android.com/details?id=mobi.beyondpod BeejiveIM: http://market.android.com/details?id=com.beejive.im Beautiful 
Widgets Animations Addon: http://market.android.com/details?id=com.levelup.bw.forecast Beautiful Widgets: http://market.android.com/details?id=com.levelup.beautifulwidgets Beautiful Live Weather: http://market.android.com/details?id=com.levelup.beautifullive BBC News: http://market.android.com/details?id=net.jimblackler.newswidget Barnacle Wifi Tether: http://market.android.com/details?id=net.szym.barnacle Barcode Scanner: http://market.android.com/details?id=com.google.zxing.client.android ASTRO SMB Module: http://market.android.com/details?id=com.metago.astro.smb ASTRO Pro: http://market.android.com/details?id=com.metago.astro.pro ASTRO Bluetooth Module: http://market.android.com/details?id=com.metago.astro.network.bluetooth ASTRO: http://market.android.com/details?id=com.metago.astro AppBrain App Market: http://market.android.com/details?id=com.appspot.swisscodemonkeys.apps App Drawer Icon Pack: http://market.android.com/details?id=com.adwtheme.appdrawericonpack androidVNC: http://market.android.com/details?id=android.androidVNC AndroidGuys: http://market.android.com/details?id=com.handmark.mpp.AndroidGuys Android System Info: http://market.android.com/details?id=com.electricsheep.asi AndFTP: http://market.android.com/details?id=lysesoft.andftp ADWTheme Red: http://market.android.com/details?id=adw.theme.red ADWLauncher EX: http://market.android.com/details?id=org.adwfreak.launcher ADW.Theme.One: http://market.android.com/details?id=org.adw.theme.one ADW.Faded theme: http://market.android.com/details?id=com.xrcore.adwtheme.faded ADW Gingerbread: http://market.android.com/details?id=me.robertburns.android.adwtheme.gingerbread Advanced Task Killer Free: http://market.android.com/details?id=com.rechild.advancedtaskkiller Adobe Reader: http://market.android.com/details?id=com.adobe.reader Adobe Flash Player 10.1: http://market.android.com/details?id=com.adobe.flashplayer Adobe AIR: http://market.android.com/details?id=com.adobe.air 3G Auto OnOff: http://market.android.com/details?id=com.yuantuo --- Generated by ShareMyApps http://market.android.com/details?id=com.mattlary.shareMyApps Sent from my Droid

    Read the article

  • Metro: Introduction to the WinJS ListView Control

    - by Stephen.Walther
    The goal of this blog entry is to provide a quick introduction to the ListView control – just the bare minimum that you need to know to start using the control. When building Metro style applications using JavaScript, the ListView control is the primary control that you use for displaying lists of items. For example, if you are building a product catalog app, then you can use the ListView control to display the list of products. The ListView control supports several advanced features that I plan to discuss in future blog entries. For example, you can group the items in a ListView, you can create master/detail views with a ListView, and you can efficiently work with large sets of items with a ListView. In this blog entry, we'll keep things simple and focus on displaying a list of products. There are three things that you need to do in order to display a list of items with a ListView:
1. Create a data source
2. Create an Item Template
3. Declare the ListView
Creating the ListView Data Source
The first step is to create (or retrieve) the data that you want to display with the ListView. In most scenarios, you will want to bind a ListView to a WinJS.Binding.List object. The nice thing about the WinJS.Binding.List object is that it enables you to take a standard JavaScript array and convert the array into something that can be bound to the ListView. It doesn't matter where the JavaScript array comes from. It could be a static array that you declare or you could retrieve the array as the result of an Ajax call to a remote server. The following JavaScript file – named products.js – contains a list of products which can be bound to a ListView.

(function () {
    "use strict";

    var products = new WinJS.Binding.List([
        { name: "Milk", price: 2.44 },
        { name: "Oranges", price: 1.99 },
        { name: "Wine", price: 8.55 },
        { name: "Apples", price: 2.44 },
        { name: "Steak", price: 1.99 },
        { name: "Eggs", price: 2.44 },
        { name: "Mushrooms", price: 1.99 },
        { name: "Yogurt", price: 2.44 },
        { name: "Soup", price: 1.99 },
        { name: "Cereal", price: 2.44 },
        { name: "Pepsi", price: 1.99 }
    ]);

    WinJS.Namespace.define("ListViewDemos", {
        products: products
    });
})();

The products variable represents a WinJS.Binding.List object. This object is initialized with a plain-old JavaScript array which represents an array of products. To avoid polluting the global namespace, the code above uses the module pattern and exposes the products using a namespace. The list of products is exposed to the world as ListViewDemos.products. To learn more about the module pattern and namespaces in WinJS, see my earlier blog entry: http://stephenwalther.com/blog/archive/2012/02/22/metro-namespaces-and-modules.aspx
Creating the ListView Item Template
The ListView control does not know how to render anything. It doesn't know how you want each list item to appear. To get the ListView control to render something useful, you must create an Item Template. Here's what our template for rendering an individual product looks like:

<div id="productTemplate" data-win-control="WinJS.Binding.Template">
    <div class="product">
        <span data-win-bind="innerText:name"></span>
        <span data-win-bind="innerText:price"></span>
    </div>
</div>

This template displays the product name and price from the data source. Normally, you will declare your template in the same file as you declare the ListView control. In our case, both the template and ListView are declared in the default.html file.
To learn more about templates, see my earlier blog entry: http://stephenwalther.com/blog/archive/2012/02/27/metro-using-templates.aspx
Declaring the ListView
The final step is to declare the ListView control in a page. Here's the markup for declaring a ListView:

<div data-win-control="WinJS.UI.ListView"
    data-win-options="{
        itemDataSource: ListViewDemos.products.dataSource,
        itemTemplate: select('#productTemplate')
    }">
</div>

You declare a ListView by adding the data-win-control attribute to an HTML DIV tag. The data-win-options attribute is used to set two properties of the ListView. The ListView is associated with its data source with the itemDataSource property. Notice that the data source is ListViewDemos.products.dataSource and not just ListViewDemos.products. You need to associate the ListView with the dataSource property. The ListView is associated with its item template with the help of the itemTemplate property. The ID of the item template (#productTemplate) is used to select the template from the page. Here's what the complete version of the default.html page looks like:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <title>ListViewDemos</title>

    <!-- WinJS references -->
    <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet">
    <script src="//Microsoft.WinJS.0.6/js/base.js"></script>
    <script src="//Microsoft.WinJS.0.6/js/ui.js"></script>

    <!-- ListViewDemos references -->
    <link href="/css/default.css" rel="stylesheet">
    <script src="/js/default.js"></script>
    <script src="/js/products.js" type="text/javascript"></script>

    <style type="text/css">
        .product {
            width: 200px;
            height: 100px;
            border: white solid 1px;
        }
    </style>
</head>
<body>
    <div id="productTemplate" data-win-control="WinJS.Binding.Template">
        <div class="product">
            <span data-win-bind="innerText:name"></span>
            <span data-win-bind="innerText:price"></span>
        </div>
    </div>

    <div data-win-control="WinJS.UI.ListView"
        data-win-options="{
            itemDataSource: ListViewDemos.products.dataSource,
            itemTemplate: select('#productTemplate')
        }">
    </div>
</body>
</html>

Notice that the page above includes a reference to the products.js file:
<script src="/js/products.js" type="text/javascript"></script>
The page above also contains a Template control which contains the ListView item template. Finally, the page includes the declaration of the ListView control.
Summary
The goal of this blog entry was to describe the minimal set of steps which you must complete to use the WinJS ListView control to display a simple list of items. You learned how to create a data source, declare an item template, and declare a ListView control.
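One detail worth calling out: controls declared with data-win-control attributes only come to life after something calls WinJS.UI.processAll(), which walks the DOM and instantiates each declared control. The Visual Studio project templates generate a default.js that does this wiring; the sketch below is modeled on that standard template rather than taken from this article:

// default.js (sketch): activate declarative WinJS controls at launch
(function () {
    "use strict";
    var app = WinJS.Application;
    app.onactivated = function (args) {
        // processAll instantiates every data-win-control on the page
        args.setPromise(WinJS.UI.processAll());
    };
    app.start();
})();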

    Read the article

  • Behind ASP.NET MVC Mock Objects

    - by imran_ku07
   Introduction:
       I think this sentence has now become very familiar to ASP.NET MVC developers: "ASP.NET MVC is designed with testability in mind". But what did the ASP.NET MVC team do to make applications built with ASP.NET MVC easily testable? Understanding this is very important, because it also gives you some help when designing custom classes. So in this article I will discuss some abstract classes provided by the ASP.NET MVC team for the various ASP.NET intrinsic objects, including HttpContext, HttpRequest, and HttpResponse, that make these objects testable. I will also discuss why it is difficult to test ASP.NET Web Forms.
   Description:
       From Classic ASP to ASP.NET MVC, the ASP.NET intrinsic objects are used extensively in every form of web application. They provide information about the Request, Response, Server, Application and so on. But ASP.NET MVC uses these intrinsic objects in an abstract manner. The reason for this abstraction is to make your application testable. So let's see the abstraction.
       As we know, ASP.NET MVC uses the same runtime engine as ASP.NET Web Forms, so the first receiver of the request after IIS and aspnet_filter.dll is aspnet_isapi.dll. This will start the application domain. With the application domain up and running, ASP.NET does some initialization and then calls Application_Start if it is defined. Then the normal HTTP pipeline event handlers will be executed, including both HTTP modules and global.asax event handlers. One of the HTTP modules registered by ASP.NET MVC is UrlRoutingModule. The purpose of this module is to match a route defined in global.asax. Every matched route must have an IRouteHandler. In the default case this is MvcRouteHandler, which is responsible for determining the HTTP handler and returns MvcHandler (which is derived from IHttpHandler). In simple words, a Route has an MvcRouteHandler, which returns the MvcHandler that is the IHttpHandler of the current request. In between the HTTP pipeline events, ASP.NET MVC's handler, MvcHandler.ProcessRequest, will be executed, as shown below:

void IHttpHandler.ProcessRequest(HttpContext context)
{
    this.ProcessRequest(context);
}

protected virtual void ProcessRequest(HttpContext context)
{
    // HttpContextWrapper inherits from HttpContextBase
    HttpContextBase ctxBase = new HttpContextWrapper(context);
    this.ProcessRequest(ctxBase);
}

protected internal virtual void ProcessRequest(HttpContextBase ctxBase)
{
    . . .
}

       HttpContextBase is the base class, and HttpContextWrapper inherits from HttpContextBase; this is the parent type that includes information about a single HTTP request. What the ASP.NET MVC team did is simply wrap the old intrinsic HttpContext in an HttpContextWrapper object and provide the opportunity for other frameworks to supply their own implementation of HttpContextBase. For example:

public class MockHttpContext : HttpContextBase
{
    . . .
}

       As you can see, it is very easy to create your own HttpContext. That is exactly what third-party mock frameworks like TypeMock, Moq, RhinoMocks, or NMock2 do to provide their own implementations of the ASP.NET intrinsic object classes.
       The key point to note here is the type of the ASP.NET intrinsic objects
in ASP.NET Web Forms compared with ASP.NET MVC. For example, in ASP.NET Web Forms the type of the Request object is HttpRequest (which is sealed), while in ASP.NET MVC the type of the Request object is HttpRequestBase. This is one of the reasons testing is difficult in ASP.NET Web Forms: there is no base class, and because the HttpRequest class is sealed it cannot act as a base class for others. ASP.NET MVC, on the other side, always uses a base class to give third parties and unit test frameworks a chance to create their own implementations of the ASP.NET intrinsic objects.
       Therefore we can say that in ASP.NET MVC, the intrinsic objects are typed as base classes (for example HttpContextBase). Each of these base classes exposes the same interface as the intrinsic object it abstracts, but contains only virtual members which simply throw an exception. ASP.NET MVC also provides corresponding wrapper classes (for example, HttpRequestWrapper) which give a concrete implementation of the base class in the form of the ASP.NET intrinsic object. Other wrapper classes may be defined by third parties in the form of mock objects for testing purposes.
       So a Request object in ASP.NET MVC may be an HttpRequestWrapper or a MockRequestWrapper (assuming that a MockRequestWrapper class is used for testing purposes). Here is the list of ASP.NET intrinsics and their implementations in ASP.NET MVC in the form of base and wrapper classes:

Base Class | Wrapper Class | ASP.NET Intrinsic Object | Description
HttpApplicationStateBase | HttpApplicationStateWrapper | Application | abstracts the intrinsic Application object
HttpBrowserCapabilitiesBase | HttpBrowserCapabilitiesWrapper | HttpBrowserCapabilities | abstracts the HttpBrowserCapabilities class
HttpCachePolicyBase | HttpCachePolicyWrapper | HttpCachePolicy | abstracts the HttpCachePolicy class
HttpContextBase | HttpContextWrapper | HttpContext | abstracts the intrinsic HttpContext object
HttpFileCollectionBase | HttpFileCollectionWrapper | HttpFileCollection | abstracts the HttpFileCollection class
HttpPostedFileBase | HttpPostedFileWrapper | HttpPostedFile | abstracts the HttpPostedFile class
HttpRequestBase | HttpRequestWrapper | Request | abstracts the intrinsic Request object
HttpResponseBase | HttpResponseWrapper | Response | abstracts the intrinsic Response object
HttpServerUtilityBase | HttpServerUtilityWrapper | Server | abstracts the intrinsic Server object
HttpSessionStateBase | HttpSessionStateWrapper | Session | abstracts the intrinsic Session object
HttpStaticObjectsCollectionBase | HttpStaticObjectsCollectionWrapper | HttpStaticObjectsCollection | abstracts the HttpStaticObjectsCollection class

   Summary:
       ASP.NET MVC provides a set of abstract classes for the ASP.NET intrinsic objects in the form of base classes, allowing anyone to create their own implementation. In addition, ASP.NET MVC also provides a set of concrete classes in the form of wrapper classes. This design really makes applications easier to test, and an application may even replace the concrete implementation with its own, which makes ASP.NET MVC very flexible.
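       To make the idea concrete, here is a minimal hand-rolled test double in the spirit the article describes. The class names and overridden members are illustrative assumptions, not types from ASP.NET MVC itself; only the members your code under test actually touches need overriding, since the base class members throw by default.

// A fake HttpContextBase that serves a canned request to code under test
public class FakeHttpContext : HttpContextBase
{
    private readonly HttpRequestBase _request;

    public FakeHttpContext(HttpRequestBase request)
    {
        _request = request;
    }

    public override HttpRequestBase Request
    {
        get { return _request; }
    }
}

// A fake HttpRequestBase exposing just the members a test needs
public class FakeHttpRequest : HttpRequestBase
{
    public override string HttpMethod
    {
        get { return "GET"; }
    }

    public override string RawUrl
    {
        get { return "/products/list"; }
    }
}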

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing.
Test Data
The two tables in this example share a common partition scheme. The partition function uses 41 equal-size partitions:

CREATE PARTITION FUNCTION PFT (integer)
AS RANGE RIGHT
FOR VALUES
(
    125000, 250000, 375000, 500000,
    625000, 750000, 875000, 1000000,
    1125000, 1250000, 1375000, 1500000,
    1625000, 1750000, 1875000, 2000000,
    2125000, 2250000, 2375000, 2500000,
    2625000, 2750000, 2875000, 3000000,
    3125000, 3250000, 3375000, 3500000,
    3625000, 3750000, 3875000, 4000000,
    4125000, 4250000, 4375000, 4500000,
    4625000, 4750000, 4875000, 5000000
);
GO
CREATE PARTITION SCHEME PST
AS PARTITION PFT
ALL TO ([PRIMARY]);

The two tables are:

CREATE TABLE dbo.T1
(
    TID integer NOT NULL IDENTITY(0,1),
    Column1 integer NOT NULL,
    Padding binary(100) NOT NULL DEFAULT 0x,

    CONSTRAINT PK_T1
        PRIMARY KEY CLUSTERED (TID)
        ON PST (TID)
);

CREATE TABLE dbo.T2
(
    TID integer NOT NULL,
    Column1 integer NOT NULL,
    Padding binary(100) NOT NULL DEFAULT 0x,

    CONSTRAINT PK_T2
        PRIMARY KEY CLUSTERED (TID, Column1)
        ON PST (TID)
);

The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID:

INSERT dbo.T1 WITH (TABLOCKX)
    (Column1)
SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1
FROM dbo.Numbers AS N
WHERE n BETWEEN 1 AND 5000000;

In case you don't already have an auxiliary table of numbers lying around, here's a script to create one with 10 million rows:

CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);

WITH
    L0 AS (SELECT 1 AS c UNION ALL SELECT 1),
    L1 AS (SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B),
    L2 AS (SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B),
    L3 AS (SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B),
    L4 AS (SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B),
    L5 AS (SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B),
    Nums AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5)
INSERT dbo.Numbers WITH (TABLOCKX)
SELECT TOP (10000000) n
FROM Nums
ORDER BY n
OPTION (MAXDOP 1);

Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains 'n' rows for each row in table 1, where 'n' is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way.

INSERT dbo.T2 WITH (TABLOCKX)
    (TID, Column1)
SELECT T.TID, N.n
FROM dbo.T1 AS T
JOIN dbo.Numbers AS N
    ON N.n >= 1
    AND N.n <= T.Column1;

Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone.
Partition Distribution
The following query shows the number of rows in each partition of table T1:

SELECT
    PartitionID = CA1.P,
    NumRows = COUNT_BIG(*)
FROM dbo.T1 AS T
CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
GROUP BY CA1.P
ORDER BY CA1.P;

There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2:

SELECT
    PartitionID = CA1.P,
    NumRows = COUNT_BIG(*)
FROM dbo.T2 AS T
CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
GROUP BY CA1.P
ORDER BY CA1.P;

There are roughly 375,000 rows in each partition (the rightmost partition is also empty). Ok, that's the test data done.
Test Query and Execution Plan
The task is to count the rows resulting from joining tables 1 and 2 on the TID column:

SET STATISTICS IO ON;
DECLARE @s datetime2 = SYSUTCDATETIME();

SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID;

SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
SET STATISTICS IO OFF;

The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image): With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms.
Execution Plan Analysis
The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value '1234' is placed in thread 5's hash table, the execution plan must guarantee that any rows from T2 that also have join key value '1234' probe thread 5's hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread.
Expensive Exchanges
This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism:

SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID
WHERE $PARTITION.PFT(T1.TID) = 1
AND $PARTITION.PFT(T2.TID) = 1
OPTION (MAXDOP 1);

The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query?
Forcing a Merge Join
Let's force the optimizer to use a merge join on the test query using a hint:

SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID
OPTION (MERGE JOIN);

This is the execution plan selected by the optimizer: This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads.
Parallel Merge Join
We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649:

SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID
OPTION (MERGE JOIN, QUERYTRACEON 8649);

The execution plan is: This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge): A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving 'merging' exchanges (because merge join requires ordered inputs): Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning.
Collocated Joins
In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next.
Costing and Plan Selection
The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query.
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use collocated join with our test query:

-- Pretend IOs are 50x cost temporarily
DBCC SETIOWEIGHT(50);

-- Co-located hash join
SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID
OPTION (RECOMPILE);

-- Reset IO costing
DBCC SETIOWEIGHT(1);

Collocated Join Plan
The estimated execution plan for the collocated join is: The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed. It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition: The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition 'n' is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges.
CPU and Memory Efficiency Improvements
The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time.
Collocated Hash Join Performance
The collocated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows.
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb: A level 1 spill doesn't sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions).
Collocated Merge Join
We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know:
1. Merge join does not require a memory grant; and
2. Merge join was the optimizer's preferred join option for a single partition join
Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us?
CROSS APPLY sys.partitions
We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea:

SELECT row_count = SUM(Subtotals.cnt)
FROM
(
    -- Partition numbers
    SELECT p.partition_number
    FROM sys.partitions AS p
    WHERE p.[object_id] = OBJECT_ID(N'T1', N'U')
    AND p.index_id = 1
) AS P
CROSS APPLY
(
    -- Count per collocated join
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = p.partition_number
    AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals;

The estimated plan is: The cardinality estimates aren't all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:
Using a Temporary Table
Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table.
We can work around that by writing the partition numbers to a temporary table (or table variable):

SET STATISTICS IO ON;
DECLARE @s datetime2 = SYSUTCDATETIME();

CREATE TABLE #P
(
    partition_number integer PRIMARY KEY
);

INSERT #P (partition_number)
SELECT p.partition_number
FROM sys.partitions AS p
WHERE p.[object_id] = OBJECT_ID(N'T1', N'U')
AND p.index_id = 1;

SELECT row_count = SUM(Subtotals.cnt)
FROM #P AS p
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = p.partition_number
    AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals;

DROP TABLE #P;

SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
SET STATISTICS IO OFF;

Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn't choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to: In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time): Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer's cost model not reducing operator CPU costs on the inner side of a nested loops join. Don't get me started on that, we'll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan.
Parallel Collocated Merge Join
We can produce the desired parallel plan using trace flag 8649 again:

SELECT row_count = SUM(Subtotals.cnt)
FROM #P AS p
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = p.partition_number
    AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals
OPTION (QUERYTRACEON 8649);

The actual execution plan is: One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post.
Performance
The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest:
1. Collocated parallel merge join: 1350ms
2. Parallel hash join: 2600ms
3. Collocated serial merge join: 3500ms
4. Serial merge join: 5000ms
5. Parallel merge join: 8400ms
6. Collocated parallel hash join: 25,300ms (hash spill per partition)
The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers).
This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you.
Parallel Collocated Merge Join with Demand Partitioning
This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions):

SELECT row_count = SUM(Subtotals.cnt)
FROM
(
    VALUES
        (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),
        (11),(12),(13),(14),(15),(16),(17),(18),(19),(20),
        (21),(22),(23),(24),(25),(26),(27),(28),(29),(30),
        (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41)
) AS P (partition_number)
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = p.partition_number
    AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals
OPTION (QUERYTRACEON 8649);

The actual execution plan is: The parallel collocated hash join plan is reproduced below for comparison: The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer's collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms.
Final Words
It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won't Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition.
From a thread's point of view…
If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let's look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time).
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once.
Related Reading
Understanding and Using Parallelism in SQL Server
Parallel Execution Plans Suck
© 2013 Paul White – All Rights Reserved
Twitter: @SQL_Kiwi
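The post notes in passing that dynamic SQL could generate the hard-coded VALUES list from sys.partitions. A sketch of what that generation might look like (my illustration under the post's schema, not code from the post itself):

-- Build the (1),(2),...,(41) list from sys.partitions, then run the
-- collocated-join rewrite with the same trace flag used in the post
DECLARE @vals nvarchar(max) =
    STUFF((
        SELECT N',(' + CONVERT(nvarchar(10), p.partition_number) + N')'
        FROM sys.partitions AS p
        WHERE p.[object_id] = OBJECT_ID(N'dbo.T1', N'U')
        AND p.index_id = 1
        ORDER BY p.partition_number
        FOR XML PATH('')
    ), 1, 1, N'');

DECLARE @sql nvarchar(max) = N'
SELECT row_count = SUM(Subtotals.cnt)
FROM (VALUES ' + @vals + N') AS P (partition_number)
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2 ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = P.partition_number
    AND $PARTITION.PFT(T2.TID) = P.partition_number
) AS Subtotals
OPTION (QUERYTRACEON 8649);';

EXEC sys.sp_executesql @sql;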

    Read the article

  • Next Phase of ECM 11g Now Available - New UCM & URM 11g, & Updated I/PM & IRM 11g

    - by michelle.huff
    We're excited to announce that the Oracle Enterprise Content Management Suite 11g is now available! Today, Oracle announced ECM Suite 11g, part of the Fusion Middleware 11gR1 Patchset 2 release, which builds upon the Imaging and Process Management (I/PM) and Information Rights Management (IRM) 11g releases from earlier this year. Universal Content Management (UCM) and Universal Records Management (URM) 11g are now available with many new features and enhancements. All ECM products are localized into 27 languages, use a single repository, a single installer, and centralized administration, and all run on the same Fusion Middleware tech stack. Oracle ECM Suite 11g is better integrated to fit the way you work, with extreme performance and extreme scalability.
Universal Content Management
- One click Web content management: brings Web content management authoring, design and presentation capabilities directly into how organizations design sites, portals, and custom Web applications. Simply take in the right amount of WCM that meets your needs, all without having to rewrite the application or port it over to a new technology stack or framework.
- Greater business user empowerment: with next generation desktop integrations and "smart productivity folders", a new Web site "design mode" for business users, and enhanced rich media support enabling users to better work with photography, graphics, videos and podcasts created today, as well as contribute content within Flash files directly from the Web.
- Advanced manageability with extreme performance and scalability: centralized system monitoring, installation, logging, performance metrics and diagnostics, with new built-in "fast check-in" features and a redesigned component management interface, all running on Fusion Middleware infrastructure.
Universal Records Management
- Enhanced user experience: Oracle URM 11g makes records management easier for both business users and records administrators. Simplifications in the end user experience allow the creation of bookmarks into often-used parts of the file plan, easy copying of categories and dispositions, and integrated folder and records search. The records management dashboard provides a consolidated view into records administrator tasks and system performance.
- DoD 5015.02 v3: Oracle URM is fully certified against all parts of the US Department of Defense records management standard: baseline, classified, and Freedom of Information and Privacy Act. This enables Federal, state, and local governments and public agencies, as well as private companies, to maintain regulatory compliance.
- Expanded functionality through Oracle integrations: Oracle URM 11g allows for an expanded set of functionality through integration capabilities with other Oracle products. This includes configurable records definition capabilities directly within a UCM instance. An out-of-the-box integration with Oracle BI Publisher provides easily configured and robust reporting. Additionally, 11g offers an out-of-the-box Oracle Secure Enterprise Search integration enabling real-time full text discovery across disparate systems in an organization.
Read the Press Release
Watch the 3 Minute ECM 11g Video
Get Up to Speed with the What's New in ECM Suite Datasheet
Learn More on OTN with new tutorials, downloads and whitepapers

    Read the article

  • Fan running continuously on HP Pavilion G6 notebook with 12.04.1 LTS, help please?

    - by Ankit
    The fan is running continuously on my HP Pavilion G6 notebook with 12.04.1 LTS. My system specifications are:
    - RAM: 6 GB
    - Graphics card: 1 GB (AMD Radeon 64XX)
    - HDD: 540 GB

    Here is the ACPI-related output from dmesg:

    buffer@ankit:~$ dmesg | grep ACPI -i
    [ 0.000000] BIOS-e820: 000000009cebf000 - 000000009cfbf000 (ACPI NVS)
    [ 0.000000] BIOS-e820: 000000009cfbf000 - 000000009cfff000 (ACPI data)
    [ 0.000000] ACPI: RSDP 00000000000fe020 00024 (v02 HPQOEM)
    [ 0.000000] ACPI: XSDT 000000009cffe120 00084 (v01 HPQOEM SLIC-MPC 00000001 01000013)
    [ 0.000000] ACPI: FACP 000000009cffc000 000F4 (v04 HPQOEM SLIC-MPC 00000001 MSFT 01000013)
    [ 0.000000] ACPI: DSDT 000000009cfec000 0C132 (v01 HP 1670 00000000 MSFT 01000013)
    [ 0.000000] ACPI: FACS 000000009cf6c000 00040
    [ 0.000000] ACPI: ASF! 000000009cffd000 000A5 (v32 HP 1670 00000001 MSFT 01000013)
    [ 0.000000] ACPI: HPET 000000009cffb000 00038 (v01 HP 1670 00000001 MSFT 01000013)
    [ 0.000000] ACPI: APIC 000000009cffa000 0008C (v02 HP 1670 00000001 MSFT 01000013)
    [ 0.000000] ACPI: MCFG 000000009cff9000 0003C (v01 HP 1670 00000001 MSFT 01000013)
    [ 0.000000] ACPI: SLIC 000000009cfeb000 00176 (v01 HPQOEM SLIC-MPC 00000001 MSFT 01000013)
    [ 0.000000] ACPI: SSDT 000000009cfea000 00D52 (v01 HP 1670 00001000 MSFT 01000013)
    [ 0.000000] ACPI: BOOT 000000009cfe8000 00028 (v01 HP 1670 00000001 MSFT 01000013)
    [ 0.000000] ACPI: ASPT 000000009cfe5000 00034 (v07 HP 1670 00000001 MSFT 01000013)
    [ 0.000000] ACPI: SSDT 000000009cfe4000 00780 (v01 HP 1670 00003000 INTL 20100121)
    [ 0.000000] ACPI: SSDT 000000009cfe3000 00996 (v01 HP 1670 00003000 INTL 20100121)
    [ 0.000000] ACPI: SSDT 000000009cfdd000 0219F (v01 HP 1670 00001000 INTL 20100121)
    [ 0.000000] ACPI: Local APIC address 0xfee00000
    [ 0.000000] ACPI: PM-Timer IO Port: 0x408
    [ 0.000000] ACPI: Local APIC address 0xfee00000
    [ 0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
    [ 0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x01] enabled)
    [ 0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x02] enabled)
    [ 0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x03] enabled)
    [ 0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x00] disabled)
    [ 0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x00] disabled)
    [ 0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x00] disabled)
    [ 0.000000] ACPI: LAPIC (acpi_id[0x08] lapic_id[0x00] disabled)
    [ 0.000000] ACPI: IOAPIC (id[0x00] address[0xfec00000] gsi_base[0])
    [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
    [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
    [ 0.000000] ACPI: IRQ0 used by override.
    [ 0.000000] ACPI: IRQ2 used by override.
    [ 0.000000] ACPI: IRQ9 used by override.
    [ 0.000000] Using ACPI (MADT) for SMP configuration information
    [ 0.000000] ACPI: HPET id: 0x8086a201 base: 0xfed00000
    [ 0.005902] ACPI: Core revision 20110623
    [ 0.536006] PM: Registering ACPI NVS region at 9cebf000 (1048576 bytes)
    [ 0.538423] ACPI FADT declares the system doesn't support PCIe ASPM, so disable it
    [ 0.538429] ACPI: bus type pci registered
    [ 0.656088] ACPI: Added _OSI(Module Device)
    [ 0.656094] ACPI: Added _OSI(Processor Device)
    [ 0.656098] ACPI: Added _OSI(3.0 _SCP Extensions)
    [ 0.656103] ACPI: Added _OSI(Processor Aggregator Device)
    [ 0.660335] ACPI: EC: Look up EC in DSDT
    [ 0.664416] ACPI: Executed 1 blocks of module-level executable AML code
    [ 0.728303] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored
    [ 0.729536] ACPI: SSDT 000000009ce70798 00727 (v01 PmRef Cpu0Cst 00003001 INTL 20100121)
    [ 0.730622] ACPI: Dynamic OEM Table Load:
    [ 0.730630] ACPI: SSDT (null) 00727 (v01 PmRef Cpu0Cst 00003001 INTL 20100121)
    [ 0.760829] ACPI: SSDT 000000009ce71a98 00303 (v01 PmRef ApIst 00003000 INTL 20100121)
    [ 0.761992] ACPI: Dynamic OEM Table Load:
    [ 0.761998] ACPI: SSDT (null) 00303 (v01 PmRef ApIst 00003000 INTL 20100121)
    [ 0.792451] ACPI: SSDT 000000009ce6fd98 00119 (v01 PmRef ApCst 00003000 INTL 20100121)
    [ 0.793521] ACPI: Dynamic OEM Table Load:
    [ 0.793528] ACPI: SSDT (null) 00119 (v01 PmRef ApCst 00003000 INTL 20100121)
    [ 0.872981] ACPI: Interpreter enabled
    [ 0.872992] ACPI: (supports S0 S3 S4 S5)
    [ 0.873064] ACPI: Using IOAPIC for interrupt routing
    [ 0.882723] ACPI: EC: GPE = 0x16, I/O: command/status = 0x66, data = 0x62
    [ 0.883072] ACPI: No dock devices found.
    [ 0.883084] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
    [ 0.883882] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
    [ 0.924187] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0._PRT]
    [ 0.924509] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.RP01._PRT]
    [ 0.924581] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.RP02._PRT]
    [ 0.924659] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.RP03._PRT]
    [ 0.924758] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.PEG0._PRT]
    [ 0.924973] pci0000:00: Requesting ACPI _OSC control (0x1d)
    [ 0.925064] pci0000:00: ACPI _OSC request failed (AE_ERROR), returned control mask: 0x1d
    [ 0.925069] ACPI _OSC control for PCIe not granted, disabling ASPM
    [ 0.930212] ACPI: PCI Interrupt Link [LNKA] (IRQs 1 3 4 5 6 10 *11 12 14 15)
    [ 0.930327] ACPI: PCI Interrupt Link [LNKB] (IRQs 1 3 4 5 6 10 *11 12 14 15)
    [ 0.930436] ACPI: PCI Interrupt Link [LNKC] (IRQs 1 3 4 5 6 10 *11 12 14 15)
    [ 0.930547] ACPI: PCI Interrupt Link [LNKD] (IRQs 1 3 4 5 6 *10 11 12 14 15)
    [ 0.930655] ACPI: PCI Interrupt Link [LNKE] (IRQs 1 3 4 5 6 10 11 12 14 15) *0, disabled.
    [ 0.930764] ACPI: PCI Interrupt Link [LNKF] (IRQs 1 3 4 5 6 10 11 12 14 15) *0, disabled.
    [ 0.930873] ACPI: PCI Interrupt Link [LNKG] (IRQs 1 3 4 5 6 10 *11 12 14 15)
    [ 0.930979] ACPI: PCI Interrupt Link [LNKH] (IRQs 1 3 4 5 6 10 11 12 14 15) *0, disabled.
    [ 0.932142] PCI: Using ACPI for IRQ routing
    [ 0.967119] pnp: PnP ACPI init
    [ 0.967151] ACPI: bus type pnp registered
    [ 0.968356] pnp 00:00: Plug and Play ACPI device, IDs PNP0a08 PNP0a03 (active)
    [ 0.968516] pnp 00:01: Plug and Play ACPI device, IDs PNP0200 (active)
    [ 0.968586] pnp 00:02: Plug and Play ACPI device, IDs INT0800 (active)
    [ 0.968818] pnp 00:03: Plug and Play ACPI device, IDs PNP0103 (active)
    [ 0.968915] pnp 00:04: Plug and Play ACPI device, IDs PNP0c04 (active)
    [ 0.969206] system 00:05: Plug and Play ACPI device, IDs PNP0c02 (active)
    [ 0.969293] pnp 00:06: Plug and Play ACPI device, IDs PNP0b00 (active)
    [ 0.969418] pnp 00:07: Plug and Play ACPI device, IDs PNP0303 (active)
    [ 0.969528] pnp 00:08: Plug and Play ACPI device, IDs SYN1e3f SYN1e00 SYN0002 PNP0f13 (active)
    [ 0.969969] system 00:09: Plug and Play ACPI device, IDs PNP0c02 (active)
    [ 0.970574] system 00:0a: Plug and Play ACPI device, IDs PNP0c01 (active)
    [ 0.970617] pnp: PnP ACPI: found 11 devices
    [ 0.970622] ACPI: ACPI bus type pnp unregistered
    [ 1.138064] ACPI: Deprecated procfs I/F for AC is loaded, please retry with CONFIG_ACPI_PROCFS_POWER cleared
    [ 1.138331] ACPI: AC Adapter [ACAD] (off-line)
    [ 1.139068] ACPI: Lid Switch [LID0]
    [ 1.139176] ACPI: Power Button [PWRB]
    [ 1.139286] ACPI: Power Button [PWRF]
    [ 1.144637] ACPI: Thermal Zone [TZ01] (0 C)
    [ 1.144677] ACPI: Deprecated procfs I/F for battery is loaded, please retry with CONFIG_ACPI_PROCFS_POWER cleared
    [ 1.144693] ACPI: Battery Slot [BAT0] (battery present)
    [ 1.206926] ACPI: Battery Slot [BAT0] (battery present)
    [ 13.176993] acpi device:1a: registered as cooling_device4
    [ 13.179931] acpi device:1b: registered as cooling_device5
    [ 13.180221] ACPI: Video Device [VGA] (multi-head: yes rom: no post: no)
    [ 13.219589] acpi device:20: registered as cooling_device6
    [ 13.220851] ACPI: Video Device [GFX0] (multi-head: yes rom: no post: no)
    [ 1649.915134] i8042 aux 00:08: wake-up capability disabled by ACPI
    [ 1649.915147] i8042 kbd 00:07: wake-up capability enabled by ACPI
    [ 1650.931028] r8169 0000:03:00.0: wake-up capability enabled by ACPI
    [ 1650.954743] ehci_hcd 0000:00:1d.0: wake-up capability enabled by ACPI
    [ 1650.978733] ehci_hcd 0000:00:1a.0: wake-up capability enabled by ACPI
    [ 1651.010950] ACPI: Preparing to enter system sleep state S3
    [ 1652.251505] ACPI: Low-level resume complete
    [ 1652.360953] ACPI: Waking up from system sleep state S3
    [ 1652.427581] ehci_hcd 0000:00:1a.0: wake-up capability disabled by ACPI
    [ 1652.435579] ehci_hcd 0000:00:1d.0: wake-up capability disabled by ACPI
    [ 1652.437887] r8169 0000:03:00.0: wake-up capability disabled by ACPI
    [ 1652.506660] i8042 kbd 00:07: wake-up capability disabled by ACPI
    [ 1661.238234] ACPI Error: No handler for Region [CMS0] (ffff8801d5035558) [SystemCMOS] (20110623/evregion-373)
    [ 1661.238253] ACPI Error: Region SystemCMOS (ID=5) has no handler (20110623/exfldio-292)
    [ 1661.238268] ACPI Error: Method parse/execution failed [\_SB_.PCI0.LPCB.EC0_._Q33] (Node ffff8801d5054de8), AE_NOT_EXIST (20110623/psparse-536)
    [ 3151.784288] i8042 aux 00:08: wake-up capability disabled by ACPI
    [ 3151.784301] i8042 kbd 00:07: wake-up capability enabled by ACPI
    [ 3152.797676] r8169 0000:03:00.0: wake-up capability enabled by ACPI
    [ 3152.821379] ehci_hcd 0000:00:1d.0: wake-up capability enabled by ACPI
    [ 3152.845367] ehci_hcd 0000:00:1a.0: wake-up capability enabled by ACPI
    [ 3152.877600] ACPI: Preparing to enter system sleep state S3
    [ 3154.313213] ACPI: Low-level resume complete
    [ 3154.422297] ACPI: Waking up from system sleep state S3
    [ 3154.489692] ehci_hcd 0000:00:1a.0: wake-up capability disabled by ACPI
    [ 3154.497667] ehci_hcd 0000:00:1d.0: wake-up capability disabled by ACPI
    [ 3154.505947] r8169 0000:03:00.0: wake-up capability disabled by ACPI
    [ 3154.568985] i8042 kbd 00:07: wake-up capability disabled by ACPI
    [ 3162.745149] ACPI Error: No handler for Region [CMS0] (ffff8801d5035558) [SystemCMOS] (20110623/evregion-373)
    [ 3162.745168] ACPI Error: Region SystemCMOS (ID=5) has no handler (20110623/exfldio-292)
    [ 3162.745183] ACPI Error: Method parse/execution failed [\_SB_.PCI0.LPCB.EC0_._Q33] (Node ffff8801d5054de8), AE_NOT_EXIST (20110623/psparse-536)
    [ 6775.723501] ACPI Error: No handler for Region [CMS0] (ffff8801d5035558) [SystemCMOS] (20110623/evregion-373)
    [ 6775.723519] ACPI Error: Region SystemCMOS (ID=5) has no handler (20110623/exfldio-292)
    [ 6775.723535] ACPI Error: Method parse/execution failed [\_SB_.PCI0.LPCB.EC0_._Q33] (Node ffff8801d5054de8), AE_NOT_EXIST (20110623/psparse-536)
    [10388.004760] ACPI Error: No handler for Region [CMS0] (ffff8801d5035558) [SystemCMOS] (20110623/evregion-373)
    [10388.004778] ACPI Error: Region SystemCMOS (ID=5) has no handler (20110623/exfldio-292)
    [10388.004801] ACPI Error: Method parse/execution failed [\_SB_.PCI0.LPCB.EC0_._Q33] (Node ffff8801d5054de8), AE_NOT_EXIST (20110623/psparse-536)
    [10723.591930] i8042 aux 00:08: wake-up capability disabled by ACPI
    [10723.591942] i8042 kbd 00:07: wake-up capability enabled by ACPI
    [10724.607624] r8169 0000:03:00.0: wake-up capability enabled by ACPI
    [10724.631349] ehci_hcd 0000:00:1d.0: wake-up capability enabled by ACPI
    [10724.655339] ehci_hcd 0000:00:1a.0: wake-up capability enabled by ACPI
    [10724.687572] ACPI: Preparing to enter system sleep state S3
    [10726.123176] ACPI: Low-level resume complete
    [10726.232181] ACPI: Waking up from system sleep state S3
    [10726.303653] ehci_hcd 0000:00:1a.0: wake-up capability disabled by ACPI
    [10726.311648] ehci_hcd 0000:00:1d.0: wake-up capability disabled by ACPI
    [10726.315734] r8169 0000:03:00.0: wake-up capability disabled by ACPI
    [10726.379287] i8042 kbd 00:07: wake-up capability disabled by ACPI
    [10734.393523] ACPI Error: No handler for Region [CMS0] (ffff8801d5035558) [SystemCMOS] (20110623/evregion-373)
    [10734.393542] ACPI Error: Region SystemCMOS (ID=5) has no handler (20110623/exfldio-292)
    [10734.393557] ACPI Error: Method parse/execution failed [\_SB_.PCI0.LPCB.EC0_._Q33] (Node ffff8801d5054de8), AE_NOT_EXIST (20110623/psparse-536)

    The continuous sound from the fan is very annoying; any help would be highly appreciated.
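    Not an answer by itself, but one way to narrow this down is to ask the kernel's thermal layer what it sees. The following Python sketch is illustrative only; it assumes the standard sysfs thermal interface (/sys/class/thermal) present on most Ubuntu 12.04 installs, and it prints every thermal zone's temperature and every cooling device's current and maximum state:

    import glob

    def read(path):
        try:
            with open(path) as f:
                return f.read().strip()
        except (IOError, OSError):
            return "n/a"

    # Thermal zones report temperature in millidegrees Celsius.
    for zone in sorted(glob.glob("/sys/class/thermal/thermal_zone*")):
        raw = read(zone + "/temp")
        temp = "{0} C".format(int(raw) / 1000.0) if raw.lstrip("-").isdigit() else raw
        print("{0} [{1}]: {2}".format(zone, read(zone + "/type"), temp))

    # Cooling devices (fans, processor throttling, etc.): a fan whose cur_state
    # is pinned at max_state is being driven flat out.
    for dev in sorted(glob.glob("/sys/class/thermal/cooling_device*")):
        print("{0} [{1}]: state {2} of {3}".format(
            dev, read(dev + "/type"),
            read(dev + "/cur_state"), read(dev + "/max_state")))

    If the zones read cool while a fan-type cooling device sits at its maximum state, the fan is most likely being driven by the embedded controller firmware rather than by Linux, and a BIOS update is the usual first suggestion.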

    Read the article

  • RSS feeds in Orchard

    - by Bertrand Le Roy
    When we added RSS to Orchard, we wanted to make it easy for any module to expose any contents as a feed. We also wanted the rendering of the feed to be handled by Orchard in order to minimize the amount of work required from the module developer. A typical example of such feed exposition is of course blog feeds.

    We have an IFeedManager interface for which you can get the built-in implementation through dependency injection. Look at the BlogController constructor for an example:

    public BlogController(
        IOrchardServices services,
        IBlogService blogService,
        IBlogSlugConstraint blogSlugConstraint,
        IFeedManager feedManager,
        RouteCollection routeCollection) {

    If you look a little further in that same controller, in the Item action, you'll see a call to the Register method of the feed manager:

    _feedManager.Register(blog);

    This in reality is a call into an extension method that is specialized for blogs, but we could have made the two calls to the actual generic Register directly in the action instead; that is just an implementation detail:

    feedManager.Register(blog.Name, "rss",
        new RouteValueDictionary { { "containerid", blog.Id } });
    feedManager.Register(blog.Name + " - Comments", "rss",
        new RouteValueDictionary { { "commentedoncontainer", blog.Id } });

    What those two effective calls are doing is to register two feeds: one for the blog itself and one for the comments on the blog. For each call, the name of the feed is provided, then the type of feed ("rss") and some values to be injected into the generic RSS route that will be used later to route the feed to the right providers. This is all you have to do to expose a new feed. If you're only interested in exposing feeds, you can stop right there. If on the other hand you want to know what happens under the hood after that, carry on.

    What happens after that is that the feed manager will take care of formatting the link tag for the feed (see FeedManager.GetRegisteredLinks). The GetRegisteredLinks method itself will be called from a specialized filter, FeedFilter. FeedFilter is an MVC filter, and the event we're interested in hooking into is OnResultExecuting, which happens after the controller action has returned an ActionResult and just before MVC executes that action result. In other words, our feed registration has already been called but the view is not yet rendered. Here's the code for OnResultExecuting:

    model.Zones.AddAction("head:after",
        html => html.ViewContext.Writer.Write(
            _feedManager.GetRegisteredLinks(html)));

    This is another piece of code whose execution is deferred. It is saying that whenever the time comes to render the "head" zone, this code should be called right after. The code itself renders the link tags. As a result of all that, here's what can be found in an Orchard blog's head section:

    <link rel="alternate" type="application/rss+xml"
        title="Tales from the Evil Empire"
        href="/rss?containerid=5" />
    <link rel="alternate" type="application/rss+xml"
        title="Tales from the Evil Empire - Comments"
        href="/rss?commentedoncontainer=5" />

    The generic action that these two feeds point to is Index on FeedController. That controller has three important dependencies: an IFeedBuilderProvider, an IFeedQueryProvider, and an IFeedItemProvider. Different implementations of these interfaces can provide different formats of feeds, such as RSS and Atom. The Match method enables each of the competing providers to report a priority for itself based on arbitrary criteria that can be found on the FeedContext.
    This means that a provider can be selected based not only on the desired format, but also on the nature of the objects being exposed as a feed, or on something even more arbitrary such as the destination device (you could imagine, for example, giving short text-only excerpts of posts on mobile devices, and full HTML on the desktop). The key here is extensibility and dynamic competition and collaboration between unknown and loosely coupled parts. You'll find this pattern pretty much everywhere in the Orchard architecture.

    The RssFeedBuilder implementation of IFeedBuilderProvider is also a regular controller with a Process action that builds a RssResult, which is itself a thin ActionResult wrapper around an XDocument.

    Let's get back to the FeedController's Index action. After having called into each known feed builder to get its priority for the currently requested feed, it will select the one with the highest priority. The next thing it needs to do is to actually fetch the data for the feed. This again is a collaborative effort from a priori unknown providers, the implementations of IFeedQueryProvider. There are several implementations by default in Orchard, the choice of which is again made through a Match method. ContainerFeedQuery, for example, chimes in when a "containerid" parameter is found in the context (see the URL in the link tag above):

    public FeedQueryMatch Match(FeedContext context) {
        var containerIdValue = context.ValueProvider.GetValue("containerid");
        if (containerIdValue == null) return null;
        return new FeedQueryMatch { FeedQuery = this, Priority = -5 };
    }

    The actual work is done in the Execute method, which finds the right container content item in the Orchard database and adds feed elements for each of the items it contains. In other words, the feed query provider knows how to retrieve the list of content items to add to the feed.

    The last step is to translate each of the content items into feed entries, which is done by implementations of IFeedItemBuilder. There is no Match method this time. Instead, all providers are called with the collection of items (or more accurately with the FeedContext, but this contains the list of items, which is what's relevant in most cases). Each provider can then choose to pick those items that it knows how to treat and transform them into the requested format. This enables the construction of heterogeneous feeds that expose content items of various types in a single feed. That will be extremely important when you want to expose a single feed for your whole site.

    So here are feeds in Orchard in a nutshell. The main point is that there is a fair number of components involved, with some complexity in the implementation in order to allow for extreme flexibility, but the part that you use to expose a new feed is extremely simple and light: declare that you want your content exposed as a feed and you're done. There are cases where you'll have to dive in and provide new implementations for some or all of the interfaces involved, but that requirement will only arise as needed. For example, you might need to create a new feed item builder to include your custom content type, but that effort will be extremely focused on the specialized task at hand. The rest of the system won't need to change. So what do you think?
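    To recap the selection mechanics in compact form, here is a minimal sketch in Python of the match-and-pick-highest-priority pattern described above. It is illustrative only; Orchard's real implementations are the C# classes shown earlier, and the names below are made up:

    class ContainerFeedQuery(object):
        # Volunteers only when a "containerid" value is present in the context.
        def match(self, context):
            return -5 if "containerid" in context else None

    class CommentsFeedQuery(object):
        def match(self, context):
            return -5 if "commentedoncontainer" in context else None

    def select_provider(providers, context):
        # Every provider is offered the context; each returns a priority or None.
        bids = [(p.match(context), p) for p in providers]
        bids = [(priority, p) for priority, p in bids if priority is not None]
        if not bids:
            raise LookupError("no feed query provider matched the request")
        # The provider that reported the highest priority wins.
        return max(bids, key=lambda bid: bid[0])[1]

    providers = [ContainerFeedQuery(), CommentsFeedQuery()]
    chosen = select_provider(providers, {"containerid": 5})
    print(type(chosen).__name__)

    The same pattern repeats for the feed builders; the item builders differ only in that all of them are invoked rather than just the best match.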

    Read the article

  • Woes of a Junior Developer - is it possible to not be cut out for programming?

    - by user575158
    (Let me start off by asking: please be gentle. I know this is subjective, but it's meant to spark discussion and provide information for others. If needed it can be converted to community wiki.)

    I was recently hired as a junior developer at a company I really like. I started out in the field doing QA and transitioned into more and more development work, which is what I really want to end up doing. I enjoy it, but more and more I am questioning whether I am really any good at it or not. Part of this is still growing into the junior developer role, I know, but how much? What should junior developers expect, and what should they be doing and not doing? What can I do to improve and show my company I am serious about this opportunity? I hate that I am costing them time while I get up to speed. I've been told by others that companies make investments in junior devs and don't expect them to pay off for a while, but how much of this is true? There has to be a point where it's apparent whether the investment will pay off or not.

    So far I've been trying to ask as many questions as I can, but if you've been obsessing over a simple problem for some time and the others know it, there comes a point when it's pretty embarrassing to have to get help after struggling so long. I've also tried to be as open to suggestion as possible and to work with others to refactor my code, but sometimes this can be hard when it clashes with various team members' personal opinions (being told by someone to write it one way, and then having someone else make you rewrite it).

    I often get over-stressed and judge myself too harshly, but I just don't want to struggle for the rest of my life trying to get things to work if I simply don't have the talent. In your experience, is programming something that almost everyone can learn, or something that some people just don't get? Do others feel this way, or did you feel that way when starting out? It scares me that I have no other job skills should I turn out to lack the skills necessary to code well.

    Read the article

< Previous Page | 265 266 267 268 269 270 271 272 273 274 275 276  | Next Page >