Search Results

Search found 47383 results on 1896 pages for 'version control migration'.


  • Unable to stop chrome.exe *32

    - by chipperyman573
    I was installing RoboForm today and could not stop the process chrome.exe *32, even after uninstalling Chrome. This is the error I got (screenshot not included here). I used LockHunter, which said the process was located in %appdata%\Local\Google\Chrome, but it was unable to unlock, delete or rename it. When I use Explorer to delete or rename that folder, it says it's being used by Chrome. Even after restarting my computer it still does this. I've tried using the built-in Chrome task manager (Wrench > View Background Pages) and I can't seem to find a process there that has the same amount of memory. I have run many, many virus scans with: Microsoft Security Essentials, AVG (free version), Malwarebytes (Pro version), Norton 360 (Pro version), McAfee (Pro version), Avira (free version), and Avast! Antivirus (free version). None of them returned any viruses. Chrome info:
        Google Chrome 23.0.1271.95 (Official Build 169798)
        OS: Windows 7 Professional
        WebKit: 537.11 (@135931)
        JavaScript: V8 3.13.7.5
        Flash: 11.5.31.2
        User Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.95 Safari/537.11
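
    A hedged aside, not from the original post: from an elevated command prompt, the stock taskkill tool can force-kill the whole chrome.exe process tree, and Sysinternals handle.exe (a separate download; its use here is an assumption) can show what still holds the profile folder open:

        REM kill every chrome.exe process and its children
        taskkill /F /T /IM chrome.exe
        REM list open handles under the local Chrome profile folder (requires Sysinternals handle.exe)
        handle.exe -a "%LOCALAPPDATA%\Google\Chrome"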

    Read the article

  • How can I use the Homebrew Python with Homebrew MacVim on Mountain Lion?

    - by Stephen Jennings
    I originally asked and answered this question: How can I use the Homebrew Python version with Homebrew MacVim? These instructions worked on Snow Leopard using Xcode 4.0.1 and associated developer tools. However, they no longer seem to work on Mountain Lion with Xcode 4.4.1. My goal is to leave the system's version of Python completely untouched, and to only install PyPI packages into Homebrew's site-packages directory. I want to use the vim_bridge package in MacVim, so I need to compile MacVim against the Homebrew version of Python. I've edited the MacVim formula to add these to the arguments:

        --enable-pythoninterp=dynamic
        --with-python-config-dir=/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/Current/lib/python2.7/config

    Then I install with the command:

        brew install macvim --override-system-vim --custom-icons --with-cscope --with-lua

    However, it still seems to be somehow using Python 2.7.2 from the system. This seems strange to me because it also seems to be using the correct executable.

        :python print(sys.version)
        2.7.2 (default, Jun 20 2012, 16:23:33) [GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)]
        :python print(sys.executable)
        /usr/local/bin/python

        $ /usr/local/bin/python --version
        Python 2.7.3
        $ /usr/local/bin/python -c "import sys; print(sys.version)"
        2.7.3 (default, Aug 12 2012, 21:17:22) [GCC 4.2.1 Compatible Apple Clang 4.0 ((tags/Apple/clang-421.0.60))]
        $ readlink /usr/local/lib/python2.7/config
        /usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/Current/lib/python2.7/config

    I've removed everything in /usr/local and reinstalled Homebrew by running these commands:

        $ ruby <(curl -fsSkL raw.github.com/mxcl/homebrew/go)
        $ brew install git mercurial python ruby
        $ brew install macvim    (nope, still broken)
        $ brew remove macvim
        $ ln -s /usr/local/Cellar/python/..../python2.7/config /usr/local/lib/python2.7/config
        $ brew install macvim
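
    One extra check worth running inside MacVim (a diagnostic aside, not from the original post): sys.executable only reports a path on disk, while sys.prefix and sys.path reveal which Python installation the embedded interpreter was actually loaded from; a /System/Library/Frameworks/... prefix would mean it is still picking up the system framework.

        :python import sys; print(sys.prefix)
        :python print(sys.path)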

    Read the article

  • Does Windows 8 RTM Support VB6 (SP6) Runtime files? If so, which ones?

    - by user51047
    Basically, I'm trying to find out which of the following files come packaged with the Windows 8 RTM (that is, the final version). Just to be clear, we're not asking whether any of the runtime files listed below are or were included with any of the previous versions (Beta, CTP, RS etc.) or releases of Windows 8 - we are only interested in this compatibility question as far as Windows 8 RTM (final version) is concerned. In addition, if possible, we would also like to know which of the files below (if any) come shipped and registered with the Windows 8 RT (on ARM) version. As far as the ARM version is concerned, you're welcome to base your answer on the latest version of Windows 8 RT (on ARM) available at the date and time your answer is posted. (This will also serve to future-proof this question as additional releases or versions of Windows 8 and Windows 8 RT on ARM come out.) Here is the list of files (which are basically the VB6 SP6 runtime files):

        File name      Version       Size
        Asycfilt.dll   2.40.4275.1   144 KB (147,728 bytes)
        Comcat.dll     4.71.1460.1   21.7 KB (22,288 bytes)
        Msvbvm60.dll   6.0.97.82     1.32 MB (1,386,496 bytes)
        Oleaut32.dll   2.40.4275.1   584 KB (598,288 bytes)
        Olepro32.dll   5.0.4275.1    160 KB (164,112 bytes)
        Stdole2.tlb    2.40.4275.1   17.5 KB (17,920 bytes)

    Of course, the most important file in there is MSVBVM60.DLL, so if you cannot provide details for all files relating to both Windows releases, then basing the answer on as many of the files as possible would also be useful. Thank you for reading and for your anticipated assistance in putting this question/answer on record.
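
    A hedged aside for anyone answering from a Windows 8 RTM machine: a quick PowerShell sketch to report which of these files are present and at what version (the SysWOW64 path assumes 64-bit Windows; on a 32-bit install the same files would live under System32):

        'asycfilt.dll','comcat.dll','msvbvm60.dll','oleaut32.dll','olepro32.dll','stdole2.tlb' |
            ForEach-Object { Get-Item (Join-Path $env:windir "SysWOW64\$_") -ErrorAction SilentlyContinue } |
            Select-Object Name, Length, @{ Name = 'FileVersion'; Expression = { $_.VersionInfo.FileVersion } }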

    Read the article

  • How to stop Apache from crashing with PHP+Curl on an SSL request?

    - by Jason Cohen
    My Apache process segfaults whenever I call curl_exec() from PHP with an "https://" URL. If I use http instead of https as the URL transport, it works perfectly, so I know curl and the other curl options are correct. I can use curl from the command-line on that server using the https version of the URL and it works perfectly, so I know the remote server is responding correctly, the cert isn't expired, etc.

    My server is:
        Linux 2.6.32-21-server #32-Ubuntu SMP Fri Apr 16 09:17:34 UTC 2010 x86_64 GNU/Linux
    My Apache version is:
        Server version: Apache/2.2.14 (Ubuntu)
        Server built: Apr 13 2010 20:21:26
    My PHP version is:
        PHP 5.3.2-1ubuntu4.2 with Suhosin-Patch (cli) (built: May 13 2010 20:03:45)
        Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies
    My PHP curl module info is:
        cURL support => enabled
        cURL Information => 7.19.7
        Age => 3
        Features
          AsynchDNS => No
          Debug => No
          GSS-Negotiate => Yes
          IDN => Yes
          IPv6 => Yes
          Largefile => Yes
          NTLM => Yes
          SPNEGO => No
          SSL => Yes
          SSPI => No
          krb4 => No
          libz => Yes
          CharConv => No
        Protocols => tftp, ftp, telnet, dict, ldap, ldaps, http, file, https, ftps
        Host => x86_64-pc-linux-gnu
        SSL Version => OpenSSL/0.9.8k
        ZLib Version => 1.2.3.3
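
    As a hedged aside (not from the original post), a minimal script run through Apache rather than the CLI can confirm the crash happens inside curl_exec() itself, and the curl_version() output under Apache is worth comparing with php -i from the command line: a segfault only on https, with the command-line curl working, is often a symptom of two different SSL libraries being pulled into the Apache process by different extensions. The URL below is a placeholder.

        <?php
        // minimal https repro (placeholder URL); if this segfaults, the crash is in curl_exec itself
        $ch = curl_init('https://www.example.com/');
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_VERBOSE, true);          // handshake details go to Apache's error log
        $out = curl_exec($ch);
        if ($out === false) {
            error_log('curl error: ' . curl_error($ch));
        }
        curl_close($ch);
        // compare the SSL library reported here with what the CLI reports
        var_dump(curl_version());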

    Read the article

  • Unable to use OpenGL or install nVidia driver on openSUSE 12.2

    - by djechelon
    I have an ASUS N76VZ laptop with openSUSE 12.2 and a GeForce GT 650M card. I found that KDE doesn't allow me to use OpenGL rendering. I tried to install nVidia's driver from the script, but once it writes the xorg.conf file I'm unable to boot the desktop. I have the following errors in the system log:

        Oct 30 08:28:13 RAYNOR kdm[2727]: X server died during startup
        Oct 30 08:28:13 RAYNOR kdm[2727]: X server for display :0 cannot be started, session disabled

    I noticed that the /etc/X11/xorg.conf backup file was empty, so I renamed the new xorg.conf and left none: the desktop booted!!! How can I fix OpenGL rendering, with or without driver installation?

    [Update]: Xorg.0.log says

        [ 1434.207] compiled for 4.0.2, module version = 1.0.0
        [ 1434.207] Module class: X.Org Server Extension
        [ 1434.207] (II) NVIDIA GLX Module 304.60 Sun Oct 14 20:44:54 PDT 2012
        [ 1434.207] (II) Loading extension GLX
        [ 1434.207] (II) LoadModule: "record"
        [ 1434.207] (II) Loading /usr/lib64/xorg/modules/extensions/librecord.so
        [ 1434.207] (II) Module record: vendor="X.Org Foundation"
        [ 1434.207] compiled for 1.12.3, module version = 1.13.0
        [ 1434.207] Module class: X.Org Server Extension
        [ 1434.207] ABI class: X.Org Server Extension, version 6.0
        [ 1434.207] (II) Loading extension RECORD
        [ 1434.207] (II) LoadModule: "dri"
        [ 1434.207] (II) Loading /usr/lib64/xorg/modules/extensions/libdri.so
        [ 1434.207] (II) Module dri: vendor="X.Org Foundation"
        [ 1434.207] compiled for 1.12.3, module version = 1.0.0
        [ 1434.207] ABI class: X.Org Server Extension, version 6.0
        [ 1434.207] (II) Loading extension XFree86-DRI
        [ 1434.207] (II) LoadModule: "nvidia"
        [ 1434.208] (II) Loading /usr/lib64/xorg/modules/drivers/nvidia_drv.so
        [ 1434.208] (II) Module nvidia: vendor="NVIDIA Corporation"
        [ 1434.208] compiled for 4.0.2, module version = 1.0.0
        [ 1434.208] Module class: X.Org Video Driver
        [ 1434.208] (II) NVIDIA dlloader X Driver 304.60 Sun Oct 14 20:24:42 PDT 2012
        [ 1434.208] (II) NVIDIA Unified Driver for all Supported NVIDIA GPUs
        [ 1434.208] (++) using VT number 8
        [ 1434.320] (EE) No devices detected.
        [ 1434.320] Fatal server error:
        [ 1434.320] no screens found
        [ 1434.320] Please consult the The X.Org Foundation support at http://wiki.x.org for help.
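
    A hedged aside, not from the original post: the N76VZ normally pairs the GT 650M with an integrated Intel GPU (Optimus), and the "(EE) No devices detected" line is consistent with the NVIDIA driver having no directly wired output it can drive on its own. A quick check with stock tools:

        # two controllers here (Intel + NVIDIA) usually means an Optimus laptop
        lspci | grep -iE 'vga|3d'

    If both an Intel and an NVIDIA controller show up, the plain nvidia driver alone generally cannot drive the panel on that generation of hardware; Bumblebee was the usual route on openSUSE 12.2 at the time.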

    Read the article

  • Conditionally permitting HTTP-only requests to Tomcat?

    - by Mike
    I have 2 versions of a system:
        Version 1: a Tomcat web server.
        Version 2: an nginx reverse proxy sitting in front of a Tomcat web server. In version 2, nginx only ever talks to Tomcat over HTTP.
    A user could configure the system so that only HTTPS requests are allowed. If the user does this in version 1, the XML configuration files for Tomcat take care of it; in version 2, nginx takes care of it. The problem is this: I cannot force a user to update their Tomcat XML config files when they upgrade from version 1 to version 2 (it will be recommended that they do so) because the upgrade is done as part of a larger process. This means that if they upgrade and don't update the Tomcat config, an HTTPS request will arrive at nginx, which will proxy it over HTTP to Tomcat, and Tomcat will reject the request because it is not HTTPS. So I can't force an update to the Tomcat XML, and I have to use HTTP between nginx and Tomcat. Any ideas? Is there some way I can affect how Tomcat reads its config in version 2 so that it ignores the HTTPS-only section?
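
    A hedged sketch of the usual workaround for this topology (not from the original post, and it does still require a one-time Tomcat change, so it may not fit the constraint above): terminate TLS at nginx, pass the original scheme along, and let Tomcat trust that header rather than insisting the connector itself be HTTPS. Host, port and placement are placeholders.

        # nginx, inside the HTTPS server block
        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        <!-- Tomcat server.xml, inside <Host>: treat proxied requests marked https as secure -->
        <Valve className="org.apache.catalina.valves.RemoteIpValve"
               protocolHeader="X-Forwarded-Proto" />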

    Read the article

  • What is auto-mounting my media volume?

    - by user285277
    Something is repeatedly mounting my "media" share, doing something with it, then quietly un-mounting it with no notifications at the user level. from the little I can gleaned from the console messages below, I thought I'd managed to stop it, if not understand it last night when I followed instructions for deleting all traces of the Google Update Daemon. I've not been using any Google apps whatsoever, so I was surprised to see that in Console. What's more surprising, and perhaps a little distressing, is that the same thing occurred this evening, when the Google Daemon is long gone. I don't have that log because I can't recall precisely what time it occurred. I'll do a search and provide it if you wish, though. In the meantime, any help with this would be extremely appreciated. I've asked over at Apple Discussions but I think it might be a little deeper than those manning the boards this weekend are comfortable with. It's certainly beyond my meager skills. With apologies in advance if this is more lines thank you need. Please note that I've transformed the Google links a little because the forum here requires more reputation points before one can post more than two links. 12/27/13 10:47:31.000 PM kernel[0]: memorystatus_thread: idle exiting pid 53629 [distnoted] 12/27/13 10:48:10.433 PM com.apple.Preview.TrustedBookmarksService[53640]: Failed to resolve bookmark data at index: 0; not stale; error: The file doesn’t exist. 12/27/13 10:48:10.434 PM com.apple.Preview.TrustedBookmarksService[53640]: Failed to resolve bookmark data at index: 1; not stale; error: The file doesn’t exist. 12/27/13 10:48:10.950 PM com.apple.SecurityServer[17]: Session 103257 created 12/27/13 10:48:34.328 PM com.apple.Preview.TrustedBookmarksService[53640]: Failed to resolve bookmark data at index: 2; not stale; error: The file doesn’t exist. 12/27/13 10:48:34.000 PM kernel[0]: AFP_VFS afpfs_mount: /Volumes/Media Archive-1, pid 53641 12/27/13 10:48:34.000 PM kernel[0]: AFP_VFS afpfs_mount : succeeded on volume 0xffffff80d6355008 /Volumes/Media Archive-1 (error = 0, retval = 0) 12/27/13 10:49:32.000 PM kernel[0]: wlEvent: en0 en0 Link DOWN virtIf = 0 12/27/13 10:49:32.000 PM kernel[0]: AirPort: Link Down on en0. Reason 8 (Disassociated because station leaving). 12/27/13 10:49:32.000 PM kernel[0]: en0::IO80211Interface::postMessage bssid changed 12/27/13 10:49:33.681 PM configd[16]: network changed: v4(en0-:10.0.1.12) DNS- Proxy- SMB 12/27/13 10:49:33.697 PM configd[16]: network changed: DNS* Proxy 12/27/13 10:49:35.475 PM KernelEventAgent[57]: tid 00000000 received event(s) VQ_NOTRESP (1) 12/27/13 10:49:35.000 PM kernel[0]: ASP_TCP Disconnect: triggering reconnect by bumping reconnTrigger from curr value 0 on so 0xffffff802176b4a0 12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect started /Volumes/Media Archive-1 prevTrigger 0 currTrigger 1 12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: doing reconnect on /Volumes/Media Archive-1 12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: posting to KEA EINPROGRESS for /Volumes/Media Archive-1 12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: Max reconnect time: 600 secs, Connect timeout: 15 secs for /Volumes/Media Archive-1 12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect to the server /Volumes/Media Archive-1 12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect on /Volumes/Media Archive-1 failed 65. 
12/27/13 10:49:35.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: sleep for 1 seconds and then try again 12/27/13 10:49:35.479 PM KernelEventAgent[57]: tid 00000000 type 'afpfs', mounted on '/Volumes/Media Archive-1', from '//Me@Capsule._afpovertcp._tcp.local/Media%20Archive', not responding 12/27/13 10:49:35.487 PM KernelEventAgent[57]: tid 00000000 found 1 filesystem(s) with problem(s) 12/27/13 10:49:36.692 PM com.bourgeoisbits.cloak.agent[14503]: NetworkProfile: (null), (null), (null) (Connected: NO, Airport: NO, Open: NO) [trusted] 12/27/13 10:49:36.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect to the server /Volumes/Media Archive-1 12/27/13 10:49:36.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect on /Volumes/Media Archive-1 failed 65. 12/27/13 10:49:36.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: sleep for 2 seconds and then try again 12/27/13 10:49:38.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect to the server /Volumes/Media Archive-1 12/27/13 10:49:38.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect on /Volumes/Media Archive-1 failed 65. 12/27/13 10:49:38.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: sleep for 4 seconds and then try again 12/27/13 10:49:41.000 PM kernel[0]: CODE SIGNING: cs_invalid_page(0x1000): p=53662[GoogleSoftwareUp] clearing CS_VALID 12/27/13 10:49:42.102 PM GoogleSoftwareUpdateDaemon[53663]: -[KeystoneDaemon logServiceState] GoogleSoftwareUpdate daemon (1.1.0.3659) vending: com.google.Keystone.Daemon.UpdateEngine: 2 connection(s) com.google.Keystone.Daemon.Administration: 0 connection(s) 12/27/13 10:49:42.113 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateEngine updateProductID:] KSUpdateEngine updating product ID: "com.google.Keystone" 12/27/13 10:49:42.116 PM GoogleSoftwareUpdateDaemon[53663]: -[KSCheckAction performAction] KSCheckAction checking 1 ticket(s). 
12/27/13 10:49:42.121 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateCheckAction performAction] KSUpdateCheckAction starting update check for ticket(s): {( <KSTicket:0x531870 productID=com.google.Keystone version=1.1.0.3659 xc=<KSPathExistenceChecker:0x5302d0 path=/Library/Google/GoogleSoftwareUpdate/GoogleSoftwareUpdate.bundle/> serverType=Omaha url=htt[PeeEs]://tools.google.com/service/update2 creationDate=2012-08-12 14:47:10 > )} Using server: <KSOmahaServer:0x534340 engine=<KSDaemonUpdateEngine:0x52e530> params={ EngineVersion = "1.1.0.3659"; ActivesInfo = { "com.google.talkplugin" = { LastRollCallPingDate = 2013-10-06 07:00:00 +0000; }; "com.google.Keystone" = { LastRollCallPingDate = 2013-10-06 07:00:00 +0000; LastActivePingDate = 2013-10-06 07:00:00 +0000; LastActiveDate = 2013-12-28 03:49:42 +0000; }; "com.google.picasa" = { LastActiveDate = 2012-08-29 10:15:42 +0000; }; }; UserInitiated = 0; IsSystem = 1; OmahaOSVersion = "10.8.5_i486"; Identity = KeystoneDaemon; AllowedSubdomains = ( ".omaha.sandbox.google.com", ".tools.google.com", ".www.google.com", ".corp.google.com" ); } > 12/27/13 10:49:42.130 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateCheckAction performAction] KSUpdateCheckAction running KSServerUpdateRequest: <KSOmahaServerUpdateRequest:0x1a31a90 server=<KSOmahaServer:0x534340> url="htt[PeeEs]://tools.google.com/service/update2" runningFetchers=0 tickets=1 activeTickets=1 rollCallTickets=1 body= <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <o:gupdate xmlns:o="htt[Pee]://www.google.com/update2/request" protocol="2.0" version="KeystoneDaemon-1.1.0.3659" ismachine="1"> <o:os platform="mac" version="MacOSX" sp="10.8.5_i486"></o:os> <o:app appid="com.google.Keystone" version="1.1.0.3659" lang="en-us" installage="502" brand="GGLG"> <o:ping r="82" a="82"></o:ping> <o:updatecheck></o:updatecheck> </o:app> </o:gupdate> > 12/27/13 10:49:42.291 PM GoogleSoftwareUpdateDaemon[53663]: -[KSOutOfProcessFetcher(PrivateMethods) helperDidTerminate:] The Internet connection appears to be offline. [NSURLErrorDomain:-1009] 12/27/13 10:49:42.291 PM GoogleSoftwareUpdateDaemon[53663]: -[KSServerUpdateRequest(PrivateMethods) fetcher:failedWithError:] KSServerUpdateRequest fetch failed. (productIDs: com.google.Keystone) [com.google.UpdateEngine.CoreErrorDomain:702 - 'htt[PeeEs]://tools.google.com/service/update2'] (The Internet connection appears to be offline. [NSURLErrorDomain:-1009]) 12/27/13 10:49:42.292 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateCheckAction(PrivateMethods) finishAction] KSUpdateCheckAction found updates: {( )} 12/27/13 10:49:42.295 PM GoogleSoftwareUpdateDaemon[53663]: -[KSPrefetchAction performAction] KSPrefetchAction no updates to prefetch. 12/27/13 10:49:42.295 PM GoogleSoftwareUpdateDaemon[53663]: -[KSMultiUpdateAction performAction] KSSilentUpdateAction had no updates to apply. 12/27/13 10:49:42.296 PM GoogleSoftwareUpdateDaemon[53663]: -[KSMultiUpdateAction performAction] KSPromptAction had no updates to apply. 12/27/13 10:49:42.299 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateEngine(PrivateMethods) updateFinish] KSUpdateEngine update processing complete. 12/27/13 10:49:42.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect to the server /Volumes/Media Archive-1 12/27/13 10:49:42.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect on /Volumes/Media Archive-1 failed 65. 
12/27/13 10:49:42.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: sleep for 8 seconds and then try again 12/27/13 10:49:43.152 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateEngine updateAllProducts] KSUpdateEngine updating all installed products. 12/27/13 10:49:43.153 PM GoogleSoftwareUpdateDaemon[53663]: -[KSCheckAction performAction] KSCheckAction checking 2 ticket(s). 12/27/13 10:49:43.158 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateCheckAction performAction] KSUpdateCheckAction starting update check for ticket(s): {( <KSTicket:0x18367a0 productID=com.google.Keystone version=1.1.0.3659 xc=<KSPathExistenceChecker:0x1837e10 path=/Library/Google/GoogleSoftwareUpdate/GoogleSoftwareUpdate.bundle/> serverType=Omaha url=htt[PeeEs]://tools.google.com/service/update2 creationDate=2012-08-12 14:47:10 >, <KSTicket:0x1834750 productID=com.google.talkplugin version=4.7.0.15362 xc=<KSPathExistenceChecker:0x1833890 path=/Library/Application Support/Google/GoogleTalkPlugin.app> serverType=Omaha url=htt[PeeEs]://tools.google.com/service/update2 creationDate=2012-08-12 14:47:10 > )} Using server: <KSOmahaServer:0x52e930 engine=<KSDaemonUpdateEngine:0x52e530> params={ EngineVersion = "1.1.0.3659"; ActivesInfo = { "com.google.talkplugin" = { LastRollCallPingDate = 2013-10-06 07:00:00 +0000; }; "com.google.Keystone" = { LastRollCallPingDate = 2013-10-06 07:00:00 +0000; LastActivePingDate = 2013-10-06 07:00:00 +0000; LastActiveDate = 2013-12-28 03:49:42 +0000; }; "com.google.picasa" = { LastActiveDate = 2012-08-29 10:15:42 +0000; }; }; UserInitiated = 0; IsSystem = 1; OmahaOSVersion = "10.8.5_i486"; Identity = KeystoneDaemon; AllowedSubdomains = ( ".omaha.sandbox.google.com", ".tools.google.com", ".www.google.com", ".corp.google.com" ); } > 12/27/13 10:49:43.159 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateCheckAction performAction] KSUpdateCheckAction running KSServerUpdateRequest: <KSOmahaServerUpdateRequest:0x53a210 server=<KSOmahaServer:0x52e930> url="htt[PeeEs]://tools.google.com/service/update2" runningFetchers=0 tickets=2 activeTickets=1 rollCallTickets=2 body= <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <o:gupdate xmlns:o="htt[Pee]://www.google.com/update2/request" protocol="2.0" version="KeystoneDaemon-1.1.0.3659" ismachine="1"> <o:os platform="mac" version="MacOSX" sp="10.8.5_i486"></o:os> <o:app appid="com.google.Keystone" version="1.1.0.3659" lang="en-us" installage="502" brand="GGLG"> <o:ping r="82" a="82"></o:ping> <o:updatecheck></o:updatecheck> </o:app> <o:app appid="com.google.talkplugin" version="4.7.0.15362" lang="en-us" installage="502" brand="GGLG"> <o:ping r="82"></o:ping> <o:updatecheck></o:updatecheck> </o:app> </o:gupdate> > 12/27/13 10:49:43.243 PM GoogleSoftwareUpdateDaemon[53663]: -[KSOutOfProcessFetcher(PrivateMethods) helperDidTerminate:] The Internet connection appears to be offline. [NSURLErrorDomain:-1009] 12/27/13 10:49:43.243 PM GoogleSoftwareUpdateDaemon[53663]: -[KSServerUpdateRequest(PrivateMethods) fetcher:failedWithError:] KSServerUpdateRequest fetch failed. (productIDs: com.google.Keystone, ... (2)) [com.google.UpdateEngine.CoreErrorDomain:702 - 'htt[PeeEs]://tools.google.com/service/update2'] (The Internet connection appears to be offline. [NSURLErrorDomain:-1009]) 12/27/13 10:49:43.244 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateCheckAction(PrivateMethods) finishAction] KSUpdateCheckAction found updates: {( )} 12/27/13 10:49:43.247 PM GoogleSoftwareUpdateDaemon[53663]: -[KSPrefetchAction performAction] KSPrefetchAction no updates to prefetch. 
12/27/13 10:49:43.248 PM GoogleSoftwareUpdateDaemon[53663]: -[KSMultiUpdateAction performAction] KSSilentUpdateAction had no updates to apply. 12/27/13 10:49:43.248 PM GoogleSoftwareUpdateDaemon[53663]: -[KSMultiUpdateAction performAction] KSPromptAction had no updates to apply. 12/27/13 10:49:43.250 PM GoogleSoftwareUpdateDaemon[53663]: -[KSUpdateEngine(PrivateMethods) updateFinish] KSUpdateEngine update processing complete. 12/27/13 10:49:45.570 PM GoogleSoftwareUpdateDaemon[53663]: -[KeystoneDaemon logServiceState] GoogleSoftwareUpdate daemon (1.1.0.3659) vending: com.google.Keystone.Daemon.UpdateEngine: 1 connection(s) com.google.Keystone.Daemon.Administration: 0 connection(s) 12/27/13 10:49:50.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect to the server /Volumes/Media Archive-1 12/27/13 10:49:50.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: connect on /Volumes/Media Archive-1 failed 65. 12/27/13 10:49:50.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: sleep for 10 seconds and then try again 12/27/13 10:49:53.828 PM KernelEventAgent[57]: tid 00000000 unmounting 1 filesystems 12/27/13 10:49:53.000 PM kernel[0]: AFP_VFS afpfs_unmount: /Volumes/Media Archive-1, flags 524288, pid 57 12/27/13 10:49:54.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: get the reconnect token 12/27/13 10:49:54.000 PM kernel[0]: AFP_VFS afpfs_DoReconnect: GetReconnectToken failed 32 /Volumes/Media Archive-1 12/27/13 10:49:54.000 PM kernel[0]: AFP_VFS afpfs_unmount : afpfs_DoReconnect sent signal for unmount to proceed 12/27/13 10:50:12.104 PM GoogleSoftwareUpdateDaemon[53663]: -[KeystoneDaemon main] GoogleSoftwareUpdateDaemon inactive, shutdown. 12/27/13 10:50:29.396 PM Dock[93157]: no information back from LS about running process
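
    A hedged aside, not from the original post: one low-tech way to catch the culprit on OS X is to watch filesystem activity against the share the next time it mounts. Both tools below ship with the OS; the volume name is simply taken from the log above.

        # watch filesystem calls that mention the share as it gets mounted and used
        sudo fs_usage -w -f filesys | grep -i "Media Archive"
        # or, while it is mounted, see which processes have files open on it
        sudo lsof | grep -i "Media Archive"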

    Read the article

  • Is there a way to run CUDA applications with the CUDA device being a secondary adapter?

    - by Slartibartfast
    I've been trying to run a CUDA program on a remote computer running Windows 7. The GPU is a GeForce GTX 480. One of the problems I've been facing is that the computer has two adapters:
        1) Standard VGA Adapter
        2) NVIDIA GeForce GTX 480
    Even though both show up in Device Manager, the desktop uses the Standard VGA Adapter; I'm assuming this is because it is the primary adapter. Device Manager also shows that the monitor is connected to the Standard VGA Adapter. In this scenario, if I try to run any CUDA application it fails to recognise a CUDA-capable device. Is it necessary for the NVIDIA adapter to be the primary one, or is there any way to use CUDA when the graphics card is a secondary adapter? I've seen a few posts on the NVIDIA forums about this before; one suggests using another low-cost NVIDIA card as the primary adapter, but that is currently not an option. I couldn't find any other solutions. Thanks.
    I tried running the deviceQuery test from the NVIDIA GPU Computing Samples. This was the result I obtained:
        CUDA Device Query (Runtime API) version (CUDART static linking)
        cudaGetDeviceCount FAILED CUDA Driver and Runtime version may be mismatched
        FAILED
    The driver version I'm using is 263.06 and the CUDA version is 3.2. I ran the same test on my desktop, which also has Windows 7 and a GeForce GTX 465 with CUDA toolkit 3.2. The driver version was the same and the test passed, although it had failed with an older driver.
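
    As a hedged aside (not from the original post), a stripped-down version of what deviceQuery does can help separate "no CUDA device visible" from a driver/runtime mismatch; the error text above often indicates the installed display driver is older than what the toolkit's runtime expects. Only standard CUDA runtime API calls are used, and the file name is arbitrary (compile with: nvcc devicecheck.cu -o devicecheck).

        #include <stdio.h>
        #include <cuda_runtime.h>

        int main(void) {
            int count = 0;
            // ask the runtime how many CUDA-capable devices it can see at all
            cudaError_t err = cudaGetDeviceCount(&count);
            if (err != cudaSuccess) {
                printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
                return 1;
            }
            printf("found %d CUDA device(s)\n", count);
            for (int i = 0; i < count; ++i) {
                cudaDeviceProp prop;
                cudaGetDeviceProperties(&prop, i);
                printf("device %d: %s, compute capability %d.%d\n", i, prop.name, prop.major, prop.minor);
            }
            return 0;
        }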

    Read the article

  • GUI interfaces to ATI card behave weirdly out of the box and after updates.

    - by jdk
    My Lenovo W500 came with an ATI Mobility FireGL V5700, and both the Catalyst Control Center software and the Vista display manager show four monitors. What's really annoying is the behaviour. My two active displays (laptop display + my external monitor) are always #3 and #4 respectively, which doesn't make sense. This is out of the box. Additionally, dragging & dropping is jumpy, and displays #1 and #2 (always inactive because they don't exist to the software) often prevent me from dragging #3 and #4 to the rightmost side. They also auto-snap to weird positions, and certain sensible positions, such as placing one directly on top of the other, are not possible. The exact same annoyances are present when using the Windows display manager too. In other words, the interface is crap and I'm looking for a fix that isn't wishing I had gone with nVidia instead. I've updated the drivers and Catalyst Control Centre, and have the latest Windows and AMD/ATI updates. Any thoughts?

    Graphics Software
        Driver Packaging Version: 8.563.2.1-090401a-079160C-Lenovo
        Provider: ATI Technologies Inc.
        2D Driver Version: 7.01.01.849
        2D Driver File Path: /REGISTRY/MACHINE/SYSTEM/ControlSet001/Control/Class/{4D36E968-E325-11CE-BFC1-08002BE10318}/0001
        Direct3D Version: 7.14.10.0630
        OpenGL Version: 6.14.10.8306
        Catalyst® Control Center Version: 2009.0401.1328.22301
    Graphics Hardware (Primary Adapter)
        Graphics Card Manufacturer: Powered by ATI
        Graphics Chipset: ATI Mobility FireGL V5700
        Device ID: 9591
        Vendor: 1002
        Subsystem ID: 2126
        Subsystem Vendor ID: 17AA
        Graphics Bus Capability: PCI Express 2.0
        Maximum Bus Setting: PCI Express 2.0 x16
        BIOS Version: 010.088.000.021
        BIOS Part Number: BK-ATI VER010.088.000.021.034663
        BIOS Date: 2009/09/30
        Memory Size: 512 MB
        Memory Type: DDR3
        Core Clock: 600 MHz
        Memory Clock: 700 MHz

    Read the article

  • Upgrading to Java 7u65 breaks my Deployment Rule Set for Oracle applications

    - by Don Atreides
    My company uses an older version of an Oracle application that requires Java 6u45. Naturally we want to be secure, so we use a Deployment Rule Set to specify 6u45 for that internal application and let other applications use 7u60. Now that we're ready to upgrade the Java 7 half to 7u67, the Oracle application breaks with "Deployment Rule Set required version 1.6.0_45 not available." Of course it is available, it just can't find it for some reason. As a test, I specified that JavaTester.org should use 6u45 also and it works fine with no issues. But when I try to use the same configuration (7u67 and 6u45) against the Oracle application it fails every time. If I downgrade to 7u60, it works. 7u65 or higher, it breaks. The Oracle application hasn't changed so it must be something different in how 7u65+ is handling Deployment Rule Sets or pathing or something. I'm at a complete loss.

    ruleset.xml:

        <?xml version="1.0"?>
        <ruleset version="1.0+">
          <rule>
            <id location="*.mycorp.com"/>
            <action version="1.6.0_45" permission="run"/>
          </rule>
          <rule>
            <id location="http://javatester.org"/>
            <action version="1.6.0_45" permission="run"/>
          </rule>
        </ruleset>

    Read the article

  • Removing all traces of GNU java and openjdk and replacing with Sun JDK

    - by user61766
    I have installed the latest Sun JDK, but when I run java -version I still get the OpenJDK version. So I completely removed OpenJDK. Now when I run java -version I get an even older GNU Java, 1.5-something libgcj. I completely removed that too, although it asked to remove a bunch of dependent apps like OpenOffice.org Writer. Even though I need Writer, I let it go because I never want to see the face of any GNU Java on my Linux box again. So everything related to GNU Java is removed. Luckily I am still able to start Eclipse and it works fine and starts normally (apparently using the installed Sun JDK, which is what I want). But now when I run java -version I get:
        bash: /usr/bin/java: No such file or directory
    What do I need to do so that when I open any terminal window and enter java -version I get the Sun JDK version? The Sun JDK is installed in /usr/java/jdk1.6.021, and I also have symlinks /usr/java/latest and /usr/java/defaults pointing to the Sun JDK.
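
    A hedged sketch of the two usual fixes (not from the original question). The paths assume the layout described above, and the alternatives tool is the RPM-distro way of managing /usr/bin/java; use whichever matches your distribution.

        # quick and dirty: point /usr/bin/java straight at the Sun JDK
        sudo ln -s /usr/java/latest/bin/java /usr/bin/java

        # cleaner on RPM-based distros: register the JDK with alternatives and pick it
        sudo alternatives --install /usr/bin/java java /usr/java/latest/bin/java 20000
        sudo alternatives --config java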

    Read the article

  • Allocating More Than 4 GB Of Memory

    - by TPatti
    I am facing an issue with memory allocation. I have:
        Host OS: Microsoft Windows XP Professional x64 Edition - Version 2003 - Service Pack 2
        Host physical memory: 8 GB
        Guest OS: Red Hat Enterprise Linux WS release 4 (Nahant Update 5). I am not sure if it is 32- or 64-bit; the lsb_release -a command reports "LSB Version: core-3.0-ia32", so I guess that would be 32-bit...
        VMware Player version: 2.5.2 build-156735
    I would like VMware Player to allocate more than 4 GB, but when I go to the settings it only lists 4 GB. If I choose the "About" option, it actually says that I have 8 GB installed in the host machine. This VMware image was created by someone else and provided to me, apparently with VMware Workstation 5. Why can't I allocate 8 GB? Where is the problem: in the VMware Player version, the guest OS, or the host OS? How can I solve this? I understand that for this version of Player there isn't a separate 32-bit and 64-bit version.
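
    A hedged aside, not from the original question: the allocation ultimately comes down to one line in the VM's .vmx file, so it is worth checking what is there (edit only with the VM powered off). Two caveats, both assumptions about this particular setup: an older hardware-version image from Workstation 5 may cap guest RAM below 8 GB until its virtual hardware is upgraded, and a 32-bit RHEL 4 kernel without PAE support cannot address that much memory anyway.

        memsize = "8192"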

    Read the article

  • Incorporating Devise Authentication into an already existing user structure?

    - by Kevin
    I have a fully functional authentication system with a user table that has over fifty columns. It's simple, but it does hashed passwords with salt, uses email instead of usernames, and has two separate kinds of users plus an admin. I'm looking to incorporate Devise authentication into my application to beef up the extra parts like email validation, forgotten passwords, remember-me tokens, etc. I just wanted to see if anyone has any advice or problems they've encountered when incorporating Devise into an already existing user structure.

    The essential fields in my user model are:

        t.string  :first_name, :null => false
        t.string  :last_name, :null => false
        t.string  :email, :null => false
        t.string  :hashed_password
        t.string  :salt
        t.boolean :is_userA, :default => false
        t.boolean :is_userB, :default => false
        t.boolean :is_admin, :default => false
        t.boolean :active, :default => true
        t.timestamps

    For reference sake, here are the Devise fields from the migration:

        t.database_authenticatable :null => false
        t.confirmable
        t.recoverable
        t.rememberable
        t.trackable

    These eventually turn into the following actual fields in the schema:

        t.string   "email", :default => "", :null => false
        t.string   "encrypted_password", :limit => 128, :default => "", :null => false
        t.string   "password_salt", :default => "", :null => false
        t.string   "confirmation_token"
        t.datetime "confirmed_at"
        t.datetime "confirmation_sent_at"
        t.string   "reset_password_token"
        t.string   "remember_token"
        t.datetime "remember_created_at"
        t.integer  "sign_in_count", :default => 0
        t.datetime "current_sign_in_at"
        t.datetime "last_sign_in_at"
        t.string   "current_sign_in_ip"
        t.string   "last_sign_in_ip"
        t.datetime "created_at"
        t.datetime "updated_at"

    What do you guys recommend? Do I just remove email, hashed_password, and salt from my migration, put in the five Devise migration fields, and everything will be OK, or do I need to do something else?
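
    For the migration itself, a hedged sketch (not from the original question) of adding Devise's columns to the existing users table rather than creating a new one. The column names mirror the Devise schema above; the class name and the idea of keeping the legacy hashed_password/salt columns during cutover are assumptions.

        class AddDeviseToUsers < ActiveRecord::Migration
          def self.up
            add_column :users, :encrypted_password,   :string, :null => false, :default => ""
            add_column :users, :password_salt,        :string, :null => false, :default => ""
            add_column :users, :confirmation_token,   :string
            add_column :users, :confirmed_at,         :datetime
            add_column :users, :confirmation_sent_at, :datetime
            add_column :users, :reset_password_token, :string
            add_column :users, :remember_token,       :string
            add_column :users, :remember_created_at,  :datetime
            # keep hashed_password and salt until existing users have been migrated and verified
          end

          def self.down
            [:encrypted_password, :password_salt, :confirmation_token, :confirmed_at,
             :confirmation_sent_at, :reset_password_token, :remember_token,
             :remember_created_at].each { |c| remove_column :users, c }
          end
        end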

    Read the article

  • Associating Models with Polymorphic

    - by Josh Crowder
    I am trying to associate Contacts with Classes but as two different types: Current_classes and Interested_classes. I know I need to enable polymorphic but I am not sure as to where it needs to be enabled. This is what I have at the moment:

        class CreateClasses < ActiveRecord::Migration
          def self.up
            create_table :classes do |t|
              t.string :class_type
              t.string :class_name
              t.string :date
              t.timestamps
            end
          end

          def self.down
            drop_table :classes
          end
        end

        class CreateContactsInterestedClassesJoin < ActiveRecord::Migration
          def self.up
            create_table 'contacts_interested_classes', :id => false do |t|
              t.column 'class_id', :integer
              t.column 'contact_id', :integer
            end
          end

          def self.down
            drop_table 'contacts_interested_classes'
          end
        end

        class CreateContactsCurrentClassesJoin < ActiveRecord::Migration
          def self.up
            create_table 'contacts_current_classes', :id => false do |t|
              t.column 'class_id', :integer
              t.column 'contact_id', :integer
            end
          end

          def self.down
            drop_table 'contacts_current_classes'
          end
        end

    And then inside of my Contacts model I want to have something like this:

        class Contact < ActiveRecord::Base
          has_and_belongs_to_many :classes, :join_table => "contacts_interested_classes", :foreign_key => "class_id" :as => 'interested_classes'
          has_and_belongs_to_many :classes, :join_table => "contacts_current_classes", :foreign_key => "class_id" :as => 'current_classes'
        end

    What am I doing wrong?
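
    A hedged sketch of one alternative (not from the original question): keep the two join tables and skip polymorphism entirely, since has_and_belongs_to_many accepts distinct association names via :class_name and :association_foreign_key but does not accept :as. It assumes the model is renamed to something like Course, because a model literally named Class would collide with Ruby's built-in Class.

        class Contact < ActiveRecord::Base
          has_and_belongs_to_many :interested_courses,
            :class_name => 'Course',
            :join_table => 'contacts_interested_classes',
            :association_foreign_key => 'class_id'

          has_and_belongs_to_many :current_courses,
            :class_name => 'Course',
            :join_table => 'contacts_current_classes',
            :association_foreign_key => 'class_id'
        end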

    Read the article

  • Controller changes format on variables when publishing

    - by Christoffer
    I am a newbie to RoR but catching on quickly. I have been working on this problem for a couple of hours now and it seems like a bug; it does not make any sense. I have a database with the following migration:

        class CreateWebsites < ActiveRecord::Migration
          def self.up
            create_table :websites do |t|
              t.string :name
              t.integer :estimated_value
              t.string :webhost
              t.string :purpose
              t.string :description
              t.string :tagline
              t.string :url
              t.integer :adsense
              t.integer :tradedoubler
              t.integer :affiliator
              t.integer :adsense_cpm
              t.boolean :released
              t.string :empire_type
              t.string :oldid
              t.string :old_outlink_policy
              t.string :old_inlink_policy
              t.string :old_priority
              t.string :old_profitability
              t.integer :priority_id
              t.integer :project_id
              t.integer :outlink_policy_id
              t.integer :inlink_policy_id
              t.timestamps
            end
          end

          def self.down
            drop_table :websites
          end
        end

    I have verified that what is created in the database is also integers, strings etc. according to this migration. I have not touched the controller after generating it through scaffold, i.e. it is the standard controller with show, index etc. Now: when I enter data into the database - either through the web form, in the Rails console or directly in the database - such as www.domain.com for url or 500 for adsense, it is created in the db without problem. However, when it is published on the website the variables go completely nuts: adsense (integer) turns into a date, url (string) turns into a float, and so on. This only happens to a few of the variables. It also creates an "argument out of range" problem, since I input 500 and Rails tries to output it as a date, which crashes with "argument out of range". So, how do I fix/troubleshoot this? Why do the formats change? Could it be because of the respond_to in the controller? Cheers, Christoffer
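
    A quick hedged check (not from the original post) to run in the Rails console: if the model reports the expected column types, the mangling is happening in the generated views rather than in the controller or the schema.

        Website.columns_hash['adsense'].type   # expect :integer
        Website.columns_hash['url'].type       # expect :string
        w = Website.first
        [w.adsense.class, w.url.class]         # expect [Fixnum, String] on Ruby 1.8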

    Read the article

  • Migrating from a single entity to an abstract parent entity with child entities, NSEntityMigrationPolicy not called.

    - by Jimmy Selgen Nielsen
    Hi. I'm trying to upgrade my current application to use an abstract parent entity with specialized sub-entities. I've created a custom NSEntityMigrationPolicy, and in the mapping model I've set the Custom Policy to the name of my class. I'm initializing my persistent store like this, which should be fairly standard:

        NSError *error = nil;
        persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc]
            initWithManagedObjectModel:[self managedObjectModel]];
        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption, nil];
        if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType
                                                       configuration:nil
                                                                 URL:storeUrl
                                                             options:options
                                                               error:&error]) {
            NSLog(@"Error adding persistent store : %@", [error description]);
            NSAssert(error == nil, [error localizedDescription]);
        }

    When I run the app I get the following error:

        Terminating app due to uncaught exception 'NSInternalInconsistencyException',
        reason: 'The operation couldn’t be completed. (Cocoa error 134140.)'

    [error userInfo] contains "reason=Can't find mapping model for migration". I've verified that version 1 of the data model will open, and if I set NSInferMappingModelAutomaticallyOption I get a migration, although my entities are not migrated correctly (as expected). I've verified that the mapping model (cdm) is in the application bundle, but somehow it refuses to find it. I've also set breakpoints and NSLog() statements in the custom migration policy, and none of it runs, with or without NSInferMappingModelAutomaticallyOption. Any hints as to why it seems unable to find the mapping model?
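
    A hedged diagnostic aside, not a fix from the original post: Core Data matches mapping models against the exact version hashes of the source and destination models, so it can help to ask at run time whether a mapping model is locatable for the specific pair involved. In this sketch, sourceModel and destinationModel are placeholders for the two loaded NSManagedObjectModel versions.

        // check whether any mapping model in the main bundle covers this model pair
        NSMappingModel *mapping =
            [NSMappingModel mappingModelFromBundles:[NSArray arrayWithObject:[NSBundle mainBundle]]
                                     forSourceModel:sourceModel
                                   destinationModel:destinationModel];
        if (mapping == nil) {
            NSLog(@"No mapping model found in the main bundle for this source/destination pair");
        }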

    Read the article

  • Finding missing files by checksum

    - by grw
    Hi there, I'm doing a large data migration between two file systems (let's call them F1 and F2) on a Linux system, which will necessarily involve copying the data verbatim into a differently-structured hierarchy on F2 and changing the file names. I'd like to write a script to generate a list of files which are in F1 but not in F2, i.e. the ones which weren't copied by the migration script into the new hierarchy, so that I can go back and migrate them manually. Unfortunately, for reasons not worth going into, the migration script can't be modified to list the files that it doesn't migrate. My question differs from a previously answered one because I cannot rely on filenames as a comparison. I know the basic outline of the process would be:
        1. Generate a list of checksums for all files, recursing through F1.
        2. Do the same for F2.
        3. Compare the lists and generate a negative intersection of the checksums, ignoring the file names, to find files which are in F1 but not in F2.
    I'm kind of stuck getting past that stage, so I'd appreciate any pointers on which tools to use. I think I need to use the 'comm' command to compare the lists of file checksums, but since md5sum, sha512sum and the like put the file name next to the checksum, I can't see a way to get a useful comparison out of them. Maybe awk is the way to go? I'm using Red Hat Enterprise Linux 5.x. Thanks.
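
    A hedged sketch of steps 1-3 above, using only tools that ship with RHEL 5 (md5sum, awk, sort, comm, grep). /f1 and /f2 stand in for the two mount points, and note that identical duplicate files within F1 will collapse onto a single checksum.

        # checksum everything under each tree, keeping "sum  path" lines
        find /f1 -type f -exec md5sum {} \; > f1.md5
        find /f2 -type f -exec md5sum {} \; > f2.md5

        # reduce to sorted, de-duplicated checksum columns and take the difference
        awk '{print $1}' f1.md5 | sort -u > f1.sums
        awk '{print $1}' f2.md5 | sort -u > f2.sums
        comm -23 f1.sums f2.sums > missing.sums

        # map the missing checksums back to their original F1 paths
        grep -F -f missing.sums f1.md5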

    Read the article

  • Toorcon14

    - by danx
    Toorcon 2012 Information Security Conference San Diego, CA, http://www.toorcon.org/ Dan Anderson, October 2012 It's almost Halloween, and we all know what that means—yes, of course, it's time for another Toorcon Conference! Toorcon is an annual conference for people interested in computer security. This includes the whole range of hackers, computer hobbyists, professionals, security consultants, press, law enforcement, prosecutors, FBI, etc. We're at Toorcon 14—see earlier blogs for some of the previous Toorcon's I've attended (back to 2003). This year's "con" was held at the Westin on Broadway in downtown San Diego, California. The following are not necessarily my views—I'm just the messenger—although I could have misquoted or misparaphrased the speakers. Also, I only reviewed some of the talks, below, which I attended and interested me. MalAndroid—the Crux of Android Infections, Aditya K. Sood Programming Weird Machines with ELF Metadata, Rebecca "bx" Shapiro Privacy at the Handset: New FCC Rules?, Valkyrie Hacking Measured Boot and UEFI, Dan Griffin You Can't Buy Security: Building the Open Source InfoSec Program, Boris Sverdlik What Journalists Want: The Investigative Reporters' Perspective on Hacking, Dave Maas & Jason Leopold Accessibility and Security, Anna Shubina Stop Patching, for Stronger PCI Compliance, Adam Brand McAfee Secure & Trustmarks — a Hacker's Best Friend, Jay James & Shane MacDougall MalAndroid—the Crux of Android Infections Aditya K. Sood, IOActive, Michigan State PhD candidate Aditya talked about Android smartphone malware. There's a lot of old Android software out there—over 50% Gingerbread (2.3.x)—and most have unpatched vulnerabilities. Of 9 Android vulnerabilities, 8 have known exploits (such as the old Gingerbread Global Object Table exploit). Android protection includes sandboxing, security scanner, app permissions, and screened Android app market. The Android permission checker has fine-grain resource control, policy enforcement. Android static analysis also includes a static analysis app checker (bouncer), and a vulnerablity checker. What security problems does Android have? User-centric security, which depends on the user to grant permission and make smart decisions. But users don't care or think about malware (the're not aware, not paranoid). All they want is functionality, extensibility, mobility Android had no "proper" encryption before Android 3.0 No built-in protection against social engineering and web tricks Alternative Android app markets are unsafe. Simply visiting some markets can infect Android Aditya classified Android Malware types as: Type A—Apps. These interact with the Android app framework. For example, a fake Netflix app. Or Android Gold Dream (game), which uploads user files stealthy manner to a remote location. Type K—Kernel. Exploits underlying Linux libraries or kernel Type H—Hybrid. These use multiple layers (app framework, libraries, kernel). These are most commonly used by Android botnets, which are popular with Chinese botnet authors What are the threats from Android malware? These incude leak info (contacts), banking fraud, corporate network attacks, malware advertising, malware "Hackivism" (the promotion of social causes. For example, promiting specific leaders of the Tunisian or Iranian revolutions. Android malware is frequently "masquerated". That is, repackaged inside a legit app with malware. To avoid detection, the hidden malware is not unwrapped until runtime. 
The malware payload can be hidden in, for example, PNG files. Less common are Android bootkits—there's not many around. What they do is hijack the Android init framework—alteering system programs and daemons, then deletes itself. For example, the DKF Bootkit (China). Android App Problems: no code signing! all self-signed native code execution permission sandbox — all or none alternate market places no robust Android malware detection at network level delayed patch process Programming Weird Machines with ELF Metadata Rebecca "bx" Shapiro, Dartmouth College, NH https://github.com/bx/elf-bf-tools @bxsays on twitter Definitions. "ELF" is an executable file format used in linking and loading executables (on UNIX/Linux-class machines). "Weird machine" uses undocumented computation sources (I think of them as unintended virtual machines). Some examples of "weird machines" are those that: return to weird location, does SQL injection, corrupts the heap. Bx then talked about using ELF metadata as (an uintended) "weird machine". Some ELF background: A compiler takes source code and generates a ELF object file (hello.o). A static linker makes an ELF executable from the object file. A runtime linker and loader takes ELF executable and loads and relocates it in memory. The ELF file has symbols to relocate functions and variables. ELF has two relocation tables—one at link time and another one at loading time: .rela.dyn (link time) and .dynsym (dynamic table). GOT: Global Offset Table of addresses for dynamically-linked functions. PLT: Procedure Linkage Tables—works with GOT. The memory layout of a process (not the ELF file) is, in order: program (+ heap), dynamic libraries, libc, ld.so, stack (which includes the dynamic table loaded into memory) For ELF, the "weird machine" is found and exploited in the loader. ELF can be crafted for executing viruses, by tricking runtime into executing interpreted "code" in the ELF symbol table. One can inject parasitic "code" without modifying the actual ELF code portions. Think of the ELF symbol table as an "assembly language" interpreter. It has these elements: instructions: Add, move, jump if not 0 (jnz) Think of symbol table entries as "registers" symbol table value is "contents" immediate values are constants direct values are addresses (e.g., 0xdeadbeef) move instruction: is a relocation table entry add instruction: relocation table "addend" entry jnz instruction: takes multiple relocation table entries The ELF weird machine exploits the loader by relocating relocation table entries. The loader will go on forever until told to stop. It stores state on stack at "end" and uses IFUNC table entries (containing function pointer address). The ELF weird machine, called "Brainfu*k" (BF) has: 8 instructions: pointer inc, dec, inc indirect, dec indirect, jump forward, jump backward, print. Three registers - 3 registers Bx showed example BF source code that implemented a Turing machine printing "hello, world". More interesting was the next demo, where bx modified ping. Ping runs suid as root, but quickly drops privilege. BF modified the loader to disable the library function call dropping privilege, so it remained as root. Then BF modified the ping -t argument to execute the -t filename as root. It's best to show what this modified ping does with an example: $ whoami bx $ ping localhost -t backdoor.sh # executes backdoor $ whoami root $ The modified code increased from 285948 bytes to 290209 bytes. 
A BF tool compiles "executable" by modifying the symbol table in an existing ELF executable. The tool modifies .dynsym and .rela.dyn table, but not code or data. Privacy at the Handset: New FCC Rules? "Valkyrie" (Christie Dudley, Santa Clara Law JD candidate) Valkyrie talked about mobile handset privacy. Some background: Senator Franken (also a comedian) became alarmed about CarrierIQ, where the carriers track their customers. Franken asked the FCC to find out what obligations carriers think they have to protect privacy. The carriers' response was that they are doing just fine with self-regulation—no worries! Carriers need to collect data, such as missed calls, to maintain network quality. But carriers also sell data for marketing. Verizon sells customer data and enables this with a narrow privacy policy (only 1 month to opt out, with difficulties). The data sold is not individually identifiable and is aggregated. But Verizon recommends, as an aggregation workaround to "recollate" data to other databases to identify customers indirectly. The FCC has regulated telephone privacy since 1934 and mobile network privacy since 2007. Also, the carriers say mobile phone privacy is a FTC responsibility (not FCC). FTC is trying to improve mobile app privacy, but FTC has no authority over carrier / customer relationships. As a side note, Apple iPhones are unique as carriers have extra control over iPhones they don't have with other smartphones. As a result iPhones may be more regulated. Who are the consumer advocates? Everyone knows EFF, but EPIC (Electrnic Privacy Info Center), although more obsecure, is more relevant. What to do? Carriers must be accountable. Opt-in and opt-out at any time. Carriers need incentive to grant users control for those who want it, by holding them liable and responsible for breeches on their clock. Location information should be added current CPNI privacy protection, and require "Pen/trap" judicial order to obtain (and would still be a lower standard than 4th Amendment). Politics are on a pro-privacy swing now, with many senators and the Whitehouse. There will probably be new regulation soon, and enforcement will be a problem, but consumers will still have some benefit. Hacking Measured Boot and UEFI Dan Griffin, JWSecure, Inc., Seattle, @JWSdan Dan talked about hacking measured UEFI boot. First some terms: UEFI is a boot technology that is replacing BIOS (has whitelisting and blacklisting). UEFI protects devices against rootkits. TPM - hardware security device to store hashs and hardware-protected keys "secure boot" can control at firmware level what boot images can boot "measured boot" OS feature that tracks hashes (from BIOS, boot loader, krnel, early drivers). "remote attestation" allows remote validation and control based on policy on a remote attestation server. Microsoft pushing TPM (Windows 8 required), but Google is not. Intel TianoCore is the only open source for UEFI. Dan has Measured Boot Tool at http://mbt.codeplex.com/ with a demo where you can also view TPM data. TPM support already on enterprise-class machines. UEFI Weaknesses. 
UEFI toolkits are evolving rapidly, but UEFI has weaknesses: assume user is an ally trust TPM implicitly, and attached to computer hibernate file is unprotected (disk encryption protects against this) protection migrating from hardware to firmware delays in patching and whitelist updates will UEFI really be adopted by the mainstream (smartphone hardware support, bank support, apathetic consumer support) You Can't Buy Security: Building the Open Source InfoSec Program Boris Sverdlik, ISDPodcast.com co-host Boris talked about problems typical with current security audits. "IT Security" is an oxymoron—IT exists to enable buiness, uptime, utilization, reporting, but don't care about security—IT has conflict of interest. There's no Magic Bullet ("blinky box"), no one-size-fits-all solution (e.g., Intrusion Detection Systems (IDSs)). Regulations don't make you secure. The cloud is not secure (because of shared data and admin access). Defense and pen testing is not sexy. Auditors are not solution (security not a checklist)—what's needed is experience and adaptability—need soft skills. Step 1: First thing is to Google and learn the company end-to-end before you start. Get to know the management team (not IT team), meet as many people as you can. Don't use arbitrary values such as CISSP scores. Quantitive risk assessment is a myth (e.g. AV*EF-SLE). Learn different Business Units, legal/regulatory obligations, learn the business and where the money is made, verify company is protected from script kiddies (easy), learn sensitive information (IP, internal use only), and start with low-hanging fruit (customer service reps and social engineering). Step 2: Policies. Keep policies short and relevant. Generic SANS "security" boilerplate policies don't make sense and are not followed. Focus on acceptable use, data usage, communications, physical security. Step 3: Implementation: keep it simple stupid. Open source, although useful, is not free (implementation cost). Access controls with authentication & authorization for local and remote access. MS Windows has it, otherwise use OpenLDAP, OpenIAM, etc. Application security Everyone tries to reinvent the wheel—use existing static analysis tools. Review high-risk apps and major revisions. Don't run different risk level apps on same system. Assume host/client compromised and use app-level security control. Network security VLAN != segregated because there's too many workarounds. Use explicit firwall rules, active and passive network monitoring (snort is free), disallow end user access to production environment, have a proxy instead of direct Internet access. Also, SSL certificates are not good two-factor auth and SSL does not mean "safe." Operational Controls Have change, patch, asset, & vulnerability management (OSSI is free). For change management, always review code before pushing to production For logging, have centralized security logging for business-critical systems, separate security logging from administrative/IT logging, and lock down log (as it has everything). Monitor with OSSIM (open source). Use intrusion detection, but not just to fulfill a checkbox: build rules from a whitelist perspective (snort). OSSEC has 95% of what you need. Vulnerability management is a QA function when done right: OpenVas and Seccubus are free. Security awareness The reality is users will always click everything. Build real awareness, not compliance driven checkbox, and have it integrated into the culture. 
Pen test by crowd sourcing—test with logging COSSP http://www.cossp.org/ - Comprehensive Open Source Security Project What Journalists Want: The Investigative Reporters' Perspective on Hacking Dave Maas, San Diego CityBeat Jason Leopold, Truthout.org The difference between hackers and investigative journalists: For hackers, the motivation varies, but method is same, technological specialties. For investigative journalists, it's about one thing—The Story, and they need broad info-gathering skills. J-School in 60 Seconds: Generic formula: Person or issue of pubic interest, new info, or angle. Generic criteria: proximity, prominence, timeliness, human interest, oddity, or consequence. Media awareness of hackers and trends: journalists becoming extremely aware of hackers with congressional debates (privacy, data breaches), demand for data-mining Journalists, use of coding and web development for Journalists, and Journalists busted for hacking (Murdock). Info gathering by investigative journalists include Public records laws. Federal Freedom of Information Act (FOIA) is good, but slow. California Public Records Act is a lot stronger. FOIA takes forever because of foot-dragging—it helps to be specific. Often need to sue (especially FBI). CPRA is faster, and requests can be vague. Dumps and leaks (a la Wikileaks) Journalists want: leads, protecting ourselves, our sources, and adapting tools for news gathering (Google hacking). Anonomity is important to whistleblowers. They want no digital footprint left behind (e.g., email, web log). They don't trust encryption, want to feel safe and secure. Whistleblower laws are very weak—there's no upside for whistleblowers—they have to be very passionate to do it. Accessibility and Security or: How I Learned to Stop Worrying and Love the Halting Problem Anna Shubina, Dartmouth College Anna talked about how accessibility and security are related. Accessibility of digital content (not real world accessibility). mostly refers to blind users and screenreaders, for our purpose. Accessibility is about parsing documents, as are many security issues. "Rich" executable content causes accessibility to fail, and often causes security to fail. For example MS Word has executable format—it's not a document exchange format—more dangerous than PDF or HTML. Accessibility is often the first and maybe only sanity check with parsing. They have no choice because someone may want to read what you write. Google, for example, is very particular about web browser you use and are bad at supporting other browsers. Uses JavaScript instead of links, often requiring mouseover to display content. PDF is a security nightmare. Executible format, embedded flash, JavaScript, etc. 15 million lines of code. Google Chrome doesn't handle PDF correctly, causing several security bugs. PDF has an accessibility checker and PDF tagging, to help with accessibility. But no PDF checker checks for incorrect tags, untagged content, or validates lists or tables. None check executable content at all. The "Halting Problem" is: can one decide whether a program will ever stop? The answer, in general, is no (Rice's theorem). The same holds true for accessibility checkers. Language-theoretic Security says complicated data formats are hard to parse and cannot be solved due to the Halting Problem. 
W3C Web Accessibility Guidelines: "Perceivable, Operable, Understandable, Robust" Not much help though, except for "Robust", but here's some gems: * all information should be parsable (paraphrasing) * if not parsable, cannot be converted to alternate formats * maximize compatibility in new document formats Executible webpages are bad for security and accessibility. They say it's for a better web experience. But is it necessary to stuff web pages with JavaScript for a better experience? A good example is The Drudge Report—it has hand-written HTML with no JavaScript, yet drives a lot of web traffic due to good content. A bad example is Google News—hidden scrollbars, guessing user input. Solutions: Accessibility and security problems come from same source Expose "better user experience" myth Keep your corner of Internet parsable Remember "Halting Problem"—recognize false solutions (checking and verifying tools) Stop Patching, for Stronger PCI Compliance Adam Brand, protiviti @adamrbrand, http://www.picfun.com/ Adam talked about PCI compliance for retail sales. Take an example: for PCI compliance, 50% of Brian's time (a IT guy), 960 hours/year was spent patching POSs in 850 restaurants. Often applying some patches make no sense (like fixing a browser vulnerability on a server). "Scanner worship" is overuse of vulnerability scanners—it gives a warm and fuzzy and it's simple (red or green results—fix reds). Scanners give a false sense of security. In reality, breeches from missing patches are uncommon—more common problems are: default passwords, cleartext authentication, misconfiguration (firewall ports open). Patching Myths: Myth 1: install within 30 days of patch release (but PCI §6.1 allows a "risk-based approach" instead). Myth 2: vendor decides what's critical (also PCI §6.1). But §6.2 requires user ranking of vulnerabilities instead. Myth 3: scan and rescan until it passes. But PCI §11.2.1b says this applies only to high-risk vulnerabilities. Adam says good recommendations come from NIST 800-40. Instead use sane patching and focus on what's really important. From NIST 800-40: Proactive: Use a proactive vulnerability management process: use change control, configuration management, monitor file integrity. Monitor: start with NVD and other vulnerability alerts, not scanner results. Evaluate: public-facing system? workstation? internal server? (risk rank) Decide:on action and timeline Test: pre-test patches (stability, functionality, rollback) for change control Install: notify, change control, tickets McAfee Secure & Trustmarks — a Hacker's Best Friend Jay James, Shane MacDougall, Tactical Intelligence Inc., Canada "McAfee Secure Trustmark" is a website seal marketed by McAfee. A website gets this badge if they pass their remote scanning. The problem is a removal of trustmarks act as flags that you're vulnerable. Easy to view status change by viewing McAfee list on website or on Google. "Secure TrustGuard" is similar to McAfee. Jay and Shane wrote Perl scripts to gather sites from McAfee and search engines. If their certification image changes to a 1x1 pixel image, then they are longer certified. Their scripts take deltas of scans to see what changed daily. The bottom line is change in TrustGuard status is a flag for hackers to attack your site. Entire idea of seals is silly—you're raising a flag saying if you're vulnerable.

    Read the article

  • Customizing the processing of ListItems for asp:RadioButtonList with "Flow" layout and "Horizontal"

    - by evovision
    Hi, recently I was asked to add the ability to pad specific elements a certain distance from each other in a RadioButtonList control. Not quite a common everyday task, I would say :)

    Ok, let's get started!

    Prerequisites: an ASP.NET page with a RadioButtonList control that has the RepeatLayout="Flow" and RepeatDirection="Horizontal" properties set.

    Implementation: the underlying data was coming from another source, so the only fast way to add meta information about padding was the text value itself (yes, not the most optimal solution):

        Id = 1, Name = "This is first element"

    and for padding we agreed to use a <space/> meta tag:

        Id = 2, Name = "<space padcount="30px"/>This is second padded element"

    To handle item rendering in the RadioButtonList control I've created a custom class subclassed from it:

        public class CustomRadioButtonList : RadioButtonList
        {
            private Action<ListItem, HtmlTextWriter> _preProcess;

            protected override void RenderItem(ListItemType itemType, int repeatIndex, RepeatInfo repeatInfo, HtmlTextWriter writer)
            {
                if (_preProcess != null)
                {
                    _preProcess(this.Items[repeatIndex], writer);
                }

                base.RenderItem(itemType, repeatIndex, repeatInfo, writer);
            }

            public void SetPrePrenderItemFunction(Action<ListItem, HtmlTextWriter> func)
            {
                _preProcess = func;
            }
        }

    It is a pretty straightforward approach; the key is to override the RenderItem method. The class has a SetPrePrenderItemFunction method which is used to pass a custom processing function that takes 2 parameters: ListItem and HtmlTextWriter objects.

    Now update the existing RadioButtonList control in Default.aspx. Add this to the beginning of the page:

        <%@ Register Namespace="Sample.Controls" TagPrefix="uc1" %>

    and update the control to:

        <uc1:CustomRadioButtonList ID="customRbl" runat="server" DataValueField="Id" DataTextField="Name"
            RepeatLayout="Flow" RepeatDirection="Horizontal"></uc1:CustomRadioButtonList>

    Now, in the codebehind of the page, add the regular expression that will be used for parsing:

        private Regex _regex = new Regex(@"(?:[<]space padcount\s*?=\s*?(?:'|"")(?<padcount>\d+)(?:(?:\s+)?px)?(?:'|"")\s*?/>)(?<content>.*)?", RegexOptions.IgnoreCase | RegexOptions.Compiled);

    and finally set up the processing function in Page_Load:

        protected void Page_Load(object sender, EventArgs e)
        {
            customRbl.DataSource = DataObjects;

            customRbl.SetPrePrenderItemFunction((listItem, writer) =>
            {
                Match match = _regex.Match(listItem.Text);
                if (match.Success)
                {
                    writer.Write(string.Format(@"<span style=""padding-left:{0}"">Extreme values: </span>", match.Groups["padcount"].Value + "px"));

                    // if you need to pad the list item itself, use the line below
                    //listItem.Attributes.CssStyle.Add("padding-left", match.Groups["padcount"].Value + "px");

                    // remove meta tag from text
                    listItem.Text = match.Groups["content"].Value;
                }
            });

            customRbl.DataBind();
        }

    That's it! :) Run the attached sample application.

    P.S.: of course, several other approaches could have been used for this purpose, including events, and the processing functionality could also be embedded inside the control itself. The current solution suited slightly better for other reasons in the situation where it was used; in your case, consider this a kick start for your own implementation :)

    Source application: CustomRadioButtonList.zip

    Read the article

  • Developing Spring Portlet for use inside Weblogic Portal / Webcenter Portal

    - by Murali Veligeti
    We need to understand the main difference between portlet workflow and servlet workflow.The main difference between portlet workflow and servlet workflow is that, the request to the portlet can have two distinct phases: 1) Action phase 2) Render phase. The Action phase is executed only once and is where any 'backend' changes or actions occur, such as making changes in a database. The Render phase then produces what is displayed to the user each time the display is refreshed. The critical point here is that for a single overall request, the action phase is executed only once, but the render phase may be executed multiple times. This provides a clean separation between the activities that modify the persistent state of your system and the activities that generate what is displayed to the user.The dual phases of portlet requests are one of the real strengths of the JSR-168 specification. For example, dynamic search results can be updated routinely on the display without the user explicitly re-running the search. Most other portlet MVC frameworks attempt to completely hide the two phases from the developer and make it look as much like traditional servlet development as possible - we think this approach removes one of the main benefits of using portlets. So, the separation of the two phases is preserved throughout the Spring Portlet MVC framework. The primary manifestation of this approach is that where the servlet version of the MVC classes will have one method that deals with the request, the portlet version of the MVC classes will have two methods that deal with the request: one for the action phase and one for the render phase. For example, where the servlet version of AbstractController has the handleRequestInternal(..) method, the portlet version of AbstractController has handleActionRequestInternal(..) and handleRenderRequestInternal(..) methods.The Spring Portlet Framework is designed around a DispatcherPortlet that dispatches requests to handlers, with configurable handler mappings and view resolution, just as the DispatcherServlet in the Spring Web Framework does.  Developing portlet.xml Let's start the sample development by creating the portlet.xml file in the /WebContent/WEB-INF/ folder as shown below: <?xml version="1.0" encoding="UTF-8"?> <portlet-app version="2.0" xmlns="http://java.sun.com/xml/ns/portlet/portlet-app_2_0.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <portlet> <portlet-name>SpringPortletName</portlet-name> <portlet-class>org.springframework.web.portlet.DispatcherPortlet</portlet-class> <supports> <mime-type>text/html</mime-type> <portlet-mode>view</portlet-mode> </supports> <portlet-info> <title>SpringPortlet</title> </portlet-info> </portlet> </portlet-app> DispatcherPortlet is responsible for handling every client request. When it receives a request, it finds out which Controller class should be used for handling this request, and then it calls its handleActionRequest() or handleRenderRequest() method based on the request processing phase. The Controller class executes business logic and returns a View name that should be used for rendering markup to the user. The DispatcherPortlet then forwards control to that View for actual markup generation. As you can see, DispatcherPortlet is the central dispatcher for use within Spring Portlet MVC Framework. Note that your portlet application can define more than one DispatcherPortlet. 
If it does so, then each of these portlets operates its own namespace, loading its application context and handler mapping. The DispatcherPortlet is also responsible for loading application context (Spring configuration file) for this portlet. First, it tries to check the value of the configLocation portlet initialization parameter. If that parameter is not specified, it takes the portlet name (that is, the value of the <portlet-name> element), appends "-portlet.xml" to it, and tries to load that file from the /WEB-INF folder. In the portlet.xml file, we did not specify the configLocation initialization parameter, so let's create SpringPortletName-portlet.xml file in the next section. Developing SpringPortletName-portlet.xml Create the SpringPortletName-portlet.xml file in the /WebContent/WEB-INF folder of your application as shown below: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd"> <bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver"> <property name="viewClass" value="org.springframework.web.servlet.view.JstlView"/> <property name="prefix" value="/jsp/"/> <property name="suffix" value=".jsp"/> </bean> <bean id="pointManager" class="com.wlp.spring.bo.internal.PointManagerImpl"> <property name="users"> <list> <ref bean="point1"/> <ref bean="point2"/> <ref bean="point3"/> <ref bean="point4"/> </list> </property> </bean> <bean id="point1" class="com.wlp.spring.bean.User"> <property name="name" value="Murali"/> <property name="points" value="6"/> </bean> <bean id="point2" class="com.wlp.spring.bean.User"> <property name="name" value="Sai"/> <property name="points" value="13"/> </bean> <bean id="point3" class="com.wlp.spring.bean.User"> <property name="name" value="Rama"/> <property name="points" value="43"/> </bean> <bean id="point4" class="com.wlp.spring.bean.User"> <property name="name" value="Krishna"/> <property name="points" value="23"/> </bean> <bean id="messageSource" class="org.springframework.context.support.ResourceBundleMessageSource"> <property name="basename" value="messages"/> </bean> <bean name="/users.htm" id="userController" class="com.wlp.spring.controller.UserController"> <property name="pointManager" ref="pointManager"/> </bean> <bean name="/pointincrease.htm" id="pointIncreaseController" class="com.wlp.spring.controller.IncreasePointsFormController"> <property name="sessionForm" value="true"/> <property name="pointManager" ref="pointManager"/> <property name="commandName" value="pointIncrease"/> <property name="commandClass" value="com.wlp.spring.bean.PointIncrease"/> <property name="formView" value="pointincrease"/> <property name="successView" value="users"/> </bean> <bean id="parameterMappingInterceptor" class="org.springframework.web.portlet.handler.ParameterMappingInterceptor" /> <bean id="portletModeParameterHandlerMapping" class="org.springframework.web.portlet.handler.PortletModeParameterHandlerMapping"> <property name="order" value="1" /> <property name="interceptors"> <list> <ref bean="parameterMappingInterceptor" /> </list> </property> <property name="portletModeParameterMap"> <map> <entry key="view"> <map> <entry key="pointincrease"> <ref bean="pointIncreaseController" /> </entry> <entry key="users"> <ref bean="userController" /> </entry> </map> </entry> </map> </property> </bean> 
<bean id="portletModeHandlerMapping" class="org.springframework.web.portlet.handler.PortletModeHandlerMapping"> <property name="order" value="2" /> <property name="portletModeMap"> <map> <entry key="view"> <ref bean="userController" /> </entry> </map> </property> </bean> </beans> The SpringPortletName-portlet.xml file is an application context file for your MVC portlet. It has a couple of bean definitions: viewController. At this point, remember that the viewController bean definition points to the com.ibm.developerworks.springmvc.ViewController.java class. portletModeHandlerMapping. As we discussed in the last section, whenever DispatcherPortlet gets a client request, it tries to find a suitable Controller class for handling that request. That is where PortletModeHandlerMapping comes into the picture. The PortletModeHandlerMapping class is a simple implementation of the HandlerMapping interface and is used by DispatcherPortlet to find a suitable Controller for every request. The PortletModeHandlerMapping class uses Portlet mode for the current request to find a suitable Controller class to use for handling the request. The portletModeMap property of portletModeHandlerMapping bean is the place where we map the Portlet mode name against the Controller class. In the sample code, we show that viewController is responsible for handling View mode requests. Developing UserController.java In the preceding section, you learned that the viewController bean is responsible for handling all the View mode requests. Your next step is to create the UserController.java class as shown below: public class UserController extends AbstractController { private PointManager pointManager; public void handleActionRequest(ActionRequest request, ActionResponse response) throws Exception { } public ModelAndView handleRenderRequest(RenderRequest request, RenderResponse response) throws ServletException, IOException { String now = (new java.util.Date()).toString(); Map<String, Object> myModel = new HashMap<String, Object>(); myModel.put("now", now); myModel.put("users", this.pointManager.getUsers()); return new ModelAndView("users", "model", myModel); } public void setPointManager(PointManager pointManager) { this.pointManager = pointManager; } } Every controller class in Spring Portlet MVC Framework must implement the org.springframework.web. portlet.mvc.Controller interface directly or indirectly. To make things easier, Spring Framework provides AbstractController class, which is the default implementation of the Controller interface. As a developer, you should always extend your controller from either AbstractController or one of its more specific subclasses. Any implementation of the Controller class should be reusable, thread-safe, and capable of handling multiple requests throughout the lifecycle of the portlet. In the sample code, we create the ViewController class by extending it from AbstractController. Because we don't want to do any action processing in the HelloSpringPortletMVC portlet, we override only the handleRenderRequest() method of AbstractController. Now, the only thing that HelloWorldPortletMVC should do is render the markup of View.jsp to the user when it receives a user request to do so. To do that, return the object of ModelAndView with a value of view equal to View. Developing web.xml According to Portlet Specification 1.0, every portlet application is also a Servlet Specification 2.3-compliant Web application, and it needs a Web application deployment descriptor (that is, web.xml). 
Let’s create the web.xml file in the /WEB-INF/ folder as shown in listing 4. Follow these steps: Open the existing web.xml file located at /WebContent/WEB-INF/web.xml. Replace the contents of this file with the code as shown below: <servlet> <servlet-name>ViewRendererServlet</servlet-name> <servlet-class>org.springframework.web.servlet.ViewRendererServlet</servlet-class> </servlet> <servlet-mapping> <servlet-name>ViewRendererServlet</servlet-name> <url-pattern>/WEB-INF/servlet/view</url-pattern> </servlet-mapping> <context-param> <param-name>contextConfigLocation</param-name> <param-value>/WEB-INF/applicationContext.xml</param-value> </context-param> <listener> <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class> </listener> The web.xml file for the sample portlet declares two things: ViewRendererServlet. The ViewRendererServlet is the bridge servlet for portlet support. During the render phase, DispatcherPortlet wraps PortletRequest into ServletRequest and forwards control to ViewRendererServlet for actual rendering. This process allows Spring Portlet MVC Framework to use the same View infrastructure as that of its servlet version, that is, Spring Web MVC Framework. ContextLoaderListener. The ContextLoaderListener class takes care of loading Web application context at the time of the Web application startup. The Web application context is shared by all the portlets in the portlet application. In case of duplicate bean definition, the bean definition in the portlet application context takes precedence over the Web application context. The ContextLoader class tries to read the value of the contextConfigLocation Web context parameter to find out the location of the context file. If the contextConfigLocation parameter is not set, then it uses the default value, which is /WEB-INF/applicationContext.xml, to load the context file. The Portlet Controller interface requires two methods that handle the two phases of a portlet request: the action request and the render request. The action phase should be capable of handling an action request and the render phase should be capable of handling a render request and returning an appropriate model and view. While the Controller interface is quite abstract, Spring Portlet MVC offers a lot of controllers that already contain a lot of the functionality you might need – most of these are very similar to controllers from Spring Web MVC. The Controller interface just defines the most common functionality required of every controller - handling an action request, handling a render request, and returning a model and a view. How rendering works As you know, when the user tries to access a page with PointSystemPortletMVC portlet on it or when the user performs some action on any other portlet on that page or tries to refresh that page, a render request is sent to the PointSystemPortletMVC portlet. In the sample code, because DispatcherPortlet is the main portlet class, Weblogic Portal / Webcenter Portal calls its render() method and then the following sequence of events occurs: The render() method of DispatcherPortlet calls the doDispatch() method, which in turn calls the doRender() method. After the doRenderService() method gets control, first it tries to find out the locale of the request by calling the PortletRequest.getLocale() method. This locale is used while making all the locale-related decisions for choices such as which resource bundle should be loaded or which JSP should be displayed to the user based on the locale. 
    After that, the doRenderService() method starts iterating through all the HandlerMapping classes configured for this portlet, calling their getHandler() method to identify the appropriate Controller for handling this request. In the sample code, we have configured only PortletModeHandlerMapping as a HandlerMapping class. The PortletModeHandlerMapping class reads the value of the current portlet mode and, based on that, finds the Controller class that should be used to handle this request. In the sample code, ViewController is configured to handle the View mode request, so the PortletModeHandlerMapping class returns the object of ViewController. After the object of ViewController is returned, the doRenderService() method calls its handleRenderRequestInternal() method. The implementation of the handleRenderRequestInternal() method in ViewController.java is very simple: it logs a message saying that it got control, then creates an instance of ModelAndView with a view value equal to "View" and returns it to DispatcherPortlet. After control returns to doRenderService(), the next task is to figure out how to render the View. For that, DispatcherPortlet starts iterating through all the ViewResolvers configured in your portlet application, calling their resolveViewName() method. In the sample code we have configured only one ViewResolver, InternalResourceViewResolver. When its resolveViewName() method is called with the view name, it adds /WEB-INF/jsp/ as a prefix and .jsp as a suffix to the view name, and checks whether /WEB-INF/jsp/View.jsp exists. If it does exist, it returns an object of JstlView wrapping View.jsp. After control is returned to the doRenderService() method, it creates a PortletRequestDispatcher that points to /WEB-INF/servlet/view – that is, ViewRendererServlet. Then it sets the JstlView object in the request and dispatches the request to ViewRendererServlet. After ViewRendererServlet gets control, it reads the JstlView object from the request attribute, creates another RequestDispatcher pointing to the /WEB-INF/jsp/View.jsp URL, and passes control to it for actual markup generation. The markup generated by View.jsp is returned to the user. At this point, you may question the need for ViewRendererServlet: why can't DispatcherPortlet directly forward control to View.jsp? Adding ViewRendererServlet in between allows Spring Portlet MVC Framework to reuse the existing View infrastructure. You may appreciate this more when we discuss how easy it is to integrate the Apache Tiles Framework with your Spring Portlet MVC Framework. The attached project SpringPortlet.zip should be used to import the project into your OEPE Workspace. SpringPortlet_Jars.zip contains the jar files required for the application. The project is written on Spring 2.5. The same JSR 168 portlet should work on Webcenter Portal as well. Downloads: Download the Weblogic Portal project, which contains the Spring portlet. Download the Spring jars. In addition to the above, you need to download Spring.jar (Spring 2.5).
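    The ViewController class referred to above isn't reproduced in this excerpt, so here is a rough, render-only sketch of what it might look like (the package name is taken from the text above and the "View" view name matches the description, but treat the details as assumptions rather than the article's exact source):

        package com.ibm.developerworks.springmvc;

        import javax.portlet.RenderRequest;
        import javax.portlet.RenderResponse;

        import org.springframework.web.portlet.ModelAndView;
        import org.springframework.web.portlet.mvc.AbstractController;

        // Render-only controller: no action handling is overridden, so the default
        // (empty) action behaviour of AbstractController is used.
        public class ViewController extends AbstractController {

            @Override
            protected ModelAndView handleRenderRequestInternal(RenderRequest request, RenderResponse response) throws Exception {
                // Log that the controller got control, then hand "View" back to the
                // DispatcherPortlet; the ViewResolver turns it into /WEB-INF/jsp/View.jsp.
                logger.info("ViewController: handling render request");
                return new ModelAndView("View");
            }
        }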

    Read the article

  • Assign programs permanently to different sound-outputs – PulseAudio.

    - by Mood
    I want to assign Skype input and output to my USB headset while the rest of my laptop uses the internal sound card. This is an easy task with PulseAudio Volume Control (pavucontrol). The only problem I have is that every time a call is made, I manually have to set the output and input for Skype to my USB device. When I hang up, Skype disappears from Volume Control. It reappears again with the next call, only this time the default sound card is selected again. It shouldn't be hard for PulseAudio to check whether the USB headset is connected when Skype audio comes in, before falling back to the default. The way to do it is obviously not through Volume Control.
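    One possible workaround (a sketch only, not something the post describes) is to script PulseAudio's command line: pacmd list-sink-inputs lists active playback streams with their application names, and pacmd move-sink-input moves a stream to another sink. The sink name below is an assumption—take the real one from pacmd list-sinks—and Skype's microphone stream would need the analogous pacmd move-source-output call:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        // Sketch: move Skype's playback stream to a USB headset sink via pacmd.
        // The sink name is an assumption - copy the real one from "pacmd list-sinks".
        public class MoveSkypeToHeadset {
            static final String HEADSET_SINK = "alsa_output.usb-Logitech_Headset-00.analog-stereo";

            public static void main(String[] args) throws Exception {
                Process list = new ProcessBuilder("pacmd", "list-sink-inputs").start();
                BufferedReader reader = new BufferedReader(new InputStreamReader(list.getInputStream()));
                String line, index = null;
                while ((line = reader.readLine()) != null) {
                    line = line.trim();
                    if (line.startsWith("index:")) {
                        index = line.substring("index:".length()).trim(); // remember the current stream's index
                    } else if (index != null && line.contains("application.name") && line.toLowerCase().contains("skype")) {
                        // this stream belongs to Skype - move it to the headset sink
                        new ProcessBuilder("pacmd", "move-sink-input", index, HEADSET_SINK).inheritIO().start().waitFor();
                        System.out.println("Moved sink-input " + index + " to " + HEADSET_SINK);
                    }
                }
            }
        }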

    Read the article

  • JavaScript JSON Error While Tabbing in ASP.NET MVC

    - by MightyZot
    Originally posted on: http://geekswithblogs.net/MightyZot/archive/2013/11/11/javascript-json-error-while-tabbing-in-asp.net-mvc.aspxI sometimes don’t care about validation for a specific control. The RememberMe control in the login form, for example, really doesn’t need validation, so I forget to include the Html.ValidationMessageFor helper line for that control in particular. As a result, when I’m debugging using IE, I get a silly JSON parsing exception when changing focus from one field to another. The exception doesn’t hurt anything, as far as I know, but it’s just plain annoying. If you’re getting this error, and you don’t want validation messages showing up for controls on a form, you can put them in div tags and set the display style on the divs to none. When I have a handful of controls that I don’t want the validation messages for, I just throw them all in the same div and hide it.

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices – STAGE 4: AUTOMATED DEPLOYMENT. If you've been fortunate enough to get to the stage where you've implemented some sort of continuous integration process for your database updates, then hopefully you're seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it's going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear. Our Database Delivery Learning Program consists of four stages – really three areas: source controlling a database, running continuous integration processes, and setting up automated deployment (the middle area is split in two – basic and advanced continuous integration – making four stages in total). If you've managed to work through the first three of these stages – source control, basic, then advanced CI – then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn't going to wipe our 4Tb of production data with a single DROP TABLE. But – how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There's a significant gap between your latest version being tested, and it being easily releasable. Just a quick note on terminology – there's a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: "Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users". There's another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app). So, hopefully you're convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or "release management") process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can't I just install one of the many release management tools available and hey presto, I'm ready? If only it were that simple. Below I list some of the areas that it's worth spending a little time on, where a little planning and prep could go a long way.
    It's also worth pointing out that this should really be an evolving process. Depending on your starting point, of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you've got a CI mechanism in place, you're certainly a long way down that path. Nevertheless, we'd recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912 For now, in this post, we'll look at the following areas for your checklist: You and Your Team; Environments; The Deployment Process; Rollback and Recovery; Development Practices.

    You and Your Team

    It's a cliché in the DevOps community that "It's not all about processes and tools, really it's all about a culture". As stated in this DevOps report from Puppet Labs: "DevOps processes and tooling contribute to high performance, but these practices alone aren't enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn't understood outside of a specific group". Like most clichés, there's truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it's an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as:

    2008 to present: overall development costs reduced by 40%
    Number of programs under development increased by 140%
    Development costs per program down 78%
    Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%)

    But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing: that they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you're ever struggling to convince someone of the value, I'd strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org. I've spoken to many customers who have implemented database CI who describe their deployment process as "The point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that's finished we revert to manual." This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA "We're changing everything you do and your toolset next week, to automate most of your role – that's okay isn't it?" isn't likely to go down well.
    There's some work here to bring him/her onside – to explain what you're doing, why there will still be control of the deployment process and so on. Or of course, if you're the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you'd like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out. As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager's manager too. As mentioned, unless there's buy-in "from the top", you're going to hit problems when the implementation starts to get rocky (and what tool/process implementations don't get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress. Actions: Get your DBA involved (or whoever looks after live deployments) and discuss what you're planning to do or, if you're the DBA yourself, get the dev team up-to-speed with your plans. Get your boss involved too and make sure he/she is bought in to the investment.

    Environments

    Where are you going to deploy to? And really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has "Production", but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I've seen every setup under the sun, and there is often a big difference between "What we want, to do continuous delivery properly" and "What we're currently stuck with". Some of these differences are:

    What we want: Each developer with their own dedicated database environment.
    What we've got: A single shared "development" environment, used by everyone at once.

    What we want: An Integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit-tests running on that machine.
    What we've got: In fact if you have a CI process running, you're likely to have some sort of integration server running (even if you don't call it that!). Whether you have a full suite of unit tests running is a different question…

    What we want: Separate QA environment used explicitly for manual testing prior to release.
    What we've got: "We just test on the dev environments, or maybe pre-production."

    What we want: A proper pre-production (or "staging") box that matches production as closely as possible.
    What we've got: Hopefully a pre-production box of some sort. But does it match production closely!?

    What we want: A production environment reproducible from source control.
    What we've got: A production box which has drifted significantly from anything in source control.

    The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you're going to create and where they'll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you're working on a new, greenfield project, or trying to update an existing, brownfield application.
    There's a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have:

    Dedicated development databases,
    An Integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action],
    QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing,
    Pre-production – the environment you use to test the production release process,
    Production.

    * A note on the use of the word "automatic" – when carrying out automated deployments this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic in that it's not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user. Actions: Get your environments set up and ready. Set access permissions appropriately. Make sure everyone understands what the environments will be used for (it's not a "free-for-all" with all environments to be accessed, played with and changed by development).

    The Deployment Process

    As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers "How do your database changes get live? How does your manual process work?"

    1. Check that pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring it in to pre-prod.
    2. Again, use a schema compare tool to find the differences between that environment and the latest version of the database ready to go live (i.e. what the team have been developing). This generates a script.
    3. The user (generally, the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar.
    4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped).
    5. If all is working, run the script on production.*

    * This assumes there's no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com, if you're interested in testing early versions. There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can't automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process kicks in and automatically deploys that change to the live box. Not for the faint-hearted – and really not something we recommend.
    At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don't match), followed by a manual intervention, allowing for script approval by the DBA. Once he/she clicks "Okay, I'm happy for that to go live", the latter stages automatically take the script through to live. And anything in between of course – and other variations. But we'd strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out "What do we do now?", "What do we actually want?", "What will satisfy our needs for continuous delivery, while still maintaining some sort of continuous control over the process?" NB: Most of what we're discussing here is about production deployments. It's important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes. Actions: Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – "What do we do now?", "What do we actually want?", "What will satisfy our needs for continuous delivery, while still maintaining some sort of continuous control over the process?" Repeat for earlier environments (QA and so on).

    Rollback and Recovery

    If only every deployment went according to plan! Unfortunately they don't – and when things go wrong, you need a rollback or recovery plan for what you're going to do in that situation. Once you move to a more automated database deployment process, you're far more likely to be deploying more frequently than before. No longer once every 6 months; maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem. There are various options, which we'll explore in subsequent articles, things like:

    Immediately restore from backup,
    Have a pre-tested rollback script (remembering that really this is a "roll-forward" script – there's not really such a thing as a rollback script for a database!),
    Have fallback environments – for example, using a blue-green deployment pattern.

    Different options have pros and cons – some are easier to set up, some require more investment in infrastructure; and of course some work better than others (the key issue with using backups is the loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism. Actions: Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and requirements for a completely failsafe process.

    Development Practices

    This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and linked application.
So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern “Branch by abstraction”. Explained nicely here, by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss – by making incremental updates to the database backward compatible. Slides 103-108 of the following slidedeck, from Niek Bartholomeus explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage is a great read, if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515 But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter it’s much easier to instigate best practice from the start. Actions: For your business, work out how far down the path you want to go, amending your database development patterns to “best practice”. It’s a trade-off between implementing quality processes, and the necessity to do so (depending on how often you make complex changes). Socialise these changes with your development group. No-one likes having “best practice” changes imposed on them, so good to introduce these ideas and the rationale behind them early.   Summary The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work, and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider – mainly in the areas of “Getting the team ready for the changes that are coming” and “Planning our your pipeline, environments, patterns and practices for development”, though there will be more detail, depending on where you’re coming from – and where you want to get to. This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.
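    To make the branch-by-abstraction idea above a little more concrete, here is a rough sketch (plain JDBC, with made-up table and column names and a placeholder connection string – an illustration of the stepwise pattern, not code from the article) of splitting address columns out of a customer table in backward-compatible increments, where each step could ship as its own deployment and be rolled back without losing new data:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        // Illustration of incremental, backward-compatible schema change.
        // Table, column and connection details are invented for the example.
        public class IncrementalSplitExample {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection("jdbc:postgresql://localhost/appdb", "app", "secret");
                     Statement st = con.createStatement()) {

                    // Release N: add the new structure alongside the old columns.
                    st.execute("CREATE TABLE IF NOT EXISTS customer_address ("
                             + "customer_id INT PRIMARY KEY, street VARCHAR(200), city VARCHAR(100))");

                    // Release N+1: the application now writes to both places;
                    // backfill existing rows so reads can switch over safely.
                    st.execute("INSERT INTO customer_address (customer_id, street, city) "
                             + "SELECT id, street, city FROM customer "
                             + "WHERE id NOT IN (SELECT customer_id FROM customer_address)");

                    // Release N+2: switch reads to customer_address (application change only).
                    // Release N+3: once nothing reads the old columns, drop them.
                    // st.execute("ALTER TABLE customer DROP COLUMN street, DROP COLUMN city");
                }
            }
        }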

    Read the article

  • Superpower Your Touchpad Computer with Scrybe

    - by Matthew Guay
    Are you looking for a way to help your Touchpad computer make you more productive?  Here's a quick look at Scrybe, a new application from Synaptics that lets you superpower it. Touchpad devices have become increasingly interesting as they've included support for multi-touch gestures.  Scrybe takes it to the next level and lets you use your touchpad as an application launcher.  You can launch any application or website, or complete many common commands on your computer, with a simple gesture.  Scrybe works with most modern Synaptics touchpads, which are standard on most laptops and netbooks.  It is optimized for newer multi-touch touchpads, but can also work with standard single-touch touchpads.  It works on Windows 7, Vista, and XP, so chances are it will work with your laptop or netbook.

    Get Started With Scrybe

    Head over to the Scrybe website and download the latest version (link below).  You are asked to enter your email address, name, and information about your computer…but you actually only have to enter your email address.  Click Download when finished. Run the installer when it's downloaded.  It will automatically download the latest Synaptics driver for your touchpad and any other components needed for Scrybe.  Note that the Scrybe installer will ask to install the Yahoo! toolbar, so uncheck this to avoid adding this worthless browser toolbar.

    Using Scrybe

    To open an application or website with a gesture, press 3 fingers on your touchpad at once, or, if your touchpad doesn't support multi-touch gestures, press Ctrl+Alt and press 1 finger on your touchpad.  This will open the Scrybe input pane; start drawing a gesture, and you'll see it on the grey square.  The input pane shows some default gestures you can try. Here we drew an "M", which opens our default music player.  As soon as you finish the gesture and lift up your finger, Scrybe will open the application or website you selected. A notification balloon will let you know what gesture was performed. When you're entering your gesture, the input pane will show white "ink".  The "ink" will turn blue if the command is recognized, but will turn red if it isn't.  If Scrybe doesn't recognize your command, press 3 fingers and try again.

    Scrybe Control Panel

    You can open the Scrybe Control Panel to enter or change commands by entering a box-like gesture, or by right-clicking the Scrybe icon in your system tray and selecting "Scrybe Control Panel". Scrybe has many pre-configured gestures that you can preview and even practice. All of the gestures in the Popular tab are preset and cannot be changed.  However, the ones in the Favorites tab can be edited.  Select the gesture you wish to edit, and click the gear icon to change it.  Here we changed the email gesture to open Hotmail instead of the default Yahoo Mail. Scrybe can also help you perform many common Windows commands such as Copy and Undo.  Select the Tools tab to see all of these commands. Scrybe has many settings you may wish to change.  Select the Preferences button in the Control Panel to change these.  Here are some of the settings we changed: Uncheck "Display a message" to turn off the tooltip notifications when you enter a gesture. Uncheck "Show symbol hints" to turn off the sidebar on the input pane. Select the search engine you want to open with the Search Gesture – the default is Yahoo, but you can choose your favorite.

    Adding a new Scrybe Gesture

    The default Scrybe options are useful, but the best part is that you can assign gestures to your own programs or websites.
    Open the Scrybe control panel, and click the plus sign on the bottom left corner.  Enter a name for your gesture, and then choose if it is for a website or an application. If you want the gesture to open a website, enter the address in the box. Alternately, if you want your gesture to open an application, select Launch Application and then either enter the path to the application, or click the button beside the Launch field and browse to it. Now click the down arrow on the blue box and choose one of the gestures for your application or website. Your new gesture will show up under the Favorites tab in the Scrybe control panel, and you can use it whenever you want from Scrybe, or practice the gesture by selecting the Practice button.

    Conclusion

    If you enjoy multi-touch gestures, you may find Scrybe very useful on your laptop or netbook.  Scrybe recognizes gestures fairly easily, even if you don't enter them perfectly correctly.  Just like pinch-to-zoom and two-finger scroll, Scrybe can quickly become something you miss on other laptops. Download Scrybe (registration required)

    Read the article

  • Coded ui to measure performance

    - by Mike Weber
    I have been tasked with using coded UI to measure performance on a proprietary windows desktop application. The need is to measure how long it takes for the next page/screen to display after a user clicks on a control. For example - a user enters their ID and PW and clicks sign-in. The need is to measure how long it takes for the next screen to display when the user clicks the sign-in button. I understand the need to define what indicates the screen is loaded and ready for use. One approach is to use control.WaitForControlReady and use BeginTimer/EndTimer. Is coded ui a dependable and accurate way of measuring time? Is WaitForControlReady the best method to determine when a control is ready for use?

    Read the article

< Previous Page | 349 350 351 352 353 354 355 356 357 358 359 360  | Next Page >