Search Results

Search found 14206 results on 569 pages for 'compressed folder'.


  • Apache2 mod_proxy to remote Tomcat7 - slow response

    - by 12N
    Been stuck with this one for a few days. Will try to provide as much information as possible, but please feel free to ask for extra detail.

    I have 2 VMs behind a NAT, 192.168.0.100 and 192.168.0.102, both running Ubuntu 11.04 x64. The first one is mapped to the exterior and is our webserver; it has one Apache/2.2.17 install with several vhosts to serve static content, plus mod_jk for load balancing. The second one has a Tomcat 7 install with several J2EE REST webservices but no Apache - requests are expected to be passed directly from the .100 Apache to the .102 Tomcat. My intention is to prepare a clustered Tomcat environment.

    My problem: requests reach 192.168.0.100 with no trouble whatsoever, but then take about 100 seconds to actually arrive at .102 - by which time Apache has already timed out, although Tomcat receives and processes the request perfectly normally. This happens with mod_jk, mod_proxy and mod_proxy_ajp alike. I have no idea why: there are no firewalls on either machine, both are pingable - more than that, there are active NFS shares working like a charm - and a mod_proxy test showed that requests originating directly from .100 are processed normally. To add insult to injury, a similar environment is set up on our office network and everything works perfectly there. The only difference? We have no IP translation at the office and do everything by internal addresses - I don't know if that's relevant in any way.

    Some configs. Apache vhost:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/
            ServerName www.example.com
            ProxyRequests Off
            <Proxy *>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride all
                Order allow,deny
                allow from all
            </Proxy>
            ProxyPass /bork http://192.168.0.102:8080/bork
            ProxyPassReverse /bork http://192.168.0.102:8080/bork
            LogLevel debug
            CustomLog ${APACHE_LOG_DIR}/api_access.log combined
            ErrorLog ${APACHE_LOG_DIR}/api_error.log
        </VirtualHost>

    Tomcat connectors:

        <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
        <Connector port="8009" enableLookups="false" redirectPort="8443" protocol="AJP/1.3" />

    And a debug log from Apache, from a test using mod_proxy_ajp; the behavior is pretty much the same in mod_proxy, at least regarding the delay.
Please note that tomcat eventually receives and processes the request, more or less when the log starts being updated again: [Sun May 06 14:40:33 2012] [debug] proxy_util.c(1506): [client 188.81.234.2] proxy: ajp: found worker ajp://192.168.0.102:8008/bork for ajp://192.168.0.102:8008/bork/SSOIdentityProviderSoap [Sun May 06 14:40:33 2012] [debug] mod_proxy.c(1015): Running scheme ajp handler (attempt 0) [Sun May 06 14:40:33 2012] [debug] mod_proxy_ajp.c(661): proxy: AJP: serving URL ajp://192.168.0.102:8008/bork/SSOIdentityProviderSoap [Sun May 06 14:40:33 2012] [debug] proxy_util.c(2011): proxy: AJP: has acquired connection for (192.168.0.102) [Sun May 06 14:40:33 2012] [debug] proxy_util.c(2067): proxy: connecting ajp://192.168.0.102:8008/bork/SSOIdentityProviderSoap to 192.168.0.102:8008 [Sun May 06 14:40:33 2012] [debug] proxy_util.c(2193): proxy: connected /bork/SSOIdentityProviderSoap to 192.168.0.102:8008 [Sun May 06 14:40:33 2012] [debug] proxy_util.c(2444): proxy: AJP: fam 2 socket created to connect to 192.168.0.102 [Sun May 06 14:40:33 2012] [debug] ajp_header.c(224): Into ajp_marshal_into_msgb [Sun May 06 14:40:33 2012] [debug] ajp_header.c(290): ajp_marshal_into_msgb: Header[0] [Accept-Encoding] = [gzip,deflate] [Sun May 06 14:40:33 2012] [debug] ajp_header.c(290): ajp_marshal_into_msgb: Header[1] [Content-Type] = [text/xml;charset=UTF-8] [Sun May 06 14:40:33 2012] [debug] ajp_header.c(290): ajp_marshal_into_msgb: Header[2] [SOAPAction] = [""] [Sun May 06 14:40:33 2012] [debug] ajp_header.c(290): ajp_marshal_into_msgb: Header[3] [User-Agent] = [Jakarta Commons-HttpClient/3.1] [Sun May 06 14:40:33 2012] [debug] ajp_header.c(290): ajp_marshal_into_msgb: Header[4] [Host] = [www.example.com] [Sun May 06 14:40:33 2012] [debug] ajp_header.c(290): ajp_marshal_into_msgb: Header[5] [Content-Length] = [520] [Sun May 06 14:40:33 2012] [debug] ajp_header.c(450): ajp_marshal_into_msgb: Done [Sun May 06 14:40:33 2012] [debug] mod_proxy_ajp.c(267): proxy: APR_BUCKET_IS_EOS [Sun May 06 14:40:33 2012] [debug] mod_proxy_ajp.c(272): proxy: data to read (max 8186 at 4) [Sun May 06 14:40:33 2012] [debug] mod_proxy_ajp.c(287): proxy: got 520 bytes of data [Sun May 06 14:40:33 2012] [debug] ajp_header.c(687): ajp_read_header: ajp_ilink_received 06 [Sun May 06 14:40:33 2012] [debug] ajp_header.c(697): ajp_parse_type: got 06 [Sun May 06 14:40:37 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 2 in child 5916 for worker ajp://192.168.0.100:8008/coding [Sun May 06 14:40:37 2012] [debug] proxy_util.c(1837): proxy: worker ajp://192.168.0.100:8008/coding already initialized [Sun May 06 14:40:37 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 2 in child 5916 for (192.168.0.100) [Sun May 06 14:40:37 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 3 in child 5916 for worker http://192.168.0.102:8080 [Sun May 06 14:40:37 2012] [debug] proxy_util.c(1837): proxy: worker http://192.168.0.102:8080 already initialized [Sun May 06 14:40:37 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 3 in child 5916 for (192.168.0.102) [Sun May 06 14:40:37 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 4 in child 5916 for worker ajp://192.168.0.102:8008/bork [Sun May 06 14:40:37 2012] [debug] proxy_util.c(1837): proxy: worker ajp://192.168.0.102:8008/bork already initialized [Sun May 06 14:40:37 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 4 in child 5916 for (192.168.0.102) [Sun May 06 
14:40:38 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 2 in child 5918 for (192.168.0.100) [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 3 in child 5918 for worker http://192.168.0.102:8080 [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1837): proxy: worker http://192.168.0.102:8080 already initialized [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 3 in child 5918 for (192.168.0.102) [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 4 in child 5918 for worker ajp://192.168.0.102:8008/bork [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1837): proxy: worker ajp://192.168.0.102:8008/bork already initialized [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 4 in child 5918 for (192.168.0.102) [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 2 in child 5917 for worker ajp://192.168.0.100:8008/coding [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1837): proxy: worker ajp://192.168.0.100:8008/coding already initialized [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 2 in child 5917 for (192.168.0.100) [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 3 in child 5917 for worker http://192.168.0.102:8080 [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1837): proxy: worker http://192.168.0.102:8080 already initialized [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 3 in child 5917 for (192.168.0.102) [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1818): proxy: grabbed scoreboard slot 4 in child 5917 for worker ajp://192.168.0.102:8008/bork [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1837): proxy: worker ajp://192.168.0.102:8008/bork already initialized [Sun May 06 14:40:38 2012] [debug] proxy_util.c(1934): proxy: initialized single connection worker 4 in child 5917 for (192.168.0.102) [Sun May 06 14:42:09 2012] [debug] ajp_header.c(687): ajp_read_header: ajp_ilink_received 04 [Sun May 06 14:42:09 2012] [debug] ajp_header.c(697): ajp_parse_type: got 04 [Sun May 06 14:42:09 2012] [debug] ajp_header.c(516): ajp_unmarshal_response: status = 200 [Sun May 06 14:42:09 2012] [debug] ajp_header.c(537): ajp_unmarshal_response: Number of headers is = 1 [Sun May 06 14:42:09 2012] [debug] ajp_header.c(599): ajp_unmarshal_response: Header[0] [Content-Type] = [text/xml;charset=utf-8] [Sun May 06 14:42:09 2012] [debug] ajp_header.c(609): ajp_unmarshal_response: ap_set_content_type done [Sun May 06 14:42:09 2012] [debug] ajp_header.c(687): ajp_read_header: ajp_ilink_received 03 [Sun May 06 14:42:09 2012] [debug] ajp_header.c(697): ajp_parse_type: got 03 [Sun May 06 14:42:09 2012] [debug] ajp_header.c(687): ajp_read_header: ajp_ilink_received 03 [Sun May 06 14:42:09 2012] [debug] ajp_header.c(697): ajp_parse_type: got 03 [Sun May 06 14:42:09 2012] [debug] ajp_header.c(687): ajp_read_header: ajp_ilink_received 05 [Sun May 06 14:42:09 2012] [debug] ajp_header.c(697): ajp_parse_type: got 05 [Sun May 06 14:42:09 2012] [debug] mod_deflate.c(615): [client 188.81.234.2] Zlib: Compressed 447 to 255 : URL /bork/SSOIdentityProviderSoap [Sun May 06 14:42:09 2012] [debug] mod_proxy_ajp.c(570): proxy: got response from (null) (192.168.0.102) [Sun May 06 14:42:09 2012] [debug] proxy_util.c(2029): proxy: AJP: has released connection for 
(192.168.0.102) [Sun May 06 14:42:09 2012] [info] [client 188.81.234.2] Request body read timeout

    Was wondering if anyone could provide some advice, perhaps even point out a hideous, horrible configuration error? Thanks in advance!
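    Reading the log, the 520-byte request body leaves Apache at 14:40:33 but the AJP response only comes back at 14:42:09, and the request ends with a body read timeout - a pattern that often points at the network path (packet loss or MTU/fragmentation across the NAT) rather than at either server. A rough sketch of two checks from .100 plus explicit proxy timeouts that may help narrow it down; note also that the log talks to port 8008 while the AJP connector above listens on 8009, which is worth double-checking:

        # Will a full-size packet cross the NAT without fragmenting?
        ping -M do -s 1472 192.168.0.102

        # Watch the AJP conversation while reproducing one slow request
        tcpdump -ni eth0 host 192.168.0.102 and port 8009

        # Fail fast instead of hanging for minutes (standard mod_proxy worker parameters)
        ProxyPass /bork ajp://192.168.0.102:8009/bork connectiontimeout=5 timeout=300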


  • XenServer 5.5 U2 a bit unstable with an unstable W2003 VM

    - by twistedbrain
    In the last week I had to reboot the host system twice, the second time by means of the power button. The system is a Dell PE 6950 (4 dual-core Opterons, 2.8 GHz, 16 GB RAM, 900 GB disk in a RAID 10 array composed of 4 450 GB 15000 rpm disks) with XenServer 5.5 U2. We're still setting it up, and right now a Windows Server 2003 32-bit VM is working in production with 2 GB RAM, 3 vCPUs and about 200 GB disk in 4 partitions (12 GB boot, 20 GB programs, 80 GB user data, 80 GB other data).

    The first time, I was compressing many Windows 2003 folders (some tens of GB, by means of the W2003 compressed-folder option) from a remote Windows console, and some hours earlier my colleague had installed the Backup Exec agent (it was already installed), which required a reboot that was still pending. The console stopped responding; it was no longer possible to connect by remote console or from the console in XenCenter. It was still possible to use the VM's shared folders and the programs on it (2 DBs and a GIS program) from the network, but the print server no longer worked and I couldn't trigger a remote reboot from the other domain controller hosts. I couldn't stop the virtual machine from XenCenter or from the host's command line, even when forcing the reboot. I had to reboot the host server.

    Yesterday was worse. I installed a template for another VM, CentOS 5.3, and put the DVD in the host's drive. After booting the new VM but before the install, I ran a media check on the DVD, and the W2003 VM began to respond slowly (I was connected through a remote administration console). Its task manager showed only mid or low load - that is, only the first of the 3 vCPUs was loaded (about 70%) while the other two were at about 20% - and the disk I/O was not heavy either. The users were not happy, because they could no longer open the MS Word docs on the server. I immediately stopped the DVD check (to do that I had to force the CentOS 5.3 VM to stop). Some users could then use their docs again, but others still had problems, so I decided to reboot the VM - but it wouldn't stop, not from XenCenter, not from the command line, not even when forced. Then I tried to reboot the host, but that didn't work either, neither from XenCenter nor from the host prompt (shutdown -r now as root said it was shutting down, but then nothing happened). So I had to power off with the server's power button (first I tried some Magic SysRq combinations, but I found that this Xen kernel isn't compiled with that option enabled).

    What would you suggest about my problem - what can I look at, search for and check? There are no errors or warnings in the W2003 VM logs to explain what happened.
Some more exciting, amusing and inspiring words of poetry (the facts happened around 11 am): \# egrep -i 'err|warning' xensource.log [20100219 10:32:05.597|debug|culo|6301 unix-RPC|VBD.plug R:c81bcda701f6|xenops] watch: watching xenstore paths: [ /xapi/0/frontend/vbd/51712/hotplug; /local/domain/0/backend/vbd/0/51712/tapdisk-error ] with timeout 1200.000000 seconds [20100219 10:32:05.597|debug|culo|6301 unix-RPC|VBD.plug R:c81bcda701f6|xenops] watch: fired on /local/domain/0/backend/vbd/0/51712/tapdisk-error [20100219 10:32:14.314|debug|culo|6335 unix-RPC|VBD.unplug R:9258f54578d6|xenops] watch: watching xenstore paths: [ /local/domain/0/backend/vbd/0/51712/shutdown-done; /local/domain/0/error/device/vbd/51712/error ] with timeout 1200.000000 seconds [20100219 10:32:14.337|debug|culo|6335 unix-RPC|VBD.unplug R:9258f54578d6|xenops] xenstore-rm /local/domain/0/error/backend/vbd/0 [20100219 10:32:14.337|debug|culo|6335 unix-RPC|VBD.unplug R:9258f54578d6|xenops] xenstore-rm /local/domain/0/error/device/vbd/51712 [20100219 10:32:14.338|debug|culo|6335 unix-RPC|VBD.unplug R:9258f54578d6|xenops] watch: fired on /local/domain/0/error/device/vbd/51712/error [20100219 10:53:48.903|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|helpers] Ignoring exception: INTERNAL_ERROR: [ Xb.Noent ] while Vmops.destroy_domain: Destroying domid 14 guest session [20100219 10:53:52.048|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/0/error/backend/tap/14 [20100219 10:53:52.048|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/14/error/device/vbd/51744 [20100219 10:53:52.085|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/0/error/backend/tap/14 [20100219 10:53:52.086|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/14/error/device/vbd/51728 [20100219 10:53:52.122|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/0/error/backend/tap/14 [20100219 10:53:52.122|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/14/error/device/vbd/51712 [20100219 10:53:52.127|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/0/error/backend/vbd/14 [20100219 10:53:52.128|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/14/error/device/vbd/51760 [20100219 10:53:52.496|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] Device.Vif.hard_shutdown about to blow away backend and error paths [20100219 10:53:52.497|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/0/error/backend/vif/14 [20100219 10:53:52.497|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/14/error/device/vif/0 [20100219 10:53:53.385|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] Device.Vif.hard_shutdown about to blow away backend and error paths [20100219 10:53:53.386|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/0/error/backend/vif/14 [20100219 10:53:53.386|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/14/error/device/vif/1 [20100219 10:53:53.389| warn|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|hotplug] Warning, deleting 'vif' entry from /xapi/14/hotplug/vif/0 [20100219 10:53:53.391| warn|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|hotplug] Warning, deleting 'vif' entry from 
/xapi/14/hotplug/vif/1 [20100219 11:20:49.766|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|helpers] Ignoring exception: INTERNAL_ERROR: [ Xb.Noent ] while Vmops.destroy_domain: Destroying domid 11 guest session [20100219 11:20:50.339|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/0/error/backend/tap/11 [20100219 11:20:50.339|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/11/error/device/vbd/832 [20100219 11:20:50.360|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/0/error/backend/tap/11 [20100219 11:20:50.360|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/11/error/device/vbd/768 [20100219 11:20:50.366|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/0/error/backend/vbd/11 [20100219 11:20:50.366|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/11/error/device/vbd/5696 [20100219 11:20:50.753|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] Device.Vif.hard_shutdown about to blow away backend and error paths [20100219 11:20:50.754|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/0/error/backend/vif/11 [20100219 11:20:50.754|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/11/error/device/vif/1 [20100219 11:20:50.757| warn|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|hotplug] Warning, deleting 'vif' entry from /xapi/11/hotplug/vif/1 [20100219 11:28:13.803|debug|culo|6610 inet-RPC|Connection to VM console R:e9f8b76e8975|console] error: INTERNAL_ERROR: [ Unix.Unix_error(63, "connect", "") ]
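    For the next time a guest wedges like this, the xe CLI can usually force the issue without reaching for the power button. A rough sketch of the usual escalation on XenServer 5.5 - the UUID and domain id are placeholders to be looked up first, and the debug helper's path assumes a stock install:

        # Find the stuck VM and try a forced shutdown
        xe vm-list params=uuid,name-label,power-state
        xe vm-shutdown uuid=<vm-uuid> --force

        # If that hangs, clear the power state and tear the domain down directly
        xe vm-reset-powerstate uuid=<vm-uuid> --force
        list_domains
        /opt/xensource/debug/destroy_domain -domid <domid>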


  • Installing Oracle 11gR2 on RHEL 6.2

    - by Chris
    Hello all, I'm having some difficulty installing Oracle 11gR2 on RHEL 6.2. I have compiled a giant list of every single step I have taken so far. I installed RHEL 6.2 on VMware and let it do its easy install automatically; I selected 4 GB of memory, a max disk size of 80 GB and 2 processors. Sorry for the bad styling - copy/paste isn't working correctly. The version of Oracle I downloaded is Linux x86-64 11.2.0.1, and I am installing this on a local machine, NOT a remote machine. I followed this documentation: http://docs.oracle.com/cd/E11882_01/install.112/e24326/toc.htm - I bolded the steps I was least sure about from my research.

    Easy-installed RHEL 6.2 for VMware. Registered with Red Hat so I can get updates. Reinstalled vmware-tools by pressing Enter at every choice. sudo yum update - at the end something about a GPG key, selected y then y.

    Checked memory requirements:

        grep MemTotal /proc/meminfo
        MemTotal: 3921368 kb
        uname -m
        x86_64
        grep SwapTotal /proc/meminfo
        SwapTotal: 6160376 kb
        free
                     total    used     free  shared  buffers   cached
        Mem:       3921368 2032012  1889356       0    76216  1533268
        -/+ buffers/cache:  422528  3498840
        Swap:      6160376       0  6160376
        df -h /dev/shm
        Filesystem  Size  Used  Avail  Use%  Mounted on
        tmpfs       1.9G  276K   1.9G    1%  /dev/shm
        df -h /tmp
        Filesystem  Size  Used  Avail  Use%  Mounted on
        /dev/sda2    73G  2.7G    67G    4%  /
        df -h
        Filesystem  Size  Used  Avail  Use%  Mounted on
        /dev/sda2    73G  2.7G    67G    4%  /
        tmpfs       1.9G  276K   1.9G    1%  /dev/shm
        /dev/sda1   291M   58M   219M   21%  /boot

    All looked fine to me, except maybe for swap?

    Software requirements:

        cat /proc/version
        Linux version 2.6.32-220.el6.x86_64 ([email protected]) (gcc version 4.4.5 20110214 (Red Hat 4.4.5-6) (GCC) ) #1 SMP Wed Nov 9 08:03:13 EST 2011
        uname -r
        2.6.32-220.el6.x86_64

    (same as above, but whatever). According to the tutorial, on Red Hat Enterprise Linux 6 this should be 2.6.32-71.el6.x86_64 or later. These are the versions of software I have installed:

        binutils-2.20.51.0.2-5.28.el6.x86_64
        compat-libcap1-1.10-1.x86_64
        compat-libstdc++-33-3.2.3-69.el6.x86_64
        compat-libstdc++-33.i686 0:3.2.3-69.el6
        gcc-4.4.6-3.el6.x86_64
        gcc-c++.x86_64 0:4.4.6-3.el6
        glibc-2.12-1.47.el6_2.12.x86_64
        glibc-2.12-1.47.el6_2.12.i686
        glibc-devel-2.12-1.47.el6_2.12.x86_64
        glibc-devel.i686 0:2.12-1.47.el6_2.12
        ksh.x86_64 0:20100621-12.el6_2.1
        libgcc-4.4.6-3.el6.x86_64
        libgcc-4.4.6-3.el6.i686
        libstdc++-4.4.6-3.el6.x86_64
        libstdc++.i686 0:4.4.6-3.el6
        libstdc++-devel.i686 0:4.4.6-3.el6
        libstdc++-devel-4.4.6-3.el6.x86_64
        libaio-0.3.107-10.el6.x86_64
        libaio-0.3.107-10.el6.i686
        libaio-devel-0.3.107-10.el6.x86_64
        libaio-devel-0.3.107-10.el6.i686
        make-3.81-19.el6.x86_64
        sysstat-9.0.4-18.el6.x86_64
        unixODBC-2.2.14-11.el6.x86_64
        unixODBC-devel-2.2.14-11.el6.x86_64
        unixODBC-devel-2.2.14-11.el6.i686
        unixODBC-2.2.14-11.el6.i686

    Step 8 (I probably screwed up here, or in step 9):
        /usr/sbin/groupadd oinstall
        /usr/sbin/groupadd dba     (not sure why this isn't in the tutorial)
        /usr/sbin/useradd -g oinstall -G dba oracle
        passwd oracle
        /sbin/sysctl -a | grep sem
        kernel.sem = 250 32000 32 128
        /sbin/sysctl -a | grep shm
        kernel.shmmax = 68719476736
        kernel.shmall = 4294967296
        kernel.shmmni = 4096
        vm.hugetlb_shm_group = 0
        /sbin/sysctl -a | grep file-max
        fs.file-max = 384629
        /sbin/sysctl -a | grep ip_local_port_range
        net.ipv4.ip_local_port_range = 32768 61000
        /sbin/sysctl -a | grep rmem_default
        net.core.rmem_default = 124928
        /sbin/sysctl -a | grep rmem_max
        net.core.rmem_max = 131071
        /sbin/sysctl -a | grep wmem_max
        net.core.wmem_max = 131071
        /sbin/sysctl -a | grep wmem_default
        net.core.wmem_default = 124928

    Here is my sysctl.conf file; I only added the items that were bigger:

        # Kernel sysctl configuration file for Red Hat Linux
        #
        # For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
        # sysctl.conf(5) for more details.

        # Controls IP packet forwarding
        net.ipv4.ip_forward = 0

        # Controls source route verification
        net.ipv4.conf.default.rp_filter = 1

        # Do not accept source routing
        net.ipv4.conf.default.accept_source_route = 0

        # Controls the System Request debugging functionality of the kernel
        kernel.sysrq = 0

        # Controls whether core dumps will append the PID to the core filename.
        # Useful for debugging multi-threaded applications.
        kernel.core_uses_pid = 1

        # Controls the use of TCP syncookies
        net.ipv4.tcp_syncookies = 1

        # Disable netfilter on bridges.
        net.bridge.bridge-nf-call-ip6tables = 0
        net.bridge.bridge-nf-call-iptables = 0
        net.bridge.bridge-nf-call-arptables = 0

        # Controls the maximum size of a message, in bytes
        kernel.msgmnb = 65536

        # Controls the default maxmimum size of a mesage queue
        kernel.msgmax = 65536

        # Controls the maximum shared segment size, in bytes
        kernel.shmmax = 68719476736

        # Controls the maximum number of shared memory segments, in pages
        kernel.shmall = 4294967296

        fs.aio-max-nr = 1048576
        fs.file-max = 6815744
        kernel.sem = 250 32000 100 128
        net.ipv4.ip_local_port_range = 9000 65500
        net.core.rmem_default = 262144
        net.core.rmem_max = 4194304
        net.core.wmem_default = 262144
        net.core.wmem_max = 1048576

    Then applied it:

        /sbin/sysctl -p
        net.ipv4.ip_forward = 0
        net.ipv4.conf.default.rp_filter = 1
        net.ipv4.conf.default.accept_source_route = 0
        kernel.sysrq = 0
        kernel.core_uses_pid = 1
        net.ipv4.tcp_syncookies = 1
        error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key
        error: "net.bridge.bridge-nf-call-iptables" is an unknown key
        error: "net.bridge.bridge-nf-call-arptables" is an unknown key
        kernel.msgmnb = 65536
        kernel.msgmax = 65536
        kernel.shmmax = 68719476736
        kernel.shmall = 4294967296
        fs.aio-max-nr = 1048576
        fs.file-max = 6815744
        kernel.sem = 250 32000 100 128
        net.ipv4.ip_local_port_range = 9000 65500
        net.core.rmem_default = 262144
        net.core.rmem_max = 4194304
        net.core.wmem_default = 262144
        net.core.wmem_max = 1048576

        su - oracle
        ulimit -Sn    1024
        ulimit -Hn    1024
        ulimit -Su    1024
        ulimit -Hu    30482
        ulimit -Su    1024
        ulimit -Ss    10240
        ulimit -Hs    unlimited

        su -
        nano /etc/security/limits.conf

    Added to the end of the file:

        oracle soft nproc 2047
        oracle hard nproc 16384
        oracle soft nofile 1024
        oracle hard nofile 65536
        oracle soft stack 10240

        exit
        exit
        su -
        mkdir -p /app/
        chown -R oracle:oinstall /app/
        chmod -R 775 /app/

    Step 9:
    THIS IS PROBABLY WHERE I MESSED UP. I exited out of the root account, so I was back in my account chris, then I did su - oracle:

        echo $SHELL
        /bin/bash
        umask
        0022

    (so it should already be set to what is necessary). Also, from what I have read, I do not need to set the DISPLAY variable because I'm installing this on the localhost. I then opened oracle's .bash_profile and changed it to the following:

        # .bash_profile

        # Get the aliases and functions
        if [ -f ~/.bashrc ]; then
            . ~/.bashrc
        fi

        # User specific environment and startup programs
        PATH=$PATH:$HOME/bin; export PATH
        ORACLE_BASE=/app/oracle
        ORACLE_SID=orcl
        export ORACLE_BASE ORACLE_SID

    I then shut down the virtual machine, shared my desktop folder from my Windows 7, then turned the virtual machine back on, logged in as chris and opened a terminal. Then su -; for some reason the shared folder didn't appear, so I reinstalled vmware-tools again and restarted, then same as before:

        su -
        cp -R linux_oracle/database /db; chown -R oracle:oinstall /db; chmod -R 775 /db;
        ll /db
        drwxrwxr-x. 8 oracle oinstall 4096 Jun 5 06:20 database
        exit
        su - oracle
        cd /db/database
        ./runInstaller

    AND FINALLY THE INFAMOUS JAVA:132 ERROR MESSAGE:

        Starting Oracle Universal Installer...
        Checking Temp space: must be greater than 80 MB. Actual 65646 MB Passed
        Checking swap space: must be greater than 150 MB. Actual 6015 MB Passed
        Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
        Preparing to launch Oracle Universal Installer from /tmp/OraInstall2012-06-05_06-47-12AM. Please wait ...
        [oracle@localhost database]$ Exception in thread "main" java.lang.UnsatisfiedLinkError: /tmp/OraInstall2012-06-05_06-47-12AM/jdk/jre/lib/i386/xawt/libmawt.so: libXext.so.6: cannot open shared object file: No such file or directory
            at java.lang.ClassLoader$NativeLibrary.load(Native Method)
            at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1751)
            at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1647)
            at java.lang.Runtime.load0(Runtime.java:769)
            at java.lang.System.load(System.java:968)
            at java.lang.ClassLoader$NativeLibrary.load(Native Method)
            at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1751)
            at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1668)
            at java.lang.Runtime.loadLibrary0(Runtime.java:822)
            at java.lang.System.loadLibrary(System.java:993)
            at sun.security.action.LoadLibraryAction.run(LoadLibraryAction.java:50)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.awt.Toolkit.loadLibraries(Toolkit.java:1509)
            at java.awt.Toolkit.<clinit>(Toolkit.java:1530)
            at com.jgoodies.looks.LookUtils.isLowResolution(Unknown Source)
            at com.jgoodies.looks.LookUtils.<clinit>(Unknown Source)
            at com.jgoodies.looks.plastic.PlasticLookAndFeel.<clinit>(PlasticLookAndFeel.java:122)
            at java.lang.Class.forName0(Native Method)
            at java.lang.Class.forName(Class.java:242)
            at javax.swing.SwingUtilities.loadSystemClass(SwingUtilities.java:1783)
            at javax.swing.UIManager.setLookAndFeel(UIManager.java:480)
            at oracle.install.commons.util.Application.startup(Application.java:758)
            at oracle.install.commons.flow.FlowApplication.startup(FlowApplication.java:164)
            at oracle.install.commons.flow.FlowApplication.startup(FlowApplication.java:181)
            at oracle.install.commons.base.driver.common.Installer.startup(Installer.java:265)
            at oracle.install.ivw.db.driver.DBInstaller.startup(DBInstaller.java:114)
            at oracle.install.ivw.db.driver.DBInstaller.main(DBInstaller.java:132)
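    The stack trace itself points at the cause: the installer launches a bundled 32-bit JRE (note the jdk/jre/lib/i386 path) and cannot load libXext.so.6, which a default RHEL 6 x86_64 install does not ship in 32-bit form. A likely fix, assuming the stock RHEL repos, is to pull in the 32-bit X11 client libraries and rerun the installer as oracle:

        yum install libX11.i686 libXau.i686 libxcb.i686 libXext.i686 libXi.i686 libXtst.i686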


  • Mandatory profile on Terminal server fails to load. Userenv.log debug

    - by Datapimp23
    Hi, we're having a lot of corrupted profiles lately on our profile share. At the moment I have no clue why, but I decided to switch to one mandatory profile, since the users can all use the same one and there is no need to have separate profiles for each user. Here's what I did. I logged into the Terminal Server with a new user and configured some stuff (imported certificates and a few files). Then I logged out. Later, as admin, I copied the profile to another server and renamed it to bsilo. I made sure the user hive settings were adjusted; everyone had access to the hive. I shared the bsilo folder with full access for everyone, and set the NTFS permissions to read, read & execute, and list folder contents for domain users. I also renamed NTUSER.DAT to NTUSER.MAN. Now I set an env variable %manprofile% on the Terminal Server that points to \\server\bsilo\ntuser.man, and set that env var as the Terminal Services profile path for a test user. When I log in as the user I get the following output: "The system cannot find the path specified." Can somebody point me in the right direction? Thanks.

        USERENV(1774.d18) 15:52:39:724 InitializePolicyProcessing: Initialised Machine Mutex/Events USERENV(1774.d18) 15:52:39:724 InitializePolicyProcessing: Initialised User Mutex/Events USERENV(1774.d18) 15:52:39:724 LibMain: Process Name: \??\C:\WINDOWS\system32\winlogon.exe USERENV(1774.d18) 15:52:48:005 LoadUserProfile: Yes, we can impersonate the user. Running as self USERENV(1774.d18) 15:52:48:005 ========================================================= USERENV(1774.d18) 15:52:48:005 LoadUserProfile: Entering, hToken = <0x340, lpProfileInfo = 0x6e5d8 USERENV(1774.d18) 15:52:48:005 LoadUserProfile: lpProfileInfo-dwFlags = <0x0 USERENV(1774.d18) 15:52:48:005 LoadUserProfile: lpProfileInfo-lpUserName = USERENV(1774.d18) 15:52:48:005 LoadUserProfile: lpProfileInfo-lpProfilePath = <\server\bsilo\ntuser.man USERENV(1774.d18) 15:52:48:005 LoadUserProfile: lpProfileInfo-lpDefaultPath = <\BDPINF5\netlogon\Default User USERENV(1774.d18) 15:52:48:005 LoadUserProfile: NULL server name USERENV(1774.d18) 15:52:48:005 LoadUserProfile: no thread token found, impersonating self. USERENV(1774.d18) 15:52:48:005 GetInterface: Returning rpc binding handle USERENV(218.2f94) 15:52:48:005 IProfileSecurityCallBack: client authenticated. USERENV(218.2f94) 15:52:48:005 DropClientContext: Got client token 000009B4, sid = S-1-5-18 USERENV(218.2f94) 15:52:48:005 MIDL_user_allocate enter USERENV(218.2f94) 15:52:48:005 DropClientContext: load profile object successfully made USERENV(218.2f94) 15:52:48:005 DropClientContext: Returning 0 USERENV(1774.d18) 15:52:48:005 LoadUserProfile: Calling DropClientToken (as self) succeeded USERENV(1774.d18) 15:52:48:005 CProfileDialog::Initialize : Cookie generated USERENV(1774.d18) 15:52:48:005 CProfileDialog::Initialize : Endpoint generated USERENV(218.1f38) 15:52:48:005 IProfileSecurityCallBack: client authenticated. 
USERENV(218.1f38) 15:52:48:020 LoadUserProfileI: RPC end point IProfileDialog_9D36D6DD48F0578A2A41B23D7A982E63 USERENV(218.1f38) 15:52:48:020 In LoadUserProfileP USERENV(218.1f38) 15:52:48:020 LoadUserProfile: Running as client, sid = S-1-5-18 USERENV(218.1f38) 15:52:48:020 ========================================================= USERENV(218.1f38) 15:52:48:020 LoadUserProfile: Entering, hToken = <0x98c, lpProfileInfo = 0x9c940 USERENV(218.1f38) 15:52:48:020 LoadUserProfile: lpProfileInfo-dwFlags = <0x0 USERENV(218.1f38) 15:52:48:020 LoadUserProfile: lpProfileInfo-lpUserName = USERENV(218.1f38) 15:52:48:020 LoadUserProfile: lpProfileInfo-lpProfilePath = <\server\bsilo\ntuser.man USERENV(218.1f38) 15:52:48:020 LoadUserProfile: lpProfileInfo-lpDefaultPath = <\BDPINF5\netlogon\Default User USERENV(218.1f38) 15:52:48:020 LoadUserProfile: NULL server name USERENV(218.1f38) 15:52:48:020 LoadUserProfile: User sid: S-1-5-21-807756564-1922302612-1565739477-22627 USERENV(218.1f38) 15:52:48:020 CSyncManager::EnterLock USERENV(218.1f38) 15:52:48:020 CSyncManager::EnterLock: No existing entry found USERENV(218.1f38) 15:52:48:020 CSyncManager::EnterLock: New entry created USERENV(218.1f38) 15:52:48:020 CHashTable::HashAdd: S-1-5-21-807756564-1922302612-1565739477-22627 added in bucket 11 USERENV(218.1f38) 15:52:48:020 LoadUserProfile: Wait succeeded. In critical section. USERENV(218.1f38) 15:52:48:864 GetOldSidString: Failed to open profile profile guid key with error 2 USERENV(218.1f38) 15:52:48:864 GetProfileSid: No Guid - Sid Mapping available USERENV(218.1f38) 15:52:48:864 TestIfUserProfileLoaded: return with error 2. USERENV(218.1f38) 15:52:48:864 GetOldSidString: Failed to open profile profile guid key with error 2 USERENV(218.1f38) 15:52:48:864 GetProfileSid: No Guid - Sid Mapping available USERENV(218.1f38) 15:52:48:864 LoadUserProfile: Expanded profile path is \server\bsilo\ntuser.man USERENV(218.1f38) 15:52:48:880 ParseProfilePath: Entering, lpProfilePath = <\server\bsilo\ntuser.man USERENV(218.1f38) 15:52:48:880 CheckXForestLogon: checking x-forest logon, user handle = 2444 USERENV(218.1f38) 15:52:48:880 CheckXForestLogon: policy set to disable XForest check USERENV(218.1f38) 15:52:48:880 ParseProfilePath: Mandatory profile (.man extension) USERENV(218.1f38) 15:52:49:239 AbleToBypassCSC: Try to bypass CSC USERENV(218.1f38) 15:52:49:239 AbleToBypassCSC: tried NPAddConnection3ForCSCAgent. Error 2109 USERENV(218.1f38) 15:52:49:239 AbleToBypassCSC: Share \server\bsilo mapped to drive E. Returned Path E:\ntuser.man USERENV(218.1f38) 15:52:49:239 ParseProfilePath: CSC bypassed. Profile path E:\ntuser.man USERENV(218.1f38) 15:52:49:255 ParseProfilePath: Tick Count = 0 USERENV(218.1f38) 15:52:49:255 ParseProfilePath: GetFileAttributes found something with attributes <0x2022 USERENV(218.1f38) 15:52:49:255 ParseProfilePath: Found a file USERENV(218.1f38) 15:52:49:255 ReportError: Impersonating user. USERENV(218.1f38) 15:52:49:255 ReportError: Logging Error DETAIL - The system cannot find the path specified. 
USERENV(218.1f38) 15:52:49:255 GetInterface: Returning rpc binding handle USERENV(218.1f38) 15:52:49:255 ReportError: RPC End point IProfileDialog_9D36D6DD48F0578A2A41B23D7A982E63 USERENV(218.1f38) 15:52:49:255 ReportError: waiting on rpc async event USERENV(1774.2398) 15:52:49:255 ErrorDialogEx: Calling DialogBoxParam USERENV(1774.2398) 15:52:49:270 ErrorDlgProc:: DialogBoxParam USERENV(218.1f38) 15:52:52:177 RpcAsyncCompleteCall finished, status = 0 USERENV(218.1f38) 15:52:52:177 ReleaseInterface: Releasing rpc binding handle USERENV(218.1f38) 15:52:52:177 LoadUserProfile: ParseProfilePath returned FALSE USERENV(218.1f38) 15:52:52:177 CancelCSCBypassedConnection: Cancelling connection of E: USERENV(218.1f38) 15:52:52:177 CancelCSCBypassedConnection: Connection deleted. USERENV(218.1f38) 15:52:52:177 CSyncManager::LeaveLock USERENV(218.1f38) 15:52:52:192 CSyncManager::LeaveLock: Lock released USERENV(218.1f38) 15:52:52:192 CHashTable::HashDelete: S-1-5-21-807756564-1922302612-1565739477-22627 deleted USERENV(218.1f38) 15:52:52:192 CSyncManager::LeaveLock: Lock deleted USERENV(218.1f38) 15:52:52:192 LoadUserProfile: 003 About Reverted back to user <00000000 USERENV(218.1f38) 15:52:52:192 LoadUserProfile: Leaving with a value of 0. USERENV(218.1f38) 15:52:52:192 ========================================================= USERENV(218.1f38) 15:52:52:192 LoadUserProfileI: LoadUserProfileP failed with 3 USERENV(218.1f38) 15:52:52:192 LoadUserProfileI: returning 3 USERENV(1774.d18) 15:52:52:192 LoadUserProfile: Running as self USERENV(1774.d18) 15:52:52:192 LoadUserProfile: Calling LoadUserProfileI failed. err = 3 USERENV(218.200c) 15:52:52:192 IProfileSecurityCallBack: client authenticated. USERENV(218.200c) 15:52:52:192 ReleaseClientContext: Releasing context USERENV(218.200c) 15:52:52:192 ReleaseClientContext_s: Releasing context USERENV(218.200c) 15:52:52:192 MIDL_user_free enter USERENV(1774.d18) 15:52:52:192 ReleaseInterface: Releasing rpc binding handle USERENV(1774.d18) 15:52:52:192 LoadUserProfile: Returning FALSE. Error = 3
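    One detail in the trace stands out: ParseProfilePath maps the share, calls GetFileAttributes on E:\ntuser.man, logs "Found a file", and then reports path-not-found. A mandatory profile path is normally expected to name the profile folder that contains NTUSER.MAN rather than the hive file itself, so pointing the Terminal Services profile path at \\server\bsilo instead of \\server\bsilo\ntuser.man may be the fix. This is a guess from the trace, not a confirmed diagnosis; the checks below are generic:

        rem Run these as the test user to see what the logon actually sees
        dir /a \\server\bsilo
        attrib \\server\bsilo\ntuser.man

        rem Terminal Services profile path: use the folder, not the hive file
        rem   \\server\bsilo            <- profile folder containing NTUSER.MAN
        rem   \\server\bsilo\ntuser.man <- ParseProfilePath "found a file" and the load fails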


  • Finding a Relative Path in .NET

    - by Rick Strahl
    Here's a nice and simple path utility that I've needed in a number of applications: I need to find a relative path based on a base path. So if I'm working in a folder called c:\temp\templates\ and I want to find a relative path for c:\temp\templates\subdir\test.txt, I want to receive back subdir\test.txt. Or if I pass c:\ I want to get back ..\..\ - in other words, always return a non-hardcoded path based on some other known directory. I've had a routine in my library that does this via some lengthy string parsing routines, but ran into some Uri processing today that made me realize that this code could be greatly simplified by using the System.Uri class instead. Here's the simple static method:

        /// <summary>
        /// Returns a relative path string from a full path based on a base path
        /// provided.
        /// </summary>
        /// <param name="fullPath">The path to convert. Can be either a file or a directory</param>
        /// <param name="basePath">The base path on which relative processing is based. Should be a directory.</param>
        /// <returns>
        /// String of the relative path.
        ///
        /// Examples of returned values:
        ///  test.txt, ..\test.txt, ..\..\..\test.txt, ., .., subdir\test.txt
        /// </returns>
        public static string GetRelativePath(string fullPath, string basePath)
        {
            // Force basePath to end in a trailing backslash so Uri treats it as a directory
            if (!basePath.EndsWith("\\"))
                basePath += "\\";

            Uri baseUri = new Uri(basePath);
            Uri fullUri = new Uri(fullPath);

            Uri relativeUri = baseUri.MakeRelativeUri(fullUri);

            // Uris use forward slashes so convert back to backward slashes
            return relativeUri.ToString().Replace("/", "\\");
        }

    You can then call it like this (note the verbatim strings, and that the full path comes first):

        string relPath = FileUtils.GetRelativePath(@"c:\temp\templates\subdir\test.txt", @"c:\temp\templates");

    It's not exactly rocket science, but it's useful in many scenarios where you're working with files based on an application base directory. Right now I'm working on a templating solution (using the Razor Engine) where templates live in a base directory and are supplied as relative paths to that base directory. Resolving these relative paths both ways is important in order to properly check for the existence of files and their change status in this case. Not the kind of thing you use every day, but useful to remember.

    © Rick Strahl, West Wind Technologies, 2005-2010
    Posted in .NET  CSharp
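    Going back the other way doesn't need a helper at all. A quick sketch of the round trip - FileUtils is the class hosting the method above, and the concrete paths are made up:

        using System;
        using System.IO;

        class Demo
        {
            static void Main()
            {
                string basePath = @"c:\temp\templates";
                string fullPath = @"c:\temp\templates\subdir\test.txt";

                // Relative path based on the base directory: "subdir\test.txt"
                string relPath = FileUtils.GetRelativePath(fullPath, basePath);

                // And back again: combine with the base and normalize any ..\ segments
                string resolved = Path.GetFullPath(Path.Combine(basePath, relPath));

                Console.WriteLine(relPath);   // subdir\test.txt
                Console.WriteLine(resolved);  // c:\temp\templates\subdir\test.txt
            }
        }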


  • SQL SERVER – World Shapefile Download and Upload to Database – Spatial Database

    - by pinaldave
    During a recent training, a student asked me whether I knew a place where he could download spatial files for all the countries around the world, and whether there is a way to upload shapefiles to a database. Here is a quick tutorial for it.

    VDS Technologies has spatial files for every location, for free. You can download the spatial file from here. If you cannot find the spatial file you are looking for, please leave a comment here, and I will send you the necessary details. Unzip the file to a folder and it will have the following content.

    Then, download the Shape2SQL tool from SharpGIS. This is one of the best tools available to convert shapefiles to SQL tables. Afterwards, run the .exe file. When the file is run for the first time, it will ask for the database properties; provide your database details. Select the appropriate shapefiles and the tool will fill up the essential details automatically. If you do not want to create the index on a column, uncheck the box beside it. The screenshot below simply explains the procedure. You also have to be careful regarding your data, whether it is GEOMETRY or GEOGRAPHY; in this example, it is GEOMETRY data.

    Click "Upload to Database". It will show you the upload process. Once the shapefile is uploaded, close the application and open SQL Server Management Studio (SSMS). Run the following code in the SSMS Query Editor:

        USE Spatial
        GO
        SELECT * FROM dbo.world
        GO

    This will show the complete map of the world after you click on Spatial Results in the Spatial tab. In the Spatial Results Set, the Zoom feature is available. From the "Select label column" dropdown, choose the country name in order to show the country name overlaying the country borders. Let me know if this tutorial is helpful enough; I am planning to write a few more posts about this later.

    Note: Please note that the images displayed here do not reflect the original political boundaries. The data are pretty old and can probably draw incorrect maps as well. I have personally spotted several parts of the map where some countries are located a little bit inaccurately.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, SQL, SQL Add-On, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Spatial, SQL Tips and Tricks, SQL Utility, T SQL, Technology
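    Once the table is in, the usual spatial methods work against it. A quick sketch - the spatial column is assumed to be named geom and the label column NAME; both depend on the shapefile, so adjust to whatever Shape2SQL actually created:

        USE Spatial
        GO
        -- Draw a single country (column names vary by shapefile)
        SELECT NAME, geom
        FROM dbo.world
        WHERE NAME = 'India'
        GO
        -- Sanity-check what was imported
        SELECT NAME, geom.STGeometryType() AS shape_type, geom.STArea() AS area
        FROM dbo.world
        GO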


  • Uninstalling Reporting Server 2008 on Windows Server 2008

    - by Piotr Rodak
    Ha. I had the quite disputable pleasure of installing and reinstalling and reinstalling and reinstalling - I think about 5 times before it worked - Reporting Server 2008 on a Windows Server with the same year number in its name. During my struggle I came across an error which seems to be not quite unfamiliar to some of the more unfortunate developers and admins who happen to uninstall SSRS 2008 from a server.

    I had SSRS 2008 installed as a named instance, SQL2008. I wanted to uninstall the server and install it to the default instance. And this is when it bit me - not for the first time and not for the last that day. The setup complained that it couldn't access a DLL. Error message:

        TITLE: Microsoft SQL Server 2008 Setup
        ------------------------------
        The following error has occurred:
        Access to the path 'C:\Windows\SysWOW64\perf-ReportServer$SQL2008-rsctr.dll' is denied.
        For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=10.0.1600.22&EvtType=0x60797DC7%25400x84E8D3C0
        ------------------------------
        BUTTONS: OK

    This is a screenshot that shows the above error. This issue seems to have a bit of literature dedicated to it, including a KB article http://support.microsoft.com/kb/956173 and a similar Connect item: http://connect.microsoft.com/SQLServer/feedback/details/363653/error-messages-when-upgrading-from-sql-2008-rc0-to-rtm

    The article describes the issue as follows: "When you try to uninstall Microsoft SQL Server 2008 Reporting Services from the server, you may receive the following error message: An error has occurred: Access to the path 'Drive_Letter:\WINDOWS\system32\perf-ReportServer-rsctr.dll' is denied. Note: Drive_Letter refers to the disc drive into which the SQL Server installation media is inserted." In my case the Note was not true; the error pointed to a dll located in the Windows folder on C:\, not where the installation media were.

    Despite this difference, I tried to identify any processes that might be keeping a lock on the dll. I downloaded Sysinternals Process Explorer and ran it to find any processes I could stop. Unfortunately, there was no such process. I reran the installation, but it failed at the same step. Eventually I decided to remove the dll before the setup executed. I renamed the dll so I would be able to restore it in case of issues. Interestingly, Windows let me do it, which means that indeed it was not locked by any process. I ran the setup and this time it uninstalled the instance without any problems.

    To summarize my experience: be very careful, and don't leave any leftovers after uninstallation - remove or rename any folders and files that are left after setup has finished, because for some reason setup doesn't remove all of them. Installation on Windows Server 2008 requires more attention than on Windows 2003 because of the changed security model; some actions can be executed only by an administrator in elevated execution mode. In general, you have to get used to UAC and a somewhat different experience than with Windows Server 2003.

    Technorati Tags: SQL Server 2008,Windows Server 2008,SRS,Reporting Services
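    In cmd form, the workaround boils down to renaming the counter DLL out of the way before running setup again (from an elevated prompt, since this is Windows Server 2008); the instance name embedded in the file name will differ for other instances:

        ren C:\Windows\SysWOW64\perf-ReportServer$SQL2008-rsctr.dll perf-ReportServer$SQL2008-rsctr.dll.old

    If anything misbehaves afterwards, renaming the file back restores the original state.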


  • SSIS - Access Denied with UNC paths - The file name is a device or contains invalid characters

    - by simonsabin
    I spent another day tearing my hair out yesterday trying to resolve an issue with SSIS packages running in SQL Agent (not got much left at the moment - maybe I should contact the SSIS team for a wig). My situation was that I was deploying packages to a development server, and to provide isolation I was running jobs with a proxy account that only had access to the development servers. Proxies are an awesome feature and mean that you should never have to "just run the job as sysadmin".

    The issue I was facing was that the job step was failing. The job step was a simple execution of the package. The following errors appeared in my log file. (I always check "Log step output in history" for a job step; this ensures you get all the output from the command that you run. I'll blog about this later.)

    If you look at the output in sysdtslog90 you will have an entry with datacode -1073573533 and error message:

        File or directory "<filename>" represented by connection "<connection>" does not exist.

    Not exactly helpful. If you get the output from the console then you will also get these errors:

        0xC0202070 "The file name property is not valid. The file name is a device or contains invalid characters."
        0xC001401E "specified in the connection was not valid."

    It appears this error is due to the use of a UNC path and the account running the package not having access to all the folders in the path.

    Solution: to solve this you need to ensure that the proxy account has access to ALL folders in the path you are accessing. To check this works, log on as the relevant proxy user, or run a command window as the specified user. Then try net use \\server\share and then do a dir for each folder in the path and check you have access. If these work and you still have the problem then you have some other problem, sorry.

    The following are posts on Experts Exchange that also discuss this:
    http://www.experts-exchange.com/Microsoft/Development/MS-SQL-Server/SSIS/Q_24056047.html
    http://www.experts-exchange.com/Microsoft/Development/MS-SQL-Server/SSIS/Q_23968903.html

    This blog had a post about it being a 64-bit issue. That definitely wasn't the issue for me, as I was on a 32-bit server: http://blogs.perkinsconsulting.com/post/64-bit-SQL-Server-2005-SSIS-and-UNC-paths-Part-2.aspx
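    The check described above, spelled out as commands (account and share names are placeholders):

        rem Open a prompt as the proxy account without logging out
        runas /user:DOMAIN\ssisproxy cmd

        rem In that prompt, walk every level of the UNC path
        net use \\server\share
        dir \\server\share
        dir \\server\share\packages
        dir \\server\share\packages\etl\LoadWarehouse.dtsx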


  • Turn Non-Resizable Windows into Resizable Windows

    - by Asian Angel
    Are you frustrated with Windows app windows that cannot be resized at all? Now you can apply some "attitude adjustment" and resize those windows with ResizeEnable.

    Before: Everyone is familiar with the many app windows in Windows that simply cannot be resized. What you need is cooperation, not attitude. For our example we chose the "Taskbar and Start Menu Properties" window... notice the cursor in the lower right corner. No resizing satisfaction available at all...

    After: The program comes in a zip file with three files, as shown here. Once you have unzipped it, place it in an appropriate Program Files folder, create a shortcut, and you are ready to go. There will be a System Tray icon with only two context-menu items: About & Quit. Here is a quick look at the About window, which tells you exactly what ResizeEnable does. Notice that it does state that you may occasionally find a window that does not respond correctly.

    Now back to our "Taskbar and Start Menu Properties" window. Notice the resizing cursor in the lower right corner... time for some fun! During our test the window was suddenly a dream to resize. Daring to stretch the window even further... now that is what you call "stretching" a window in comparison to its original size! Think of all the windows that will be much easier to work with now...

    Conclusion: If you have been frustrated with non-resizable windows, ResizeEnable will certainly bring a smile to your face as you watch those windows suddenly become a lot more cooperative. This is definitely one app that is worth adding to your system.

    Links: Download ResizeEnable (zip file)


  • Windows Azure: Backup Services Release, Hyper-V Recovery Manager, VM Enhancements, Enhanced Enterprise Management Support

    - by ScottGu
    This morning we released a huge set of updates to Windows Azure.  These new capabilities include: Backup Services: General Availability of Windows Azure Backup Services Hyper-V Recovery Manager: Public preview of Windows Azure Hyper-V Recovery Manager Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Configuration Active Directory: Securely manage hundreds of SaaS applications Enterprise Management: Use Active Directory to Better Manage Windows Azure Windows Azure SDK 2.2: A massive update of our SDK + Visual Studio tooling support All of these improvements are now available to use immediately.  Below are more details about them. Backup Service: General Availability Release of Windows Azure Backup Today we are releasing Windows Azure Backup Service as a general availability service.  This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production scenarios. Windows Azure Backup is a cloud based backup solution for Windows Server which allows files and folders to be backed up and recovered from the cloud, and provides off-site protection against data loss. The service provides IT administrators and developers with the option to back up and protect critical data in an easily recoverable way from any location with no upfront hardware cost. Windows Azure Backup is built on the Windows Azure platform and uses Windows Azure blob storage for storing customer data. Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data securely and efficiently to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, Windows Azure Backup Service also provides capability to backup data from System Center Data Protection Manager and Windows Server Essentials, to the cloud. All data is encrypted onsite before it is sent to the cloud, and customers retain and manage the encryption key (meaning the data is stored entirely secured and can’t be decrypted by anyone but yourself). Getting Started To get started with the Windows Azure Backup Service, create a new Backup Vault within the Windows Azure Management Portal.  Click New->Data Services->Recovery Services->Backup Vault to do this: Once the backup vault is created you’ll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it: Once the servers you want to backup are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure the scheduled backups and to optionally initiate recoveries. You can follow these tutorials to learn more about how to do this: Tutorial: Schedule Backups Using the Windows Azure Backup Agent This tutorial helps you with setting up a backup schedule for your registered Windows Servers. Additionally, it also explains how to use Windows PowerShell cmdlets to set up a custom backup schedule. Tutorial: Recover Files and Folders Using the Windows Azure Backup Agent This tutorial helps you with recovering data from a backup. Additionally, it also explains how to use Windows PowerShell cmdlets to do the same tasks. Below are some of the key benefits the Windows Azure Backup Service provides: Simple configuration and management. 
Windows Azure Backup Service integrates with the familiar Windows Server Backup utility in Windows Server, the Data Protection Manager component in System Center and Windows Server Essentials, in order to provide a seamless backup and recovery experience to a local disk, or to the cloud. Block level incremental backups. The Windows Azure Backup Agent performs incremental backups by tracking file and block level changes and only transferring the changed blocks, hence reducing the storage and bandwidth utilization. Different point-in-time versions of the backups use storage efficiently by only storing the changes blocks between these versions. Data compression, encryption and throttling. The Windows Azure Backup Agent ensures that data is compressed and encrypted on the server before being sent to the Windows Azure Backup Service over the network. As a result, the Windows Azure Backup Service only stores encrypted data in the cloud storage. The encryption key is not available to the Windows Azure Backup Service, and as a result the data is never decrypted in the service. Also, users can setup throttling and configure how the Windows Azure Backup service utilizes the network bandwidth when backing up or restoring information. Data integrity is verified in the cloud. In addition to the secure backups, the backed up data is also automatically checked for integrity once the backup is done. As a result, any corruptions which may arise due to data transfer can be easily identified and are fixed automatically. Configurable retention policies for storing data in the cloud. The Windows Azure Backup Service accepts and implements retention policies to recycle backups that exceed the desired retention range, thereby meeting business policies and managing backup costs. Hyper-V Recovery Manager: Now Available in Public Preview I’m excited to also announce the public preview of a new Windows Azure Service – the Windows Azure Hyper-V Recovery Manager (HRM). Windows Azure Hyper-V Recovery Manager helps protect your business critical services by coordinating the replication and recovery of System Center Virtual Machine Manager 2012 SP1 and System Center Virtual Machine Manager 2012 R2 private clouds at a secondary location. With automated protection, asynchronous ongoing replication, and orderly recovery, the Hyper-V Recovery Manager service can help you implement Disaster Recovery and restore important services accurately, consistently, and with minimal downtime. Application data in an Hyper-V Recovery Manager scenarios always travels on your on-premise replication channel. Only metadata (such as names of logical clouds, virtual machines, networks etc.) that is needed for orchestration is sent to Azure. All traffic sent to/from Azure is encrypted. You can begin using Windows Azure Hyper-V Recovery today by clicking New->Data Services->Recovery Services->Hyper-V Recovery Manager within the Windows Azure Management Portal.  You can read more about Windows Azure Hyper-V Recovery Manager in Brad Anderson’s 9-part series, Transform the datacenter. To learn more about setting up Hyper-V Recovery Manager follow our detailed step-by-step guide. Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Today’s Windows Azure release includes a number of nice updates to Windows Azure Virtual Machines.  
These improvements include: Ability to Delete both VM Instances + Attached Disks in One Operation Prior to today’s release, when you deleted VMs within Windows Azure we would delete the VM instance – but not delete the drives attached to the VM.  You had to manually delete these yourself from the storage account.  With today’s update we’ve added a convenience option that now allows you to either retain or delete the attached disks when you delete the VM:   We’ve also added the ability to delete a cloud service, its deployments, and its role instances with a single action. This can either be a cloud service that has production and staging deployments with web and worker roles, or a cloud service that contains virtual machines.  To do this, simply select the Cloud Service within the Windows Azure Management Portal and click the “Delete” button: Warnings on Availability Sets with Only One Virtual Machine In Them One of the nice features that Windows Azure Virtual Machines supports is the concept of “Availability Sets”.  An “availability set” allows you to define a tier/role (e.g. webfrontends, databaseservers, etc) that you can map Virtual Machines into – and when you do this Windows Azure separates them across fault domains and ensures that at least one of them is always available during servicing operations.  This enables you to deploy applications in a high availability way. One issue we’ve seen some customers run into is where they define an availability set, but then forget to map more than one VM into it (which defeats the purpose of having an availability set).  With today’s release we now display a warning in the Windows Azure Management Portal if you have only one virtual machine deployed in an availability set to help highlight this: You can learn more about configuring the availability of your virtual machines here. Configuring SQL Server Always On SQL Server Always On is a great feature that you can use with Windows Azure to enable high availability and DR scenarios with SQL Server. Today’s Windows Azure release makes it even easier to configure SQL Server Always On by enabling “Direct Server Return” endpoints to be configured and managed within the Windows Azure Management Portal.  Previously, setting this up required using PowerShell to complete the endpoint configuration.  Starting today you can enable this simply by checking the “Direct Server Return” checkbox: You can learn more about how to use direct server return for SQL Server AlwaysOn availability groups here. Active Directory: Application Access Enhancements This summer we released our initial preview of our Application Access Enhancements for Windows Azure Active Directory.  This service enables you to securely implement single-sign-on (SSO) support against SaaS applications (including Office 365, SalesForce, Workday, Box, Google Apps, GitHub, etc) as well as LOB based applications (including ones built with the new Windows Azure AD support we shipped last week with ASP.NET and VS 2013). Since the initial preview we’ve enhanced our SAML federation capabilities, integrated our new password vaulting system, and shipped multi-factor authentication support. We've also turned on our outbound identity provisioning system and have it working with hundreds of additional SaaS Applications: Earlier this month we published an update on dates and pricing for when the service will be released in general availability form.  
Active Directory: Application Access Enhancements This summer we released our initial preview of our Application Access Enhancements for Windows Azure Active Directory.  This service enables you to securely implement single-sign-on (SSO) support against SaaS applications (including Office 365, SalesForce, Workday, Box, Google Apps, GitHub, etc.) as well as LOB based applications (including ones built with the new Windows Azure AD support we shipped last week with ASP.NET and VS 2013). Since the initial preview we’ve enhanced our SAML federation capabilities, integrated our new password vaulting system, and shipped multi-factor authentication support. We've also turned on our outbound identity provisioning system and have it working with hundreds of additional SaaS applications. Earlier this month we published an update on dates and pricing for when the service will be released in general availability form.  In this blog post we announced our intention to release the service in general availability form by the end of the year.  We also announced that the below features would be available in a free tier with it: SSO to every SaaS app we integrate with – Users can single sign on to any app we are integrated with at no charge. This includes all the top SaaS apps and every app in our application gallery, whether they use federation or password vaulting. Application access assignment and removal – IT admins can assign access privileges to web applications to the users in their active directory, assuring that every employee has access to the SaaS apps they need. And when a user leaves the company or changes jobs, the admin can just as easily remove their access privileges, assuring data security and minimizing IP loss. User provisioning (and de-provisioning) – IT admins will be able to automatically provision users in 3rd party SaaS applications like Box, Salesforce.com, GoToMeeting, DropBox and others. We are working with key partners in the ecosystem to establish these connections, meaning you no longer have to continually update user records in multiple systems. Security and auditing reports – Security is a key priority for us. With the free version of these enhancements you'll get access to our standard set of access reports, giving you visibility into which users are using which applications, when they were using them and where they are using them from. In addition, we'll alert you to unusual usage patterns, for instance when a user logs in from multiple locations at the same time. Our Application Access Panel – Users are logging in from every type of device, including Windows, iOS, & Android. Not all of these devices handle authentication in the same manner, but the user doesn't care. They need to access their apps from the devices they love. Our Application Access Panel will support the ability for users to access and launch their apps from any device and from anywhere. You can learn more about our plans for application management with Windows Azure Active Directory here.  Try out the preview and start using it today. Enterprise Management: Use Active Directory to Better Manage Windows Azure Windows Azure Active Directory provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution (allowing you to seamlessly integrate with the directory you already have).  With today’s Windows Azure release we are integrating Windows Azure Active Directory even more within the core Windows Azure management experience, and enabling an even richer enterprise security offering.  Specifically: 1) All Windows Azure accounts now have a default Windows Azure Active Directory created for them.  You can create and map any users you want into this directory, and grant administrative rights to manage resources in Windows Azure to these users. 2) You can keep this directory entirely hosted in the cloud – or optionally sync it with your on-premises Windows Server Active Directory.  Both options are free.  The latter approach is ideal for companies that wish to use their corporate user identities to sign in and manage Windows Azure resources.  It also ensures that if an employee leaves an organization, his or her access control rights to the company’s Windows Azure resources are immediately revoked.
3) The Windows Azure Service Management APIs have been updated to support using Windows Azure Active Directory credentials to sign in and perform management operations.  Prior to today’s release customers had to download and use management certificates (which were not scoped to individual users) to perform management operations.  We still support this management certificate approach (don’t worry – nothing will stop working).  But we think the new Windows Azure Active Directory authentication support enables an even easier and more secure way for customers to manage resources going forward.  4) The Windows Azure SDK 2.2 release (which is also shipping today) includes built-in support for the new Service Management APIs that authenticate with Windows Azure Active Directory, and now allows you to create and manage Windows Azure applications and resources directly within Visual Studio using your Active Directory credentials.  This, combined with updated PowerShell scripts that also support Active Directory, enables an end-to-end enterprise authentication story with Windows Azure. Below are some details on how all of this works: Subscriptions within a Directory As part of today’s update, we have associated all existing Windows Azure accounts with a Windows Azure Active Directory (and created one for you if you don’t already have one). When you log in to the Windows Azure Management Portal you’ll now see the directory name in the URI of the browser.  For example, in the screenshot below you can see that I have a “scottgu” directory that my subscriptions are hosted within. Note that you can continue to use Microsoft Accounts (formerly known as Microsoft Live IDs) to sign in to Windows Azure.  These map just fine to a Windows Azure Active Directory – so there is no need to create new usernames that are specific to a directory if you don’t want to.  In the scenario above I’m actually logged in using my @hotmail.com based Microsoft ID, which is now mapped to a “scottgu” active directory that was created for me.  By default everything will continue to work just like it used to before. Manage your Directory You can manage an Active Directory (including the one we now create for you by default) by clicking the “Active Directory” tab in the left-hand side of the portal.  This will list all of the directories in your account.  Clicking one the first time will display a getting started page that provides documentation and links to perform common tasks with it. You can use the built-in directory management support within the Windows Azure Management Portal to add/remove/manage users within the directory, enable multi-factor authentication, associate a custom domain (e.g. mycompanyname.com) with the directory, and/or rename the directory to whatever friendly name you want (just click the configure tab to do this).  You can also set up the directory to automatically sync with an on-premises Active Directory using the “Directory Integration” tab. Note that users within a directory by default do not have admin rights to log in to or manage Windows Azure based resources.  You still need to explicitly grant them co-admin permissions on a subscription for them to log in or manage resources in Windows Azure.  You can do this by clicking the Settings tab on the left-hand side of the portal and then by clicking the administrators tab within it.
Sign-In Integration within Visual Studio If you install the new Windows Azure SDK 2.2 release, you can now connect to Windows Azure from directly inside Visual Studio without having to download any management certificates.  You can now just right-click on the “Windows Azure” icon within the Server Explorer and choose the “Connect to Windows Azure” context menu option to do so. Doing this will prompt you to enter the email address of the username you wish to sign in with (make sure this account is a user in your directory with co-admin rights on a subscription). You can use either a Microsoft Account (e.g. Windows Live ID) or an Active Directory based Organizational account as the email.  The dialog will update with an appropriate login prompt depending on which type of email address you enter. Once you sign in you’ll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio Server Explorer and be available to start using. No downloading of management certificates required.  All of the authentication was handled using your Windows Azure Active Directory! Manage Subscriptions across Multiple Directories If you already have multiple directories and multiple subscriptions within your Windows Azure account, we have done our best to create a good default mapping of your subscriptions->directories as part of today’s update.  If you don’t like the default subscription-to-directory mapping we have done, you can click the Settings tab in the left-hand navigation of the Windows Azure Management Portal and browse to the Subscriptions tab within it. If you want to map a subscription under a different directory in your account, simply select the subscription from the list, and then click the “Edit Directory” button to choose which directory to map it to.  Mapping a subscription to a different directory takes only seconds and will not cause any of the resources within the subscription to recycle or stop working.  We’ve made the directory->subscription mapping process self-service so that you always have complete control and can map things however you want. Filtering By Directory and Subscription Within the Windows Azure Management Portal you can filter resources in the portal by subscription (allowing you to show/hide different subscriptions).  If you have subscriptions mapped to multiple directory tenants, we also now have a filter drop-down that allows you to filter the subscription list by directory tenant.  This filter is only available if you have multiple subscriptions mapped to multiple directories within your Windows Azure account.   Windows Azure SDK 2.2 Today we are also releasing a major update of our Windows Azure SDK.  The Windows Azure SDK 2.2 release adds some great new features including: Visual Studio 2013 Support, Integrated Windows Azure Sign-In support within Visual Studio, Remote Debugging Cloud Services with Visual Studio, Firewall Management support within Visual Studio for SQL Databases, Visual Studio 2013 RTM VM Images for MSDN Subscribers, Windows Azure Management Libraries for .NET, and Updated Windows Azure PowerShell Cmdlets and ScriptCenter. I’ll post a follow-up blog shortly with more details about all of the above.
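The updated PowerShell cmdlets get the same Active Directory based sign-in. A minimal sketch of that flow, assuming the SDK 2.2-era Azure PowerShell module (the subscription name below is a placeholder):

Add-AzureAccount                 # opens a browser sign-in prompt for your organizational or Microsoft account
Get-AzureSubscription            # list the subscriptions your signed-in account can manage
Select-AzureSubscription -SubscriptionName "My Production Subscription"
Get-AzureVM                      # management cmdlets now run under your Active Directory credentials - no certificate download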
Additional Updates In addition to the above enhancements, today’s release also includes a number of additional improvements: AutoScale: Richer time and date based scheduling support (set different rules on different dates) AutoScale: Ability to Scale to Zero Virtual Machines (very useful for Dev/Test scenarios) AutoScale: Support for time-based scheduling of Mobile Service AutoScale rules Operation Logs: Auditing support for Service Bus management operations Today we also shipped a major update to the Windows Azure SDK – Windows Azure SDK 2.2.  It has so much goodness in it that I have a whole second blog post coming shortly on it! :-) Summary Today’s Windows Azure release enables a bunch of great new scenarios, and enables a much richer enterprise authentication offering. If you don’t already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Display System Information on Your Desktop with Desktop Info

    - by Asian Angel
Do you like to monitor your system but do not want a complicated app to do it with? If you love simplicity and easy configuration, then join us as we look at Desktop Info. Desktop Info in Action Desktop Info comes in a zip file format, so you will need to unzip the app, place it into an appropriate “Program Files” folder, and create a shortcut. Do NOT delete the “Read Me File”…this will be extremely useful to you when you make changes to the “Configuration File”. Once you have everything set up you are ready to start Desktop Info up. This is the default layout and set of listings displayed when you start Desktop Info up for the first time. The font colors will be a mix of colors as seen here, and the font size will perhaps be a bit small, but those are very easy to change if desired. You can access the “Context Menu” directly over the “information area”…so no need to look for it in the “System Tray”. Notice that you can easily access that important “Read Me File” from here… The full contents of the configuration file (.ini file) are displayed here so that you can see exactly what kind of information can be displayed using the default listings. The first section is “Options”…you will most likely want to increase the font size while you are here. Then “Items”… If you are unhappy with any of the font colors in the “information area”, this is where you can make the changes. You can turn information display items on or off here. And finally “Files, Registry, & Event Logs”. Here is our displayed information after a few tweaks in the configuration file. Very nice. Conclusion If you have been looking for a system information app that is simple and easy to set up, then you should definitely give Desktop Info a try. Links Download Desktop Info

    Read the article

  • Localization in ASP.NET MVC 2 using ModelMetadata

    - by rajbk
This post uses an MVC 2 RTM application inside VS 2010 that is targeting the .NET Framework 4. .NET 4 DataAnnotations comes with a new Display attribute that has several properties, including specifying the value that is used for display in the UI and a ResourceType. Unfortunately, this attribute is new and is not supported in MVC 2 RTM. The good news is it will be supported and is currently available in the MVC Futures release. The steps to get this working are shown below: Download the MVC Futures library. Add a reference to the Microsoft.Web.MVC.AspNet4 dll. Add a folder in your MVC project where you will store the resx files. Open the resx file and change “Access Modifier” to “Public”. This allows the resources to be accessible from other assemblies. Internally, it changes the “Custom Tool” used to generate the code behind from ResXFileCodeGenerator to “PublicResXFileCodeGenerator”. Add your localized strings in the resx. Register the new ModelMetadataProvider:

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();

    RegisterRoutes(RouteTable.Routes);

    // Add this
    ModelMetadataProviders.Current = new DataAnnotations4ModelMetadataProvider();
    DataAnnotations4ModelValidatorProvider.RegisterProvider();
}

Use the Display attribute in your Model:

public class Employee
{
    [Display(Name = "ID")]
    public int ID { get; set; }

    [Display(ResourceType = typeof(Common), Name = "Name")]
    public string Name { get; set; }
}

Use the new HTML UI Helpers in your strongly typed view: <%: Html.EditorForModel() %> <%: Html.EditorFor(m => m) %> <%: Html.LabelFor(m => m.Name) %> ..and you are good to go. Adventure is out there!

    Read the article

  • Toorcon 15 (2013)

    - by danx
The Toorcon gang (senior staff): h1kari (founder), nfiltr8, and Geo Introduction to Toorcon 15 (2013) A Tale of One Software Bypass of MS Windows 8 Secure Boot Breaching SSL, One Byte at a Time Running at 99%: Surviving an Application DoS Security Response in the Age of Mass Customized Attacks x86 Rewriting: Defeating RoP and other Shinanighans Clowntown Express: interesting bugs and running a bug bounty program Active Fingerprinting of Encrypted VPNs Making Attacks Go Backwards Mask Your Checksums—The Gorry Details Adventures with weird machines thirty years after "Reflections on Trusting Trust" Introduction to Toorcon 15 (2013) Toorcon 15 is the 15th annual security conference held in San Diego. I've attended about a third of them and blogged about previous conferences I attended here starting in 2003. As always, I've only summarized the talks I attended and that interested me enough to write about them. Be aware that I may have misrepresented the speakers' remarks and that they are not my remarks or opinion, or those of my employer, so don't quote me or them. Those seeking further details may contact the speakers directly or use The Google. For some talks, I have a URL for further information. A Tale of One Software Bypass of MS Windows 8 Secure Boot Andrew Furtak and Oleksandr Bazhaniuk Yuri Bulygin, Oleksandr ("Alex") Bazhaniuk, and (not present) Andrew Furtak Yuri and Alex talked about UEFI and bootkits and bypassing MS Windows 8 Secure Boot, with vendor recommendations. They previously gave this talk at the BlackHat 2013 conference. MS Windows 8 Secure Boot Overview UEFI (Unified Extensible Firmware Interface) is the interface between hardware and OS. UEFI is processor and architecture independent. Malware can replace the bootloader (bootx64.efi, bootmgfw.efi); once replaced, it can modify the kernel. It is trivial to replace the bootloader. Today there are many legacy bootkits—UEFI replaces most of them. MS Windows 8 Secure Boot verifies everything you load, either through signatures or hashes. UEFI firmware relies on secure update (with signed updates). You would think Secure Boot would rely on ROM (such as is used for phones), but you can't do that for PCs—PCs use writable memory with signatures. The DXE core verifies the UEFI boot loader(s), and the OS loader (winload.efi, winresume.efi) verifies the OS kernel. A chain of trust is established with a root key (Platform Key, PK), which is a cert belonging to the platform vendor. Key Exchange Keys (KEKs) verify an "authorized" database (db) and a "forbidden" database (dbx): X.509 certs with SHA-1/SHA-256 hashes. Keys are stored in non-volatile (NV) flash-based NVRAM. Boot Services (BS) allow adding/deleting keys (they can't be accessed once the OS starts—which uses Run-Time (RT)). The root cert uses RSA-2048 public keys and PKCS#7 format signatures. SecureBoot — enable/disable image signature checks. SetupMode — update keys, self-signed keys, and secure boot variables. CustomMode — allows updating keys. Secure Boot policy settings are: always execute, never execute, allow execute on security violation, defer execute on security violation, deny execute on security violation, query user on security violation. Attacking MS Windows 8 Secure Boot Secure Boot does NOT protect from physical access; it can be disabled from the console. Each BIOS vendor implements Secure Boot differently, and there are several platform and BIOS vendors. It becomes a "zoo" of implementations—which can be taken advantage of. Secure Boot is secure only when all vendors implement it correctly.
Vendors must: allow only signed UEFI firmware updates; protect UEFI firmware from direct modification in flash memory; protect FW update components; program the SPI controller securely; protect secure boot policy settings in NVRAM; protect the runtime API; and disable the compatibility support module, which allows unsigned legacy boot. The speakers can corrupt the Platform Key (PK) EFI root certificate variable in SPI flash. If the PK is not found, the FW enters setup mode with secure boot turned off. They can also exploit the TPM in a similar manner. One is not supposed to be able to directly modify the PK in SPI flash from the OS, though. But they found a bug that they can exploit from User Mode (undisclosed) and demoed the exploit. It loaded and ran their own bootkit. The exploit requires a reboot. Multiple vendors are vulnerable. They will disclose this exploit to vendors in the future. Recommendations: allow only signed updates; protect UEFI FW in ROM; protect the EFI variable store in ROM. Breaching SSL, One Byte at a Time Yoel Gluck and Angelo Prado Angelo Prado and Yoel Gluck, Salesforce.com CRIME is software that performs a "compression oracle attack." This is possible because the SSL protocol doesn't hide length, and because SSL compresses the header. CRIME sends requests with every possible character and measures the ciphertext length, looking for the plaintext that compresses the most, recovering the cookie one byte at a time. SSL compression uses LZ77 to reduce redundancy; Huffman coding replaces common byte sequences with shorter codes. US CERT thinks the SSL compression problem is fixed, but it isn't. They convinced CERT that it wasn't fixed and they issued a CVE. BREACH, breachattrack.com BREACH exploits the SSL response body (Accept-Encoding request header, Content-Encoding). It takes advantage of the fact that, while TLS-level compression is now commonly disabled, the HTTP response body is still compressed. BREACH uses gzip and needs fairly "stable" pages that are static for ~30 seconds. It needs attacker-supplied content (say from a web form or added to a URL parameter). BREACH listens to a session's requests and responses, then inserts extra requests and responses. Eventually, BREACH guesses a session's secret. It can use compression to guess contents one byte at a time. For example, "Supersecret SupersecreX" (a wrong guess) compresses by 10 bytes, and "Supersecret Supersecret" (a correct guess) compresses by 11 bytes, so it can find each character by guessing every character. To start the guess, BREACH needs at least three known initial characters in the response sequence. Compression length then "leaks" information. Some roadblocks include no winners (all guesses wrong) or too many winners (multiple possibilities that compress the same). The solutions include: lookahead (guess 2 or 3 characters at a time instead of 1 character, which is expensive); rollback to the last known conflict; checking the compression ratio; and brute-forcing the first 3 "bootstrap" characters, if needed (expensive). Block ciphers hide the exact plaintext length; the solution is to align the response in advance to the block size. Mitigations: for length, use variable padding; for secrets, use dynamic CSRF tokens per request, change secrets over time, and separate secrets to input-less servlets. Future work: better understanding of DEFLATE/GZIP, and HTTPS extensions.
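The compression-oracle trick is easy to see in miniature. This sketch (PowerShell, using .NET's DeflateStream; the token and page text are made up purely for illustration) shows that appending a correct guess to a body that already contains the secret compresses smaller than appending a wrong guess:

# Toy compression oracle: the guess that matches the secret already in the
# body compresses best, because DEFLATE turns the repeat into a back-reference.
function Get-CompressedLength([string]$text) {
    $bytes = [System.Text.Encoding]::ASCII.GetBytes($text)
    $ms = New-Object System.IO.MemoryStream
    $deflate = New-Object System.IO.Compression.DeflateStream($ms, [System.IO.Compression.CompressionMode]::Compress)
    $deflate.Write($bytes, 0, $bytes.Length)
    $deflate.Close()
    $ms.ToArray().Length
}

$body = "...response text... csrftoken=QJZ4 ...more response text..."
foreach ($guess in [char[]]"ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789") {
    $len = Get-CompressedLength ($body + "csrftoken=QJZ$guess")
    "$guess -> $len compressed bytes"   # 'QJZ' prefix known; the true next char '4' should score shortest
}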
Running at 99%: Surviving an Application DoS Ryan Huber Ryan Huber, Risk I/O Ryan first discussed various ways to do a denial of service (DoS) attack against web services. One usual method is to find a slow web page and do several wgets, or to download large files. Apache is not well suited to handling a large number of connections, but one can put something in front of it, or use Apache alternatives such as nginx. How to identify malicious hosts: short, sudden web requests; an obvious user-agent (curl, python); the same URL requested repeatedly; no web page referer (not normal); hidden links (hide a link and see if a bot follows it); access from outside your geo IP range (unless the website is global); missing common headers in the request; regular timing; IPs first seen at the beginning of the attack; and request counts per host (usually a very large number). Use of a captcha can mitigate attacks, but you'll lose a lot of genuine users. Bouncer, goo.gl/c2vyEc and www.github.com/rawdigits/Bouncer Bouncer is software written by Ryan that works from netflow data. Bouncer has a small, unobtrusive footprint and detects DoS attempts. It closes blacklisted sockets immediately (it's not nice about it; there is no proper connection close). An aggregator collects requests and controls your web proxies. You need NTP on the front-end web servers for clean data for use by Bouncer. Bouncer is also useful for a popularity storm ("Slashdotting") and scraper storms. Future features: gzip collection data, documentation, consumer library, multitask, logging destroyed connections. Takeaways: DoS mitigation is easier with a complete picture. Bouncer is designed to make it easier to detect and defend against DoS—not a complete cure. Security Response in the Age of Mass Customized Attacks Peleus Uhley and Karthik Raman Peleus Uhley and Karthik Raman, Adobe ASSET, blogs.adobe.com/asset/ Peleus and Karthik talked about response to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to a concept discussed in the book Future Perfect by Stan Davis of Harvard Business School. Mass customization is differentiating a product for an individual customer, but at a mass production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world. Or designing your own PC from commodity parts. Exploit kits are another example of mass customization. The kits support multiple browsers and plugins, and allow new modules. Exploit kits are cheap and customizable, and organized gangs use them. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, but derived from only a dozen or so exploit kits. Characteristics of mass malware: potent, resilient, relatively low cost. Technical characteristics: multiple OSes, multiple payloads, multiple scenarios, multiple languages, obfuscation. Response time for 0-day exploits has gone down from ~40 days 5 years ago to about ~10 days now. So the drive with malware is towards mass-customized exploits, to avoid detection. There's plenty of evidence that exploit development has Project Manager bureaucracy. From the malware they infer edicts to: support all versions of Reader; support all versions of Windows; support all versions of Flash; support all browsers; and write large, complex, difficult-to-maintain code (8,750 lines of JavaScript, for example). Exploits have "loose coupling" of multiple versions of software (Adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software, and also allows exploits of more obscure software/OS/browsers and obscure versions. They gave examples of exploits that exploited 2, 3, 6, or 14 separate bugs.
However, these complete exploits are more likely to be buggy or fragile in themselves, and easier to defeat. Future research includes normalizing malware and JavaScript. Conclusion: The coming trend is that mass malware with mass zero-day attacks will result in mass customization of attacks. x86 Rewriting: Defeating RoP and other Shinanighans Richard Wartell Richard Wartell The attack vector we are addressing here is: first, some malware causes a buffer overflow. The malware has no program access, but it has input access and can overflow code onto the stack. Later the stack became non-executable; the workaround malware used was to write a bogus return address to the stack, jumping to the malware. Later came ASLR (Address Space Layout Randomization) to randomize memory layout and make addresses non-deterministic; the workaround malware used was to jump to existing code segments in the program that can be used in bad ways. "RoP" is Return-oriented Programming attacks. RoP attacks use your own code, writing return addresses on the stack that point to (existing) exploitable code found in the program ("gadgets"). Pinkie Pie was paid $60K last year for a RoP attack. One solution is using anti-RoP compilers that compile source code with NO return instructions. ASLR randomizes where code is loaded, but does not randomize the "gadgets" within it. IPR/ILR ("Instruction Location Randomization") randomizes each instruction with a virtual machine. Richard's goal was to randomize a binary with no source code access. He created "STIR" (Self-Transforming Instruction Relocation). STIR disassembles the binary and operates on "basic blocks" of code; the STIR disassembler is conservative in what it disassembles. Each basic block is moved to a random location in memory. Next, STIR writes new code sections with copies of the "basic blocks" of code in randomized locations. The old code is copied and rewritten with jumps to the new code, and the original code sections in the file are marked non-executable. STIR has better entropy than ASLR in the location of code, which makes brute force attacks much harder. STIR runs on MS Windows (PE) and Linux (ELF). It eliminated 99.96% or more of "gadgets" (i.e., moved their addresses). Overhead is usually 5-10% on MS Windows, about 1.5-4% on Linux (but some code actually runs faster!). The unique thing about STIR is it requires no source access and the modified binary fully works! Current work is to rewrite code to enforce security policies. For example: don't create a *.{exe,msi,bat} file, or don't connect to the network after reading from the disk. Clowntown Express: interesting bugs and running a bug bounty program Collin Greene Collin Greene, Facebook Collin talked about Facebook's bug bounty program. Background at FB: FB has good security frameworks, such as security teams, external audits, and cc'ing on diffs. But there are lots of "deep, dark, forgotten" parts of legacy FB code. Collin gave several examples of bountied bugs. Some bounty submissions were on software purchased from a third party (but bounty claimers don't know and don't care). We use security questions, as does everyone else, but they are basically insecure (often easily discoverable). Collin didn't expect many bugs from the bounty program, but they ended up getting 20+ good bugs in the first 24 hours, and good submissions continue to come in. Bug bounties bring people in with different perspectives, and are paid only for success. A bug bounty is a better use of a fixed amount of time and money versus just code review or static code analysis. The bounty program started July 2011 and has paid out $1.5 million to date.
14% of the submissions have been high-priority problems that needed to be fixed immediately. The best bugs come from a small % of submitters (as with everything else)—the top paid submitters are paid 6 figures a year. Spammers like to backstab competitors. The youngest submitter was 13. Some submitters have been hired. Bug bounties also let you see bugs that were missed by tools or reviews, allowing improvement in the process. Bug bounties might not work for traditional software companies where the product has a release cycle or is not on the Internet. Active Fingerprinting of Encrypted VPNs Anna Shubina Anna Shubina, Dartmouth Institute for Security, Technology, and Society (I missed the start of her talk because another track went overtime. But I have the DVD of the talk, so I'll expand later.) IPsec leaves fingerprints. Using netcat, one can easily visually distinguish various crypto chaining modes just from packet timing on a chart (for example, DES-CBC versus AES-CBC). One can tell a lot about VPNs just from ping round trips (such as what router is used). Delayed packets are not informative about a network, especially if far away from the network. More exploration is needed of how TCP works in real life with respect to timing. Making Attacks Go Backwards Fuzzynop FuzzyNop, Mandiant This talk is not about threat attribution (finding who), product solutions, politics, or sales pitches. But who is making these malware threats? It's not a single person or group—they have diverse skill levels. There are a lot of fat-fingered fumblers out there. Always look for low-hanging fruit first: "hiding" malware in the temp, recycle, or root directories; creation of unnamed scheduled tasks; obvious names of files and syscalls ("ClearEventLog"); and uncleared event logs. Clearing the event log in itself, and the time of clearing, is a red flag and a good first clue to look for on a suspect system. Reverse engineering is hard. Disassembler use takes practice and skill. A popular tool is IDA Pro, but it takes multiple interactive iterations to get a clean disassembly. Key loggers are used a lot in targeted attacks. They are typically custom code or built into a backdoor. A big tip-off is that non-printable characters need to be printed out (such as "[Ctrl]" or "[RightShift]"), or timestamp printf strings. Look for these in files. Presence is not proof they are used; absence is not proof they are not used. Java exploits: one can parse a jar file with idxparser.py and decompile the Java file. Java is typically used to target tech companies. Backdoors are the main persistence mechanism (provided externally) for malware. Malware also typically needs command and control. Application of Artificial Intelligence in Ad-Hoc Static Code Analysis John Ashaman John Ashaman, Security Innovation Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives. He also tried using grep, but this fails to find anything even mildly complex. So next John decided to write his own tool. His approach was to first generate a call graph, then analyze the graph. However, the problem is that making a call graph is really hard. For example, one problem is "evil" coding techniques, such as passing function pointers. First the tool generated an Abstract Syntax Tree (AST) with the nodes created from method declarations and edges created from method use. Then the tool generated a control flow graph with the goal of finding a path through the AST (a maze) from source to sink.
The algorithm is to look at adjacent nodes to see if any are "scary" (a vulnerability), using heuristics for search order. The tool, called "Scat" (Static Code Analysis Tool), currently looks for C# vulnerabilities and some simple PHP. Later, he plans to add more PHP, then JSP and Java. For more information see his posts on the Security Innovation blog and NRefactory on GitHub. Mask Your Checksums—The Gorry Details Eric (XlogicX) Davisson Eric (XlogicX) Davisson Sometimes in emailing or posting TCP/IP packets to analyze problems, you may want to mask the IP address. But to do this correctly, you need to mask the checksum too, or you'll leak information about the IP. Problem reports found on stackoverflow.com, sans.org, and pastebin.org are usually not masked, but a few companies do care. If only the IP is masked, the IP may be guessed from the checksum (that is, it leaks data). Other parts of the packet may leak more data about the IP. The TCP and IP checksums both cover some of the same data, so one can get more bits of information out of using both checksums than from one checksum alone. Also, one can usually determine the OS from the TTL field and the ports in a packet header. If we get hundreds of possible results (16x for each masked nibble that is unknown), one can do other things to narrow the results, such as looking at packet contents for domain or geo information. With hundreds of results, one can import them as CSV into a spreadsheet, correlate with geo data, and see where each possibility is located. Eric then demoed a real email report with a masked IP packet attached. He was able to find the exact IP address, given the geo information and university of the sender. The point is: if you're going to mask a packet, do it right. Eric wouldn't usually bother, but do it correctly if at all, so as not to create a false impression of security.
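To see why an unmasked checksum constrains a masked address, here is the standard Internet checksum (RFC 791/1071) sketched in PowerShell; given every other header field, each candidate for the masked address bytes either reproduces the published checksum value or it doesn't:

# Standard Internet checksum: one's-complement sum of the header's 16-bit
# words (with the checksum field itself zeroed before summing). Any guess at
# masked address bytes must still reproduce the published checksum value.
function Get-InternetChecksum([byte[]]$header) {
    $sum = 0
    for ($i = 0; $i -lt $header.Length; $i += 2) {
        $sum += (($header[$i] -shl 8) -bor $header[$i + 1])
        $sum = ($sum -band 0xFFFF) + ($sum -shr 16)   # fold the carry back in
    }
    (-bnot $sum) -band 0xFFFF
}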
Adventures with weird machines thirty years after "Reflections on Trusting Trust" Sergey Bratus Sergey Bratus, Dartmouth College (and Julian Bangert and Rebecca Shapiro, not present) "Reflections on Trusting Trust" refers to Ken Thompson's classic 1984 paper: "You can't trust code that you did not totally create yourself." There are invisible links in the chain of trust, such as "well-installed microcode bugs", bugs in the compiler, and other planted bugs. Thompson showed how a compiler can introduce and propagate bugs in unmodified source. But suppose there are no bugs and you trust the author: can you trust the code? Hell no! There are too many factors—it's Babylonian in nature. Why not? Input is not well defined/recognized (code's assumptions about "checked" input will be violated: a bug/vulnerability); for example, HTML is recursive, but regex checking is not recursive. Input can be well-formed but so complex there's no telling what it does; for example, ELF file parsing is complex and has multiple ways of parsing. Input is seen differently by different pieces of a program or toolchain. Any input is a program: input executes on input handlers (it drives state changes and transitions), and only a well-defined execution model can be trusted (regex/DFA, PDA, CFG). An input handler either is a "recognizer" for the inputs as a well-defined language (see langsec.org) or it's a "virtual machine" for inputs to drive into pwn-age. ELF ABI (the UNIX/Linux executable file format) case study: problems can arise from these steps (without planting bugs): compiler, linker, loader, ld.so/rtld, relocator, DWARF (debugger info), and exceptions. The problem is you can't really automatically analyze code (it's the "halting problem" and undecidable). The only solution is to freeze code and sign it. But you can't freeze everything! You can't freeze ASLR or loading—you must have tables and metadata. Any sufficiently complex input data is the same as VM byte code. For example, ELF relocation entries + dynamic symbols == a Turing-complete machine (TM). @bxsays created a Turing machine in Linux from relocation data (not code) in an ELF file. For more information, see Rebecca "bx" Shapiro's presentation from last year's Toorcon, "Programming Weird Machines with ELF Metadata". @bxsays did the same thing with Mach-O bytecode. Likewise, DWARF exception handling data (.eh_frame) + glibc == a Turing machine. The x86 MMU (IDT, GDT, TSS) has been used via address translation to create a Turing machine: the page handler reads and writes memory (on page fault) using a page table, which can be used as Turing machine byte code. There is an example on GitHub using this TM that will fly a glider across the screen. Next Sergey talked about "Parser Differentials": having one input format but two parsers creates confusion and opportunity for exploitation. For example, CSRs are parsed during creation by the cert requestor and again by another parser at the CA. Another example is ELF—there are several parsers in the OS toolchain, which are all different. You can have two different Program Headers (PHDRs) because ld.so parses multiple PHDRs; the second PHDR can completely transform the executable. This is described in a paper in the first issue of the International Journal of PoC. Conclusions: trusting computers is not only about bugs! Bugs are part of the problem, but by no means all of it. Complex data formats mean bugs. There is no "chain of trust" in Babylon (that is, with parser differentials). We need to squeeze complexity out of data until data stops being "code equivalent". Further information: see langsec.org, and USENIX WOOT 2013 (Workshop on Offensive Technologies) for "weird machines" papers and videos.

    Read the article

  • Make Your 64 bit Computer Look like a Commodore 64

    - by Matthew Guay
The Commodore 64 was one of the bestselling home computers ever, and many geeks got their first computing experience on one of these early personal computers. Here’s an easy way to revisit the early years of personal computing with a theme for Windows 7. With only 64 KB of RAM and an 8-bit processor, the Commodore 64 is light-years behind today’s computers.  But with a Windows 7 themepack, you can turn back the years and give your computer a quick overhaul to look more like its ancient predecessor. Age Windows 7 with a click Download the Commodore 64 theme from PC World (link below), and unzip the files. Now, double-click on the Themepack file to apply the theme. This will open your Personalization panel and will automatically change your system fonts, window style, background, and more. Your desktop will go from your Windows 7 look… to a modified Windows 7 look that is reminiscent of the Commodore 64. Open an application to see all the changes … notice the old-style font in the window border and menus. This theme also changes your Computer, Recycle Bin, and User folder icons to Commodore 64-inspired icons. And, if you want to go back to the standard Windows 7 look and feel, it’s only a click away in the Personalization dialog.  Right-click on your desktop, select Personalize, and then choose the theme you want.   Conclusion Although this doesn’t give you the real look and feel of the Commodore 64, it is still a fun way to experience a bit of computer nostalgia.  There are tons of excellent themes available for Windows 7, so check back for more exciting ways to customize your desktop! Link Download the Commodore 64 theme for Windows 7

    Read the article

  • Next Phase of ECM 11g Now Available - New UCM & URM 11g, & Updated I/PM & IRM 11g

    - by michelle.huff
We're excited to announce that Oracle Enterprise Content Management Suite 11g is now available! Today, Oracle announced ECM Suite 11g, part of the Fusion Middleware 11gR1 Patchset 2 release, which builds upon the Imaging and Process Management (I/PM) and Information Rights Management (IRM) 11g release earlier this year. Universal Content Management (UCM) and Universal Records Management (URM) 11g are now available with many new features and enhancements. All ECM products are localized into 27 languages, use a single repository, a single installer, and centralized administration, and all run on the same Fusion Middleware tech stack. Oracle ECM Suite 11g is better integrated to fit the way you work, with extreme performance and extreme scalability. Universal Content Management One-click Web content management - brings Web content management authoring, design and presentation capabilities directly into how organizations design sites, portals, and custom Web applications. Simply take in the right amount of WCM that meets your needs - all without having to rewrite the application or port it over to a new technology stack or framework. Greater business user empowerment - with next generation desktop integrations and "smart productivity folders", a new Web site "design mode" for business users, and enhanced rich media support enabling users to better work with photography, graphics, videos & podcasts created today, as well as contribute content within Flash files directly from the Web. Advanced manageability with extreme performance & scalability - centralized system monitoring, installation, logging, performance metrics & diagnostics, with new built-in "fast check-in" features and a redesigned component management interface - all running on Fusion Middleware infrastructure. Universal Records Management Enhanced user experience: Oracle URM 11g makes records management easier for both business users and records administrators. Simplifications in the end user experience allow the creation of bookmarks into often-used parts of the file plan, easy copying of categories and dispositions, and integrated folder and records search. The records management dashboard provides a consolidated view into records administrator tasks and system performance. DoD 5015.02 v3: Oracle URM is fully certified against all parts of the US Department of Defense records management standard - baseline, classified, and Freedom of Information and Privacy Act. This enables Federal, state, & local governments & public agencies, as well as private companies, to maintain regulated compliance. Expanded functionality through Oracle integrations: Oracle URM 11g allows for an expanded set of functionality through integration capabilities with other Oracle products. This includes configurable records definition capabilities directly within a UCM instance. An out of the box integration with Oracle BI Publisher provides easily configured and robust reporting. Additionally, 11g offers an out of the box Oracle Secure Enterprise Search integration, enabling real time full text discovery across disparate systems in an organization. Read the Press Release Watch the 3 Minute ECM 11g Video Get Up to Speed with the What's New in ECM Suite Datasheet Learn More on OTN with new tutorials, downloads and whitepapers

    Read the article

  • Dynamic Unpivot : SSIS Nugget

    - by jamiet
A question on the SSIS forum earlier today asked: I need to dynamically unpivot some set of columns in my source file. Every month there is one new column and its set of Values. I want to unpivot it without editing my SSIS packages that is deployed Let’s be clear about what we mean by Unpivot. It is a normalisation technique that basically converts columns into rows. By way of example it converts something like this:

AccountCode  Jan     Feb     Mar
AC1          100.00  150.00  125.00
AC2          45.00   75.50   90.00

into something like this:

AccountCode  Month  Amount
AC1          Jan    100.00
AC1          Feb    150.00
AC1          Mar    125.00
AC2          Jan    45.00
AC2          Feb    75.50
AC2          Mar    90.00

The Unpivot transformation in SSIS is perfectly capable of carrying out the operation defined in this example; however, in the case outlined in the aforementioned forum thread the problem was a little bit different. I interpreted it to mean that the number of columns could change, and in that scenario the Unpivot transformation (and indeed the SSIS dataflow in general) is rendered useless because it expects that the number of columns will not change from what is specified at design-time. There is a workaround however. Assuming all of the columns that CAN exist will appear at the end of the rows, we can (1) import all of the columns in the file as just a single column, (2) use a script component to loop over all the values in that “column” and (3) output each one as a row all of its own. Let’s go over that in a bit more detail. I’ve prepared a data file that shows some data that we want to unpivot which shows some customers and their mythical shopping lists (it has column names in the first row). We use a Flat File Connection Manager to specify the format of our data file to SSIS, and a Flat File Source Adapter to put it into the dataflow (no need for a screenshot of that one – it’s very basic). Notice that the values that we want to unpivot all exist in a column called [Groceries]. Now onto the script component where the real work goes on, although the code is pretty simple (a conceptual sketch of the same logic follows at the end of this post). Here I show a screenshot of this executing along with some data viewers. As you can see we have successfully pulled out all of the values into a row all of their own, thus accomplishing the Dynamic Unpivot that the forum poster was after. If you want to run the demo for yourself then I have uploaded the demo package and source file to my SkyDrive: http://cid-550f681dad532637.skydrive.live.com/self.aspx/Public/BlogShare/20100529/Dynamic%20Unpivot.zip Simply extract the two files into a folder, make sure the Connection Manager is pointing to the file, and execute! Hope this is useful. @Jamiet
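As promised above, here is a conceptual sketch of the script component's split-and-emit logic. The real component is a few lines of C# (or VB) inside the SSIS script transform; this PowerShell rendering of the same idea is only for illustration. The file name and column names are placeholders, and it assumes a two-column CSV where [Groceries] holds a semicolon-delimited list of values.

# Read the flat file, treating everything after the first column as one
# delimited [Groceries] value, then emit one output row per value.
Import-Csv "customers.csv" | ForEach-Object {
    $customer = $_.Customer
    $_.Groceries -split ";" | ForEach-Object {
        [pscustomobject]@{ Customer = $customer; Grocery = $_.Trim() }
    }
}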

    Read the article

  • Mark Messages As Read in the Outlook 2010 Reading Pane

    - by Matthew Guay
Do you ever feel annoyed that Outlook 2010 doesn’t mark messages as read as soon as you click and view them in the Reading Pane?  Here we show you how to make Outlook mark them as read as soon as they’re opened. Mark as Read By default, Outlook will not mark a message as read until you select another message.  This can be annoying, because if you read a message and immediately click Delete, it will show up as an unread message in your Deleted Items folder. Let’s change this to make Outlook mark messages as read as soon as we view them in the Reading Pane.  Open Outlook and click File to open Backstage View, and select Options. In Options select Mail on the left menu, and under Outlook panes click on the Reading Pane button. Check the box Mark items as read when viewed in the Reading Pane to make Outlook mark your messages as read when you view them in the Reading Pane.  By default, Outlook will only mark a message read after you’ve been reading it for 5 seconds, though you can change this.  We set it to 0 seconds so our messages would be marked as read as soon as we select them. Click OK in both dialogs, and now your messages will be marked as read as soon as you select them in the Reading Pane, or soon after, depending on your settings. Conclusion Outlook 2010 is a great email client, but like most programs it has its quirks.  This quick tip can help you get rid of one of Outlook’s annoying features, and make it work like you want it to. And, if you’re still using Outlook 2007, check out our article on how to Mark Messages as Read When Viewed in Outlook 2007.

    Read the article

  • How to Use and Manage Extensions to Safari 5

    - by Mysticgeek
While there have been hacks to include extensions in Safari for some time now, Safari 5 now offers proper support for them. Today we take a look at managing extensions in the latest version of Safari. Installation and Setup Download and install Safari 5 (link below). Make sure to download the installer that doesn’t include QuickTime if you don’t want it. Also, uncheck getting Apple updates and news in your email. Then decide whether you want to install Bonjour for Windows and have Safari automatically update or not. Once it’s installed, launch Safari and select Show Menu Bar from the Settings Menu. Then go into Preferences \ Advanced and check the box Show Develop menu in the menu bar. Develop will now appear on the Menu Bar…click on it and select Enable Extensions. Using Extensions Now you can find and start using extensions (link below) that will work with Safari 5. In this example we’re installing PageSaver, which takes an image of what is showing in your browser. Click on the link for the extension you want to install… Then you’ll get a confirmation asking if you want to open or save it. Opening it will install it right away. Click Install in the dialog that asks if you’re sure you want to. Here we see the extension was successfully installed, and you can see the camera icon on the Toolbar. When you’re on a portion of a webpage you want to take an image of, click on the camera icon and you’ll have the image saved in your Downloads folder. Then you can open it up in a browser or image editor. Go into Preferences \ Extensions and from here you can turn the extensions on or off, uninstall them, or check for updates. If you’re a Safari user, or thinking about trying it, you’ll enjoy proper support for extensions in version 5. At the time of this writing we couldn’t find any extensions on the Apple site, but you might want to keep your eye on it to see if they do start listing them.  Download Safari 5 for Mac & PC Safari Extensions

    Read the article

  • Blender mesh mirroring screws up normals when importing in Unity

    - by Shivan Dragon
My issue is as follows: I've modeled a robot in Blender 2.6. It's a mech-like biped or, if you prefer, it kinda looks like a chicken. Since it's symmetrical on the XZ plane, I've decided to mirror some of its parts instead of re-modeling them. Problem is, those mirrored meshes look fine in Blender (faces all show up properly and light falls on them as it should) but in Unity the faces and lighting on those very same mirrored meshes are wrong. What also stumps me is the fact that even if I flip normals in Blender, I still get bad results in Unity for those meshes (though now I get different bad results than before). Here's the details: Here's a Blender screenshot of the robot. I've taken 2 pictures and slightly rotated the camera around so the geometry in question can be clearly seen: Now, the selected cog-wheel-like piece is the mirrored mesh obtained from mirroring the other cog-wheel on the other (far) side of the robot torso. The back-face culling is turned off here, so it's actually showing the faces as dictated by their normals. As you can see it looks ok, faces are orientated correctly and light falls on it ok (as it does on the original cog-wheel from which it was mirrored). Now if I export this as fbx using the following settings: and then import it into Unity, it looks all screwy: It looks like the normals are in the wrong direction. This is already very strange, because, while in Blender the original cog-wheel and its mirrored counterpart both had normals facing one way, when importing this into Unity, the original cog-wheel still looks ok (like in Blender) but the mirrored one now has normals inverted. First thing I tried was to go "ok, so I'll flip normals in Blender for the mirrored cog-wheel and then it'll display ok in Unity and that's that". So I went back to Blender, flipped the normals on that mesh, so now it looks bad in Blender: and then re-exported as fbx with the same settings as before, and re-imported into Unity. Sure enough the cog-wheel now looks ok in Unity, in the sense where the faces show up properly, but if you look closely you'll notice that light and shadows are now wrong: Now in Unity, even though the light comes from the back of the robot, the cog-wheel in question acts as if light was coming from somewhere else; its faces which should be in shadow are lit up, and those that should be lit up are dark. Here are some things I've tried which didn't do anything: in Blender I tried mirroring the mesh in 2 ways: first by using the scale to -1 trick, then by using the mirroring tool (select mesh, hit ctrl-m, select mirror axis); both ways yield the exact same result in Unity. I've tried playing around with the prefab import settings like "normals: import/calculate", "tangents: import/calculate". I've also tried not exporting as fbx manually from Blender, but just dropping the .blend file in the assets folder inside the Unity project. So, my question is: is there a way to actually mirror a mesh in Blender and then have it imported in Unity so that it displays properly (as it does in Blender)? If yes, how? Thank you, and please excuse the TL;DR style.

    Read the article

  • How to update all the SSIS packages’ Connection Managers in a BIDS project with PowerShell

    - by Luca Zavarella
During the development of a BI solution, we all know that 80% of the time is spent during the ETL (Extract, Transform, Load) phase. If you use the BI Stack Tool provided by Microsoft SQL Server, this step is accomplished by the development of n Integration Services (SSIS) packages. In general, the number of packages made in the ETL phase for a non-trivial BI solution is quite significant. An SSIS package, therefore, extracts data from a source, it "hammers" :) the data and then transfers it to a specific destination. Very often it happens that the connection to the source data is the same for all packages. Using Integration Services, this results in having the same Connection Manager (perhaps with the same name) for all packages: The source data of my BI solution comes from an Helper database (HLP); then, for each package that imports this data, I have the HLP Connection Manager (the use of a Shared Data Source is not recommended, because the Connection String is hard-wired and therefore you have to open the SSIS project and use the proper wizard to change it...). In order to change the HLP Connection String at runtime, we could use Package Configurations, or we could run our packages with DTLoggedExec by Davide Mauri (a must-have if you are developing with SQL Server 2005/2008). But my need was to change all the HLP connections in all packages within the SSIS Visual Studio project, because I had to version them through Team Foundation Server (TFS). A good scribe with a lot of patience would have changed all the connections by hand, double-clicking the HLP Connection Manager of each package, and then changing the referenced server/database: Not being endowed with such virtues :) I took just a little time to write a small script in PowerShell, using the fact that an SSIS package (a .dtsx file) is nothing but an XML file, and therefore can be changed quite easily. I'm not a guru of PowerShell, but I managed more or less to put together the following lines of code:

$LeftDelimiterString = "Initial Catalog="
$RightDelimiterString = ";Provider="
$ToBeReplacedString = "AstarteToBeReplaced"
$ReplacingString = "AstarteReplacing"
$MainFolder = "C:\MySSISPackagesFolder"

$files = get-childitem "$MainFolder" *.dtsx `
    | Where-Object {!($_.PSIsContainer)}

foreach ($file in $files)
{
    (Get-Content $file.FullName) `
        | % {$_ -replace "($LeftDelimiterString)($ToBeReplacedString)($RightDelimiterString)", "`$1$ReplacingString`$3"} `
        | Set-Content $file.FullName;
}

The script above just opens any SSIS package (.dtsx) in the supplied folder, then for each of them goes in search of the following text: Initial Catalog=AstarteToBeReplaced;Provider= and it replaces the text found with this: Initial Catalog=AstarteReplacing;Provider= I don’t enter into the details of each cmdlet used. I leave the reader to search for these details. Alternatively, you can use a specific object model exposed in some .NET assemblies provided by Integration Services, or you can use the Pacman utility: Enjoy! :) P.S. Using TFS as the versioning system, before running the script I checked out the packages and, after the script executed successfully, I checked them in.
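To automate the check-out/check-in step from the P.S., something along these lines works from a Visual Studio command prompt. This is only a sketch: it assumes tf.exe (the TFS command-line client) is on the PATH, and the script file name below is a hypothetical name for the code above.

# Check the packages out of TFS, run the replace, then check them back in.
tf checkout "C:\MySSISPackagesFolder\*.dtsx" /recursive
.\Update-ConnectionStrings.ps1   # hypothetical name for the script above
tf checkin "C:\MySSISPackagesFolder\*.dtsx" /recursive /comment:"Repoint HLP connection"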

    Read the article

  • Coherence Based WebLogic Server Session Management

    - by [email protected]
    Specifications / supported configurations: WebLogic Server 10.3.2 (or 10.3.1) with Coherence 3.5.2/463. If you use other versions, please check the following matrix:

        WebLogic Server 9.2 MP1:  Smart Update Patch ID AJQB;  minimum Coherence release level 3.4.2 Patch 2 (MetaLink Patch ID 8429415)
        WebLogic Server 10.3:     Smart Update Patch ID 6W2W;  minimum Coherence release level 3.4.2 Patch 6 (MetaLink Patch ID 11399293)

    Environment variables:
    - %COHERENCE_HOME%: Coherence installation directory
    - %DOMAIN_HOME%: WebLogic domain folder

    Instructions: we will create two WebLogic domains, domain_a and domain_b, and configure them with Coherence-based session management. Changes to a session variable's value in one domain will then propagate to the other domain.

    Main steps, WebLogic Server side:
    1. Create domain_a (the domain-creation process is omitted here).
    2. Copy %COHERENCE_HOME%\lib\coherence.jar to %DOMAIN_HOME%\lib.
    3. Start up the domain.
    4. Deploy %COHERENCE_HOME%\lib\coherence-web-spi.war as a Shared Library.
    5. Repeat steps 1-4 for domain_b.

    Coherence side:
    1. Duplicate %COHERENCE_HOME%\bin\cache-server.cmd in the same folder and rename the copy to web-cache-server.cmd.
    2. Modify web-cache-server.cmd so it runs:

        java -server -Xms512m -Xmx512m -cp %coherence_home%/lib/coherence.jar;%coherence_home%/lib/coherence-web-spi.war -Dtangosol.coherence.management.remote=true -Dtangosol.coherence.cacheconfig=WEB-INF/classes/session-cache-config.xml -Dtangosol.coherence.session.localstorage=true com.tangosol.net.DefaultCacheServer

    3. Start web-cache-server.cmd.

    Testing:
    1. Develop a web app with OEPE or JDeveloper that implements functions for changing, viewing, and listing session variables (or download the sample code here).
    2. Modify weblogic.xml with the following content:

        <?xml version="1.0" encoding="UTF-8"?>
        <wls:weblogic-web-app xmlns:wls="http://xmlns.oracle.com/weblogic/weblogic-web-app"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd http://xmlns.oracle.com/weblogic/weblogic-web-app http://xmlns.oracle.com/weblogic/weblogic-web-app/1.0/weblogic-web-app.xsd">
            <wls:weblogic-version>10.3.2</wls:weblogic-version>
            <wls:context-root>CoherenceWeb</wls:context-root>
            <wls:library-ref>
                <wls:library-name>coherence-web-spi</wls:library-name>
                <wls:specification-version>1.0.0.0</wls:specification-version>
                <wls:exact-match>true</wls:exact-match>
            </wls:library-ref>
        </wls:weblogic-web-app>

    3. Deploy the web app to domain_a and domain_b.
    4. Change a session variable's value at domain_a and check whether it changed at domain_b (a scripted version of this check is sketched after the references below).

    References: Using Oracle Coherence*Web 3.4.2 with Oracle WebLogic Server 10gR3; Oracle Coherence*Web 3.4.2 with Oracle WebLogic Server 10gR3.
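
    As a rough way to automate the final testing step from outside the browser, a small Python sketch follows; the host names, ports, and set/get URLs are hypothetical placeholders for whatever pages the sample web app actually exposes:

        import urllib.request
        import http.cookiejar

        DOMAIN_A = "http://domain-a-host:7001/CoherenceWeb"  # hypothetical
        DOMAIN_B = "http://domain-b-host:7001/CoherenceWeb"  # hypothetical

        # Set a session attribute on domain_a; the response establishes JSESSIONID.
        jar = http.cookiejar.CookieJar()
        opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
        opener.open(DOMAIN_A + "/set?name=color&value=red").read()
        session_id = next(c.value for c in jar if c.name == "JSESSIONID")

        # Replay the same session id to domain_b by hand (cookies are host-scoped,
        # so the cookie jar would not send it to a different host automatically).
        req = urllib.request.Request(DOMAIN_B + "/get?name=color",
                                     headers={"Cookie": "JSESSIONID=" + session_id})
        value = urllib.request.urlopen(req).read().decode()
        print("domain_b sees:", value)  # expect "red" if replication works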

    Read the article

  • Restore Files from Backups on Windows Home Server

    - by Mysticgeek
    If you use Windows Home Server to back up the machines on your network, you're in luck if you accidentally delete important files or they become corrupted. Today we take a look at getting your data back from backups on your home server.

    Open the Windows Home Server Console and select the Computers and Backup tab. Right-click the computer you need to restore files for and select View Backups. This opens a list of your recent backups; highlight the one you want to open, then click the Open button in the Restore or View Files section. If this is the first time you're restoring a file, you'll be asked to verify installation of the device software: check the box next to Always trust software from Microsoft Corporation and click Install. Now wait while the backup data is retrieved.

    After the backup data has been retrieved, an Explorer window opens showing drive (Z:), which contains the backup data, just as if you had opened a drive on your local machine. Now you can browse through the backup and find the files you're missing. You can open the files directly, or drag them onto your machine to the location where you want to restore them.

    Restoring your data is actually a very easy process with Windows Home Server. Of course, you'll want to make sure the computers on your network are being backed up to WHS; if you need help with that, check out our article on how to configure your computer to back up to WHS. If you want to back up your home server shares, check out our article on how to back up WHS folders to an external drive.

    Read the article

  • To 'seal' or to 'wrap': that is the question ...

    - by Simon Thorpe
    If you follow this blog you will already have a good idea of what Oracle Information Rights Management (IRM) does. By encrypting documents, Oracle IRM secures and tracks all copies of those documents, everywhere they are shared, stored, and used, inside and outside your firewall. Unlike earlier encryption products, authorized end users can transparently use IRM-encrypted documents within standard desktop applications such as Microsoft Office, Adobe Reader, Internet Explorer, etc., without first having to manually decrypt the documents. Oracle refers to this encryption process as 'sealing', and it is thanks to the freely available Oracle IRM Desktop that end users can transparently open 'sealed' documents within desktop applications without needing to know they are encrypted and without being able to save them out in unencrypted form.

    So Oracle IRM provides an amazing, unprecedented capability to secure and track every copy of your most sensitive information, even enabling end-user access to be revoked long after the documents have been copied to home computers or burnt to CDs/DVDs. But what doesn't it do? The main limitation of Oracle IRM (and IRM products in general) is format and platform support. Oracle IRM supports by far the broadest range of desktop applications and the deepest range of application versions compared to other IRM vendors. This is important because you don't want to exclude sensitive business processes from being 'sealed' just because either the file format is not supported or users cannot upgrade to the latest version of Microsoft Office or Adobe Reader. But even the Oracle IRM Desktop can only open 'sealed' documents on Windows, and does not, for example, currently support CAD (although this is coming in a future release). IRM products from other vendors are much more restrictive.

    To address this limitation, Oracle has just made available the Oracle IRM Wrapper, an all-format, any-platform encryption/decryption utility. It uses the same core Oracle IRM web services and classification-based rights model to manually encrypt and decrypt files of any format on any Java-capable operating system. The encryption envelope is the same, and it uses the same role- and classification-based rights as 'sealing', but before you can use 'wrapped' files you must manually decrypt them. Essentially it is old-school manual encryption/decryption using the modern classification-based rights model of Oracle IRM. So if you want to share sensitive CAD documents, ZIP archives, media files, etc. with a partner, and you already have Oracle IRM, it's time to get 'wrapping'!

    Please note that the Oracle IRM Wrapper is made available as a free sample application (with full source code) and is not formally supported by Oracle. However, it is informally supported by its author, Martin Lambert, who also created the widely-used Oracle IRM Hot Folder automated sealing application.

    Read the article

  • Illegal characters for SharePoint 2010 Content Type name

    - by Kelly Jones
    Quick tip: you can’t include a backslash in the name of a SharePoint 2010 content type. In fact, there are several illegal characters: \ / : * ? " # % < > { } | ~ &, plus two consecutive periods (..) and special characters such as a tab. What, you didn’t notice that after entering one of these characters in the name? Is it because all you saw was the generic error screen? Oh, that’s right... you need to turn off custom errors in the layouts folder (see this blog post for details), and you’ll also need to turn them off for the web application. Once you do that, you’ll see the real exception. I wonder why the SharePoint team doesn’t just tell the user that the content type name contains illegal characters before the user hits the Create button. Here’s a copy of the complete error (for the search engines):

        Server Error in '/' Application.
        The content type name 'asdfadsf\asdfasf' cannot contain: \ / : * ? " # % < > { } | ~ &, two consecutive periods (..), or special characters such as a tab.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: Microsoft.SharePoint.SPInvalidContentTypeNameException: The content type name 'asdfadsf\asdfasf' cannot contain: \ / : * ? " # % < > { } | ~ &, two consecutive periods (..), or special characters such as a tab.

        Stack Trace:
        [SPInvalidContentTypeNameException: The content type name 'asdfadsf\asdfasf' cannot contain: \ / : * ? " # % < > { } | ~ &, two consecutive periods (..), or special characters such as a tab.]
           Microsoft.SharePoint.SPContentType.ValidateName(String name) +27419522
           Microsoft.SharePoint.SPContentType.ValidateNameWithResource(String strVal, String& strLocalized) +423
           Microsoft.SharePoint.SPContentType.set_Name(String value) +151
           Microsoft.SharePoint.SPContentType.Initialize(SPContentType parentContentType, SPContentTypeCollection collection, String name) +112
           Microsoft.SharePoint.SPContentType..ctor(SPContentType parentContentType, SPContentTypeCollection collection, String name) +132
           Microsoft.SharePoint.ApplicationPages.ContentTypeCreatePage.BtnOK_Click(Object sender, EventArgs e) +497
           System.Web.UI.WebControls.Button.OnClick(EventArgs e) +115
           System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) +140
           System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +29
           System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +2981

        Version Information: Microsoft .NET Framework Version: 2.0.50727.4927; ASP.NET Version: 2.0.50727.4927
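
    If you would rather catch the bad name up front (for example in a provisioning script that creates content types), a minimal client-side check that mirrors the rules quoted in the error message might look like this sketch; the function name is made up for illustration:

        import re

        # Characters the error message lists as illegal, plus tab; ".." is
        # checked separately because it is a sequence, not a single character.
        _ILLEGAL = re.compile(r'[\\/:*?"#%<>{}|~&\t]')

        def is_valid_content_type_name(name: str) -> bool:
            if not name or _ILLEGAL.search(name):
                return False
            if ".." in name:
                return False
            return True

        assert not is_valid_content_type_name(r'asdfadsf\asdfasf')  # name from the error
        assert is_valid_content_type_name('Invoice Document')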

    Read the article

  • Adding an expression based image in a client report definition file (RDLC)

    - by rajbk
    In previous posts, I showed you how to create a report using Visual Studio 2010 and how to add a hyperlink to the report. In this post, I show you how to add an expression-based image to each row of the report; this is similar to displaying a checkbox column for Boolean values. A sample project is attached to the bottom of this post. To start off, download the project we created earlier from here. The report we created has a “Discontinued” column of type Boolean; we are going to change it to display an “available” icon or “unavailable” icon based on the “Discontinued” row value.

    1. Load the project and double-click Products.rdlc. With the report design surface active, you will see the “Report Data” tool window.
    2. Right-click the Images folder and select “Add Image..”. Add the available_icon.png and discontinued_icon.png images (the sample project at the end of this post has the icon PNG files). You can see the images we added in the “Report Data” tool window.
    3. Drag and drop the available_icon into the “Discontinued” column row (not the header). A dialog box appears which allows us to set the image properties.
    4. We will add an expression that specifies the image to display based on the “Discontinued” value from the Product table. Click the expression (fx) button and add the following expression:

        = IIf(Fields!Discontinued.Value = True, "discontinued_icon", "available_icon")

    5. Save and exit all dialog boxes. In the report design surface, resize the column header and change the text from “Discontinued” to “In Production”.
    6. (Optional) Right-click the image cell (not the header), go to “Image Properties..”, and offset it by 5pt from the left.
    7. (Optional) Change the border color, since it is not set by default for image columns.

    We are done adding our image column! Compile the application and run it. You will see that the “In Production” column has red ‘x’ icons for discontinued products. Download the VS 2010 sample project: NorthwindReportsImage.zip

    Other posts:
    - Adding a hyperlink in a client report definition file (RDLC)
    - Rendering an RDLC directly to the Response stream in ASP.NET MVC
    - ASP.NET MVC Paging/Sorting/Filtering using the MVCContrib Grid and Pager
    - Localization in ASP.NET MVC 2 using ModelMetadata
    - Setting up Visual Studio 2010 to step into Microsoft .NET Source Code
    - Running ASP.NET Webforms and ASP.NET MVC side by side
    - Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework

    Read the article
