Search Results

Search found 16801 results on 673 pages for 'task manager'.


  • Plesk 11: install Apache with SNI support

    - by Ueli
    If I try to update from standard Apache to Apache with SNI support with the Plesk installation program (example.com:8447), I get an error saying that I have to remove apr-util-ldap-1.4.1-1.el5.x86_64. The installer output is in German (translated below):

      Retrieving information about installed packages... Installation started in background
      Downloading file PSA_11.0.9/dist-rpm-CentOS-5-x86_64/build-11.0.9-cos5-x86_64.hdr.gz: 11%..20%..30%..40%..50%..60%..70%..81%..91%..100% done.
      Downloading file PSA_11.0.9/update-rpm-CentOS-5-x86_64/update-11.0.9-cos5-x86_64.hdr.gz: 10%..20%..30%..40%..50%..60%..70%..80%..90%..100% done.
      Downloading file PSA_11.0.9/thirdparty-rpm-CentOS-5-x86_64/thirdparty-11.0.9-cos5-x86_64.hdr.gz: 10%..26%..43%..77%..100% done.
      Downloading file BILLING_11.0.9/thirdparty-rpm-RedHat-all-all/thirdparty-11.0.9-rhall-all.hdr.gz: 100% done.
      Downloading file BILLING_11.0.9/update-rpm-RedHat-all-all/update-11.0.9-rhall-all.hdr.gz: 100% done.
      Downloading file SITEBUILDER_11.0.10/thirdparty-rpm-RedHat-all-all/thirdparty-11.0.10-rhall-all.hdr.gz: 100% done.
      Downloading file SITEBUILDER_11.0.10/dist-rpm-RedHat-all-all/build-11.0.10-rhall-all.hdr.gz: 10%..22%..31%..41%..51%..65%..70%..80%..90%..100% done.
      Downloading file SITEBUILDER_11.0.10/update-rpm-RedHat-all-all/update-11.0.10-rhall-all.hdr.gz: 100% done.
      Downloading file APACHE_2.2.22/thirdparty-rpm-CentOS-5-x86_64/thirdparty-2.2.22-rh5-x86_64.hdr.gz: 19%..25%..35%..83%..93%..100% done.
      Downloading file APACHE_2.2.22/update-rpm-CentOS-5-x86_64/update-2.2.22-rh5-x86_64.hdr.gz: 100% done.
      Downloading file BILLING_11.0.9/dist-rpm-RedHat-all-all/build-11.0.9-rhall-all.hdr.gz: 11%..23%..31%..41%..52%..62%..73%..83%..91%..100% done.
      Downloading file APACHE_2.2.22/dist-rpm-CentOS-5-x86_64/build-2.2.22-rh5-x86_64.hdr.gz: 36%..50%..100% done.
      Determining which packages need to be installed.
      -> Error: The installation can only continue once the package apr-util-ldap-1.4.1-1.el5.x86_64 has been removed from the system.
      Not all packages were installed. Please fix this problem and try to install the packages again. If you cannot fix the problem yourself, please contact technical support.
- «Error: The installation can be continued only if the package apr-util-ldap-1.4.1-1.el5.x86_64 is removed from the system» But I can't uninstall apr-util-ldap-1.4.1-1.el5.x86_64 without removing a lot of important packages: Dependencies Resolved ========================================================================================================================================= Package Arch Version Repository Size ========================================================================================================================================= Removing: apr-util-ldap x86_64 1.4.1-1.el5 installed 9.0 k Removing for dependencies: SSHTerm noarch 0.2.2-10.12012310 installed 4.9 M awstats noarch 7.0-11122114.swsoft installed 3.5 M httpd x86_64 2.2.23-3.el5 installed 3.4 M mailman x86_64 3:2.1.9-6.el5_6.1 installed 34 M mod-spdy-beta x86_64 0.9.3.3-386 installed 2.4 M mod_perl x86_64 2.0.4-6.el5 installed 6.8 M mod_python x86_64 3.2.8-3.1 installed 1.2 M mod_ssl x86_64 1:2.2.23-3.el5 installed 179 k perl-Apache-ASP x86_64 2.59-0.93298 installed 543 k php53 x86_64 5.3.3-13.el5_8 installed 3.4 M php53-sqlite2 x86_64 5.3.2-11041315 installed 366 k plesk-core x86_64 11.0.9-cos5.build110120608.16 installed 79 M plesk-l10n noarch 11.0.9-cos5.build110120827.16 installed 21 M pp-sitebuilder noarch 11.0.10-38572.12072100 installed 181 M psa x86_64 11.0.9-cos5.build110120608.16 installed 473 k psa-awstats-configurator noarch 11.0.9-cos5.build110120606.19 installed 0.0 psa-backup-manager x86_64 11.0.9-cos5.build110120608.16 installed 8.6 M psa-backup-manager-vz x86_64 11.0.0-cos5.build110120123.10 installed 1.6 k psa-fileserver x86_64 11.0.9-cos5.build110120608.16 installed 364 k psa-firewall x86_64 11.0.9-cos5.build110120608.16 installed 550 k psa-health-monitor noarch 11.0.9-cos5.build110120606.19 installed 2.3 k psa-horde noarch 3.3.13-cos5.build110120606.19 installed 20 M psa-hotfix1-9.3.0 x86_64 9.3.0-cos5.build93100518.16 installed 23 k psa-imp noarch 4.3.11-cos5.build110120606.19 installed 12 M psa-ingo noarch 1.2.6-cos5.build110120606.19 installed 5.1 M psa-kronolith noarch 2.3.6-cos5.build110120606.19 installed 6.3 M psa-libxml-proxy x86_64 2.7.8-0.301910 installed 1.2 M psa-mailman-configurator x86_64 11.0.9-cos5.build110120608.16 installed 5.5 k psa-migration-agents x86_64 11.0.9-cos5.build110120608.16 installed 169 k psa-migration-manager x86_64 11.0.9-cos5.build110120608.16 installed 1.1 M psa-mimp noarch 1.1.4-cos5.build110120418.19 installed 2.9 M psa-miva x86_64 1:5.06-cos5.build1013111101.14 installed 4.5 M psa-mnemo noarch 2.2.5-cos5.build110120606.19 installed 4.1 M psa-mod-fcgid-configurator x86_64 2.0.0-cos5.build1013111101.14 installed 0.0 psa-mod_aclr2 x86_64 12021319-9e86c2f installed 8.1 k psa-mod_fcgid x86_64 2.3.6-12050315 installed 222 k psa-mod_rpaf x86_64 0.6-12021310 installed 7.7 k psa-passwd noarch 3.1.3-cos5.build1013111101.14 installed 3.7 M psa-php53-configurator x86_64 1.6.2-cos5.build110120608.16 installed 6.4 k psa-rubyrails-configurator x86_64 1.1.6-cos5.build1013111101.14 installed 0.0 psa-spamassassin x86_64 11.0.9-cos5.build110120608.16 installed 167 k psa-turba noarch 2.3.6-cos5.build110120606.19 installed 6.1 M psa-updates noarch 11.0.9-cos5.build110120704.10 installed 0.0 psa-vhost noarch 11.0.9-cos5.build110120606.19 installed 160 k psa-vpn x86_64 11.0.9-cos5.build110120608.16 installed 1.9 M psa-watchdog x86_64 11.0.9-cos5.build110120608.16 installed 2.9 M webalizer x86_64 2.01_10-30.1 installed 259 k Transaction Summary 
========================================================================================================================================= Remove 48 Package(s) Reinstall 0 Package(s) Downgrade 0 Package(s) What should I do?

    Read the article

  • FreeBSD: high load on loopback interface

    - by user1740915
    I have a problem with a FreeBSD server. There is a FreeBSD 9.0 amd64, two network cards em1 (internet), em0 (local network) configured firewall ipfw, natd, squid (not transparent), the server acts as a gateway for access to the Internet. Next problem: upload via squid is very low. At this moment I see next: natd, dhcpd load the cpu at that time when uploading through squid and there are a lot of traffic through the loopback interface. ipfw show output 0100 655389684 36707144666 allow ip from any to any via lo0 00200 0 0 deny ip from any to 127.0.0.0/8 00300 0 0 deny ip from 127.0.0.0/8 to any 00400 0 0 deny ip from any to ::1 00500 0 0 deny ip from ::1 to any 00600 4 292 allow ipv6-icmp from :: to ff02::/16 00700 0 0 allow ipv6-icmp from fe80::/10 to fe80::/10 00800 1 76 allow ipv6-icmp from fe80::/10 to ff02::/16 00900 0 0 allow ipv6-icmp from any to any ip6 icmp6types 1 01000 0 0 allow ipv6-icmp from any to any ip6 icmp6types 2,135,136 01100 1615 76160 deny ip from 192.168.1.1 to any in via em1 01200 0 0 deny ip from 199.69.99.11 to any in via em0 01300 46652 3705426 deny ip from any to 172.16.0.0/12 via em1 01400 3936404 345618870 deny ip from any to 192.168.0.0/16 via em1 01500 4 336 deny ip from any to 0.0.0.0/8 via em1 01600 4129 387621 deny ip from any to 169.254.0.0/16 via em1 01700 0 0 deny ip from any to 192.0.2.0/24 via em1 01800 917566 33777571 deny ip from any to 224.0.0.0/4 via em1 01900 147872 22029252 deny ip from any to 240.0.0.0/4 via em1 02000 1132194739 1190981955947 divert 8668 ip4 from any to any via em1 02100 3 248 deny ip from 172.16.0.0/12 to any via em1 02200 35925 2281289 deny ip from 192.168.0.0/16 to any via em1 02300 1808 122494 deny ip from 0.0.0.0/8 to any via em1 02400 3 174 deny ip from 169.254.0.0/16 to any via em1 02500 0 0 deny ip from 192.0.2.0/24 to any via em1 02600 0 0 deny ip from 224.0.0.0/4 to any via em1 02700 0 0 deny ip from 240.0.0.0/4 to any via em1 02800 960156249 1095316736582 allow tcp from any to any established 02900 64236062 8243196577 allow ip from any to any frag 03000 34 1756 allow tcp from any to me dst-port 25 setup 03100 193 11580 allow tcp from any to me dst-port 53 setup 03200 63 4222 allow udp from any to me dst-port 53 03300 64 8350 allow udp from me 53 to any 03400 417 24140 allow tcp from any to me dst-port 80 setup 03500 211 10472 allow ip from any to me dst-port 3389 setup 05300 77 4488 allow ip from any to me dst-port 1723 setup 05400 3 156 allow ip from any to me dst-port 8443 setup 05500 9882 590596 allow tcp from any to me dst-port 22 setup 05600 1 60 allow ip from any to me dst-port 2000 setup 05700 0 0 allow ip from any to me dst-port 2201 setup 07400 4241779 216690096 deny log logamount 1000 ip4 from any to any in via em1 setup proto tcp 07500 21135656 1048824936 allow tcp from any to any setup 07600 474447 35298081 allow udp from me to any dst-port 53 keep-state 07700 532 40612 allow udp from me to any dst-port 123 keep-state 65535 1990638432 1122305322718 allow ip from any to any systat -ifstat when uploading via squid Load Average ||| Interface Traffic Peak Total tun0 in 79.507 KB/s 232.479 KB/s 42.314 GB out 2.022 MB/s 2.424 MB/s 59.662 GB lo0 in 4.450 MB/s 4.450 MB/s 43.723 GB out 4.450 MB/s 4.450 MB/s 43.723 GB em1 in 2.629 MB/s 2.982 MB/s 464.533 GB out 2.493 MB/s 2.875 MB/s 484.673 GB em0 in 240.458 KB/s 296.941 KB/s 442.368 GB out 512.508 KB/s 850.857 KB/s 416.122 GB top output PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND 66885 root 1 92 0 26672K 2784K CPU3 3 528:43 65.48% natd 9160 
dhcpd 1 45 0 31032K 9280K CPU1 1 7:40 32.96% dhcpd 66455 root 1 20 0 18344K 2856K select 1 119:27 1.37% openvpn 16043 squid 1 20 0 44404K 17884K kqread 2 0:22 0.29% squid squid.conf cat /usr/local/etc/squid/squid.conf # # Recommended minimum configuration: # acl manager proto cache_object acl localhost src 127.0.0.1/32 ::1 acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1 # Example rule allowing access from your local networks. # Adapt to list your (internal) IP networks from where browsing # should be allowed acl localnet src 10.0.0.0/8 # RFC1918 possible internal network acl localnet src 172.16.0.0/12 # RFC1918 possible internal network acl localnet src 192.168.0.0/16 # RFC1918 possible internal network acl localnet src fc00::/7 # RFC 4193 local private network range acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines acl SSL_ports port 443 acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port 443 # https acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl Safe_ports port 1025-65535 # unregistered ports acl Safe_ports port 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port 591 # filemaker acl Safe_ports port 777 # multiling http acl CONNECT method CONNECT # # Recommended minimum Access Permission configuration: # # Only allow cachemgr access from localhost http_access allow manager localhost http_access deny manager # Deny requests to certain unsafe ports http_access deny !Safe_ports # Deny CONNECT to other than secure SSL ports http_access deny CONNECT !SSL_ports # We strongly recommend the following be uncommented to protect innocent # web applications running on the proxy server who think the only # one who can access services on "localhost" is a local user http_access deny to_localhost # # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS # # Example rule allowing access from your local networks. # Adapt localnet in the ACL section to list your (internal) IP networks # from where browsing should be allowed http_access allow localnet http_access allow localhost # And finally deny all other access to this proxy http_access deny all # Squid normally listens to port 3128 http_port 192.168.1.1:3128 # Uncomment and adjust the following to add a disk cache directory. #cache_dir ufs /var/squid/cache 100 16 256 # Leave coredumps in the first cache dir coredump_dir /var/squid/cache I understand that the traffic passes through the SQUID several times. But can not find why.

    Read the article

  • Problem upgrading kernel on Debian 3.1

    - by exhuma
    Hi, I have a quite old box in a remote server farm. So I have no direct access. Only remote SSH (and via SSH to a serial console). I haven't updated this box in ages. Now, whenever I want to install a new package, a dependency to glibc appears. Unfortunately, the install of glibc depends on a 2.6 kernel and I am running a venerable 2.4 kernel (one more reason to upgrade). The problem is, that the install of a new kernel has an indirect (over locales) dependency to glibc. So, to install glibc, I need a new kernel. For a new kernel, I need to upgrade glibc. Essentially I am blocked. What's the best way to proceed considering I have no "hardware" access? Here's a quick transcript of the upgrade process: [green:~]% sudo aptitude install linux-image-686 Reading Package Lists... Done Building Dependency Tree Reading extended state information Initializing package states... Done Reading task descriptions... Done The following packages are unused and will be REMOVED: gcc-4.3-base The following NEW packages will be automatically installed: dash libc6-i686 libparse-recdescent-perl linux-image-2.6-686 linux-image-2.6.18-6-686 module-init-tools yaird The following packages have been kept back: adduser apache2 apache2-mpm-prefork apache2-utils apache2.2-common apt apt-utils aptitude autoconf autotools-dev awstats base-files base-passwd [...snip...] util-linux vacation vim vim-common wamerican wbritish wget whiptail whois wwwconfig-common zlib1g The following NEW packages will be installed: dash libc6-i686 libparse-recdescent-perl linux-image-2.6-686 linux-image-2.6.18-6-686 linux-image-686 module-init-tools yaird The following packages will be upgraded: hotplug libc6 2 packages upgraded, 8 newly installed, 1 to remove and 277 not upgraded. Need to get 0B/22.7MB of archives. After unpacking 52.1MB will be used. Do you want to continue? [Y/n/?] Writing extended state information... Done Preconfiguring packages ... (Reading database ... 34065 files and directories currently installed.) Preparing to replace libc6 2.3.6.ds1-13 (using .../libc6_2.7-18lenny2_i386.deb) ... Checking for services that may need to be restarted... Checking init scripts... WARNING: init script for postgresql not found. [ --- libc6 config screen appears here --- ] WARNING: POSIX threads library NPTL requires kernel version 2.6.8 or later. If you use a kernel 2.4, please upgrade it before installing glibc. The installation of a 2.6 kernel _could_ ask you to install a new libc first, this is NOT a bug, and should *NOT* be reported. In that case, please add etch sources to your /etc/apt/sources.list and run: apt-get install -t etch linux-image-2.6 Then reboot into this new kernel, and proceed with your upgrade dpkg: error processing /var/cache/apt/archives/libc6_2.7-18lenny2_i386.deb (--unpack): subprocess pre-installation script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/libc6_2.7-18lenny2_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1) Ack! Something bad happened while installing packages. Trying to recover: dpkg: dependency problems prevent configuration of locales: locales depends on glibc-2.7-1; however: Package glibc-2.7-1 is not installed. dpkg: error processing locales (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: locales Reading Package Lists... Done Building Dependency Tree Reading extended state information Initializing package states... Done Reading task descriptions... 
Done

    Now, if I follow the instructions as prompted, I get the following. Note that I am using aptitude instead of apt-get to benefit from its better dependency tracking. I did try with apt-get first, but that led me to the same problem.

      [green:~]% sudo aptitude install -t etch linux-image-2.6.26-2-686
      Reading Package Lists... Done
      Building Dependency Tree
      Reading extended state information
      Initializing package states... Done
      Reading task descriptions... Done
      E: Unable to correct problems, you have held broken packages.
      E: Unable to correct dependencies, some packages cannot be installed
      E: Unable to resolve some dependencies!
      Some packages had unmet dependencies. This may mean that you have requested an impossible situation
      or if you are using the unstable distribution that some required packages have not yet been created
      or been moved out of Incoming.
      The following packages have unmet dependencies:
        linux-image-2.6.26-2-686: Depends: initramfs-tools (>= 0.55) but it is not installable or
                                           yaird (>= 0.0.13) but it is not installable or
                                           linux-initramfs-tool which is a virtual package.

    Any ideas?

    Read the article

  • Linux kernel crash in mutex_lock_slowpath: "blocked for more than 120 seconds". What to do?

    - by Roddick
    I have out-of-the box Debian Lenny with non-custom kernel 2.6.26-2-amd64. Brand new server that is used to 5% of it's potential, CPU and Disk-wise. Meaning it probably not crashing because of overload. every few days it freezes with hundreds of these messages in console log: : [284847.828428] INFO: task apache2:12473 blocked for more than 120 seconds. : [284847.868468] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. : [284847.912759] apache2 D ffff8101bc6b7ab0 0 12473 14358 : [284847.912763] ffff810160d5bc50 0000000000000082 ffff8101c0002e40 0000000000000000 : [284847.912766] ffff8101a7c42950 ffff810327d92810 ffff8101a7c42bd8 0000000400000044 : [284847.912770] ffff8101c0002e40 00000000000612d0 0000000000000000 00000040000612d0 : [284847.912773] Call Trace: : [284847.912786] [<ffffffff80429b0d>] __mutex_lock_slowpath+0x64/0x9b : [284847.912790] [<ffffffff80429972>] mutex_lock+0xa/0xb : [284847.912794] [<ffffffff802a20b9>] do_lookup+0x82/0x1c1 : [284847.912800] [<ffffffff802a4271>] __link_path_walk+0x87a/0xd19 : [284847.912805] [<ffffffff80295844>] kmem_getpages+0x96/0x15f : [284847.912808] [<ffffffff80295fb7>] ____cache_alloc_node+0x6d/0x106 : [284847.912814] [<ffffffff802a4756>] path_walk+0x46/0x8b : [284847.912819] [<ffffffff802a4a82>] do_path_lookup+0x158/0x1cf : [284847.912822] [<ffffffff802a3879>] getname+0x140/0x1a7 : [284847.912827] [<ffffffff802a53f1>] __user_walk_fd+0x37/0x4c : [284847.912831] [<ffffffff8029e381>] vfs_lstat_fd+0x18/0x47 : [284847.912840] [<ffffffff8029e3c9>] sys_newlstat+0x19/0x31 : [284847.912848] [<ffffffff8020beda>] system_call_after_swapgs+0x8a/0x8f Almost all traces has __mutex_lock_slowpath as top-level. Only some has different trace: : [284847.737386] INFO: task apache2:12472 blocked for more than 120 seconds. : [284847.777551] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. : [284847.824881] apache2 D ffff8101bc6b7ab0 0 12472 14358 : [284847.824886] ffff8101b9cc1c50 0000000000000086 ffffffffa0131e0a 0000000000000002 : [284847.824889] ffff8102e7454300 ffff810324c6cad0 ffff8102e7454588 0000000000000000 : [284847.824893] 0000000000000001 0000000000000296 0000000000000003 ffff8101b9cc1c58 : [284847.824896] Call Trace: : [284847.828403] [<ffffffffa0131e0a>] :ext3:__ext3_journal_dirty_metadata+0x1e/0x46 : [284847.828412] [<ffffffff80429b0d>] __mutex_lock_slowpath+0x64/0x9b : [284847.828418] [<ffffffff80429972>] mutex_lock+0xa/0xb : [284847.828421] [<ffffffff802a20b9>] do_lookup+0x82/0x1c1 : [284847.828427] [<ffffffff802a4271>] __link_path_walk+0x87a/0xd19 : [284847.828428] [<ffffffff80271296>] find_lock_page+0x1f/0x8a : [284847.828428] [<ffffffff80273182>] filemap_fault+0x1c2/0x33c : [284847.828428] [<ffffffff802a4756>] path_walk+0x46/0x8b : [284847.828428] [<ffffffff802a4a82>] do_path_lookup+0x158/0x1cf : [284847.828428] [<ffffffff802a3879>] getname+0x140/0x1a7 : [284847.828428] [<ffffffff802a53f1>] __user_walk_fd+0x37/0x4c : [284847.828428] [<ffffffff8029e381>] vfs_lstat_fd+0x18/0x47 : [284847.828428] [<ffffffff8029e3c9>] sys_newlstat+0x19/0x31 : [284847.828428] [<ffffffff8020beda>] system_call_after_swapgs+0x8a/0x8f kernel: [1912668.466347] INFO: task apache2:17984 blocked for more than 120 seconds. [1912668.507035] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
    : [1912668.555165] apache2 D ffff8101c5637ba0 0 17984 17282
    : [1912668.596752] ffff810166a7dd30 0000000000000086 0000000000000000 ffff810166a7dcd8
    : [1912668.643341] ffff8101c563c880 ffff81024505f000 0000000000000002 ffff810166a7dd68
    : [1912668.699566] 0000000000000086 00000000000cb1a0 0000000000000000 ffff81017f344d60
    : [1912668.744773] Call Trace:
    : [1912668.761754] [<ffffffff8022a3ed>] pick_next_task_fair+0x6e/0x7a
    : [1912668.829311] [<ffffffff802be0e2>] bio_alloc_bioset+0x89/0xd9
    : [1912668.861930] [<ffffffff8024ac3a>] getnstimeofday+0x39/0x98
    : [1912668.897005] [<ffffffff802710f6>] sync_page+0x0/0x41
    : [1912668.927868] [<ffffffff80429487>] io_schedule+0x5c/0x9e
    : [1912668.960286] [<ffffffff80271132>] sync_page+0x3c/0x41
    : [1912668.991756] [<ffffffff804295fa>] __wait_on_bit_lock+0x36/0x66
    : [1912669.031757] [<ffffffff802710e3>] __lock_page+0x5e/0x64
    : [1912669.064191] [<ffffffff802461d3>] wake_bit_function+0x0/0x23
    : [1912669.100100] [<ffffffff80281bc5>] handle_mm_fault+0x5e4/0x8de
    : [1912669.134531] [<ffffffff802461a5>] autoremove_wake_function+0x0/0x2e
    : [1912669.174623] [<ffffffff802aa108>] fcntl_setlk+0x1cf/0x291
    : [1912669.210623] [<ffffffff802461a5>] autoremove_wake_function+0x0/0x2e
    : [1912669.246923] [<ffffffff802a677f>] sys_fcntl+0x280/0x2f7

    After googling for "mutex_lock_slowpath" I can only find kernel mailing list discussions saying that this issue was introduced in some commit, without any reference to a version; the discussions are as recent as Jan 25, 2011. The kernel I am using is from Debian Lenny, a year old. What should I do? Is this bug even fixed in the kernel? If it is such an obvious bug, why does it happen so rarely? Should I download the latest kernel from kernel.org and upgrade? Should I use Debian backports to install a newer "approved" kernel? Am I missing something? What should I do?

    Read the article

  • Windows 7: Search indexing is stuck

    - by Ricket
    When I open Indexing Options, it says: 4,317 items indexed Indexing in progress. Search results might not be complete during this time. It's stuck at 4,317 though; no more items have been indexed. Worst of all, SearchIndexer.exe is taking up 100% CPU (well, 50%, but I have a dual core CPU; it's taking up all processing power it can). It is not causing hard drive activity though. I tried clicking "Troubleshoot search and indexing" at the bottom of the Indexing Options window, but it couldn't find any problem. I've also tried the repair registry key that several websites suggest; I change HKLM\SOFTWARE\Microsoft\Windows Search SetupCompletedSuccessfully to 0 and restarted the computer, and it apparently repaired because it flipped back to 1, but the same problem continues to occur. It's reducing the battery life of my laptop and making it really hot so that my fans are running all the time. I've had to disable the Windows Search service. How can I fix this? Do I need to just flat-out reformat my computer? Update: I've tried rebuilding a couple times. There's nothing unusual about the locations I have to index, and I don't have any downloads in progress or anything like that. I don't see any reason why it stopped, and I noticed it much too late to do a system restore. At this point, I'm hoping someone will offer up some secret answer that will fix the problem, thus the bounty. Another update: I tried starting the service again, just to let it try yet again. It seemed okay at first (Indexing Options showed it operating at reduced speed due to user activity, and the number of files was going up). A while later I checked, and the service had stopped. Event viewer revealed some errors like this: Log Name: Application Source: Application Error Date: 2/1/2010 7:34:23 PM Event ID: 1000 Task Category: (100) Level: Error Keywords: Classic User: N/A Computer: ricky-win7 Description: Faulting application name: SearchIndexer.exe, version: 7.0.7600.16385, time stamp: 0x4a5bcdd0 Faulting module name: NLSData0007.dll, version: 6.1.7600.16385, time stamp: 0x4a5bda88 Exception code: 0xc0000005 Fault offset: 0x002141ba Faulting process id: 0x13a0 Faulting application start time: 0x01caa39f2a70ec02 Faulting application path: C:\Windows\system32\SearchIndexer.exe Faulting module path: C:\Windows\System32\NLSData0007.dll Report Id: b4f7a7ae-0f92-11df-87fc-e5d65d8794c2 Event Xml: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="Application Error" /> <EventID Qualifiers="0">1000</EventID> <Level>2</Level> <Task>100</Task> <Keywords>0x80000000000000</Keywords> <TimeCreated SystemTime="2010-02-02T00:34:23.000000000Z" /> <EventRecordID>10689</EventRecordID> <Channel>Application</Channel> <Computer>ricky-win7</Computer> <Security /> </System> <EventData> <Data>SearchIndexer.exe</Data> <Data>7.0.7600.16385</Data> <Data>4a5bcdd0</Data> <Data>NLSData0007.dll</Data> <Data>6.1.7600.16385</Data> <Data>4a5bda88</Data> <Data>c0000005</Data> <Data>002141ba</Data> <Data>13a0</Data> <Data>01caa39f2a70ec02</Data> <Data>C:\Windows\system32\SearchIndexer.exe</Data> <Data>C:\Windows\System32\NLSData0007.dll</Data> <Data>b4f7a7ae-0f92-11df-87fc-e5d65d8794c2</Data> </EventData> </Event> If you are having the same error and arrived here from a Google search, please comment or add an answer detailing your progress on this, if any...

    Read the article

  • Microsoft Enterprise Library Caching Application Block not thread safe?!

    - by AlanR
    Good afternoon, I created a super simple console app to test out the Enterprise Library Caching Application Block, and the behavior is baffling. I'm hoping I screwed up something that's easy to fix in the setup. I have each item expire after 5 seconds for testing purposes. Basic setup -- "every second, pick a number between 0 and 2. If the cache doesn't already have it, put it in there -- otherwise just grab it from the cache. Do this inside a lock statement to ensure thread safety."

    APP.CONFIG:

      <configuration>
        <configSections>
          <section name="cachingConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Caching.Configuration.CacheManagerSettings, Microsoft.Practices.EnterpriseLibrary.Caching, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
        </configSections>
        <cachingConfiguration defaultCacheManager="Cache Manager">
          <cacheManagers>
            <add expirationPollFrequencyInSeconds="1" maximumElementsInCacheBeforeScavenging="1000" numberToRemoveWhenScavenging="10" backingStoreName="Null Storage" type="Microsoft.Practices.EnterpriseLibrary.Caching.CacheManager, Microsoft.Practices.EnterpriseLibrary.Caching, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="Cache Manager" />
          </cacheManagers>
          <backingStores>
            <add encryptionProviderName="" type="Microsoft.Practices.EnterpriseLibrary.Caching.BackingStoreImplementations.NullBackingStore, Microsoft.Practices.EnterpriseLibrary.Caching, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="Null Storage" />
          </backingStores>
        </cachingConfiguration>
      </configuration>

    C#:

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Text;
      using Microsoft.Practices.EnterpriseLibrary.Common;
      using Microsoft.Practices.EnterpriseLibrary.Caching;
      using Microsoft.Practices.EnterpriseLibrary.Caching.Expirations;

      namespace ConsoleApplication1
      {
          class Program
          {
              public static ICacheManager cache = CacheFactory.GetCacheManager("Cache Manager");

              static void Main(string[] args)
              {
                  while (true)
                  {
                      System.Threading.Thread.Sleep(1000); // sleep for one second.
                      var key = new Random().Next(3).ToString();
                      string value;
                      lock (cache)
                      {
                          if (!cache.Contains(key))
                          {
                              cache.Add(key, key, CacheItemPriority.Normal, null, new SlidingTime(TimeSpan.FromSeconds(5)));
                          }
                          value = (string)cache.GetData(key);
                      }
                      Console.WriteLine("{0} --> '{1}'", key, value);
                      //if (null == value) throw new Exception();
                  }
              }
          }
      }

    OUTPUT -- how can I prevent the cache from returning nulls?

      2 --> '2'
      1 --> '1'
      2 --> '2'
      0 --> '0'
      2 --> '2'
      0 --> '0'
      1 --> ''
      0 --> '0'
      1 --> '1'
      2 --> ''
      0 --> '0'
      2 --> '2'
      0 --> '0'
      1 --> ''
      2 --> '2'
      1 --> '1'
      Press any key to continue . . .

    Thanks in advance, -Alan.
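    A likely explanation, worth verifying against the Caching Application Block documentation, is that expiration is enforced by the block itself on its own schedule (note expirationPollFrequencyInSeconds="1" and the scavenging settings), so the lock only serializes the caller's own code: an item can still expire between Contains(key) and GetData(key), in which case GetData() returns null. The usual remedy is to skip Contains(), call GetData() once, and treat null as a miss. The Java sketch below uses a hypothetical toy cache (not the Enterprise Library API) purely to illustrate why the check-then-act sequence is racy and why a single read with a null check is not:

      import java.util.concurrent.ConcurrentHashMap;
      import java.util.concurrent.ConcurrentMap;

      // Toy cache with per-entry expiry, used only to show why
      // "contains(key) followed by get(key)" is not atomic once entries
      // can expire between the two calls.
      public class CheckThenActDemo {

          static final class Entry {
              final String value;
              final long expiresAtMillis;

              Entry(String value, long ttlMillis) {
                  this.value = value;
                  this.expiresAtMillis = System.currentTimeMillis() + ttlMillis;
              }

              boolean expired() {
                  return System.currentTimeMillis() > expiresAtMillis;
              }
          }

          static final class ExpiringCache {
              private final ConcurrentMap<String, Entry> map = new ConcurrentHashMap<String, Entry>();

              void put(String key, String value, long ttlMillis) {
                  map.put(key, new Entry(value, ttlMillis));
              }

              // like Contains(): does not consult expiry, so it can say "yes" for a stale entry
              boolean contains(String key) {
                  return map.get(key) != null;
              }

              // like GetData(): enforces expiry and returns null for an expired entry
              String get(String key) {
                  Entry e = map.get(key);
                  if (e == null || e.expired()) {
                      map.remove(key);
                      return null;
                  }
                  return e.value;
              }
          }

          public static void main(String[] args) throws InterruptedException {
              ExpiringCache cache = new ExpiringCache();

              // Racy pattern: contains() says the key is there, but it expires before get() runs.
              cache.put("k", "v", 10);                 // 10 ms time-to-live
              if (cache.contains("k")) {
                  Thread.sleep(20);                    // entry expires inside this window
                  System.out.println("check-then-get: " + cache.get("k"));  // prints null
              }

              // Safer pattern: read once, treat null as a miss and repopulate.
              cache.put("k", "v", 10);
              String value = cache.get("k");
              if (value == null) {
                  value = "v";                         // recompute or reload on a miss
                  cache.put("k", value, 10);
              }
              System.out.println("single get: " + value);
          }
      }

    Applied to the C# program above, the equivalent change would be to call GetData(key) first and only Add(...) when the returned value is null, rather than relying on Contains().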

    Read the article

  • Windows Server 2008 R2 + IIS 7.5 + ASP.NET 4.0 = HTTP Error 500.0

    - by Dave
    I am having an impossible time getting asp.net 4.0 to work in any fashion at all. In fact, I completely wiped my server, reinstalled with Server 2008 R2 Standard (running on a VMWare ESXi box, not that it should matter), and cannot even get a test .aspx page to work. Here is exactly what I did: Installed 2008 R2 Standard Activated windows and enabled Remote Desktop Installed the Web Server Role with the necessary role services(common http, asp.net, logging, tracing, management service and FTP) Enabled the management service Installed .Net Framework 4.0 via web executable Added FTP publishing to the default web site Switched default web site application pool to asp.net 4.0 (integrated) Added a 'test.aspx' file to the inetpub\wwwroot folder (contents below) Opened a browser to http://localhost/test.aspx and received a 500.0 error (also below) What am I missing? I haven't touched IIS in a while (3+ years), so it could be something stupid/trvial. Please point it out, call me a noob; my ego can take it. Thanks, Dave test.aspx <% @Page language="C# %> <html> <head> <title>Test.aspx</title> </head> <body> <asp:label runat="server" text="This is an asp.net 4.0 label" /> </body> </html> Error page: Module AspNetInitClrHostFailureModule Notification BeginRequest Handler PageHandlerFactory-Integrated-4.0 Error Code 0x80070002 Requested URL http://localhost:80/test.aspx Physical Path C:\inetpub\wwwroot\test.aspx Logon Method Not yet determined Logon User Not yet determined Trace: And in my trace file I get: 96. view trace Warning -SET_RESPONSE_ERROR_DESCRIPTION ErrorDescription An error message detailing the cause of this specific request failure can be found in the application event log of the web server. Please review this log entry to discover what caused this error to occur. 97. view trace Warning -MODULE_SET_RESPONSE_ERROR_STATUS ModuleName AspNetInitClrHostFailureModule Notification 1 HttpStatus 500 HttpReason Internal Server Error HttpSubStatus 0 ErrorCode 2147942402 ConfigExceptionInfo Notification BEGIN_REQUEST ErrorCode The system cannot find the file specified. (0x80070002) The application error log shows: Log Name: Application Source: Microsoft-Windows-IIS-W3SVC-WP Date: 5/28/2010 2:08:10 PM Event ID: 2299 Task Category: None Level: Error Keywords: Classic User: N/A Computer: win-ltfkdo1dnfp Description: An application has reported as being unhealthy. The worker process will now request a recycle. Reason given: An error message detailing the cause of this specific request failure can be found in the application event log of the web server. Please review this log entry to discover what caused this error to occur. The data is the error. Event Xml: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="Microsoft-Windows-IIS-W3SVC-WP" Guid="{670080D9-742A-4187-8D16-41143D1290BD}" EventSourceName="W3SVC-WP" /> <EventID Qualifiers="49152">2299</EventID> <Version>0</Version> <Level>2</Level> <Task>0</Task> <Opcode>0</Opcode> <Keywords>0x80000000000000</Keywords> <TimeCreated SystemTime="2010-05-28T21:08:10.000000000Z" /> <EventRecordID>1663</EventRecordID> <Correlation /> <Execution ProcessID="0" ThreadID="0" /> <Channel>Application</Channel> <Computer>win-ltfkdo1dnfp</Computer> <Security /> </System> <EventData> <Data Name="Reason">An error message detailing the cause of this specific request failure can be found in the application event log of the web server. Please review this log entry to discover what caused this error to occur. 
</Data> <Binary>02000780</Binary> </EventData> </Event>

    Read the article

  • Ubuntu 9.10 and Squid 2.7 Transparent Proxy TCP_DENIED

    - by user298814
    Hi, We've spent the last two days trying to get squid 2.7 to work with ubuntu 9.10. The computer running ubuntu has two network interfaces: eth0 and eth1 with dhcp running on eth1. Both interfaces have static ip's, eth0 is connected to the Internet and eth1 is connected to our LAN. We have followed literally dozens of different tutorials with no success. The tutorial here was the last one we did that actually got us some sort of results: http://www.basicconfig.com/linuxnetwork/setup_ubuntu_squid_proxy_server_beginner_guide. When we try to access a site like seriouswheels.com from the LAN we get the following message on the client machine: ERROR The requested URL could not be retrieved Invalid Request error was encountered while trying to process the request: GET / HTTP/1.1 Host: www.seriouswheels.com Connection: keep-alive User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/532.9 (KHTML, like Gecko) Chrome/5.0.307.11 Safari/532.9 Cache-Control: max-age=0 Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,/;q=0.5 Accept-Encoding: gzip,deflate,sdch Cookie: __utmz=88947353.1269218405.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none); __qca=P0-1052556952-1269218405250; __utma=88947353.1027590811.1269218405.1269218405.1269218405.1; __qseg=Q_D Accept-Language: en-US,en;q=0.8 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 Some possible problems are: Missing or unknown request method. Missing URL. Missing HTTP Identifier (HTTP/1.0). Request is too large. Content-Length missing for POST or PUT requests. Illegal character in hostname; underscores are not allowed. Your cache administrator is webmaster. Below are all the configuration files: /etc/squid/squid.conf, /etc/network/if-up.d/00-firewall, /etc/network/interfaces, /var/log/squid/access.log. Something somewhere is wrong but we cannot figure out where. Our end goal for all of this is the superimpose content onto every page that a client requests on the LAN. We've been told that squid is the way to do this but at this point in the game we are just trying to get squid setup correctly as our proxy. Thanks in advance. squid.conf acl all src all acl manager proto cache_object acl localhost src 127.0.0.1/32 acl to_localhost dst 127.0.0.0/8 acl localnet src 192.168.0.0/24 acl SSL_ports port 443 # https acl SSL_ports port 563 # snews acl SSL_ports port 873 # rsync acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port 443 # https acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl Safe_ports port 1025-65535 # unregistered ports acl Safe_ports port 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port 591 # filemaker acl Safe_ports port 777 # multiling http acl Safe_ports port 631 # cups acl Safe_ports port 873 # rsync acl Safe_ports port 901 # SWAT acl purge method PURGE acl CONNECT method CONNECT http_access allow manager localhost http_access deny manager http_access allow purge localhost http_access deny purge http_access deny !Safe_ports http_access deny CONNECT !SSL_ports http_access allow localhost http_access allow localnet http_access deny all icp_access allow localnet icp_access deny all http_port 3128 hierarchy_stoplist cgi-bin ? cache_dir ufs /var/spool/squid/cache1 1000 16 256 access_log /var/log/squid/access.log squid refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 refresh_pattern (Release|Package(.gz)*)$ 0 20% 2880 refresh_pattern . 
0 20% 4320 acl shoutcast rep_header X-HTTP09-First-Line ^ICY.[0-9] upgrade_http0.9 deny shoutcast acl apache rep_header Server ^Apache broken_vary_encoding allow apache extension_methods REPORT MERGE MKACTIVITY CHECKOUT cache_mgr webmaster cache_effective_user proxy cache_effective_group proxy hosts_file /etc/hosts coredump_dir /var/spool/squid access.log 1269243042.740 0 192.168.1.11 TCP_DENIED/400 2576 GET NONE:// - NONE/- text/html 00-firewall iptables -F iptables -t nat -F iptables -t mangle -F iptables -X echo 1 | tee /proc/sys/net/ipv4/ip_forward iptables -t nat -A POSTROUTING -j MASQUERADE iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3128 networking auto lo iface lo inet loopback auto eth0 iface eth0 inet static address 142.104.109.179 netmask 255.255.224.0 gateway 142.104.127.254 auto eth1 iface eth1 inet static address 192.168.1.100 netmask 255.255.255.0

    Read the article

  • PHP/Java bridge problem

    - by Jack
    I am using tomcat 6 on windows. Here is the code I am testing. import java.io.ByteArrayOutputStream; import java.io.Closeable; import java.io.StringReader; import javax.script.Invocable; import javax.script.ScriptEngine; import javax.script.ScriptEngineManager; /** * Create and run THREAD_COUNT PHP threads, concurrently accessing a * shared resource. * * Create 5 script engines, passing each a shared resource allocated * from Java. Each script engine has to implement Runnable. * * Java accesses the Runnable script engine using * scriptEngine.getInterface() and calls thread.start() to invoke each * PHP Runnable implementations concurrently. */ class PhpThreads { public static final String runnable = new String("<?php\n" + "function run() {\n" + " $out = java_context()->getAttribute('sharedResource', 100);\n" + " $nr = (string)java_context()->getAttribute('nr', 100);\n" + " echo \"started thread: $nr\n\";\n" + " for($i=0; $i<100; $i++) {\n" + " $out->write(ord($nr));\n" + " java('java.lang.Thread')->sleep(1);\n" + " }\n" + "}\n" + "?>\n"); static final int THREAD_COUNT = 5; public static void main(String[] args) throws Exception { ScriptEngineManager manager = new ScriptEngineManager(); Thread threads[] = new Thread[THREAD_COUNT]; ScriptEngine engines[] = new ScriptEngine[THREAD_COUNT]; ByteArrayOutputStream sharedResource = new ByteArrayOutputStream(); StringReader runnableReader = new StringReader(runnable); // create THREAD_COUNT PHP threads for (int i=0; i<THREAD_COUNT; i++) { engines[i] = manager.getEngineByName("php-invocable"); if (engines[i] == null) throw new NullPointerException ("php script engine not found"); engines[i].put("nr", new Integer(i+1)); engines[i].put("sharedResource", sharedResource); engines[i].eval(runnableReader); runnableReader.reset(); // cast the whole script to Runnable; note also getInterface(specificClosure, type) Runnable r = (Runnable) ((Invocable)engines[i]).getInterface(Runnable.class); threads[i] = new Thread(r); } // run the THREAD_COUNT PHP threads for (int i=0; i<THREAD_COUNT; i++) { threads[i].start(); } // wait for the THREAD_COUNT PHP threads to finish for (int i=0; i<THREAD_COUNT; i++) { threads[i].join(); ((Closeable)engines[i]).close(); } // print the output generated by the THREAD_COUNT concurrent threads String result = sharedResource.toString(); System.out.println(result); // Check result Object res=manager.getEngineByName("php").eval( "<?php " + "exit((int)('10011002100310041005'!=" + "@system(\"echo -n "+result+"|sed 's/./&\\\n/g'|sort|uniq -c|tr -d ' \\\n'\")));" + "?>"); System.exit(((Number)res).intValue()); } } I have added all the libraries. 
When I run the file I get the following error - run: Exception in thread "main" javax.script.ScriptException: java.io.IOException: Cannot run program "php-cgi": CreateProcess error=2, The system cannot find the file specified at php.java.script.InvocablePhpScriptEngine.eval(InvocablePhpScriptEngine.java:209) at php.java.script.SimplePhpScriptEngine.eval(SimplePhpScriptEngine.java:178) at javax.script.AbstractScriptEngine.eval(AbstractScriptEngine.java:232) at PhpThreads.main(NewClass.java:53) Caused by: java.io.IOException: Cannot run program "php-cgi": CreateProcess error=2, The system cannot find the file specified at java.lang.ProcessBuilder.start(ProcessBuilder.java:459) at java.lang.Runtime.exec(Runtime.java:593) at php.java.bridge.Util$Process.start(Util.java:1064) at php.java.bridge.Util$ProcessWithErrorHandler.start(Util.java:1166) at php.java.bridge.Util$ProcessWithErrorHandler.start(Util.java:1217) at php.java.script.CGIRunner.doRun(CGIRunner.java:126) at php.java.script.HttpProxy.doRun(HttpProxy.java:63) at php.java.script.CGIRunner.run(CGIRunner.java:111) at php.java.bridge.ThreadPool$Delegate.run(ThreadPool.java:60) Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified at java.lang.ProcessImpl.create(Native Method) at java.lang.ProcessImpl.<init>(ProcessImpl.java:81) at java.lang.ProcessImpl.start(ProcessImpl.java:30) at java.lang.ProcessBuilder.start(ProcessBuilder.java:452) ... 8 more What am I missing?
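    The stack trace points at the immediate cause: the bridge's invocable script engine launches an external php-cgi process through ProcessBuilder, and Windows reports CreateProcess error=2 ("The system cannot find the file specified"), meaning php-cgi is not resolvable from the Tomcat process. A common fix is to install the CGI build of PHP and make sure the directory containing php-cgi.exe is on the PATH that Tomcat actually sees, then restart Tomcat. The small probe below is not part of the bridge (the class name is made up for illustration); it simply repeats the same lookup from inside a JVM so you can confirm whether php-cgi resolves:

      import java.io.IOException;

      // Minimal probe: can this JVM launch "php-cgi" the same way the
      // PHP/Java Bridge does (via ProcessBuilder)? If it fails with
      // "CreateProcess error=2", php-cgi(.exe) is not on the PATH seen by
      // this process, which is what the stack trace above reports.
      public class PhpCgiProbe {
          public static void main(String[] args) {
              ProcessBuilder pb = new ProcessBuilder("php-cgi", "-v");
              pb.redirectErrorStream(true);
              try {
                  Process p = pb.start();
                  p.waitFor();
                  System.out.println("php-cgi started, exit code " + p.exitValue());
              } catch (IOException e) {
                  System.out.println("php-cgi could not be launched: " + e.getMessage());
                  System.out.println("PATH seen by this JVM: " + System.getenv("PATH"));
              } catch (InterruptedException e) {
                  Thread.currentThread().interrupt();
              }
          }
      }

    If the probe works from a command prompt but the bridge still fails inside Tomcat, the Tomcat service is probably running with a different PATH than your interactive shell, which is worth checking before digging further.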

    Read the article

  • How can I successfully decipher Instruments messages for an iPhone leak

    - by dubbeat
    Hi, I have a memory leak in my app. (This is the first of many I'm sure :() I've being trying to use Instruments to find it. Instruments gives me a lot of information but I think I must just not know how to use this information. What I did so far was 1) Run the app with Instruments 2) Memory Leak Occurs named general -stack 16 3) Find general - stack 16 in the object allocations part of instruments 4) The information here says the event type is a malloc, that webcore is responsible and the something named WKSetCurrentGraphicContext is the responsible caller. How can I use this given information to discover where in my code the leak is being caused? If I comment out the following function I don't get the leak warning so I guess it should be in there somewhere but I can't see where -(void)constructFeatured { NSString *imageName =[[NSString alloc] initWithFormat:@"%@%@%@",@"http://myweb/avatar_", featuredValueObject.featured_promo_artistid, @".jpg"]; NSURL *url = [NSURL URLWithString:imageName]; CGRect frame; frame.size.width=100; frame.size.height=100; frame.origin.x=20; frame.origin.y=39; [imageName release]; imageName=nil; SDWebImageManager *manager = [SDWebImageManager sharedManager]; UIImage *cachedImage = [manager imageWithURL:url]; if (cachedImage) { cachedImage =[ImageManipulator makeRoundCornerImage:cachedImage : 10 : 10]; UIImageView *avatarimageview = [[UIImageView alloc]initWithImage:cachedImage ]; avatarimageview.frame=frame; [self.view addSubview:avatarimageview]; UIView *spinny = [self.view viewWithTag:SPINNY_TAG]; [spinny removeFromSuperview]; [avatarimageview release]; } else { [manager downloadWithURL:url delegate:self]; } NSURL *url2 =[NSURL URLWithString:[NSString stringWithFormat:@"%@%@%@",@"http://myweb/", featuredValueObject.featured_promo_artistcountry , @".png"]]; CGRect flagframe; flagframe.size.width=16; flagframe.size.height=11; flagframe.origin.x=130; flagframe.origin.y=40; NSData* data = [[NSData alloc] initWithContentsOfURL:url2]; UIImage* img = [[UIImage alloc] initWithData:data]; UIImageView *imageflagview = [[UIImageView alloc] initWithImage: img]; imageflagview.frame=flagframe; [self.view addSubview:imageflagview]; [imageflagview release]; imageflagview=nil; [data release]; [img release]; [url release]; artistname =[[UILabel alloc]initWithFrame:CGRectMake(130,75, 200, 15)]; [artistname setFont:[UIFont fontWithName:@"Arial" size:(16.0)]]; artistname.backgroundColor= [UIColor clearColor]; artistname.textColor=[UIColor whiteColor]; artistname.text=featuredValueObject.featured_promo_artistname; [self.view addSubview:artistname]; [artistname release]; hasConstructedFeatured=YES; [featuredValueObject release]; featuredValueObject=nil; }

    Read the article

  • Why do I get the error "Only antlib URIs can be located from the URI alone,not the URI" when trying to run hibernate tools in my build.xml

    - by Casbah
    I'm trying to run hibernate tools in an ant build to generate ddl from my JPA annotations. Ant dies on the taskdef tag. I've tried with ant 1.7, 1.6.5, and 1.6 to no avail. I've tried both in eclipse and outside. I've tried including all the hbn jars in the hibernate-tools path and not. Note that I based my build file on this post: http://stackoverflow.com/questions/281890/hibernate-jpa-to-ddl-command-line-tools I'm running eclipse 3.4 with WTP 3.0.1 and MyEclipse 7.1 on Ubuntu 8. Build.xml: <project name="generateddl" default="generate-ddl"> <path id="hibernate-tools"> <pathelement location="../libraries/hibernate-tools/hibernate-tools.jar" /> <pathelement location="../libraries/hibernate-tools/bsh-2.0b1.jar" /> <pathelement location="../libraries/hibernate-tools/freemarker.jar" /> <pathelement location="../libraries/jtds/jtds-1.2.2.jar" /> <pathelement location="../libraries/hibernate-tools/jtidy-r8-20060801.jar" /> </path> <taskdef classname="org.hibernate.tool.ant.HibernateToolTask" classpathref="hibernate-tools"/> <target name="generate-ddl" description="Export schema to DDL file"> <!-- compile model classes before running hibernatetool --> <!-- task definition; project.class.path contains all necessary libs <taskdef name="hibernatetool" classname="org.hibernate.tool.ant.HibernateToolTask" classpathref="project.class.path" /> --> <hibernatetool destdir="sql"> <!-- check that directory exists --> <jpaconfiguration persistenceunit="default" /> <classpath> <dirset dir="WebRoot/WEB-INF/classes"> <include name="**/*"/> </dirset> </classpath> <hbm2ddl outputfilename="schemaexport.sql" format="true" export="false" drop="true" /> </hibernatetool> </target> Error message (ant -v): Apache Ant version 1.7.0 compiled on December 13 2006 Buildfile: /home/joe/workspace/bento/ant-generate-ddl.xml parsing buildfile /home/joe/workspace/bento/ant-generate-ddl.xml with URI = file:/home/joe/workspace/bento/ant-generate-ddl.xml Project base dir set to: /home/joe/workspace/bento [antlib:org.apache.tools.ant] Could not load definitions from resource org/apache/tools/ant/antlib.xml. It could not be found. BUILD FAILED /home/joe/workspace/bento/ant-generate-ddl.xml:12: Only antlib URIs can be located from the URI alone,not the URI at org.apache.tools.ant.taskdefs.Definer.execute(Definer.java:216) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:105) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:357) at org.apache.tools.ant.helper.ProjectHelper2.parse(ProjectHelper2.java:140) at org.eclipse.ant.internal.ui.antsupport.InternalAntRunner.parseBuildFile(InternalAntRunner.java:191) at org.eclipse.ant.internal.ui.antsupport.InternalAntRunner.run(InternalAntRunner.java:400) at org.eclipse.ant.internal.ui.antsupport.InternalAntRunner.main(InternalAntRunner.java:137) Total time: 195 milliseconds

    Read the article

  • JAXB + JAK java.lang.ClassCastException

    - by Ivansek
    Hi, I read page about JAK implementation where is a piece of code from pom.xml file (Listing 1). This piece of code is actually commented in pom.xml file so i uncommented it in order to add my own schema to compile. After that i run command mvn -e clean install, and i get this error: ivansek ~/Sites/Xlab/KMLTest/javaapiforkml-read-only $ mvn -e clean install + Error stacktraces are turned on. [INFO] Scanning for projects... [INFO] ------------------------------------------------------------------------ [INFO] Building a Java API for Kml [INFO] task-segment: [clean, install] [INFO] ------------------------------------------------------------------------ [INFO] [clean:clean {execution: default-clean}] [INFO] [antrun:run {execution: xjc-invocation}] [INFO] Executing tasks [INFO] ------------------------------------------------------------------------ [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] An Ant BuildException has occured: java.util.ServiceConfigurationError: com.sun.tools.xjc.Plugin: Provider org.jvnet.jaxb2_commons.javaforkmlapi.XJCJavaForKmlApiPlugin could not be instantiated: java.lang.ClassCastException [INFO] ------------------------------------------------------------------------ [INFO] Trace org.apache.maven.lifecycle.LifecycleExecutionException: An Ant BuildException has occured: java.util.ServiceConfigurationError: com.sun.tools.xjc.Plugin: Provider org.jvnet.jaxb2_commons.javaforkmlapi.XJCJavaForKmlApiPlugin could not be instantiated: java.lang.ClassCastException at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:719) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalWithLifecycle(DefaultLifecycleExecutor.java:556) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:535) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:348) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138) at org.apache.maven.cli.MavenCli.main(MavenCli.java:362) at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315) at org.codehaus.classworlds.Launcher.launch(Launcher.java:255) at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430) at org.codehaus.classworlds.Launcher.main(Launcher.java:375) Caused by: org.apache.maven.plugin.MojoExecutionException: An Ant BuildException has occured: java.util.ServiceConfigurationError: com.sun.tools.xjc.Plugin: Provider org.jvnet.jaxb2_commons.javaforkmlapi.XJCJavaForKmlApiPlugin could not be instantiated: java.lang.ClassCastException at org.apache.maven.plugin.antrun.AbstractAntMojo.executeTasks(AbstractAntMojo.java:131) at org.apache.maven.plugin.antrun.AntRunMojo.execute(AntRunMojo.java:98) at 
org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694) ... 17 more Caused by: java.util.ServiceConfigurationError: com.sun.tools.xjc.Plugin: Provider org.jvnet.jaxb2_commons.javaforkmlapi.XJCJavaForKmlApiPlugin could not be instantiated: java.lang.ClassCastException at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:116) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:357) at org.apache.maven.plugin.antrun.AbstractAntMojo.executeTasks(AbstractAntMojo.java:118) ... 20 more Caused by: java.util.ServiceConfigurationError: com.sun.tools.xjc.Plugin: Provider org.jvnet.jaxb2_commons.javaforkmlapi.XJCJavaForKmlApiPlugin could not be instantiated: java.lang.ClassCastException at java.util.ServiceLoader.fail(ServiceLoader.java:207) at java.util.ServiceLoader.access$100(ServiceLoader.java:164) at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:353) at java.util.ServiceLoader$1.next(ServiceLoader.java:421) at com.sun.tools.xjc.Options.findServices(Options.java:910) at com.sun.tools.xjc.Options.getAllPlugins(Options.java:351) at com.sun.tools.xjc.Options.parseArgument(Options.java:650) at com.sun.tools.xjc.Options.parseArguments(Options.java:760) at com.sun.tools.xjc.XJC2Task._doXJC(XJC2Task.java:453) at com.sun.tools.xjc.XJC2Task.doXJC(XJC2Task.java:443) at com.sun.tools.xjc.XJC2Task.execute(XJC2Task.java:369) at com.sun.istack.tools.ProtectedTask.execute(ProtectedTask.java:55) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) ... 23 more Caused by: java.lang.ClassCastException at java.lang.Class.cast(Class.java:2990) at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:345) ... 38 more [INFO] ------------------------------------------------------------------------ [INFO] Total time: 3 seconds [INFO] Finished at: Thu May 13 09:53:19 CEST 2010 [INFO] Final Memory: 16M/79M [INFO] ------------------------------------------------------------------------ Any suggestions?

    Read the article

  • Safe, standard way to load images in ListView on a different thread?

    - by Po
    Before making this question, I have searched and read these ones: http://stackoverflow.com/questions/541966/android-how-do-i-do-a-lazy-load-of-images-in-listview http://stackoverflow.com/questions/1409623/android-issue-with-lazy-loading-images-into-a-listview My problem is I have a ListView, where: Each row contains an ImageView, whose content is to be loaded from the internet Each row's view is recycled as in ApiDemo's List14 What I want ultimately: Load images lazily, only when the user scrolls to them Load images on different thread(s) to maintain responsiveness My current approach: In the adapter's getView() method, apart from setting up other child views, I launch a new thread that loads the Bitmap from the internet. When that loading thread finishes, it returns the Bitmap to be set on the ImageView (I do this using AsyncTask or Handler). Because I recycle ImageViews, it may be the case that I first want to set a view with Bitmap#1, then later want to set it to Bitmap#2 when the user scrolls down. Bitmap#1 may happen to take longer than Bitmap#2 to load, so it may end up overwriting Bitmap#2 on the view. I solve this by maintaining a WeakHashMap that remembers the last Bitmap I want to set for that view. Below is somewhat a pseudocode for my current approach. I've ommitted other details like caching, just to keep the thing clear. public class ImageLoader { // keeps track of the last Bitmap we want to set for this ImageView private static final WeakHashMap<ImageView, AsyncTask> assignments = new WeakHashMap<ImageView, AsyncTask>(); /** Asynchronously sets an ImageView to some Bitmap loaded from the internet */ public static void setImageAsync(final ImageView imageView, final String imageUrl) { // cancel whatever previous task AsyncTask oldTask = assignments.get(imageView); if (oldTask != null) { oldTask.cancel(true); } // prepare to launch a new task to load this new image AsyncTask<String, Integer, Bitmap> newTask = new AsyncTask<String, Integer, Bitmap>() { protected void onPreExecute() { // set ImageView to some "loading..." image } protected Bitmap doInBackground(String... urls) { return loadFromInternet(imageUrl); } protected void onPostExecute(Bitmap bitmap) { // set Bitmap if successfully loaded, or an "error" image if (bitmap != null) { imageView.setImageBitmap(bitmap); } else { imageView.setImageResource(R.drawable.error); } } }; newTask.execute(); // mark this as the latest Bitmap we want to set for this ImageView assignments.put(imageView, newTask); } /** returns (Bitmap on success | null on error) */ private Bitmap loadFromInternet(String imageUrl) {} } Problem I still have: what if the Activity gets destroyed while some images are still loading? Is there any risk when the loading thread calls back to the ImageView later, when the Activity is already destroyed? Moreover, AsyncTask has some global thread-pool underneath, so if lengthy tasks are not canceled when they're not needed anymore, I may end up wasting time loading things users don't see. My current design of keeping this thing globally is too ugly, and may eventually cause some leaks that are beyond my understanding. Instead of making ImageLoader a singleton like this, I'm thinking of actually creating separate ImageLoader objects for different Activities, then when an Activity gets destroyed, all its AsyncTask will be canceled. Is this too awkward? Anyway, I wonder if there is a safe and standard way of doing this in Android. 
In addition, I don't know iPhone development, but is there a similar problem there, and do they have a standard way to handle this kind of task? Many thanks.
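    One way to get both of the properties asked for here (no callbacks into a dead Activity, no wasted downloads) is the approach floated at the end of the question: give each Activity its own loader and cancel everything from onDestroy(). Below is a minimal sketch of that idea, not a drop-in replacement for the code above; the class name ActivityScopedImageLoader and the loadFromInternet() body are illustrative assumptions.

    import java.io.InputStream;
    import java.net.URL;
    import java.util.HashSet;
    import java.util.Set;

    import android.graphics.Bitmap;
    import android.graphics.BitmapFactory;
    import android.os.AsyncTask;
    import android.widget.ImageView;

    /** One instance per Activity; call cancelAll() from Activity.onDestroy(). */
    public class ActivityScopedImageLoader {

        private final Set<AsyncTask<String, Void, Bitmap>> running =
                new HashSet<AsyncTask<String, Void, Bitmap>>();

        public void setImageAsync(final ImageView imageView, final String imageUrl) {
            // Tag the view with the URL it is supposed to show, so a slow, stale
            // task can detect that the row has been recycled for another URL.
            imageView.setTag(imageUrl);

            AsyncTask<String, Void, Bitmap> task = new AsyncTask<String, Void, Bitmap>() {
                @Override
                protected Bitmap doInBackground(String... urls) {
                    return isCancelled() ? null : loadFromInternet(urls[0]);
                }

                @Override
                protected void onPostExecute(Bitmap bitmap) {
                    running.remove(this);
                    // Only touch the view if it still wants this URL.
                    if (bitmap != null && imageUrl.equals(imageView.getTag())) {
                        imageView.setImageBitmap(bitmap);
                    }
                }

                @Override
                protected void onCancelled() {
                    running.remove(this);
                }
            };
            running.add(task);
            task.execute(imageUrl);
        }

        /** Call from Activity.onDestroy() so no task outlives the Activity. */
        public void cancelAll() {
            for (AsyncTask<String, Void, Bitmap> task : running) {
                task.cancel(true);
            }
            running.clear();
        }

        private Bitmap loadFromInternet(String imageUrl) {
            try {
                InputStream in = new URL(imageUrl).openStream();
                try {
                    return BitmapFactory.decodeStream(in);
                } finally {
                    in.close();
                }
            } catch (Exception e) {
                return null;  // treat any failure as "no bitmap"
            }
        }
    }

    The Activity would create one instance in onCreate(), hand it to the adapter, and call cancelAll() from onDestroy(); the setTag()/getTag() check takes over the role of the WeakHashMap for the recycled-view race.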

    Read the article

  • PHP framework question

    - by iconiK
    I'm currently working on a browser-based MMO and have chosen the LAMP stack because of the extremely low cost to start with in production (versus Windows + IIS + ASP.NET/C# + SQL Server, even though I have MSDN Universal). However I will need a PHP framework for this as it's no easy task. I am not restricted by anything other than the ability to run on Linux, as I will use a dedicated cloud hosting solution (and a VMWare image for development) and can configure it as needed. In no specific order: It has to be easily scalable; this is crucial. If the game becomes a steady success it will eventually outgrow the server beyond what the host provides and would have to be moved to several load-balanced servers. It is crucial that this can be done with minimum effort. I do know this might require following strict conventions, so if you know of any for your suggested framework please explain what would be needed. It has to provide modules for all the core tasks: authentication, ACL, database access, MVC, and so on. One or two missing modules are fine, as long as they can easily be written and integrated. It should support internationalization. I think there is no excuse for any web framework not to provide means of translating the application and switching between languages without a lot of effort from the programmer. Must have very good community support and preferably commercial support as well. Yes, I do know QCodo/QCubed is so nice, but it is not mature enough for this task. Smooth AJAX support is required. Whether the framework comes with AJAX-capable widgets or has an easy way of adding AJAX is not relevant, as long as AJAX is easily doable. I plan to use jQuery + Dojo or one of them alone - not exactly sure. Auto-magically doing stuff when it improves readability and relieves a lot of effort would be especially nice if it is generally reliable and does not interfere with other requirements. This seems to be the case of CakePHP. I have read a lot of comparisons and I know it's a really hot debate. The general answer is "try and see for yourself what suits you". However, I can't say it is easy for this task and I'm calling for your experience with building applications with similar requirements. So far I'm tied up between Zend and CakePHP by the general criteria, however, all well-known frameworks offer the same functionality in some way or another with different approaches each with it's own advantages and disadvantages. Edits: I am kinda new to MVC, however, I am willing to learn it and I don't care if a framework is easier for those new to MVC. I have lots of time to learn MVC and any other architectures (or whatever they're called) you recommend. I will use Zend as a utility "framework", even though it's just a collection of libraries (some good ones though, as I have been told). Current PHP contenders are: CakePHP, Kohana, Zend alone.

    Read the article

  • Synchronizing issue: I want the main thread to run before another thread, but sometimes it doesn't

    - by Rox
    I have done my own small concurrency framework (just for learning purposes) inspired by the java.util.concurrency package. This is about the Callable/Future mechanism. My code below is the whole one and is compilable and very easy to understand. My problem is that sometimes I run into a deadlock where the first thread (the main thread) awaits for a signal from the other thread. But then the other thread has already notified the main thread before the main thread went into waiting state, so the main thread cannot wake up. FutureTask.get() should always be run before FutureTask.run() but sometimes the run() method (which is called by new thread) runs before the get() method (which is called by main thread). I don´t know how I can prevent that. This is a pseudo code of how I want the two threads to be run. //From main thread: Executor.submit().get() (in get() the main thread waits for new thread to notify) ->submit() calls Executor.execute(FutureTask object) -> execute() starts new thread -> new thread shall notify `main thread` I cannot understand how the new thread can start up and run faster than the main thread that actually starts the new thread. Main.java: public class Main { public static void main(String[] args) { new ExecutorServiceExample(); } public Main() { ThreadExecutor executor = new ThreadExecutor(); Integer i = executor.submit(new Callable<Integer>() { @Override public Integer call() { return 10; } }).get(); System.err.println("Value: "+i); } } ThreadExecutor.java: public class ThreadExecutor { public ThreadExecutor() {} protected <V> RunnableFuture<V> newTaskFor(Callable c) { return new FutureTask<V>(c); } public <V> Future<V> submit(Callable<V> task) { if (task == null) throw new NullPointerException(); RunnableFuture<V> ftask = newTaskFor(task); execute(ftask); return ftask; } public void execute(Runnable r) { new Thread(r).start(); } } FutureTask.java: import java.util.concurrent.locks.Condition; import java.util.concurrent.locks.ReentrantLock; import java.util.logging.Level; import java.util.logging.Logger; public class FutureTask<V> implements RunnableFuture<V> { private Callable<V> callable; private volatile V result; private ReentrantLock lock = new ReentrantLock(); private Condition condition = lock.newCondition(); public FutureTask(Callable callable) { if (callable == null) throw new NullPointerException(); this.callable = callable; } @Override public void run() { acquireLock(); System.err.println("RUN"+Thread.currentThread().getName()); V v = this.callable.call(); set(v); condition.signal(); releaseLock(); } @Override public V get() { acquireLock(); System.err.println("GET "+Thread.currentThread().getName()); try { condition.await(); } catch (InterruptedException ex) { Logger.getLogger(FutureTask.class.getName()).log(Level.SEVERE, null, ex); } releaseLock(); return this.result; } public void set(V v) { this.result = v; } private void acquireLock() { lock.lock(); } private void releaseLock() { lock.unlock(); } } And the interfaces: public interface RunnableFuture<V> extends Runnable, Future<V> { @Override void run(); } public interface Future<V> { V get(); } public interface Callable<V> { V call(); }
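    For reference, the usual cure for this kind of lost wakeup is to make get() wait on state ("has the result been produced?") rather than on the signal itself, so the order in which run() and get() reach the lock no longer matters. A minimal sketch under that assumption follows; it is a stripped-down class for illustration, not the poster's exact FutureTask.

    import java.util.concurrent.locks.Condition;
    import java.util.concurrent.locks.ReentrantLock;

    public class FutureResult<V> {

        private final ReentrantLock lock = new ReentrantLock();
        private final Condition completed = lock.newCondition();
        private V result;
        private boolean done;            // the state get() really waits for

        /** Called by the worker thread when the value is ready. */
        public void set(V value) {
            lock.lock();
            try {
                result = value;
                done = true;             // record completion *before* signalling
                completed.signalAll();
            } finally {
                lock.unlock();
            }
        }

        /** Called by the main thread; safe even if set() has already run. */
        public V get() throws InterruptedException {
            lock.lock();
            try {
                while (!done) {          // loop also guards against spurious wakeups
                    completed.await();
                }
                return result;
            } finally {
                lock.unlock();
            }
        }
    }

    With the while (!done) guard, a get() that arrives after run() returns immediately, and a get() that arrives first sleeps until it is signalled, so the deadlock described above cannot occur.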

    Read the article

  • How to configure Spring Security PasswordComparisonAuthenticator

    - by denlab
    I can bind to an embedded ldap server on my local machine with the following bean: <b:bean id="secondLdapProvider" class="org.springframework.security.ldap.authentication.LdapAuthenticationProvider"> <b:constructor-arg> <b:bean class="org.springframework.security.ldap.authentication.BindAuthenticator"> <b:constructor-arg ref="contextSource" /> <b:property name="userSearch"> <b:bean id="userSearch" class="org.springframework.security.ldap.search.FilterBasedLdapUserSearch"> <b:constructor-arg index="0" value="ou=people"/> <b:constructor-arg index="1" value="(uid={0})"/> <b:constructor-arg index="2" ref="contextSource" /> </b:bean> </b:property> </b:bean> </b:constructor-arg> <b:constructor-arg> <b:bean class="com.company.security.ldap.BookinLdapAuthoritiesPopulator"> </b:bean> </b:constructor-arg> </b:bean> however, when I try to authenticate with a PasswordComparisonAuthenticator it repeatedly fails on a bad credentials event: <b:bean id="ldapAuthProvider" class="org.springframework.security.ldap.authentication.LdapAuthenticationProvider"> <b:constructor-arg> <b:bean class="org.springframework.security.ldap.authentication.PasswordComparisonAuthenticator"> <b:constructor-arg ref="contextSource" /> <b:property name="userDnPatterns"> <b:list> <b:value>uid={0},ou=people</b:value> </b:list> </b:property> </b:bean> </b:constructor-arg> <b:constructor-arg> <b:bean class="com.company.security.ldap.BookinLdapAuthoritiesPopulator"> </b:bean> </b:constructor-arg> </b:bean> Through debugging, I can see that the authenticate method picks up the DN from the ldif file, but then tries to compare the passwords, however, it's using the LdapShaPasswordEncoder (the default one) where the password is stored in plaintext in the file, and this is where the authentication fails. Here's the authentication manager bean referencing the preferred authentication bean: <authentication-manager> <authentication-provider ref="ldapAuthProvider"/> <authentication-provider user-service-ref="userDetailsService"> <password-encoder hash="md5" base64="true"> <salt-source system-wide="secret"/> </password-encoder> </authentication-provider> </authentication-manager> On a side note, whether I set the password-encoder on ldapAuthProvider to plaintext or just leave it blank, doesn't seem to make a difference. Any help would be greatly appreciated. Thanks
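    In case it helps: since the LDIF stores the password in plaintext, the comparison step presumably needs a plaintext encoder instead of the default LdapShaPasswordEncoder. Below is a rough Java sketch of the same wiring, assuming the Spring Security 3.x API; treat the exact class and setter names as assumptions to verify against the version in use (the authorities populator and context source are taken from the question).

    import org.springframework.ldap.core.support.BaseLdapPathContextSource;
    import org.springframework.security.authentication.encoding.PlaintextPasswordEncoder;
    import org.springframework.security.ldap.authentication.LdapAuthenticationProvider;
    import org.springframework.security.ldap.authentication.PasswordComparisonAuthenticator;

    import com.company.security.ldap.BookinLdapAuthoritiesPopulator;

    public class LdapAuthProviderFactory {

        /** Builds the equivalent of the ldapAuthProvider bean above. */
        public static LdapAuthenticationProvider build(BaseLdapPathContextSource contextSource) {
            PasswordComparisonAuthenticator authenticator =
                    new PasswordComparisonAuthenticator(contextSource);
            authenticator.setUserDnPatterns(new String[] { "uid={0},ou=people" });
            // The directory holds plaintext passwords, so compare in plaintext;
            // the default LdapShaPasswordEncoder will never match them.
            authenticator.setPasswordEncoder(new PlaintextPasswordEncoder());

            return new LdapAuthenticationProvider(
                    authenticator, new BookinLdapAuthoritiesPopulator());
        }
    }

    The equivalent in the XML above would be a passwordEncoder property on the PasswordComparisonAuthenticator bean, again assuming that property exists in the version being used.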

    Read the article

  • Tomcat stops responding to JK requests

    - by Bruno Reis
    Hello. I have a nasty issue with load-balanced Tomcat servers that are hanging up. Any help would be greatly appreciated. The system I'm running Tomcat 6.0.26 on HotSpot Server 14.3-b01 (Java 1.6.0_17-b04) on three servers sitting behind another server that acts as load balancer. The load balancer runs Apache (2.2.8-1) + MOD_JK (1.2.25). All of the servers are running Ubuntu 8.04. The Tomcat's have 2 connectors configured: an AJP one, and a HTTP one. The AJP is to be used with the load balancer, while the HTTP is used by the dev team to directly connect to a chosen server (if we have a reason to do so). I have Lambda Probe 1.7b installed on the Tomcat servers to help me diagnose and fix the problem soon to be described. The problem Here's the problem: after about 1 day the application servers are up, JK Status Manager starts reporting status ERR for, say, Tomcat2. It will simply get stuck on this state, and the only fix I've found so far is to ssh the box and restart Tomcat. I must also mention that JK Status Manager takes a lot longer to refresh when there's a Tomcat server in this state. Finally, the "Busy" count of the stuck Tomcat on JK Status Manager is always high, and won't go down per se -- I must restart the Tomcat server, wait, then reset the worker on JK. Analysis Since I have 2 connectors on each Tomcat (AJP and HTTP), I still can connect to the application through the HTTP one. The application works just fine like this, very, very fast. That is perfectly normal, since I'm the only one using this server (as JK stopped delegating requests to this Tomcat). To try to better understand the problem, I've taken a thread dump from a Tomcat which is not responding anymore, and from another one that has been restarted recently (say, 1 hour before). The instance that is responding normally to JK shows most of the TP-ProcessorXXX threads in "Runnable" state, with the following stack trace: java.net.SocketInputStream.socketRead0 ( native code ) java.net.SocketInputStream.read ( SocketInputStream.java:129 ) java.io.BufferedInputStream.fill ( BufferedInputStream.java:218 ) java.io.BufferedInputStream.read1 ( BufferedInputStream.java:258 ) java.io.BufferedInputStream.read ( BufferedInputStream.java:317 ) org.apache.jk.common.ChannelSocket.read ( ChannelSocket.java:621 ) org.apache.jk.common.ChannelSocket.receive ( ChannelSocket.java:559 ) org.apache.jk.common.ChannelSocket.processConnection ( ChannelSocket.java:686 ) org.apache.jk.common.ChannelSocket$SocketConnection.runIt ( ChannelSocket.java:891 ) org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run ( ThreadPool.java:690 ) java.lang.Thread.run ( Thread.java:619 ) The instance that is stuck show most (all?) of the TP-ProcessorXXX threads in "Waiting" state. These have the following stack trace: java.lang.Object.wait ( native code ) java.lang.Object.wait ( Object.java:485 ) org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run ( ThreadPool.java:662 ) java.lang.Thread.run ( Thread.java:619 ) I don't know of the internals of Tomcat, but I would infer that the "Waiting" threads are simply threads sitting on a thread pool. So, if they are threads waiting inside of a thread pool, why wouldn't Tomcat put them to work on processing requests from JK? Solution? So, as I've stated before, the only fix I've found is to stop the Tomcat instance, stop the JK worker, wait the latter's busy count slowly go down, start Tomcat again, and enable the JK worker once again. What is causing this problem? How should I further investigate it? 
What can I do to solve it? Thanks in advance.

    Read the article

  • How to sanely configure security policy in Tomcat 6

    - by Chas Emerick
    I'm using Tomcat 6.0.24, as packaged for Ubuntu Karmic. The default security policy of Ubuntu's Tomcat package is pretty stringent, but appears straightforward. In /var/lib/tomcat6/conf/policy.d, there are a variety of files that establish default policy. Worth noting at the start: I've not changed the stock tomcat install at all -- no new jars into its common lib directory(ies), no server.xml changes, etc. Putting the .war file in the webapps directory is the only deployment action. the web application I'm deploying fails with thousands of access denials under this default policy (as reported to the log thanks to the -Djava.security.debug="access,stack,failure" system property). turning off the security manager entirely results in no errors whatsoever, and proper app functionality What I'd like to do is add an application-specific security policy file to the policy.d directory, which seems to be the recommended practice. I added this to policy.d/100myapp.policy (as a starting point -- I would like to eventually trim back the granted permissions to only what the app actually needs): grant codeBase "file:${catalina.base}/webapps/ROOT.war" { permission java.security.AllPermission; }; grant codeBase "file:${catalina.base}/webapps/ROOT/-" { permission java.security.AllPermission; }; grant codeBase "file:${catalina.base}/webapps/ROOT/WEB-INF/-" { permission java.security.AllPermission; }; grant codeBase "file:${catalina.base}/webapps/ROOT/WEB-INF/lib/-" { permission java.security.AllPermission; }; grant codeBase "file:${catalina.base}/webapps/ROOT/WEB-INF/classes/-" { permission java.security.AllPermission; }; Note the thrashing around attempting to find the right codeBase declaration. I think that's likely my fundamental problem. Anyway, the above (really only the first two grants appear to have any effect) almost works: the thousands of access denials are gone, and I'm left with just one. Relevant stack trace: java.security.AccessControlException: access denied (java.io.FilePermission /var/lib/tomcat6/webapps/ROOT/WEB-INF/classes/com/foo/some-file-here.txt read) java.security.AccessControlContext.checkPermission(AccessControlContext.java:323) java.security.AccessController.checkPermission(AccessController.java:546) java.lang.SecurityManager.checkPermission(SecurityManager.java:532) java.lang.SecurityManager.checkRead(SecurityManager.java:871) java.io.File.exists(File.java:731) org.apache.naming.resources.FileDirContext.file(FileDirContext.java:785) org.apache.naming.resources.FileDirContext.lookup(FileDirContext.java:206) org.apache.naming.resources.ProxyDirContext.lookup(ProxyDirContext.java:299) org.apache.catalina.loader.WebappClassLoader.findResourceInternal(WebappClassLoader.java:1937) org.apache.catalina.loader.WebappClassLoader.findResource(WebappClassLoader.java:973) org.apache.catalina.loader.WebappClassLoader.getResource(WebappClassLoader.java:1108) java.lang.ClassLoader.getResource(ClassLoader.java:973) I'm pretty convinced that the actual file that's triggering the denial is irrelevant -- it's just some properties file that we check for optional configuration parameters. What's interesting is that: it doesn't exist in this context the fact that the file doesn't exist ends up throwing a security exception, rather than java.io.File.exists() simply returning false (although I suppose that's just a matter of the semantics of the read permission). 
Another workaround (besides just disabling the security manager in tomcat) is to add an open-ended permission to my policy file: grant { permission java.security.AllPermission; }; I presume this is functionally equivalent to turning off the security manager. I suppose I must be getting the codeBase declaration in my grants subtly wrong, but I'm not seeing it at the moment.

    Read the article

  • How can I get my Web API app to run again after upgrading to MVC 5 and Web API 2?

    - by Clay Shannon
    I upgraded my Web API app to the funkelnagelneu versions using this guidance: http://www.asp.net/mvc/tutorials/mvc-5/how-to-upgrade-an-aspnet-mvc-4-and-web-api-project-to-aspnet-mvc-5-and-web-api-2 However, after going through the steps (it seems all this should be automated, anyway), I tried to run it and got, "A project with an Output Type of Class Library cannot be started directly" What in Sam Hills Brothers Coffee is going on here? Who said this was a class library? So I opened Project Properties, and changed it (it was marked as "Class Library" for some reason - it either wasn't yesterday, or was and worked fine) to an Output Type of "Windows Application" ("Console Application" and "Class Library" being the only other options). Now it won't compile, complaining: "*Program 'c:\Platypus_Server_WebAPI\PlatypusServerWebAPI\PlatypusServerWebAPI\obj\Debug\PlatypusServerWebAPI.exe' does not contain a static 'Main' method suitable for an entry point...*" How can I get my Web API app back up and running in view of this quandary? UPDATE Looking in packages.config, two entries seem chin-scratch-worthy: <package id="Microsoft.AspNet.Providers" version="1.2" targetFramework="net40" /> <package id="Microsoft.Web.Infrastructure" version="1.0.0.0" targetFramework="net40" /> All the others target net451. Could this be the problem? Should I remove these packages? UPDATE 2 I tried to uninstall the Microsoft.Web.Infrastructure package (its description leads me to believe I don't need it; also, it has no dependencies) via the NuGet package manager, but it tells me, "NuGet failed to install or uninstall the selected package in the following project(s). [mine]" UPDATE 3 I went through the steps in again, and found that I had missed one step. I had to change this entry in the Application web.config File : <dependentAssembly> <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" /> <bindingRedirect oldVersion="1.0.0.0-5.0.0.0" newVersion="5.0.0.0" /> </dependentAssembly> (from "4.0.0.0" to "5.0.0.0") ...but I still get the same result - it rebuilds/compiles, but won't run, with "A project with an Output Type of Class Library cannot be started directly" UPDATE 4 Thinking about the err msg, that it can't directly open a class library, I thought, "Sure you can't/won't -- this is a web app, not a project. So I followed a hunch, closed the project, and reopened it as a website (instead of reopening a project). That has gotten me further, at least; now I see a YSOD: Could not load file or assembly 'System.Web.WebPages.Razor, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified. UPDATE 5 Note: The project is now (after being opened as a web site) named "localhost_48614" And...there is no longer a "References" folder...?!?!? As to that YSOD I'm getting, the official instructions (http://www.asp.net/mvc/tutorials/mvc-5/how-to-upgrade-an-aspnet-mvc-4-and-web-api-project-to-aspnet-mvc-5-and-web-api-2) said to do this, and I quote: "Update all elements that contain “System.Web.WebPages.Razor” from version “2.0.0.0” to version“3.0.0.0”." UPDATE 6 When I select Tools Library Package Manager Manage NuGet Packages for Solution now, I get, "Operation failed. Unable to locate the solution directory. Please ensure that the solution has been saved." 
So I save it, and it saves it with this funky new name (C:\Users\clay\Documents\Visual Studio 2013\Projects\localhost_48614\localhost_48614.sln) I get the Yellow Strip of Enlightenment across the top of the NuGet Package Manager telling me, "Some NuGet packages are missing from this solution. Click to restore from your online package sources." I do (click the "Restore" button, that is), and it downloads the missing packages ... I end up with the 30 packages. I try to run the app/site again, and ... the erstwhile YSOD becomes a compilation error: The pre-application start initialization method Start on type System.Web.Mvc.PreApplicationStartCode threw an exception with the following error message: Could not load file or assembly 'System.Web.WebPages.Razor, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.. Argghhhh!!! (and it's not even talk-like-a-pirate day).

    Read the article

  • Tomcat/Hibernate Problem "SEVERE: Error listenerStart"

    - by JSteve
    I downloaded working example of hibernate (with maven) and installed it on my tomcat, it worked. Then I created a new web project in MyEclipse, added hibernate support and moved all source files (no jar) to this new project and fixed package/paths wherever was necessary. My servlets are responding correctly but when I add "Listener" in web.xml, tomcat returns error "Error ListenerStart" on startup and my application doesn't start. I've carefully checked all packages, paths and classes, they look good. Error message is also not telling anything more except these two words Here is complete tomcat startup log: 17-Jun-2010 12:13:37 PM org.apache.coyote.http11.Http11Protocol init INFO: Initializing Coyote HTTP/1.1 on http-8810 17-Jun-2010 12:13:37 PM org.apache.catalina.startup.Catalina load INFO: Initialization processed in 293 ms 17-Jun-2010 12:13:37 PM org.apache.catalina.core.StandardService start INFO: Starting service Catalina 17-Jun-2010 12:13:37 PM org.apache.catalina.core.StandardEngine start INFO: Starting Servlet Engine: Apache Tomcat/6.0.20 17-Jun-2010 12:13:37 PM org.apache.catalina.core.StandardContext start SEVERE: Error listenerStart 17-Jun-2010 12:13:37 PM org.apache.catalina.core.StandardContext start SEVERE: Context [/addressbook] startup failed due to previous errors 17-Jun-2010 12:13:37 PM org.apache.coyote.http11.Http11Protocol start INFO: Starting Coyote HTTP/1.1 on http-8810 17-Jun-2010 12:13:37 PM org.apache.jk.common.ChannelSocket init INFO: JK: ajp13 listening on /0.0.0.0:8009 17-Jun-2010 12:13:37 PM org.apache.jk.server.JkMain start INFO: Jk running ID=0 time=0/22 config=null 17-Jun-2010 12:13:37 PM org.apache.catalina.startup.Catalina start INFO: Server startup in 446 ms My web.xml is: <?xml version="1.0" encoding="UTF-8"?> <web-app version="2.4" xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd"> <listener> <listener-class>addressbook.util.SessionFactoryInitializer</listener-class> </listener> <filter> <filter-name>Session Interceptor</filter-name> <filter-class>addressbook.util.SessionInterceptor</filter-class> </filter> <filter-mapping> <filter-name>Session Interceptor</filter-name> <servlet-name>Country Manager</servlet-name> </filter-mapping> <servlet> <servlet-name>Country Manager</servlet-name> <servlet-class>addressbook.managers.CountryManagerServlet</servlet-class> </servlet> <servlet-mapping> <servlet-name>Country Manager</servlet-name> <url-pattern>/countrymanager</url-pattern> </servlet-mapping> </web-app> Can somebody either help me figure out what I am doing wrong? or point to some resource where I may get some precise solution of my problem?
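    For reference, "SEVERE: Error listenerStart" usually hides a stack trace thrown from the listener's contextInitialized() or static initializer (missing hibernate.cfg.xml or mapping files, missing jars, wrong class name), and the listener class itself must implement javax.servlet.ServletContextListener. The sketch below shows what such a Hibernate bootstrap listener typically looks like; the package and class name match the web.xml above, but the body is an assumption, not the poster's code. Raising the log level for org.apache.catalina.core.ContainerBase in a WEB-INF/classes/logging.properties is also a common way to surface the underlying exception.

    package addressbook.util;

    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;

    import org.hibernate.SessionFactory;
    import org.hibernate.cfg.Configuration;

    public class SessionFactoryInitializer implements ServletContextListener {

        private static SessionFactory sessionFactory;

        public void contextInitialized(ServletContextEvent event) {
            // If hibernate.cfg.xml or a mapping is missing from the new project,
            // this is the line that blows up and gets reported as listenerStart.
            sessionFactory = new Configuration().configure().buildSessionFactory();
            event.getServletContext().setAttribute("sessionFactory", sessionFactory);
        }

        public void contextDestroyed(ServletContextEvent event) {
            if (sessionFactory != null) {
                sessionFactory.close();
            }
        }

        public static SessionFactory getSessionFactory() {
            return sessionFactory;
        }
    }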

    Read the article

  • Lock-Free, Wait-Free and Wait-freedom algorithms for non-blocking multi-thread synchronization.

    - by GJ
    In multi-threaded programming we come across different terms for synchronizing data transfer between two or more threads/tasks. When exactly can we say that an algorithm is: 1) Lock-Free 2) Wait-Free 3) Wait-Freedom? I understand what Lock-free means, but when can we say that a synchronization algorithm is Wait-Free or has Wait-Freedom? I have written some code (a ring buffer) for multi-thread synchronization, and it uses Lock-Free methods, but: 1) The algorithm predicts the maximum execution time of the routine. 2) The thread which calls the routine sets a unique reference at the beginning, which means it is inside the routine. 3) Other threads calling the same routine check this reference and, if it is set, measure the CPU tick count (elapsed time) of the first involved thread. If that time is too long, they interrupt the current work of the involved thread and take over its job. 4) A thread which did not finish its job because it was interrupted by the task scheduler (i.e. was suspended) checks the reference at the end; if the reference no longer belongs to it, it repeats the job again. So this algorithm is not really Lock-free, but there is no memory lock in use, and the other involved threads can wait (or not) a certain time before overriding the job of the suspended thread. Added the RingBuffer.InsertLeft function: function TgjRingBuffer.InsertLeft(const link: pointer): integer; var AtStartReference: cardinal; CPUTimeStamp : int64; CurrentLeft : pointer; CurrentReference: cardinal; NewLeft : PReferencedPtr; Reference : cardinal; label TryAgain; begin Reference := GetThreadId + 1; //Reference.bit0 := 1 with rbRingBuffer^ do begin TryAgain: //Set Left.Reference with respect to all other cores :) CPUTimeStamp := GetCPUTimeStamp + LoopTicks; AtStartReference := Left.Reference OR 1; //Reference.bit0 := 1 repeat CurrentReference := Left.Reference; until (CurrentReference AND 1 = 0)or (GetCPUTimeStamp - CPUTimeStamp > 0); //No threads present in ring buffer or current thread timeout if ((CurrentReference AND 1 <> 0) and (AtStartReference <> CurrentReference)) or not CAS32(CurrentReference, Reference, Left.Reference) then goto TryAgain; //Calculate RingBuffer NewLeft address CurrentLeft := Left.Link; NewLeft := pointer(cardinal(CurrentLeft) - SizeOf(TReferencedPtr)); if cardinal(NewLeft) < cardinal(@Buffer) then NewLeft := EndBuffer; //Calcolate distance result := integer(Right.Link) - Integer(NewLeft); //Check buffer full if result = 0 then //Clear Reference if task still own reference if CAS32(Reference, 0, Left.Reference) then Exit else goto TryAgain; //Set NewLeft.Reference NewLeft^.Reference := Reference; SFence; //Try to set link and try to exchange NewLeft and clear Reference if task own reference if (Reference <> Left.Reference) or not CAS64(NewLeft^.Link, Reference, link, Reference, NewLeft^) or not CAS64(CurrentLeft, Reference, NewLeft, 0, Left) then goto TryAgain; //Calcolate result if result < 0 then result := Length - integer(cardinal(not Result) div SizeOf(TReferencedPtr)) else result := cardinal(result) div SizeOf(TReferencedPtr); end; //with end; { TgjRingBuffer.InsertLeft } You can find the RingBuffer unit here: RingBuffer, the CAS functions here: FockFreePrimitives, and a test program here: RingBufferFlowTest Thanks in advance, GJ
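    To make the three terms concrete in a more widely readable form, here is a small Java illustration (not a translation of the Delphi routine above): a CAS retry loop is lock-free, because some thread always completes, but it is not wait-free, because one particular thread may keep losing the race and retrying forever; a wait-free operation finishes in a bounded number of its own steps no matter what the other threads do. Wait-freedom is simply the name of that stronger property.

    import java.util.concurrent.atomic.AtomicLong;

    /** Lock-free counter written with an explicit CAS loop for illustration. */
    public class LockFreeCounter {

        private final AtomicLong value = new AtomicLong();

        /** Lock-free: the system always makes progress, but THIS thread may retry. */
        public long increment() {
            while (true) {
                long current = value.get();
                long next = current + 1;
                if (value.compareAndSet(current, next)) {
                    return next;            // this thread finally succeeded
                }
                // CAS failed => another thread succeeded, so the system as a
                // whole made progress; we simply retry.
            }
        }

        /** Wait-free: completes in a bounded number of steps regardless of
            what the other threads are doing. */
        public long get() {
            return value.get();
        }
    }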

    Read the article

  • [Closed] Oracle JDBC connection with Weblogic 10 datasource mapping, giving problem java.sql.SQLException: Closed Connection

    - by gauravkarnatak
    Oracle JDBC connection with Weblogic 10 datasource mapping, giving problem java.sql.SQLException: Closed Connection I am using weblogic 10 JNDI datasource to create JDBC connections, below is my config <?xml version="1.0" encoding="UTF-8"?> <jdbc-data-source xmlns="http://www.bea.com/ns/weblogic/90" xmlns:sec="http://www.bea.com/ns/weblogic/90/security" xmlns:wls="http://www.bea.com/ns/weblogic/90/security/wls" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.bea.com/ns/weblogic/920 http://www.bea.com/ns/weblogic/920.xsd"> <name>XL-Reference-DS</name> <jdbc-driver-params> <url>jdbc:oracle:oci:@abc.XL.COM</url> <driver-name>oracle.jdbc.driver.OracleDriver</driver-name> <properties> <property> <name>user</name> <value>DEV_260908</value> </property> <property> <name>password</name> <value>password</value> </property> <property> <name>dll</name> <value>ocijdbc10</value> </property> <property> <name>protocol</name> <value>oci</value> </property> <property> <name>oracle.jdbc.V8Compatible</name> <value>true</value> </property> <property> <name>baseDriverClass</name> <value>oracle.jdbc.driver.OracleDriver</value> </property> </properties> </jdbc-driver-params> <jdbc-connection-pool-params> <initial-capacity>1</initial-capacity> <max-capacity>100</max-capacity> <capacity-increment>1</capacity-increment> <test-connections-on-reserve>true</test-connections-on-reserve> <test-table-name>SQL SELECT 1 FROM DUAL</test-table-name> </jdbc-connection-pool-params> <jdbc-data-source-params> <jndi-name>ReferenceData</jndi-name> <global-transactions-protocol>OnePhaseCommit</global-transactions-protocol> </jdbc-data-source-params> </jdbc-data-source> When I run a bulk task where there are lots of connections made and closed, sometimes it gives connection closed exception for any of the task in the bulk task. Below is detailed exception' java.sql.SQLException: Closed Connection at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:111) at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:145) at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:207) at oracle.jdbc.driver.OracleStatement.ensureOpen(OracleStatement.java:3512) at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3265) at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3367) Any ideas?
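    For what it's worth, "Closed Connection" under a bulk load very often means a Connection (or a Statement obtained from it) is still being used after it has been returned to the pool, for example because it is cached in a field or shared across tasks. Below is a minimal sketch of the acquire-use-close-per-task pattern against the JNDI name configured above; the DAO class, method and query are placeholders, not taken from the application.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.sql.DataSource;

    public class ReferenceDataDao {

        /** Each unit of work looks up, uses and closes its own connection. */
        public int countRows() throws NamingException, SQLException {
            DataSource ds = (DataSource) new InitialContext().lookup("ReferenceData");
            Connection con = ds.getConnection();
            try {
                PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM DUAL");
                try {
                    ResultSet rs = ps.executeQuery();
                    rs.next();
                    return rs.getInt(1);
                } finally {
                    ps.close();
                }
            } finally {
                con.close();   // returns the connection to the WebLogic pool
            }
        }
    }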

    Read the article

  • What are good CLI tools for JSON?

    - by jasonmp85
    General Problem Though I may be diagnosing the root cause of an event, determining how many users it affected, or distilling timing logs in order to assess the performance and throughput impact of a recent code change, my tools stay the same: grep, awk, sed, tr, uniq, sort, zcat, tail, head, join, and split. To glue them all together, Unix gives us pipes, and for fancier filtering we have xargs. If these fail me, there's always perl -e. These tools are perfect for processing CSV files, tab-delimited files, log files with a predictable line format, or files with comma-separated key-value pairs. In other words, files where each line has next to no context. XML Analogues I recently needed to trawl through Gigabytes of XML to build a histogram of usage by user. This was easy enough with the tools I had, but for more complicated queries the normal approaches break down. Say I have files with items like this: <foo user="me"> <baz key="zoidberg" value="squid" /> <baz key="leela" value="cyclops" /> <baz key="fry" value="rube" /> </foo> And let's say I want to produce a mapping from user to average number of <baz>s per <foo>. Processing line-by-line is no longer an option: I need to know which user's <foo> I'm currently inspecting so I know whose average to update. Any sort of Unix one liner that accomplishes this task is likely to be inscrutable. Fortunately in XML-land, we have wonderful technologies like XPath, XQuery, and XSLT to help us. Previously, I had gotten accustomed to using the wonderful XML::XPath Perl module to accomplish queries like the one above, but after finding a TextMate Plugin that could run an XPath expression against my current window, I stopped writing one-off Perl scripts to query XML. And I just found out about XMLStarlet which is installing as I type this and which I look forward to using in the future. JSON Solutions? So this leads me to my question: are there any tools like this for JSON? It's only a matter of time before some investigation task requires me to do similar queries on JSON files, and without tools like XPath and XSLT, such a task will be a lot harder. If I had a bunch of JSON that looked like this: { "firstName": "Bender", "lastName": "Robot", "age": 200, "address": { "streetAddress": "123", "city": "New York", "state": "NY", "postalCode": "1729" }, "phoneNumber": [ { "type": "home", "number": "666 555-1234" }, { "type": "fax", "number": "666 555-4567" } ] } And wanted to find the average number of phone numbers each person had, I could do something like this with XPath: fn:avg(/fn:count(phoneNumber)) Questions Are there any command-line tools that can "query" JSON files in this way? If you have to process a bunch of JSON files on a Unix command line, what tools do you use? Heck, is there even work being done to make a query language like this for JSON? If you do use tools like this in your day-to-day work, what do you like/dislike about them? Are there any gotchas? I'm noticing more and more data serialization is being done using JSON, so processing tools like this will be crucial when analyzing large data dumps in the future. Language libraries for JSON are very strong and it's easy enough to write scripts to do this sort of processing, but to really let people play around with the data shell tools are needed. Related Questions Grep and Sed Equivalent for XML Command Line Processing Is there a query language for JSON? JSONPath or other XPath like utility for JSON/Javascript; or Jquery JSON
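    As a small illustration of the "language libraries for JSON are very strong" point (not an answer to the CLI question itself), the average-phone-numbers query is only a few lines with a JSON library. The sketch below assumes the Jackson databind ObjectMapper API and a people.json file holding an array of objects shaped like the example above; both the class name and the file name are placeholders.

    import java.io.File;

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class AveragePhoneNumbers {

        public static void main(String[] args) throws Exception {
            ObjectMapper mapper = new ObjectMapper();
            // Assumes the file holds a JSON array of person objects.
            JsonNode people = mapper.readTree(new File(args.length > 0 ? args[0] : "people.json"));

            long phones = 0;
            for (JsonNode person : people) {
                phones += person.path("phoneNumber").size();   // missing field => size 0
            }
            double average = people.size() == 0 ? 0 : (double) phones / people.size();
            System.out.println("average phone numbers per person: " + average);
        }
    }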

    Read the article

  • Bidirectional OneToOne mapping: from Entity to subclass and from superclass to Entity?

    - by Teocali
    I'm trying to establish a tricky bidirectional OneToOne mapping in hibernate. I got the following classes : @Entity @Inheritance(strategy = InheritanceType.JOINED) public class Parent { @OneToOne private AnotherEntity anotherEntity; } @Entity public class Child1 extends Parent{} @Entity public class Child2 extends Parent{} @Entity public class AnotherEntity { @OneToOne(mappedBy = "anotherEntity") private Child1 child1; @OneToOne(mappedBy = "anotherEntity") private Child1 child2; } The problem here is when I'm launching the application : I got the following message : org.hibernate.MappingException: property [anotherEntity] not found on entity [Child1] at org.hibernate.mapping.PersistentClass.getRecursiveProperty(PersistentClass.java:429) at org.hibernate.mapping.PersistentClass.getReferencedProperty(PersistentClass.java:369) at org.hibernate.cfg.Configuration.originalSecondPassCompile(Configuration.java:1614) at org.hibernate.cfg.Configuration.secondPassCompile(Configuration.java:1362) at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1727) at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1778) at org.springframework.orm.hibernate4.LocalSessionFactoryBuilder.buildSessionFactory(LocalSessionFactoryBuilder.java:189) at org.springframework.orm.hibernate4.LocalSessionFactoryBean.buildSessionFactory(LocalSessionFactoryBean.java:350) at org.springframework.orm.hibernate4.LocalSessionFactoryBean.afterPropertiesSet(LocalSessionFactoryBean.java:335) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1514) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1452) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:294) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:225) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:291) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:567) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:913) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:464) at org.springframework.web.servlet.FrameworkServlet.configureAndRefreshWebApplicationContext(FrameworkServlet.java:631) at org.springframework.web.servlet.FrameworkServlet.createWebApplicationContext(FrameworkServlet.java:588) at org.springframework.web.servlet.FrameworkServlet.createWebApplicationContext(FrameworkServlet.java:645) at org.springframework.web.servlet.FrameworkServlet.initWebApplicationContext(FrameworkServlet.java:508) at org.springframework.web.servlet.FrameworkServlet.initServletBean(FrameworkServlet.java:449) at org.springframework.web.servlet.HttpServletBean.init(HttpServletBean.java:133) at 
javax.servlet.GenericServlet.init(GenericServlet.java:160) at org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1266) at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1185) at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:1080) at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:5015) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5302) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:895) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:871) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:615) at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:962) at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:536) at org.apache.catalina.startup.HostConfig.check(HostConfig.java:1471) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tomcat.util.modeler.BaseModelMBean.invoke(BaseModelMBean.java:301) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:836) at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761) at org.apache.catalina.manager.ManagerServlet.check(ManagerServlet.java:1436) at org.apache.catalina.manager.ManagerServlet.deploy(ManagerServlet.java:673) at org.apache.catalina.manager.ManagerServlet.doPut(ManagerServlet.java:431) at javax.servlet.http.HttpServlet.service(HttpServlet.java:644) at javax.servlet.http.HttpServlet.service(HttpServlet.java:722) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210) at org.apache.catalina.filters.SetCharacterEncodingFilter.doFilter(SetCharacterEncodingFilter.java:108) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:225) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:581) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98) at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:927) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407) at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:999) at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:565) at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:307) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) One obvious 
solution would be to move the anotherEntity field to Child1 and Child2, but it would mean lose the link from Parent to AnotherEntity. Any help welcome.
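    One workaround that keeps the link on Parent is to declare a single inverse side typed as the class that actually owns the property, i.e. Parent, and use an instanceof check when the concrete subtype matters. The sketch below only illustrates that mapping (ids added for completeness); it is not a tested fix for this exact Hibernate version.

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.Inheritance;
    import javax.persistence.InheritanceType;
    import javax.persistence.OneToOne;

    @Entity
    @Inheritance(strategy = InheritanceType.JOINED)
    class Parent {
        @Id @GeneratedValue Long id;

        @OneToOne
        AnotherEntity anotherEntity;          // owning side, declared once on Parent
    }

    @Entity
    class Child1 extends Parent { }

    @Entity
    class Child2 extends Parent { }

    @Entity
    class AnotherEntity {
        @Id @GeneratedValue Long id;

        // The inverse side refers to the entity that really declares "anotherEntity",
        // so Hibernate can resolve mappedBy; whether the row is a Child1 or Child2
        // then becomes an instanceof check at runtime.
        @OneToOne(mappedBy = "anotherEntity")
        Parent parent;
    }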

    Read the article

  • Passing a list of files to javac

    - by Robert Menteer
    How can I get the javac task to use an existing fileset? In my build.xml I have created several filesets to be used in multiple places throughout the build file. Here is how they have been defined: <fileset dir = "${src}" id = "java.source.all"> <include name = "**/*.java" /> </fileset> <fileset dir = "${src}" id = "java.source.examples"> <include name = "**/Examples/**/*.java" /> </fileset> <fileset dir = "${src}" id = "java.source.tests"> <include name = "**/Tests/*.java" /> </fileset> <fileset dir = "${src}" id = "java.source.project"> <include name = "**/*.java" /> <exclude name = "**/Examples/**/*.java" /> <exclude name = "**/Tests/**/*.java" /> </fileset> I have also used macrodef to compile the java files so the javac task does not need to be repeated multiple times. The macro looks like this: <macrodef name="compile"> <attribute name="sourceref"/> <sequential> <javac srcdir = "${src}" destdir = "${build}" classpathref = "classpath" includeantruntime = "no" debug = "${debug}"> <filelist dir="." files="@{sourceref}" /> <-- email is about this </javac> </sequential> What I'm trying to do is compile only the classes that are needed for specific targets, not all the targets in the source tree, and to do so without having to specify the files every time. Here is how the targets are defined: <target name = "compile-examples" depends = "init"> <compile sourceref = "${toString:java.source.examples}" /> </target> <target name = "compile-project" depends = "init"> <compile sourceref = "${toString:java.source.project}" /> </target> <target name = "compile-tests" depends = "init"> <compile sourceref = "${toString:java.source.tests}" /> </target> As you can see, each target specifies the java files to be compiled as a semicolon-separated list of absolute file names. The only problem with this is that javac does not support filelist. It also does not support fileset, path or pathset. I've tried using but it treats the list as a single file name. Another thing I tried is sending the reference directly (not using toString) and using but include does not have a ref attribute. SO THE QUESTION IS: How do you get the javac task to use a reference to a fileset that was defined in another part of the build file? I'm not interested in solutions that cause me to have multiple javac tasks. Completely re-writing the macro is acceptable. Changes to the targets are also acceptable provided redundant code between targets is kept to a minimum. p.s. Another problem is that fileset wants a comma-separated list. I've only done a brief search for a way to convert semicolons to commas and haven't found a way to do that. p.p.s. Sorry for the yelling, but some people are too quick to post responses that don't address the subject.

    Read the article

< Previous Page | 234 235 236 237 238 239 240 241 242 243 244 245  | Next Page >