Search Results

Search found 11531 results on 462 pages for 'cpu cache'.


  • How do I find the cause for a huge difference in performance between two identical Ubuntu servers?

    - by the.duckman
    I am running two Dell R410 servers in the same rack of a data center. Both have the same hardware configuration, run Ubuntu 10.04, have the same packages installed and run the same Java web servers. No other load. One of them is 20-30% faster than the other, very consistently. I used dstat to figure out whether there are more context switches, IO, swapping or anything else, but I see no reason for the difference. With the same workload (no swapping, virtually no IO), the CPU usage and load are higher on one server. So the difference appears to be mainly CPU bound, but while a simple CPU benchmark using sysbench (with all other load turned off) did yield a difference, it was only 6%. So maybe it is not only CPU but also memory performance. I tried to figure out if the BIOS settings differ in some parameter, did a dump using dmidecode, but that yielded no difference. I compared /proc/cpuinfo, no difference. I compared the output of cpufreq-info, no difference. I am lost. What can I do to figure out what is going on?

    Read the article

  • Losing SQL connections

    - by john pavelka
    SQL Server 2005 Standard; one dedicated SQL Server (VM); Windows Server 2003; small databases. About once a week we lose all SQL connections. It seems to fix itself after about 5-10 minutes. System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. --- System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. We don't have a fully qualified DBA; it's kind of a joint effort here. Can somebody give me some general ideas for troubleshooting the network side and the application side? We already ran a few tuning profiles and ran through Database Tuning Advisor to apply indexing recommendations. It would sure be nice if there was a way to take a snapshot of what was running on SQL Server when these 100% CPU spikes occurred, but sometimes we're not around. Is it common to throttle CPU for certain processes? Can this be done with Windows Server 2003? For example, if security apps were making the CPU spike to 100%, is there a way to limit their CPU usage? Any advice is appreciated. Thanks.

    Read the article

  • MySQLTuner and query_cache_size dilemma

    - by wbad
    On a busy mysql server MySQLTuner 1.2.0 always recommends to add query_cache_size no matter how I increase the value (I tried up to 512MB). On the other hand it warns that : Increasing the query_cache size over 128M may reduce performance Here are the last results: >> MySQLTuner 1.2.0 - Major Hayden <[email protected]> >> Bug reports, feature requests, and downloads at http://mysqltuner.com/ >> Run with '--help' for additional options and output filtering -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.5.25-1~dotdeb.0-log [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in InnoDB tables: 6G (Tables: 195) [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17) [!!] Total fragmented tables: 51 -------- Security Recommendations ------------------------------------------- [OK] All database users have passwords assigned -------- Performance Metrics ------------------------------------------------- [--] Up for: 1d 19h 17m 8s (254M q [1K qps], 5M conn, TX: 139B, RX: 32B) [--] Reads / Writes: 89% / 11% [--] Total buffers: 24.2G global + 92.2M per thread (1200 max threads) [!!] Maximum possible memory usage: 132.2G (139% of installed RAM) [OK] Slow queries: 0% (2K/254M) [OK] Highest usage of available connections: 32% (391/1200) [OK] Key buffer size / total MyISAM indexes: 128.0M/92.0K [OK] Key buffer hit rate: 100.0% (8B cached / 0 reads) [OK] Query cache efficiency: 79.9% (181M cached / 226M selects) [!!] Query cache prunes per day: 1033203 [OK] Sorts requiring temporary tables: 0% (341 temp sorts / 4M sorts) [OK] Temporary tables created on disk: 14% (760K on disk / 5M total) [OK] Thread cache hit rate: 99% (676 created / 5M connections) [OK] Table cache hit rate: 22% (1K open / 8K opened) [OK] Open file limit used: 0% (49/13K) [OK] Table locks acquired immediately: 99% (64M immediate / 64M locks) [OK] InnoDB data size / buffer pool: 6.1G/19.5G -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance Reduce your overall MySQL memory footprint for system stability Increasing the query_cache size over 128M may reduce performance Variables to adjust: *** MySQL's maximum memory usage is dangerously high *** *** Add RAM before increasing MySQL buffer variables *** query_cache_size (> 192M) [see warning above] The server has 76GB ram and dual E5-2650. The load is usually below 2. I appreciate your hints to interpret the recommendation and optimize the database configs.

    Read the article

  • PC doesn't POST with a certain model of PSU

    - by Core Xii
    I have this PC with an Asus P5N32-E SLI motherboard and an Intel Q6600 CPU. I got a new RX-5300 PSU, but it didn't work. The motherboard power LED is on fine When I switch power on, the PC powers up briefly (0.5-1 sec) then shuts down From there on out when I switch power on, it stays on, all fans and components seem to be receiving power, but the motherboard won't POST. No video output, no PC speaker beeps, nothing If I turn the hard switch on the PSU off and then back on again, go to step 2; The PC turns on and then off immediately again, and on subsequent power-ups it stays on but won't POST I disconnected every component but the motherboard, CPU and PSU. Still nothing. I tried three other models of PSU on this PC, both of higher (600) and lower (<300) wattage than the 530 on the RX-5300 and they all work fine. At this point I was convinced the PSU was faulty so I returned it. When the replacement arrived, it behaved exactly the same. So two different units of RX-5300 both with the same symptoms, neither working with this motherboard + CPU. Yet, three other models of PSU work perfectly fine. The PC store couldn't reproduce my problem with the returned PSU. I tried resetting the CMOS with the jumper. I tried with both the 4 and 4+4 (with and without the extra +4 connected) CPU connectors (curiously the RX-5300 comes equipped with both). Could it be a statistical probability that I get two units of the same model of PSU that are faulty in the exact same manner? Could the RX-5300 model itself be somehow incompatible with this motherboard? I was under the impression that PSUs were pretty much universal so long as you have the wattage. Could the motherboard be broken in some such a way as to work with certain PSUs but not others? What's going on here?

    Read the article

  • stopping fastcgi_cache for php-enabled custom error page

    - by Ian
    Since enabling the fastcgi_cache on my nginx server, my php-enabled custom error page has suddenly stopped working and I'm getting the internal 404 message instead. In nginx.conf: fastcgi_cache_path /var/lib/nginx/fastcgicache levels=1:2 keys_zone=MYCACHE:5m inactive=2h max_size=1g loader_files=1000 loader_threshold=2000; map $http_cookie $no_cache { default 0; ~SESS 1; } fastcgi_cache_key "$scheme$request_method$host$request_uri"; add_header X-My-Cache $upstream_cache_status; map $uri $no_cache_dirs { default 0; ~^/(?:phpMyAdmin|rather|poll|webmail|skewed|blogs|galleries|pixcache) 1; } the cache relevant stuff in my fastcgi.conf: fastcgi_cache MYCACHE; fastcgi_keep_conn on; fastcgi_cache_bypass $no_cache $no_cache_dirs; fastcgi_no_cache $no_cache $no_cache_dirs; fastcgi_cache_valid 200 301 5m; fastcgi_cache_valid 302 5m; fastcgi_cache_valid 404 1m; fastcgi_cache_use_stale error timeout invalid_header updating http_500; fastcgi_ignore_headers Cache-Control Expires; expires epoch; fastcgi_cache_lock on; If I disable the fastcgi_cache, the php-enabled 404 page works as it has for years. How would I disable the cache for the custom error page?

    Read the article

  • Installed Memory (RAM): 8GB (4GB Usable)

    - by Mike Bannister
    I have 4x 2GB RAM sticks installed. Windows 7 Ultimate x64 shows "Installed Memory (RAM): 8GB (4GB usable)" in the system information. The motherboard says it supports up to 16GB of memory. I had Windows XP x64 and I'm pretty sure it was able to use all 8GB of memory. I've just formatted and am not burning through 4GB for sure. The BIOS only shows 4096MB while booting. I was only able to find one option relating to memory mapping, and it took it from 4GB to 3 and change. CPU-Z shows all 4 sticks as 2048MB. I went into msconfig and checked the limits; they are not set. I tried reseating the CPU and RAM sticks 3 times; the CPU has no pins to be bent (flat contacts, i5). I rearranged all the RAM. I have separate video memory from the system memory. I have a 64-bit OS (I am 100% certain). Hardware: Memory: four Corsair XMS3 TW3X4G1333C9AG; Motherboard: Asus P7P55D Pro; CPU: Intel Core i5-760 BX80605I5760; Graphics: two ZOTAC ZT-50401-10L GeForce GTX 550 Ti. If anyone has new ideas or knows the exact setting in the BIOS, it would be greatly appreciated.

    Read the article

  • Nginx Removes the index.php from URL

    - by codeHead
    I have a CodeIgniter PHP application on nginx. It works as expected on Apache, but after moving to nginx I noticed that index.php is automatically removed from the URL in all my links. In fact, when I try using index.php it does not go to the desired URL but gets redirected to my default controller. Below is a copy of my nginx.conf file. server{ listen 80; server_name mydomainname.com; root /var/www/domain/current; # index index.php; error_log /var/log/nginx/error.log; access_log /var/log/nginx/access.log main; location / { # Check if a file or directory index file exists, else route it to index.php. try_files $uri $uri/ /index.php ; } location ~* \.php { fastcgi_pass backend; include fastcgi.conf; fastcgi_buffer_size 128k; fastcgi_buffers 4 256k; fastcgi_busy_buffers_size 256k; fastcgi_read_timeout 500; #fastcgi_param SCRIPT_FILENAME $document_root/index.php; add_header Expires "Thu, 01 Jan 1970 00:00:01 GMT"; add_header Cache-Control "no-cache, no-store, private, proxy-revalidate, must-revalidate, post-check=0, pre-check=0"; add_header Pragma no-cache; add_header X-Served-By $hostname; } location ~* ^.+\.(css|js)$ { expires 7d; add_header Pragma public; add_header Cache-Control "public"; } # set expiration of assets to MAX for caching location ~* \.(ico|gif|jpe?g|png)(\?[0-9]+)?$ { expires max; log_not_found on; } } I need to use my URL with the index.php -- please help.

    Read the article

  • Understanding the MySQL query cache and when to implement it

    - by Jeff
    On our current MySQL server the query cache is enabled. Qcache_hits: 31913 Qcache_inserts: 50959 Qcache_lowmem_prunes: 9320 Qcache_not_cached: 209320 Qcache_queries_in_cache: 986 Com_update: 0 Com_delete: 0 I do not fully understand the query cache - I am reading about it currently and trying to understand it. Our database holds inventory data, customer data, employee data, sales data and so forth. A query is very rarely run more than once. The only real possibility of a query being run twice is viewing specific sales information twice. But basically everything in our system changes constantly. It is always being updated, deleted, inserted, and off the top of my head I can't picture users running the same query twice within a week. Do I even need to have the query cache enabled? I am guessing that the inserts mean 51k entries have been added, but only 986 of those are being stored? Would an idea be to refresh the cache, watch it for a week and check how many of the queries in the cache are actually accessed, to see if it is returning any benefits? Any help/guidance on this is appreciated, thanks

    Read the article

  • VMware Server 64-bit on Ubuntu 9.10 64-bit with P2V Windows 2003 SBS: poor network speed

    - by RobertHC
    Configuration: Ubuntu 2.6.31-21, 64-bit; VMware Server 2.0.2, 64-bit (latest release); hardware is a Core 2 Quad with 8GB RAM; the guest is Windows 2003 Server SBS, 32-bit. Dear friends, we have a physical-to-virtual converted Windows SBS 2003, converted with the latest converter available nowadays (http://www.vmware.com/products/converter/ vCenter Converter). Running the P2V 2K3 SBS on VMware Server, it boots fine, but we note abnormal CPU activity and poor LAN speed. We tried the following: we removed all unneeded peripherals, removed one NIC (the physical server had 2 NICs), changed the vmx to get the NIC recognized as Intel instead of AMD, removed 1 CPU (the physical server had 2 CPUs), and removed anything reported as a failed driver in the system events monitor. Nothing helped, and the results are odd. Here are some test results; all were made with the same file copied from different source folders. Copying from the client side (both directions, to/from the server) takes about 10 seconds. Copying the same files from the server side (again from and to the server) gives different results: the client-to-server direction is roughly the same (a bit more than 10 seconds), but the server-to-client direction is slower: double the time. Launching a simultaneous copy "from server to client" + "from client to server", done from the server side, results in stuck traffic: 45 seconds to do the copy. VMware Tools are installed and the e1000 driver has been updated. With one processor the CPU activity still goes up and down, but much less than with two. As a test, we installed Windows 2008 Standard 64-bit and repeated all the above tests with exactly the same file; the result is always 5 seconds (this matches the LAN speed). Any ideas about this issue are welcome, and thank you in advance. Kind regards, R.

    Read the article

  • lxc containers hang after upgrade to 13.10

    - by doug123
    I have 3 lxc containers. They were all working fine on 12.10 and I upgraded the containers with do-release-upgrade on the containers to 13.04 and 13.10 and that worked great. Then I upgraded the host to 13.04 and then 13.10 and now the 3 containers hang with this: >lxc-start -n as1 -l DEBUG -o $(tty) lxc-start 1383145786.513 INFO lxc_start_ui - using rcfile /var/lib/lxc/as1/config lxc-start 1383145786.513 WARN lxc_log - lxc_log_init called with log already initialized lxc-start 1383145786.513 INFO lxc_apparmor - aa_enabled set to 1 lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/2' (5/6) lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/13' (7/8) lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/14' (9/10) lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/15' (11/12) lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/17' (13/14) lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/18' (15/16) lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/19' (17/18) lxc-start 1383145786.514 DEBUG lxc_conf - allocated pty '/dev/pts/20' (19/20) lxc-start 1383145786.514 INFO lxc_conf - tty's configured lxc-start 1383145786.514 DEBUG lxc_start - sigchild handler set lxc-start 1383145786.514 DEBUG lxc_console - opening /dev/tty for console peer lxc-start 1383145786.514 DEBUG lxc_console - using '/dev/tty' as console lxc-start 1383145786.514 DEBUG lxc_console - 6242 got SIGWINCH fd 25 lxc-start 1383145786.514 DEBUG lxc_console - set winsz dstfd:22 cols:177 rows:53 lxc-start 1383145786.514 INFO lxc_start - 'as1' is initialized lxc-start 1383145786.522 DEBUG lxc_start - Not dropping cap_sys_boot or watching utmp lxc-start 1383145786.524 DEBUG lxc_conf - mac address of host interface 'vethB4L35W' changed to private fe:7c:96:a0:ae:29 lxc-start 1383145786.525 DEBUG lxc_conf - instanciated veth 'vethB4L35W/vethVC61K2', index is '26' lxc-start 1383145786.529 DEBUG lxc_cgroup - cgroup 'memory.limit_in_bytes' set to '20G' lxc-start 1383145786.529 DEBUG lxc_cgroup - cgroup 'cpuset.cpus' set to '12-23' lxc-start 1383145786.529 INFO lxc_cgroup - cgroup has been setup lxc-start 1383145786.555 DEBUG lxc_conf - move 'eth0' to '6249' lxc-start 1383145786.555 INFO lxc_conf - 'as1' hostname has been setup lxc-start 1383145786.575 DEBUG lxc_conf - 'eth0' has been setup lxc-start 1383145786.575 INFO lxc_conf - network has been setup lxc-start 1383145786.575 INFO lxc_conf - looking at .44 42 252:0 / / rw,relatime - ext4 /dev/mapper/limitorderbook1-root rw,errors=remount-ro,data=ordered . lxc-start 1383145786.575 INFO lxc_conf - now p is . /. lxc-start 1383145786.575 INFO lxc_conf - looking at .52 44 0:5 / /dev rw,relatime - devtmpfs udev rw,size=32961632k,nr_inodes=8240408,mode=755 . lxc-start 1383145786.575 INFO lxc_conf - now p is . /dev. lxc-start 1383145786.575 INFO lxc_conf - looking at .61 52 0:11 / /dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,mode=600,ptmxmode=000 . lxc-start 1383145786.575 INFO lxc_conf - now p is . /dev/pts. lxc-start 1383145786.575 INFO lxc_conf - looking at .68 44 0:15 / /run rw,nosuid,noexec,relatime - tmpfs tmpfs rw,size=6594456k,mode=755 . lxc-start 1383145786.575 INFO lxc_conf - now p is . /run. lxc-start 1383145786.575 INFO lxc_conf - looking at .69 68 0:18 / /run/lock rw,nosuid,nodev,noexec,relatime - tmpfs none rw,size=5120k . lxc-start 1383145786.575 INFO lxc_conf - now p is . /run/lock. 
lxc-start 1383145786.575 INFO lxc_conf - looking at .72 68 0:19 / /run/shm rw,nosuid,nodev,relatime - tmpfs none rw . lxc-start 1383145786.575 INFO lxc_conf - now p is . /run/shm. lxc-start 1383145786.575 INFO lxc_conf - looking at .73 68 0:21 / /run/user rw,nosuid,nodev,noexec,relatime - tmpfs none rw,size=102400k,mode=755 . lxc-start 1383145786.575 INFO lxc_conf - now p is . /run/user. lxc-start 1383145786.575 INFO lxc_conf - looking at .76 44 0:14 / /sys rw,nosuid,nodev,noexec,relatime - sysfs sysfs rw . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys. lxc-start 1383145786.575 INFO lxc_conf - looking at .77 76 0:16 / /sys/fs/cgroup rw,relatime - tmpfs none rw,size=4k,mode=755 . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup. lxc-start 1383145786.575 INFO lxc_conf - looking at .78 77 0:20 / /sys/fs/cgroup/cpuset rw,relatime - cgroup cgroup rw,cpuset,clone_children . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/cpuset. lxc-start 1383145786.575 INFO lxc_conf - looking at .79 77 0:23 / /sys/fs/cgroup/cpu rw,relatime - cgroup cgroup rw,cpu . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/cpu. lxc-start 1383145786.575 INFO lxc_conf - looking at .80 77 0:24 / /sys/fs/cgroup/cpuacct rw,relatime - cgroup cgroup rw,cpuacct . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/cpuacct. lxc-start 1383145786.575 INFO lxc_conf - looking at .81 77 0:25 / /sys/fs/cgroup/memory rw,relatime - cgroup cgroup rw,memory . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/memory. lxc-start 1383145786.575 INFO lxc_conf - looking at .82 77 0:26 / /sys/fs/cgroup/devices rw,relatime - cgroup cgroup rw,devices . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/devices. lxc-start 1383145786.575 INFO lxc_conf - looking at .83 77 0:27 / /sys/fs/cgroup/freezer rw,relatime - cgroup cgroup rw,freezer . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/freezer. lxc-start 1383145786.575 INFO lxc_conf - looking at .84 77 0:28 / /sys/fs/cgroup/blkio rw,relatime - cgroup cgroup rw,blkio . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/blkio. lxc-start 1383145786.575 INFO lxc_conf - looking at .85 77 0:29 / /sys/fs/cgroup/perf_event rw,relatime - cgroup cgroup rw,perf_event . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/perf_event. lxc-start 1383145786.575 INFO lxc_conf - looking at .94 77 0:30 / /sys/fs/cgroup/hugetlb rw,relatime - cgroup cgroup rw,hugetlb . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/hugetlb. lxc-start 1383145786.575 INFO lxc_conf - looking at .95 77 0:31 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime - cgroup systemd rw,name=systemd . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/cgroup/systemd. lxc-start 1383145786.575 INFO lxc_conf - looking at .96 76 0:17 / /sys/fs/fuse/connections rw,relatime - fusectl none rw . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/fuse/connections. lxc-start 1383145786.575 INFO lxc_conf - looking at .98 76 0:6 / /sys/kernel/debug rw,relatime - debugfs none rw . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/kernel/debug. lxc-start 1383145786.575 INFO lxc_conf - looking at .101 76 0:10 / /sys/kernel/security rw,relatime - securityfs none rw . lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/kernel/security. lxc-start 1383145786.575 INFO lxc_conf - looking at .102 76 0:22 / /sys/fs/pstore rw,relatime - pstore none rw . 
lxc-start 1383145786.575 INFO lxc_conf - now p is . /sys/fs/pstore. lxc-start 1383145786.575 INFO lxc_conf - looking at .103 44 0:3 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw . lxc-start 1383145786.575 INFO lxc_conf - now p is . /proc. lxc-start 1383145786.575 INFO lxc_conf - looking at .104 44 9:2 / /data rw,relatime - ext4 /dev/md2 rw,errors=remount-ro,data=ordered . lxc-start 1383145786.575 INFO lxc_conf - now p is . /data. lxc-start 1383145786.575 INFO lxc_conf - looking at .105 44 8:1 / /boot rw,relatime - ext2 /dev/sda1 rw,errors=continue . lxc-start 1383145786.575 INFO lxc_conf - now p is . /boot. lxc-start 1383145786.576 DEBUG lxc_conf - mounted '/data/srv/lxc/as1' on '/usr/lib/x86_64-linux-gnu/lxc' lxc-start 1383145786.576 DEBUG lxc_conf - mounted 'none' on '/usr/lib/x86_64-linux-gnu/lxc//dev/pts', type 'devpts' lxc-start 1383145786.576 DEBUG lxc_conf - mounted 'none' on '/usr/lib/x86_64-linux-gnu/lxc//proc', type 'proc' lxc-start 1383145786.576 DEBUG lxc_conf - mounted 'none' on '/usr/lib/x86_64-linux-gnu/lxc//sys', type 'sysfs' lxc-start 1383145786.576 DEBUG lxc_conf - mounted 'none' on '/usr/lib/x86_64-linux-gnu/lxc//run', type 'tmpfs' lxc-start 1383145786.576 INFO lxc_conf - mount points have been setup lxc-start 1383145786.577 INFO lxc_conf - console has been setup lxc-start 1383145786.577 INFO lxc_conf - 8 tty(s) has been setup lxc-start 1383145786.577 INFO lxc_conf - rootfs path is ./data/srv/lxc/as1., mount is ./usr/lib/x86_64-linux-gnu/lxc. lxc-start 1383145786.577 INFO lxc_apparmor - I am 1, /proc/self points to 1 lxc-start 1383145786.577 DEBUG lxc_conf - created '/usr/lib/x86_64-linux-gnu/lxc/lxc_putold' directory lxc-start 1383145786.577 DEBUG lxc_conf - mountpoint for old rootfs is '/usr/lib/x86_64-linux-gnu/lxc/lxc_putold' lxc-start 1383145786.577 DEBUG lxc_conf - pivot_root syscall to '/usr/lib/x86_64-linux-gnu/lxc' successful lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/dev/pts' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/run/lock' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/run/shm' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/run/user' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/cpuset' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/cpu' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/cpuacct' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/memory' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/devices' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/freezer' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/blkio' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/perf_event' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/hugetlb' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/systemd' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/fuse/connections' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/kernel/debug' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/kernel/security' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/pstore' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/proc' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/data' lxc-start 1383145786.577 
DEBUG lxc_conf - umounted '/lxc_putold/boot' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/dev' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/run' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold/sys' lxc-start 1383145786.577 DEBUG lxc_conf - umounted '/lxc_putold' lxc-start 1383145786.577 INFO lxc_conf - created new pts instance lxc-start 1383145786.578 DEBUG lxc_conf - drop capability 'sys_boot' (22) lxc-start 1383145786.578 DEBUG lxc_conf - capabilities have been setup lxc-start 1383145786.578 NOTICE lxc_conf - 'as1' is setup. lxc-start 1383145786.578 DEBUG lxc_cgroup - cgroup 'memory.limit_in_bytes' set to '20G' lxc-start 1383145786.578 DEBUG lxc_cgroup - cgroup 'cpuset.cpus' set to '12-23' lxc-start 1383145786.578 INFO lxc_cgroup - cgroup has been setup lxc-start 1383145786.578 INFO lxc_apparmor - setting up apparmor lxc-start 1383145786.578 INFO lxc_apparmor - changed apparmor profile to lxc-container-default lxc-start 1383145786.578 NOTICE lxc_start - exec'ing '/sbin/init' lxc-start 1383145786.578 INFO lxc_conf - looking at .15 20 0:14 / /sys rw,nosuid,nodev,noexec,relatime - sysfs sysfs rw . lxc-start 1383145786.578 INFO lxc_conf - now p is . /sys. lxc-start 1383145786.578 INFO lxc_conf - looking at .16 20 0:3 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw . lxc-start 1383145786.578 INFO lxc_conf - now p is . /proc. lxc-start 1383145786.578 INFO lxc_conf - looking at .17 20 0:5 / /dev rw,relatime - devtmpfs udev rw,size=32961632k,nr_inodes=8240408,mode=755 . lxc-start 1383145786.578 INFO lxc_conf - now p is . /dev. lxc-start 1383145786.578 INFO lxc_conf - looking at .18 17 0:11 / /dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,mode=600,ptmxmode=000 . lxc-start 1383145786.578 INFO lxc_conf - now p is . /dev/pts. lxc-start 1383145786.578 INFO lxc_conf - looking at .19 20 0:15 / /run rw,nosuid,noexec,relatime - tmpfs tmpfs rw,size=6594456k,mode=755 . lxc-start 1383145786.578 INFO lxc_conf - now p is . /run. lxc-start 1383145786.578 INFO lxc_conf - looking at .20 1 252:0 / / rw,relatime - ext4 /dev/mapper/limitorderbook1-root rw,errors=remount-ro,data=ordered . lxc-start 1383145786.578 INFO lxc_conf - now p is . /. lxc-start 1383145786.578 INFO lxc_conf - looking at .22 15 0:16 / /sys/fs/cgroup rw,relatime - tmpfs none rw,size=4k,mode=755 . lxc-start 1383145786.578 INFO lxc_conf - now p is . /sys/fs/cgroup. lxc-start 1383145786.578 INFO lxc_conf - looking at .23 15 0:17 / /sys/fs/fuse/connections rw,relatime - fusectl none rw . lxc-start 1383145786.578 INFO lxc_conf - now p is . /sys/fs/fuse/connections. lxc-start 1383145786.578 INFO lxc_conf - looking at .24 15 0:6 / /sys/kernel/debug rw,relatime - debugfs none rw . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/kernel/debug. lxc-start 1383145786.579 INFO lxc_conf - looking at .25 15 0:10 / /sys/kernel/security rw,relatime - securityfs none rw . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/kernel/security. lxc-start 1383145786.579 INFO lxc_conf - looking at .26 19 0:18 / /run/lock rw,nosuid,nodev,noexec,relatime - tmpfs none rw,size=5120k . lxc-start 1383145786.579 INFO lxc_conf - now p is . /run/lock. lxc-start 1383145786.579 INFO lxc_conf - looking at .27 19 0:19 / /run/shm rw,nosuid,nodev,relatime - tmpfs none rw . lxc-start 1383145786.579 INFO lxc_conf - now p is . /run/shm. 
lxc-start 1383145786.579 INFO lxc_conf - looking at .28 22 0:20 / /sys/fs/cgroup/cpuset rw,relatime - cgroup cgroup rw,cpuset,clone_children . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/cpuset. lxc-start 1383145786.579 INFO lxc_conf - looking at .29 19 0:21 / /run/user rw,nosuid,nodev,noexec,relatime - tmpfs none rw,size=102400k,mode=755 . lxc-start 1383145786.579 INFO lxc_conf - now p is . /run/user. lxc-start 1383145786.579 INFO lxc_conf - looking at .30 15 0:22 / /sys/fs/pstore rw,relatime - pstore none rw . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/pstore. lxc-start 1383145786.579 INFO lxc_conf - looking at .31 22 0:23 / /sys/fs/cgroup/cpu rw,relatime - cgroup cgroup rw,cpu . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/cpu. lxc-start 1383145786.579 INFO lxc_conf - looking at .32 22 0:24 / /sys/fs/cgroup/cpuacct rw,relatime - cgroup cgroup rw,cpuacct . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/cpuacct. lxc-start 1383145786.579 INFO lxc_conf - looking at .33 22 0:25 / /sys/fs/cgroup/memory rw,relatime - cgroup cgroup rw,memory . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/memory. lxc-start 1383145786.579 INFO lxc_conf - looking at .34 22 0:26 / /sys/fs/cgroup/devices rw,relatime - cgroup cgroup rw,devices . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/devices. lxc-start 1383145786.579 INFO lxc_conf - looking at .35 22 0:27 / /sys/fs/cgroup/freezer rw,relatime - cgroup cgroup rw,freezer . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/freezer. lxc-start 1383145786.579 INFO lxc_conf - looking at .36 22 0:28 / /sys/fs/cgroup/blkio rw,relatime - cgroup cgroup rw,blkio . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/blkio. lxc-start 1383145786.579 INFO lxc_conf - looking at .37 22 0:29 / /sys/fs/cgroup/perf_event rw,relatime - cgroup cgroup rw,perf_event . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/perf_event. lxc-start 1383145786.579 INFO lxc_conf - looking at .38 22 0:30 / /sys/fs/cgroup/hugetlb rw,relatime - cgroup cgroup rw,hugetlb . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/hugetlb. lxc-start 1383145786.579 INFO lxc_conf - looking at .39 20 9:2 / /data rw,relatime - ext4 /dev/md2 rw,errors=remount-ro,data=ordered . lxc-start 1383145786.579 INFO lxc_conf - now p is . /data. lxc-start 1383145786.579 INFO lxc_conf - looking at .40 20 8:1 / /boot rw,relatime - ext2 /dev/sda1 rw,errors=continue . lxc-start 1383145786.579 INFO lxc_conf - now p is . /boot. lxc-start 1383145786.579 INFO lxc_conf - looking at .41 22 0:31 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime - cgroup systemd rw,name=systemd . lxc-start 1383145786.579 INFO lxc_conf - now p is . /sys/fs/cgroup/systemd. lxc-start 1383145786.579 NOTICE lxc_start - '/sbin/init' started with pid '6249' lxc-start 1383145786.579 WARN lxc_start - invalid pid for SIGCHLD <4>init: ureadahead main process (7) terminated with status 5 <4>init: console-font main process (94) terminated with status 1 And it will just sit there like that for hours at least. The container becomes pingable but I can't ssh and if I try lxc-console -n as1 I get a blank screen. If I do lxc-stop -n as1 or ^C in the window where it has hung I get: ^CTERM environment variable not set. 
<4>init: plymouth-upstart-bridge main process (192) terminated with status 1 <4>init: hwclock-save main process (187) terminated with status 70 * Asking all remaining processes to terminate... ...done. * All processes ended within 1 seconds... ...done. * Deactivating swap... ...fail! mount: cannot mount block device /dev/md2 read-only * Will now restart But after 20 minutes it hasn't restarted. Any ideas why these containers are hanging?

    Read the article

  • ASPNET WebAPI REST Guidance

    - by JoshReuben
    ASP.NET Web API is an ideal platform for building RESTful applications on the .NET Framework. While I may be more partial to NodeJS these days, there is no denying that Web API is a well engineered framework. What follows is my investigation of how to leverage Web API to construct a RESTful frontend API.

    The Advantages of REST Methodology over SOAP
    · Simpler API for CRUD ops
    · Standardized development methodology - consistent and intuitive
    · Standards based → client interop
    · Wide industry adoption, ease of use → easy to add new devs
    · Avoid service method signature blowout
    · Smaller payloads than SOAP
    · Stateless → no session data means multi-tenant scalability
    · Cache-ability
    · Testability

    General RESTful API Design Overview
    · Utilize the HTTP protocol - usage of HTTP methods for CRUD, standard HTTP response codes, common HTTP headers and MIME types.
    · Resources are mapped to URLs, actions are mapped to verbs and the rest goes in the headers.
    · Keep the API semantic and resource-centric – a RESTful, resource-oriented service exposes a URI for every piece of data the client might want to operate on. A REST-RPC hybrid exposes a URI for every operation the client might perform: one URI to fetch a piece of data, a different URI to delete that same data. Utilize the URI to specify CRUD op, version, language and output format: http://api.MyApp.com/{ver}/{lang}/{resource_type}/{resource_id}.{output_format}?{key&filters}
    · Entity CRUD operations are matched to HTTP methods: Create - POST / PUT; Read - GET (cacheable); Update - PUT; Delete - DELETE.
    · Use URIs to represent hierarchies - resources in RESTful URLs are often chained.
    · Statelessness allows for idempotency – apply an op multiple times without changing the result. POST is non-idempotent, the rest are idempotent (if DELETE flags records instead of deleting them).
    · Cache indication - leverage HTTP headers to label cacheable content and indicate the permitted duration of the cache.
    · PUT vs POST - the client uses PUT when it determines which URI (Id key) the new resource should have; the client uses POST when the server determines the key. PUT takes a second param – the id. POST creates a new resource; the server assigns the URI for the new object and returns this URI as part of the response message. Note: the PUT method replaces the entire entity, so the client is expected to send a complete representation of the updated product. If you want to support partial updates, the PATCH method is preferred. DELETE deletes a resource at a specified URI – it typically takes an id param.
    · Leverage common HTTP response codes in response headers: 200 OK: success. 201 Created - used on a POST request when creating a new resource. 304 Not Modified: no new data to return. 400 Bad Request: invalid request. 401 Unauthorized: authentication. 403 Forbidden: authorization. 404 Not Found – entity does not exist. 406 Not Acceptable – bad params. 409 Conflict - for POST / PUT requests if the resource already exists. 500 Internal Server Error. 503 Service Unavailable.
    · Leverage uncommon HTTP verbs to reduce payload sizes: HEAD retrieves just the resource meta-information; OPTIONS returns the actions supported for the specified resource; PATCH performs partial modification of a resource.
    · When using PUT, POST or PATCH, send the data as a document in the body of the request. Don't use query parameters to alter state.
    · Utilize headers for content negotiation, caching, authorization and throttling:
      o Content negotiation – choose representation (e.g. JSON or XML and version), language & compression.
        Signal via RequestHeader.Accept & ResponseHeader.Content-Type:
        Accept: application/json;version=1.0
        Accept-Language: en-US
        Accept-Charset: UTF-8
        Accept-Encoding: gzip
      o Caching - ResponseHeader: Expires (absolute expiry time) or Cache-Control (relative expiry time).
      o Authorization - basic HTTP authentication uses RequestHeader.Authorization to specify a base64-encoded "username:password" string. It can be used in combination with SSL/TLS (HTTPS) and can leverage OAuth2 third-party token-claims authorization. Authorization: Basic sQJlaTp5ZWFslylnaNZ=
      o Rate limiting - not currently part of HTTP, so specify non-standard headers prefixed with X- in the ResponseHeader. X-RateLimit-Limit: 10000 X-RateLimit-Remaining: 9990
    · HATEOAS methodology - Hypermedia As The Engine Of Application State – leverage the API as a state machine where resources are states and the transitions between states are links between resources, included in their representation (hypermedia) – get API metadata signatures from the response Link header. In a truly REST-based architecture any URL, except the initial URL, can be changed, even to other servers, without worrying about the client.
    · Error responses - do not just send back a 200 OK with every response. The response should consist of an HTTP error status code (jQuery has automated support for this), a human-readable message, a link to a meaningful state transition, and the original data payload that was problematic.
    · The URIs will typically map to a server-side controller and a method name selected by the type of request method. Stuffing all your calls into just four methods is not as crazy as it sounds.
    · Scoping - path variables look like you're traversing a hierarchy, and query variables look like you're passing arguments into an algorithm.
    · Mapping URIs to controllers - one controller for each resource is not a rule – you can consolidate and route requests to the appropriate controller and action method.
    · Keep URLs consistent - sometimes it's tempting to just shorten our URIs; I do not recommend this, as it can cause confusion.
    · Join naming – for many-to-many entity relations there may be multiple hierarchy traversal paths.
    · Routing – a useful level of indirection for versioning and server backend mocking in development.

    ASPNET WebAPI Considerations
    ASPNET WebAPI implements a lot (but not all) of these RESTful API design considerations as part of its infrastructure and via its coding conventions.

    Overview
    When developing an API there are basically three main steps:
    1. Plan out your URIs
    2. Set up return values and response codes for your URIs
    3. Implement a framework for your API

    Design
    · Leverage the Models MVC folder
    · Repositories – support IoC for tests, abstraction
    · Create DTO classes – a level of indirection decouples & allows swap out
    · Self links can be generated using the UrlHelper
    · Use IQueryable to support projections across the wire
    · Models can support RESTful navigation properties – ICollection<T>
    · Async mechanism for long-running ops - return a response with a ticket – the client can then poll or be pushed the final result later
    · Design for testability - test using HttpClient, jQuery ($.getJSON, $.each), Fiddler, browser debugging. Leverage IDependencyResolver – an IoC wrapper for mocking
    · Easy debugging - IE F12 developer tools: Network tab, Request Headers tab

    Routing
    · The HTTP request method is matched to the method name. (This rule applies only to GET, POST, PUT, and DELETE requests.)
    · {id}, if present, is matched to a method parameter named id.
    · Query parameters are matched to parameter names when possible.
    · Done in config via Routes.MapHttpRoute – similar to MVC routing.
    · Can alternatively:
      o decorate controller action methods with HttpDelete, HttpGet, HttpHead, HttpOptions, HttpPatch, HttpPost, or HttpPut, plus the ActionAttribute
      o use AcceptVerbsAttribute to support other HTTP verbs: e.g. PATCH, HEAD
      o use NonActionAttribute to prevent a method from getting invoked as an action
    · Route table URIs can support placeholders (via curly braces {}) – these can support default values, constraints, and optional values.
    · The framework selects the first route in the route table that matches the URI.
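    To make the convention concrete, here is a minimal sketch (not from the original article) of a controller whose method names map onto the HTTP verbs; the Product model and the in-memory list are hypothetical stand-ins for a real repository.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Net;
    using System.Net.Http;
    using System.Web.Http;

    public class Product                     // hypothetical model for illustration
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public decimal Price { get; set; }
    }

    public class ProductsController : ApiController
    {
        // hypothetical in-memory store standing in for a repository
        private static readonly List<Product> _store = new List<Product>();

        // GET api/products - convention: method name starts with "Get"
        public IEnumerable<Product> GetAllProducts()
        {
            return _store;
        }

        // GET api/products/5 - {id} binds to the "id" parameter
        public Product GetProduct(int id)
        {
            var item = _store.FirstOrDefault(p => p.Id == id);
            if (item == null)
                throw new HttpResponseException(HttpStatusCode.NotFound); // 404 for a missing resource
            return item;
        }

        // POST api/products - server assigns the key and returns 201 Created with a Location header
        public HttpResponseMessage PostProduct(Product item)
        {
            item.Id = _store.Count + 1;
            _store.Add(item);
            var response = Request.CreateResponse(HttpStatusCode.Created, item);
            response.Headers.Location = new Uri(Url.Link("DefaultApi", new { id = item.Id }));
            return response;
        }

        // PUT api/products/5 - client supplies the key; full replacement of the entity
        public void PutProduct(int id, Product item)
        {
            var index = _store.FindIndex(p => p.Id == id);
            if (index < 0)
                throw new HttpResponseException(HttpStatusCode.NotFound);
            item.Id = id;
            _store[index] = item;
        }

        // DELETE api/products/5
        public void DeleteProduct(int id)
        {
            _store.RemoveAll(p => p.Id == id);
        }
    }

    With the conventional "DefaultApi" route registered, a request such as GET api/products/5 binds to GetProduct(5), and the void PUT/DELETE actions return 204 No Content.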
    Response customization
    · Response code: by default, the Web API framework sets the response status code to 200 (OK). But according to the HTTP/1.1 protocol, when a POST request results in the creation of a resource, the server should reply with status 201 (Created). Non-GET methods should return HttpResponseMessage.
    · Location: when the server creates a resource, it should include the URI of the new resource in the Location header of the response.
    public HttpResponseMessage PostProduct(Product item)
    {
        item = repository.Add(item);
        var response = Request.CreateResponse<Product>(HttpStatusCode.Created, item);
        string uri = Url.Link("DefaultApi", new { id = item.Id });
        response.Headers.Location = new Uri(uri);
        return response;
    }

    Validation
    · Decorate Models / DTOs with System.ComponentModel.DataAnnotations properties: RequiredAttribute, RangeAttribute.
    · Check payloads using ModelState.IsValid.
    · Under-posting – leaving values out of the JSON payload → the JSON formatter assigns a default value. Use with RequiredAttribute.
    · Over-posting - if the model has read-only properties → use a DTO instead of the model.
    · Can hook into the pipeline by deriving from ActionFilterAttribute & overriding OnActionExecuting.

    Config
    · Done in the App_Start folder > WebApiConfig.cs – a static Register method with an HttpConfiguration param. The HttpConfiguration object contains the following members:
      Member                    Description
      DependencyResolver        Enables dependency injection for controllers.
      Filters                   Action filters – e.g. exception filters.
      Formatters                Media-type formatters. By default contains JsonFormatter, XmlFormatter.
      IncludeErrorDetailPolicy  Specifies whether the server should include error details, such as exception messages and stack traces, in HTTP response messages.
      Initializer               A function that performs final initialization of the HttpConfiguration.
      MessageHandlers           HTTP message handlers - plug into the pipeline.
      ParameterBindingRules     A collection of rules for binding parameters on controller actions.
      Properties                A generic property bag.
      Routes                    The collection of routes.
      Services                  The collection of services.
    · Configure the JsonFormatter for circular references to support links: PreserveReferencesHandling.Objects.
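    For reference, a minimal App_Start/WebApiConfig.cs Register method along the lines described above. This is a sketch rather than the article's own code: the "DefaultApi" route name and template follow the usual convention, and the JSON tweak mirrors the circular-reference setting mentioned in the Config section.

    using System.Web.Http;
    using Newtonsoft.Json;

    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            // Convention-based route with an optional {id} placeholder.
            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional });

            // Emit $id/$ref pairs so object graphs with circular references serialize cleanly.
            var json = config.Formatters.JsonFormatter;
            json.SerializerSettings.PreserveReferencesHandling = PreserveReferencesHandling.Objects;
        }
    }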
    Documentation generation
    · Create a help page for a web API by using the ApiExplorer class.
    · The ApiExplorer class provides descriptive information about the APIs exposed by a web API as an ApiDescription collection.
    · Create the help page as an MVC view:
    public ILookup<string, ApiDescription> GetApis()
    {
        return _explorer.ApiDescriptions.ToLookup(
            api => api.ActionDescriptor.ControllerDescriptor.ControllerName);
    }
    · Provide documentation for your APIs by implementing the IDocumentationProvider interface. Documentation strings can come from any source that you like – e.g. extract XML comments or define custom attributes to apply to the controller:
    [ApiDoc("Gets a product by ID.")]
    [ApiParameterDoc("id", "The ID of the product.")]
    public HttpResponseMessage Get(int id)
    · GlobalConfiguration.Configuration.Services – add the documentation provider.
    · To hide an API from the ApiExplorer, add the ApiExplorerSettingsAttribute.

    Plugging into the Message Handler pipeline
    · Plug into the request / response pipeline – derive from DelegatingHandler and override the SendAsync method – e.g. for logging error codes or adding a custom response header.
    · Can be applied globally or to a specific route.
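    As a sketch of that extensibility point (hypothetical class and header names, not from the article), here is a DelegatingHandler that logs non-success status codes and stamps a custom response header:

    using System;
    using System.Diagnostics;
    using System.Net.Http;
    using System.Threading;
    using System.Threading.Tasks;

    public class ResponseStampHandler : DelegatingHandler
    {
        protected override async Task<HttpResponseMessage> SendAsync(
            HttpRequestMessage request, CancellationToken cancellationToken)
        {
            // Let the rest of the pipeline (and ultimately the controller) run first.
            var response = await base.SendAsync(request, cancellationToken);

            if (!response.IsSuccessStatusCode)
                Trace.TraceWarning("Request {0} returned {1}", request.RequestUri, response.StatusCode);

            // Stamp a non-standard header on the way out.
            response.Headers.Add("X-Served-By", Environment.MachineName);
            return response;
        }
    }

    // Registered globally in WebApiConfig.Register:
    // config.MessageHandlers.Add(new ResponseStampHandler());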
    Exception Handling
    · Throw HttpResponseException on method failures – specify an HttpStatusCode enum value – examine this enum, as its values map well to typical op problems.
    · Exception filters – derive from ExceptionFilterAttribute & override OnException. Apply on controllers or action methods, or add to the global HttpConfiguration.Filters collection.
    · The HttpError object provides a consistent way to return error information in the HttpResponseException response body.
    · For model validation, you can pass the model state to CreateErrorResponse to include the validation errors in the response:
    public HttpResponseMessage PostProduct(Product item)
    {
        if (!ModelState.IsValid)
        {
            return Request.CreateErrorResponse(HttpStatusCode.BadRequest, ModelState);

    Cookie Management
    · Cookie header in a request and Set-Cookie headers in a response - a collection of CookieState objects.
    · Specify expiry, max-age: resp.Headers.AddCookies(new CookieHeaderValue[] { cookie });

    Internet Media Types, formatters and serialization
    · Defaults to application/json.
    · Request Accept header and response Content-Type header.
    · Determines how Web API serializes and deserializes the HTTP message body. There is built-in support for XML, JSON, and form-urlencoded data.
    · Customizable formatters can be inserted into the pipeline.
    · POCO serialization is opt-out via JsonIgnoreAttribute, or use DataMemberAttribute for opt-in.
    · The JSON serializer leverages Newtonsoft Json.NET.
    · Loosely structured JSON objects are serialized as JObject, which derives from Dynamic.
    · To handle circular references in JSON: json.SerializerSettings.PreserveReferencesHandling = PreserveReferencesHandling.All → {"$ref":"1"}.
    · To preserve object references in XML: [DataContract(IsReference=true)]
    · Content negotiation:
      Accept: which media types are acceptable for the response, such as "application/json", "application/xml", or a custom media type such as "application/vnd.example+xml"
      Accept-Charset: which character sets are acceptable, such as UTF-8 or ISO 8859-1.
      Accept-Encoding: which content encodings are acceptable, such as gzip.
      Accept-Language: the preferred natural language, such as "en-us".
      o Web API uses the Accept and Accept-Charset headers. (At this time, there is no built-in support for Accept-Encoding or Accept-Language.)
    · Controller methods can take JSON representations of DTOs as params – auto-deserialization.
    · Typical jQuery GET request:
    function find() {
        var id = $('#prodId').val();
        $.getJSON("api/products/" + id,
            function (data) {
                var str = data.Name + ': $' + data.Price;
                $('#product').text(str);
            })
        .fail(
            function (jqXHR, textStatus, err) {
                $('#product').text('Error: ' + err);
            });
    }
    · Typical GET response:
    HTTP/1.1 200 OK
    Server: ASP.NET Development Server/10.0.0.0
    Date: Mon, 18 Jun 2012 04:30:33 GMT
    X-AspNet-Version: 4.0.30319
    Cache-Control: no-cache
    Pragma: no-cache
    Expires: -1
    Content-Type: application/json; charset=utf-8
    Content-Length: 175
    Connection: Close
    [{"Id":1,"Name":"TomatoSoup","Price":1.39,"ActualCost":0.99},{"Id":2,"Name":"Hammer","Price":16.99,"ActualCost":10.00},{"Id":3,"Name":"Yo yo","Price":6.99,"ActualCost":2.05}]

    True OData support
    · Leverage the query options $filter, $orderby, $top and $skip to shape the results of controller actions annotated with the [Queryable] attribute.
    [Queryable]
    public IQueryable<Supplier> GetSuppliers()
    · Query: ~/Suppliers?$filter=Name eq 'Microsoft'
    · Applies the following selection filter on the server: GetSuppliers().Where(s => s.Name == "Microsoft") and passes the result to the formatter.
    · True support for the OData format is still limited - no support for creates, updates, deletes, $metadata, code generation etc.
    · vNext: the ability to configure how EditLinks, SelfLinks and Ids are generated.

    Self Hosting
    No dependency on ASPNET or IIS (a completed sketch appears at the end of this entry):
    using (var server = new HttpSelfHostServer(config))
    {
        server.OpenAsync().Wait();

    Tracing
    · Traceability tools, metrics – e.g. send to Nagios.
    · Use your choice of tracing/logging library, whether that is ETW, NLog, log4net, or simply System.Diagnostics.Trace.
    · To collect traces, implement the ITraceWriter interface:
    public class SimpleTracer : ITraceWriter
    {
        public void Trace(HttpRequestMessage request, string category, TraceLevel level,
            Action<TraceRecord> traceAction)
        {
            TraceRecord rec = new TraceRecord(request, category, level);
            traceAction(rec);
            WriteTrace(rec);
    · Register the service with config.
    · Programmatically trace – there are helper extension methods: Configuration.Services.GetTraceWriter().Info(
    · Performance tracing - the pipeline writes traces at the beginning and end of an operation - the TraceRecord class includes a TimeStamp property and a Kind property set to TraceKind.Begin / End.

    Security
    · Roles class methods: RoleExists, AddUserToRole.
    · WebSecurity class methods: UserExists, CreateUserAndAccount.
    · Request.IsAuthenticated
    · Leverage the HTTP 401 (Unauthorized) response.
    · [AuthorizeAttribute(Roles="Administrator")] – can be applied to a controller or its action methods.
    · See the section in the WebApi document on "Claim-based security for ASP.NET Web APIs using DotNetOpenAuth" – adapt this to STS → the Web API host exposes secured Web APIs which can only be accessed by presenting a valid token issued by the trusted issuer.
http://zamd.net/2012/05/04/claim-based-security-for-asp-net-web-apis-using-dotnetopenauth/ · Use MVC membership provider infrastructure and add a DelegatingHandler child class to the WebAPI pipeline - http://stackoverflow.com/questions/11535075/asp-net-mvc-4-web-api-authentication-with-membership-provider - this will perform the login actions · Then use AuthorizeAttribute on controllers and methods for role mapping- http://sixgun.wordpress.com/2012/02/29/asp-net-web-api-basic-authentication/ · Alternate option here is to rely on MVC App : http://forums.asp.net/t/1831767.aspx/1
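    Completing the partial self-hosting snippet from the Self Hosting section above, here is a minimal console-host sketch. It assumes the self-host package (Microsoft.AspNet.WebApi.SelfHost) is referenced; the base address and route are illustrative, not from the article.

    using System;
    using System.Web.Http;
    using System.Web.Http.SelfHost;

    class Program
    {
        static void Main()
        {
            // Hypothetical base address for the self-hosted API.
            var config = new HttpSelfHostConfiguration("http://localhost:8080");

            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional });

            using (var server = new HttpSelfHostServer(config))
            {
                server.OpenAsync().Wait();   // start listening
                Console.WriteLine("Web API self-host running; press Enter to quit.");
                Console.ReadLine();
            }
        }
    }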

    Read the article

  • Problem connecting to HSQLDB via Hibernate when running an Eclipse GWT project

    - by Toby
    Hi, I'm trying to run a simple GWT project where I'm trying to do a simple persitence via hibernate to a HSQLDB database. The database I'm using I have been using for at least 2 years with several osgi applications without any problems. So all I done is reused the same configuration and added a simple object mapping file. The problem I have is that I get a socket creation error when ever I try to persist the object with in GWT jetty. I now the database is up and running, I can telnet to it, run OSGI projects that uses the same config with out problems. This is the stack I get when running 25 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.Environment - Hibernate 3.3.1.GA 30 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.Environment - hibernate.properties not found 34 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.Environment - Bytecode provider name : javassist 42 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.Environment - using JDK 1.4 java.sql.Timestamp handling 162 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.Configuration - configuring from resource: hibernate.cfg.xml 162 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.Configuration - Configuration resource: hibernate.cfg.xml 268 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.Configuration - Reading mappings from resource : hbm-mappings/project.hbm.xml 382 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.HbmBinder - Mapping class: se.kanit.projectmgr.db.ProjectDAO - T_PROJECT 419 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.Configuration - Configured SessionFactory: null 3534 [21704474@qtp-26509496-0] INFO org.hibernate.connection.DriverManagerConnectionProvider - Using Hibernate built-in connection pool (not for production use!) 3534 [21704474@qtp-26509496-0] INFO org.hibernate.connection.DriverManagerConnectionProvider - Hibernate connection pool size: 1 3534 [21704474@qtp-26509496-0] INFO org.hibernate.connection.DriverManagerConnectionProvider - autocommit mode: false 3537 [21704474@qtp-26509496-0] INFO org.hibernate.connection.DriverManagerConnectionProvider - using driver: org.hsqldb.jdbcDriver at URL: jdbc:hsqldb:hsql://localhost:1476/dirtyharry 3537 [21704474@qtp-26509496-0] INFO org.hibernate.connection.DriverManagerConnectionProvider - connection properties: {user=sa, password=****} 3594 [21704474@qtp-26509496-0] WARN org.hibernate.cfg.SettingsFactory - Could not obtain connection metadata java.sql.SQLException: socket creation error at org.hsqldb.jdbc.Util.sqlException(Unknown Source) at org.hsqldb.jdbc.jdbcConnection.(Unknown Source) at org.hsqldb.jdbcDriver.getConnection(Unknown Source) at org.hsqldb.jdbcDriver.connect(Unknown Source) at java.sql.DriverManager.getConnection(DriverManager.java:582) at java.sql.DriverManager.getConnection(DriverManager.java:154) at org.hibernate.connection.DriverManagerConnectionProvider.getConnection(DriverManagerConnectionProvider.java:133) at org.hibernate.cfg.SettingsFactory.buildSettings(SettingsFactory.java:111) at org.hibernate.cfg.Configuration.buildSettings(Configuration.java:2101) at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1325) at se.kanit.projectmgr.db.HibernateUtil.(HibernateUtil.java:24) at se.kanit.web.projectmgr.server.issues.IssuesService.addIssue(IssuesService.java:26) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at 
java.lang.reflect.Method.invoke(Method.java:597) at com.google.appengine.tools.development.agent.runtime.Runtime.invoke(Runtime.java:100) at com.google.gwt.user.server.rpc.RPC.invokeAndEncodeResponse(RPC.java:562) at com.google.gwt.user.server.rpc.RemoteServiceServlet.processCall(RemoteServiceServlet.java:188) at com.google.gwt.user.server.rpc.RemoteServiceServlet.processPost(RemoteServiceServlet.java:224) at com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62) at javax.servlet.http.HttpServlet.service(HttpServlet.java:713) at javax.servlet.http.HttpServlet.service(HttpServlet.java:806) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1166) at com.google.appengine.api.blobstore.dev.ServeBlobFilter.doFilter(ServeBlobFilter.java:51) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at com.google.apphosting.utils.servlet.TransactionCleanupFilter.doFilter(TransactionCleanupFilter.java:43) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at com.google.appengine.tools.development.StaticFileFilter.doFilter(StaticFileFilter.java:122) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:418) at com.google.apphosting.utils.jetty.DevAppEngineWebAppContext.handle(DevAppEngineWebAppContext.java:70) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at com.google.appengine.tools.development.JettyContainerService$ApiProxyHandler.handle(JettyContainerService.java:349) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:326) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542) at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:938) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:755) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409) at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) 3626 [21704474@qtp-26509496-0] INFO org.hibernate.dialect.Dialect - Using dialect: org.hibernate.dialect.HSQLDialect 3640 [21704474@qtp-26509496-0] INFO org.hibernate.transaction.TransactionFactoryFactory - Using default transaction strategy (direct JDBC transactions) 3644 [21704474@qtp-26509496-0] INFO org.hibernate.transaction.TransactionManagerLookupFactory - No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended) 3644 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Automatic flush during beforeCompletion(): disabled 3644 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Automatic session close at end of transaction: disabled 3645 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Scrollable result sets: 
disabled 3645 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - JDBC3 getGeneratedKeys(): disabled 3645 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Connection release mode: auto 3646 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Default batch fetch size: 1 3646 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Generate SQL with comments: disabled 3646 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Order SQL updates by primary key: disabled 3646 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Order SQL inserts for batching: disabled 3646 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory 3651 [21704474@qtp-26509496-0] INFO org.hibernate.hql.ast.ASTQueryTranslatorFactory - Using ASTQueryTranslatorFactory 3651 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Query language substitutions: {} 3652 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - JPA-QL strict compliance: disabled 3652 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Second-level cache: enabled 3652 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Query cache: disabled 3664 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Cache region factory : org.hibernate.cache.impl.bridge.RegionFactoryCacheProviderBridge 3665 [21704474@qtp-26509496-0] INFO org.hibernate.cache.impl.bridge.RegionFactoryCacheProviderBridge - Cache provider: org.hibernate.cache.NoCacheProvider 3666 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Optimize cache for minimal puts: disabled 3666 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Structured second-level cache entries: disabled 3678 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Statistics: disabled 3678 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Deleted entity synthetic identifier rollback: disabled 3678 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Default entity-mode: pojo 3679 [21704474@qtp-26509496-0] INFO org.hibernate.cfg.SettingsFactory - Named query checking : enabled 3775 [21704474@qtp-26509496-0] INFO org.hibernate.impl.SessionFactoryImpl - building session factory 4155 [21704474@qtp-26509496-0] INFO org.hibernate.impl.SessionFactoryObjectFactory - Not binding factory to JNDI, no JNDI name configured 4170 [21704474@qtp-26509496-0] INFO org.hibernate.tool.hbm2ddl.SchemaUpdate - Running hbm2ddl schema update 4170 [21704474@qtp-26509496-0] INFO org.hibernate.tool.hbm2ddl.SchemaUpdate - fetching database metadata 4171 [21704474@qtp-26509496-0] ERROR org.hibernate.tool.hbm2ddl.SchemaUpdate - could not get database metadata java.sql.SQLException: socket creation error at org.hsqldb.jdbc.Util.sqlException(Unknown Source) at org.hsqldb.jdbc.jdbcConnection.(Unknown Source) at org.hsqldb.jdbcDriver.getConnection(Unknown Source) at org.hsqldb.jdbcDriver.connect(Unknown Source) at java.sql.DriverManager.getConnection(DriverManager.java:582) at java.sql.DriverManager.getConnection(DriverManager.java:154) at org.hibernate.connection.DriverManagerConnectionProvider.getConnection(DriverManagerConnectionProvider.java:133) at org.hibernate.tool.hbm2ddl.SuppliedConnectionProviderConnectionHelper.prepare(SuppliedConnectionProviderConnectionHelper.java:51) at org.hibernate.tool.hbm2ddl.SchemaUpdate.execute(SchemaUpdate.java:168) 
at org.hibernate.impl.SessionFactoryImpl.(SessionFactoryImpl.java:346) at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1327) at se.kanit.projectmgr.db.HibernateUtil.(HibernateUtil.java:24) at se.kanit.web.projectmgr.server.issues.IssuesService.addIssue(IssuesService.java:26) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.google.appengine.tools.development.agent.runtime.Runtime.invoke(Runtime.java:100) at com.google.gwt.user.server.rpc.RPC.invokeAndEncodeResponse(RPC.java:562) at com.google.gwt.user.server.rpc.RemoteServiceServlet.processCall(RemoteServiceServlet.java:188) at com.google.gwt.user.server.rpc.RemoteServiceServlet.processPost(RemoteServiceServlet.java:224) at com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62) at javax.servlet.http.HttpServlet.service(HttpServlet.java:713) at javax.servlet.http.HttpServlet.service(HttpServlet.java:806) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1166) at com.google.appengine.api.blobstore.dev.ServeBlobFilter.doFilter(ServeBlobFilter.java:51) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at com.google.apphosting.utils.servlet.TransactionCleanupFilter.doFilter(TransactionCleanupFilter.java:43) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at com.google.appengine.tools.development.StaticFileFilter.doFilter(StaticFileFilter.java:122) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:418) at com.google.apphosting.utils.jetty.DevAppEngineWebAppContext.handle(DevAppEngineWebAppContext.java:70) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at com.google.appengine.tools.development.JettyContainerService$ApiProxyHandler.handle(JettyContainerService.java:349) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:326) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542) at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:938) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:755) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409) at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) 4172 [21704474@qtp-26509496-0] ERROR org.hibernate.tool.hbm2ddl.SchemaUpdate - could not complete schema update java.sql.SQLException: socket creation error at org.hsqldb.jdbc.Util.sqlException(Unknown Source) at org.hsqldb.jdbc.jdbcConnection.(Unknown Source) at org.hsqldb.jdbcDriver.getConnection(Unknown 
Source) at org.hsqldb.jdbcDriver.connect(Unknown Source) at java.sql.DriverManager.getConnection(DriverManager.java:582) at java.sql.DriverManager.getConnection(DriverManager.java:154) at org.hibernate.connection.DriverManagerConnectionProvider.getConnection(DriverManagerConnectionProvider.java:133) at org.hibernate.tool.hbm2ddl.SuppliedConnectionProviderConnectionHelper.prepare(SuppliedConnectionProviderConnectionHelper.java:51) at org.hibernate.tool.hbm2ddl.SchemaUpdate.execute(SchemaUpdate.java:168) at org.hibernate.impl.SessionFactoryImpl.(SessionFactoryImpl.java:346) at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1327) at se.kanit.projectmgr.db.HibernateUtil.(HibernateUtil.java:24) at se.kanit.web.projectmgr.server.issues.IssuesService.addIssue(IssuesService.java:26) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.google.appengine.tools.development.agent.runtime.Runtime.invoke(Runtime.java:100) at com.google.gwt.user.server.rpc.RPC.invokeAndEncodeResponse(RPC.java:562) at com.google.gwt.user.server.rpc.RemoteServiceServlet.processCall(RemoteServiceServlet.java:188) at com.google.gwt.user.server.rpc.RemoteServiceServlet.processPost(RemoteServiceServlet.java:224) at com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62) at javax.servlet.http.HttpServlet.service(HttpServlet.java:713) at javax.servlet.http.HttpServlet.service(HttpServlet.java:806) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1166) at com.google.appengine.api.blobstore.dev.ServeBlobFilter.doFilter(ServeBlobFilter.java:51) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at com.google.apphosting.utils.servlet.TransactionCleanupFilter.doFilter(TransactionCleanupFilter.java:43) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at com.google.appengine.tools.development.StaticFileFilter.doFilter(StaticFileFilter.java:122) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:418) at com.google.apphosting.utils.jetty.DevAppEngineWebAppContext.handle(DevAppEngineWebAppContext.java:70) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at com.google.appengine.tools.development.JettyContainerService$ApiProxyHandler.handle(JettyContainerService.java:349) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:326) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542) at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:938) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:755) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218) at 
org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409) at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) 4293 [21704474@qtp-26509496-0] WARN org.hibernate.util.JDBCExceptionReporter - SQL Error: -80, SQLState: 08000 4293 [21704474@qtp-26509496-0] ERROR org.hibernate.util.JDBCExceptionReporter - socket creation error Cannot open connection org.hibernate.exception.JDBCConnectionException: Cannot open connection at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:97) at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66) at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:52) at org.hibernate.jdbc.ConnectionManager.openConnection(ConnectionManager.java:449) at org.hibernate.jdbc.ConnectionManager.getConnection(ConnectionManager.java:167) at org.hibernate.jdbc.JDBCContext.connection(JDBCContext.java:142) at org.hibernate.transaction.JDBCTransaction.begin(JDBCTransaction.java:85) at org.hibernate.impl.SessionImpl.beginTransaction(SessionImpl.java:1353) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.google.appengine.tools.development.agent.runtime.Runtime.invoke(Runtime.java:100) at org.hibernate.context.ThreadLocalSessionContext$TransactionProtectionWrapper.invoke(ThreadLocalSessionContext.java:342) at $Proxy7.beginTransaction(Unknown Source) at se.kanit.projectmgr.db.HibernateUtil.saveOrUpdate(HibernateUtil.java:115) at se.kanit.web.projectmgr.server.issues.IssuesService.addIssue(IssuesService.java:26) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.google.appengine.tools.development.agent.runtime.Runtime.invoke(Runtime.java:100) at com.google.gwt.user.server.rpc.RPC.invokeAndEncodeResponse(RPC.java:562) at com.google.gwt.user.server.rpc.RemoteServiceServlet.processCall(RemoteServiceServlet.java:188) at com.google.gwt.user.server.rpc.RemoteServiceServlet.processPost(RemoteServiceServlet.java:224) at com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62) at javax.servlet.http.HttpServlet.service(HttpServlet.java:713) at javax.servlet.http.HttpServlet.service(HttpServlet.java:806) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1166) at com.google.appengine.api.blobstore.dev.ServeBlobFilter.doFilter(ServeBlobFilter.java:51) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at com.google.apphosting.utils.servlet.TransactionCleanupFilter.doFilter(TransactionCleanupFilter.java:43) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at com.google.appengine.tools.development.StaticFileFilter.doFilter(StaticFileFilter.java:122) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at 
org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:418) at com.google.apphosting.utils.jetty.DevAppEngineWebAppContext.handle(DevAppEngineWebAppContext.java:70) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at com.google.appengine.tools.development.JettyContainerService$ApiProxyHandler.handle(JettyContainerService.java:349) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:326) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542) at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:938) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:755) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409) at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Caused by: java.sql.SQLException: socket creation error at org.hsqldb.jdbc.Util.sqlException(Unknown Source) at org.hsqldb.jdbc.jdbcConnection.(Unknown Source) at org.hsqldb.jdbcDriver.getConnection(Unknown Source) at org.hsqldb.jdbcDriver.connect(Unknown Source) at java.sql.DriverManager.getConnection(DriverManager.java:582) at java.sql.DriverManager.getConnection(DriverManager.java:154) at org.hibernate.connection.DriverManagerConnectionProvider.getConnection(DriverManagerConnectionProvider.java:133) at org.hibernate.jdbc.ConnectionManager.openConnection(ConnectionManager.java:446) ... 49 more Any tips and ideas are greatly appreciated. Cheers. Toby.
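A note on the error itself: the "socket creation error" is what the HSQLDB JDBC driver reports when it cannot open a TCP connection, which typically means the hibernate.connection.url points at a server-mode database (a jdbc:hsqldb:hsql://... URL) while no HSQLDB server process is listening. A quick way to confirm whether the URL is the problem is to open a plain JDBC connection outside Hibernate. The sketch below is illustrative only; the in-memory database name, user and password are made-up placeholders, not the project's real settings (which are not shown in the question).

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class HsqldbConnectionCheck {
        public static void main(String[] args) throws Exception {
            // Same driver class that appears in the stack trace above.
            Class.forName("org.hsqldb.jdbcDriver");

            // In-process, in-memory database: no HSQLDB server (and no socket) is needed.
            // "testdb", "sa" and the empty password are placeholder values.
            Connection con = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "");
            System.out.println("Connected: " + !con.isClosed());
            con.close();

            // A server-mode URL such as "jdbc:hsqldb:hsql://localhost/testdb" only works
            // if an HSQLDB server process is listening; otherwise it fails with the same
            // "socket creation error" seen in the log above.
        }
    }

If the in-process URL connects while the configured one fails, the fix is either to start the HSQLDB server before the web application, or to switch the connection URL in hibernate.cfg.xml to an in-process file: or mem: form.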

    Read the article

  • Hibernate Query Exception

    - by dharga
    I've got a hibernate query I'm trying to get working but keep getting an exception with a not so helpful stack trace. I'm including the code, the stack trace, and hibernate chatter before the exception is thrown. If you need me to include the entity classes for MessageTarget and GrpExclusion let me know in comments and I'll add them. public List<MessageTarget> findMessageTargets(int age, String gender, String businessCode, String groupId, String systemCode) { Session session = getHibernateTemplate().getSessionFactory().openSession(); List<MessageTarget> results = new ArrayList<MessageTarget>(); try { String hSql = "from MessageTarget mt where " + "not exists (select GrpExclusion where grp_no = ?) and " + "(trgt_gndr_cd = 'A' or trgt_gndr_cd = ?) and " + "sys_src_cd = ? and " + "bampi_busn_sgmnt_cd = ? and " + "trgt_low_age <= ? and " + "trgt_high_age >= ? and " + "(effectiveDate is null or effectiveDate <= ?) and " + "(termDate is null or termDate >= ?)"; results = session.createQuery(hSql) .setParameter(0, groupId) .setParameter(1, gender) .setParameter(2, systemCode) .setParameter(3, businessCode) .setParameter(4, age) .setParameter(5, age) .setParameter(6, new Date()) .setParameter(7, new Date()) .list(); } catch (Exception e) { System.err.println(e.getMessage()); e.printStackTrace(); } finally { session.close(); } return results; } Here's the stacktrace. [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R java.lang.NullPointerException [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.ast.util.SessionFactoryHelper.findSQLFunction(SessionFactoryHelper.java:365) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.ast.tree.IdentNode.getDataType(IdentNode.java:289) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.ast.tree.SelectClause.initializeExplicitSelectClause(SelectClause.java:165) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.ast.HqlSqlWalker.useSelectClause(HqlSqlWalker.java:831) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.ast.HqlSqlWalker.processQuery(HqlSqlWalker.java:619) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.antlr.HqlSqlBaseWalker.query(HqlSqlBaseWalker.java:672) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.antlr.HqlSqlBaseWalker.collectionFunctionOrSubselect(HqlSqlBaseWalker.java:4465) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.antlr.HqlSqlBaseWalker.comparisonExpr(HqlSqlBaseWalker.java:4165) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.antlr.HqlSqlBaseWalker.logicalExpr(HqlSqlBaseWalker.java:1864) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.antlr.HqlSqlBaseWalker.logicalExpr(HqlSqlBaseWalker.java:1839) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.antlr.HqlSqlBaseWalker.logicalExpr(HqlSqlBaseWalker.java:1789) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.antlr.HqlSqlBaseWalker.logicalExpr(HqlSqlBaseWalker.java:1789) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.antlr.HqlSqlBaseWalker.logicalExpr(HqlSqlBaseWalker.java:1789) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.antlr.HqlSqlBaseWalker.logicalExpr(HqlSqlBaseWalker.java:1789) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.antlr.HqlSqlBaseWalker.logicalExpr(HqlSqlBaseWalker.java:1789) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at 
org.hibernate.hql.antlr.HqlSqlBaseWalker.logicalExpr(HqlSqlBaseWalker.java:1789) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.antlr.HqlSqlBaseWalker.logicalExpr(HqlSqlBaseWalker.java:1789) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.antlr.HqlSqlBaseWalker.whereClause(HqlSqlBaseWalker.java:818) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.antlr.HqlSqlBaseWalker.query(HqlSqlBaseWalker.java:604) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.antlr.HqlSqlBaseWalker.selectStatement(HqlSqlBaseWalker.java:288) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.antlr.HqlSqlBaseWalker.statement(HqlSqlBaseWalker.java:231) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.ast.QueryTranslatorImpl.analyze(QueryTranslatorImpl.java:254) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.ast.QueryTranslatorImpl.doCompile(QueryTranslatorImpl.java:185) [5/6/10 15:05:21:041 EDT] 00000017 SystemErr R at org.hibernate.hql.ast.QueryTranslatorImpl.compile(QueryTranslatorImpl.java:136) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at org.hibernate.engine.query.HQLQueryPlan.<init>(HQLQueryPlan.java:101) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at org.hibernate.engine.query.HQLQueryPlan.<init>(HQLQueryPlan.java:80) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at org.hibernate.engine.query.QueryPlanCache.getHQLQueryPlan(QueryPlanCache.java:94) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at org.hibernate.impl.AbstractSessionImpl.getHQLQueryPlan(AbstractSessionImpl.java:156) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at org.hibernate.impl.AbstractSessionImpl.createQuery(AbstractSessionImpl.java:135) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at org.hibernate.impl.SessionImpl.createQuery(SessionImpl.java:1651) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.bcbst.bamp.ws.dao.MessageTargetDAOImpl.findMessageTargets(MessageTargetDAOImpl.java:30) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.bcbst.bamp.ws.common.AlertReminder.findMessageTargets(AlertReminder.java:22) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at java.lang.reflect.Method.invoke(Method.java:599) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at org.apache.axis2.jaxws.server.dispatcher.JavaDispatcher.invokeTargetOperation(JavaDispatcher.java:81) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at org.apache.axis2.jaxws.server.dispatcher.JavaBeanDispatcher.invoke(JavaBeanDispatcher.java:98) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at org.apache.axis2.jaxws.server.EndpointController.invoke(EndpointController.java:109) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at org.apache.axis2.jaxws.server.JAXWSMessageReceiver.receive(JAXWSMessageReceiver.java:159) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:188) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at org.apache.axis2.transport.http.HTTPTransportUtils.processHTTPPostRequest(HTTPTransportUtils.java:275) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at 
com.ibm.ws.websvcs.transport.http.WASAxis2Servlet.doPost(WASAxis2Servlet.java:1389) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at javax.servlet.http.HttpServlet.service(HttpServlet.java:738) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at javax.servlet.http.HttpServlet.service(HttpServlet.java:831) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1536) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:829) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:458) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.ws.webcontainer.servlet.ServletWrapperImpl.handleRequest(ServletWrapperImpl.java:175) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.ws.webcontainer.webapp.WebApp.handleRequest(WebApp.java:3742) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.ws.webcontainer.webapp.WebGroup.handleRequest(WebGroup.java:276) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:929) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.ws.webcontainer.WSWebContainer.handleRequest(WSWebContainer.java:1583) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.ws.webcontainer.channel.WCChannelLink.ready(WCChannelLink.java:178) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:455) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleNewInformation(HttpInboundLink.java:384) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.ready(HttpInboundLink.java:272) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.ws.tcp.channel.impl.NewConnectionInitialReadCallback.sendToDiscriminators(NewConnectionInitialReadCallback.java:214) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.ws.tcp.channel.impl.NewConnectionInitialReadCallback.complete(NewConnectionInitialReadCallback.java:113) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.ws.tcp.channel.impl.AioReadCompletionListener.futureCompleted(AioReadCompletionListener.java:165) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.io.async.AbstractAsyncFuture.invokeCallback(AbstractAsyncFuture.java:217) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.io.async.AsyncChannelFuture.fireCompletionActions(AsyncChannelFuture.java:161) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.io.async.AsyncFuture.completed(AsyncFuture.java:138) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.io.async.ResultHandler.complete(ResultHandler.java:204) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.io.async.ResultHandler.runEventProcessingLoop(ResultHandler.java:775) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.io.async.ResultHandler$2.run(ResultHandler.java:905) [5/6/10 15:05:21:057 EDT] 00000017 SystemErr R at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1550) Here's the hibernate chatter. 
[5/6/10 15:05:20:651 EDT] 00000017 XmlBeanDefini I org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions Loading XML bean definitions from class path resource [beans.xml] [5/6/10 15:05:20:823 EDT] 00000017 Configuration I org.slf4j.impl.JCLLoggerAdapter info configuring from url: file:/C:/workspaces/bampi/AlertReminderWS/WebContent/WEB-INF/classes/hibernate.cfg.xml [5/6/10 15:05:20:838 EDT] 00000017 Configuration I org.slf4j.impl.JCLLoggerAdapter info Configured SessionFactory: java:hibernate/Alert/SessionFactory1.0.3 [5/6/10 15:05:20:838 EDT] 00000017 AnnotationBin I org.hibernate.cfg.AnnotationBinder bindClass Binding entity from annotated class: com.bcbst.bamp.ws.model.MessageTarget [5/6/10 15:05:20:838 EDT] 00000017 EntityBinder I org.hibernate.cfg.annotations.EntityBinder bindTable Bind entity com.bcbst.bamp.ws.model.MessageTarget on table MessageTarget [5/6/10 15:05:20:854 EDT] 00000017 AnnotationBin I org.hibernate.cfg.AnnotationBinder bindClass Binding entity from annotated class: com.bcbst.bamp.ws.model.GrpExclusion [5/6/10 15:05:20:854 EDT] 00000017 EntityBinder I org.hibernate.cfg.annotations.EntityBinder bindTable Bind entity com.bcbst.bamp.ws.model.GrpExclusion on table GrpExclusion [5/6/10 15:05:20:854 EDT] 00000017 CollectionBin I org.hibernate.cfg.annotations.CollectionBinder bindOneToManySecondPass Mapping collection: com.bcbst.bamp.ws.model.MessageTarget.exclusions -> GrpExclusion [5/6/10 15:05:20:885 EDT] 00000017 AnnotationSes I org.springframework.orm.hibernate3.LocalSessionFactoryBean buildSessionFactory Building new Hibernate SessionFactory [5/6/10 15:05:20:901 EDT] 00000017 ConnectionPro I org.slf4j.impl.JCLLoggerAdapter info Initializing connection provider: org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider [5/6/10 15:05:20:901 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info RDBMS: Microsoft SQL Server, version: 9.00.4035 [5/6/10 15:05:20:901 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info JDBC driver: Microsoft SQL Server 2005 JDBC Driver, version: 1.2.2828.100 [5/6/10 15:05:20:901 EDT] 00000017 Dialect I org.slf4j.impl.JCLLoggerAdapter info Using dialect: org.hibernate.dialect.SQLServerDialect [5/6/10 15:05:20:916 EDT] 00000017 TransactionFa I org.slf4j.impl.JCLLoggerAdapter info Transaction strategy: org.springframework.orm.hibernate3.SpringTransactionFactory [5/6/10 15:05:20:916 EDT] 00000017 TransactionMa I org.slf4j.impl.JCLLoggerAdapter info No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended) [5/6/10 15:05:20:916 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Automatic flush during beforeCompletion(): disabled [5/6/10 15:05:20:916 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Automatic session close at end of transaction: disabled [5/6/10 15:05:20:916 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Scrollable result sets: enabled [5/6/10 15:05:20:916 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info JDBC3 getGeneratedKeys(): enabled [5/6/10 15:05:20:916 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Connection release mode: auto [5/6/10 15:05:20:916 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Default batch fetch size: 1 [5/6/10 15:05:20:916 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Generate SQL with comments: disabled [5/6/10 15:05:20:916 EDT] 00000017 
SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Order SQL updates by primary key: disabled [5/6/10 15:05:20:932 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Order SQL inserts for batching: disabled [5/6/10 15:05:20:932 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory [5/6/10 15:05:20:932 EDT] 00000017 ASTQueryTrans I org.slf4j.impl.JCLLoggerAdapter info Using ASTQueryTranslatorFactory [5/6/10 15:05:20:932 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Query language substitutions: {} [5/6/10 15:05:20:932 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info JPA-QL strict compliance: disabled [5/6/10 15:05:20:932 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Second-level cache: enabled [5/6/10 15:05:20:932 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Query cache: disabled [5/6/10 15:05:20:932 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Cache region factory : org.hibernate.cache.impl.bridge.RegionFactoryCacheProviderBridge [5/6/10 15:05:20:932 EDT] 00000017 RegionFactory I org.slf4j.impl.JCLLoggerAdapter info Cache provider: org.hibernate.cache.NoCacheProvider [5/6/10 15:05:20:948 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Optimize cache for minimal puts: disabled [5/6/10 15:05:20:948 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Structured second-level cache entries: disabled [5/6/10 15:05:20:948 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Statistics: disabled [5/6/10 15:05:20:948 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Deleted entity synthetic identifier rollback: disabled [5/6/10 15:05:20:948 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Default entity-mode: pojo [5/6/10 15:05:20:948 EDT] 00000017 SettingsFacto I org.slf4j.impl.JCLLoggerAdapter info Named query checking : enabled [5/6/10 15:05:20:979 EDT] 00000017 SessionFactor I org.slf4j.impl.JCLLoggerAdapter info building session factory [5/6/10 15:05:21:010 EDT] 00000017 SessionFactor I org.slf4j.impl.JCLLoggerAdapter info Factory name: java:hibernate/Alert/SessionFactory1.0.3 [5/6/10 15:05:21:010 EDT] 00000017 NamingHelper I org.slf4j.impl.JCLLoggerAdapter info JNDI InitialContext properties:{} [5/6/10 15:05:21:010 EDT] 00000017 NamingHelper I org.slf4j.impl.JCLLoggerAdapter info Creating subcontext: java:hibernate [5/6/10 15:05:21:010 EDT] 00000017 NamingHelper I org.slf4j.impl.JCLLoggerAdapter info Creating subcontext: Alert [5/6/10 15:05:21:010 EDT] 00000017 SessionFactor I org.slf4j.impl.JCLLoggerAdapter info Bound factory to JNDI name: java:hibernate/Alert/SessionFactory1.0.3 [5/6/10 15:05:21:026 EDT] 00000017 SessionFactor W org.slf4j.impl.JCLLoggerAdapter warn InitialContext did not implement EventContext [5/6/10 15:05:21:041 EDT] 00000017 PARSER E org.slf4j.impl.JCLLoggerAdapter error <AST>:0:0: unexpected end of subtree
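For what it's worth, the NullPointerException in SessionFactoryHelper.findSQLFunction, together with the closing "unexpected end of subtree" parser error, points at the subquery "not exists (select GrpExclusion where grp_no = ?)": it has no from clause, so the HQL parser tries to resolve GrpExclusion as a SQL function and fails. A correlated NOT EXISTS subquery in HQL normally reads like the sketch below; the property names ge.messageTarget and ge.grpNo are guesses, since the entity classes are not shown, and the remaining conditions stay exactly as in the original query.

    // Sketch only - a drop-in replacement for the hSql string inside findMessageTargets().
    // "ge.messageTarget" and "ge.grpNo" are hypothetical property names; adjust them to the
    // real mappings of GrpExclusion.
    String hSql = "from MessageTarget mt where "
        + "not exists (from GrpExclusion ge where ge.messageTarget = mt and ge.grpNo = ?) and "
        + "(trgt_gndr_cd = 'A' or trgt_gndr_cd = ?) and "
        + "sys_src_cd = ? and "
        + "bampi_busn_sgmnt_cd = ? and "
        + "trgt_low_age <= ? and "
        + "trgt_high_age >= ? and "
        + "(effectiveDate is null or effectiveDate <= ?) and "
        + "(termDate is null or termDate >= ?)";

The positional parameters keep the same order, so the existing setParameter(0..7) calls would not need to change.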

    Read the article

  • 8 Things You Can Do In Android’s Developer Options

    - by Chris Hoffman
    The Developer Options menu in Android is a hidden menu with a variety of advanced options. These options are intended for developers, but many of them will be interesting to geeks. You’ll have to perform a secret handshake to enable the Developer Options menu in the Settings screen, as it’s hidden from Android users by default. Follow the simple steps to quickly enable Developer Options. Enable USB Debugging “USB debugging” sounds like an option only an Android developer would need, but it’s probably the most widely used hidden option in Android. USB debugging allows applications on your computer to interface with your Android phone over the USB connection. This is required for a variety of advanced tricks, including rooting an Android phone, unlocking it, installing a custom ROM, or even using a desktop program that captures screenshots of your Android device’s screen. You can also use ADB commands to push and pull files between your device and your computer or create and restore complete local backups of your Android device without rooting. USB debugging can be a security concern, as it gives computers you plug your device into access to your phone. You could plug your device into a malicious USB charging port, which would try to compromise you. That’s why Android forces you to agree to a prompt every time you plug your device into a new computer with USB debugging enabled. Set a Desktop Backup Password If you use the above ADB trick to create local backups of your Android device over USB, you can protect them with a password with the Set a desktop backup password option here. This password encrypts your backups to secure them, so you won’t be able to access them if you forget the password. Disable or Speed Up Animations When you move between apps and screens in Android, you’re spending some of that time looking at animations and waiting for them to go away. You can disable these animations entirely by changing the Window animation scale, Transition animation scale, and Animator duration scale options here. If you like animations but just wish they were faster, you can speed them up. On a fast phone or tablet, this can make switching between apps nearly instant. If you thought your Android phone was speedy before, just try disabling animations and you’ll be surprised how much faster it can seem. Force-Enable FXAA For OpenGL Games If you have a high-end phone or tablet with great graphics performance and you play 3D games on it, there’s a way to make those games look even better. Just go to the Developer Options screen and enable the Force 4x MSAA option. This will force Android to use 4x multisample anti-aliasing in OpenGL ES 2.0 games and other apps. This requires more graphics power and will probably drain your battery a bit faster, but it will improve image quality in some games. This is a bit like force-enabling antialiasing using the NVIDIA Control Panel on a Windows gaming PC. See How Bad Task Killers Are We’ve written before about how task killers are worse than useless on Android. If you use a task killer, you’re just slowing down your system by throwing out cached data and forcing Android to load apps from system storage whenever you open them again. Don’t believe us? Enable the Don’t keep activities option on the Developer options screen and Android will force-close every app you use as soon as you exit it. 
Enable this app and use your phone normally for a few minutes — you’ll see just how harmful throwing out all that cached data is and how much it will slow down your phone. Don’t actually use this option unless you want to see how bad it is! It will make your phone perform much more slowly — there’s a reason Google has hidden these options away from average users who might accidentally change them. Fake Your GPS Location The Allow mock locations option allows you to set fake GPS locations, tricking Android into thinking you’re at a location where you actually aren’t. Use this option along with an app like Fake GPS location and you can trick your Android device and the apps running on it into thinking you’re at locations where you actually aren’t. How would this be useful? Well, you could fake a GPS check-in at a location without actually going there or confuse your friends in a location-tracking app by seemingly teleporting around the world. Stay Awake While Charging You can use Android’s Daydream Mode to display certain apps while charging your device. If you want to force Android to display a standard Android app that hasn’t been designed for Daydream Mode, you can enable the Stay awake option here. Android will keep your device’s screen on while charging and won’t turn it off. It’s like Daydream Mode, but can support any app and allows users to interact with them. Show Always-On-Top CPU Usage You can view CPU usage data by toggling the Show CPU usage option to On. This information will appear on top of whatever app you’re using. If you’re a Linux user, the three numbers on top probably look familiar — they represent the system load average. From left to right, the numbers represent your system load over the last one, five, and fifteen minutes. This isn’t the kind of thing you’d want enabled most of the time, but it can save you from having to install third-party floating CPU apps if you want to see CPU usage information for some reason. Most of the other options here will only be useful to developers debugging their Android apps. You shouldn’t start changing options you don’t understand. If you want to undo any of these changes, you can quickly erase all your custom options by sliding the switch at the top of the screen to Off.     

    Read the article

  • Monitoring ASP.NET Application

    - by imran_ku07
        Introduction:          There are times when you may need to monitor your ASP.NET application's CPU and memory consumption, so that you can fine-tune your ASP.NET application(whether Web Form, MVC or WebMatrix). Also, sometimes you may need to see all the exceptions(and their details) of your application raising, whether they are handled or not. If you are creating an ASP.NET application in .NET Framework 4.0, then you can easily monitor your application's CPU or memory consumption and see how many exceptions your application raising. In this article I will show you how you can do this.       Description:           With .NET Framework 4.0, you can turn on the monitoring of CPU and memory consumption by setting AppDomain.MonitoringEnabled property to true. Also, in .NET Framework 4.0, you can register a callback method to AppDomain.FirstChanceException event to monitor the exceptions being thrown within your application's AppDomain. Turning on the monitoring and registering a callback method will add some additional overhead to your application, which will hurt your application performance. So it is better to turn on these features only if you have following properties in web.config file,   <add key="AppDomainMonitoringEnabled" value="true"/> <add key="FirstChanceExceptionMonitoringEnabled" value="true"/>             In case if you wonder what does FirstChanceException mean. It simply means the first notification of an exception raised by your application. Even CLR invokes this notification before the catch block that handles the exception. Now just update global.asax.cs file as,   string _item = "__RequestExceptionKey"; protected void Application_Start() { SetupMonitoring(); } private void SetupMonitoring() { bool appDomainMonitoringEnabled, firstChanceExceptionMonitoringEnabled; bool.TryParse(ConfigurationManager.AppSettings["AppDomainMonitoringEnabled"], out appDomainMonitoringEnabled); bool.TryParse(ConfigurationManager.AppSettings["FirstChanceExceptionMonitoringEnabled"], out firstChanceExceptionMonitoringEnabled); if (appDomainMonitoringEnabled) { AppDomain.MonitoringIsEnabled = true; } if (firstChanceExceptionMonitoringEnabled) { AppDomain.CurrentDomain.FirstChanceException += (object source, FirstChanceExceptionEventArgs e) => { if (HttpContext.Current == null)// If no context available, ignore it return; if (HttpContext.Current.Items[_item] == null) HttpContext.Current.Items[_item] = new RequestException { Exceptions = new List<Exception>() }; (HttpContext.Current.Items[_item] as RequestException).Exceptions.Add(e.Exception); }; } } protected void Application_EndRequest() { if (Context.Items[_item] != null) { //Only add the request if atleast one exception is raised var reqExc = Context.Items[_item] as RequestException; reqExc.Url = Request.Url.AbsoluteUri; Application.Lock(); if (Application["AllExc"] == null) Application["AllExc"] = new List<RequestException>(); (Application["AllExc"] as List<RequestException>).Add(reqExc); Application.UnLock(); } }               Now browse to Monitoring.cshtml file, you will see the following screen,                            The above screen shows you the total bytes allocated, total bytes in use and CPU usage of your application. The above screen also shows you all the exceptions raised by your application which is very helpful for you. I have uploaded a sample project on github at here. You can find Monitoring.cshtml file on this sample project. You can use this approach in ASP.NET MVC, ASP.NET WebForm and WebMatrix application.       
Summary:          It is important for administrators and developers to be able to manage and monitor their web application after deploying it to a production server. This article helps administrators and developers see the memory and CPU usage of their web application, and also see all the exceptions the application is throwing, whether they are swallowed or not. Hopefully you will enjoy this article too.

    Read the article

  • SQL SERVER – Parsing SSIS Catalog Messages – Notes from the Field #030

    - by Pinal Dave
    [Note from Pinal]: This is a new episode of Notes from the Field series. SQL Server Integration Service (SSIS) is one of the most key essential part of the entire Business Intelligence (BI) story. It is a platform for data integration and workflow applications. The tool may also be used to automate maintenance of SQL Server databases and updates to multidimensional cube data. In this episode of the Notes from the Field series I requested SSIS Expert Andy Leonard to discuss one of the most interesting concepts of SSIS Catalog Messages. There are plenty of interesting and useful information captured in the SSIS catalog and we will learn together how to explore the same. The SSIS Catalog captures a lot of cool information by default. Here’s a query I use to parse messages from the catalog.operation_messages table in the SSISDB database, where the logged messages are stored. This query is set up to parse a default message transmitted by the Lookup Transformation. It’s one of my favorite messages in the SSIS log because it gives me excellent information when I’m tuning SSIS data flows. The message reads similar to: Data Flow Task:Information: The Lookup processed 4485 rows in the cache. The processing time was 0.015 seconds. The cache used 1376895 bytes of memory. The query: USE SSISDB GO DECLARE @MessageSourceType INT = 60 DECLARE @StartOfIDString VARCHAR(100) = 'The Lookup processed ' DECLARE @ProcessingTimeString VARCHAR(100) = 'The processing time was ' DECLARE @CacheUsedString VARCHAR(100) = 'The cache used ' DECLARE @StartOfIDSearchString VARCHAR(100) = '%' + @StartOfIDString + '%' DECLARE @ProcessingTimeSearchString VARCHAR(100) = '%' + @ProcessingTimeString + '%' DECLARE @CacheUsedSearchString VARCHAR(100) = '%' + @CacheUsedString + '%' SELECT operation_id , SUBSTRING(MESSAGE, (PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1)) - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1))) AS LookupRowsCount , SUBSTRING(MESSAGE, (PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1)) - (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))) AS LookupProcessingTime , CASE WHEN (CONVERT(numeric(3,3),SUBSTRING(MESSAGE, (PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1)) - (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1))))) = 0 THEN 0 ELSE CONVERT(bigint,SUBSTRING(MESSAGE, (PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1)) - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1)))) / CONVERT(numeric(3,3),SUBSTRING(MESSAGE, (PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@ProcessingTimeSearchString,MESSAGE) + LEN(@ProcessingTimeString) + 1)) - (PATINDEX(@ProcessingTimeSearchString, MESSAGE) + LEN(@ProcessingTimeString) + 1)))) END AS LookupRowsPerSecond , SUBSTRING(MESSAGE, (PATINDEX(@CacheUsedSearchString,MESSAGE) + LEN(@CacheUsedString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@CacheUsedSearchString,MESSAGE) + LEN(@CacheUsedString) 
+ 1)) - (PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1))) AS LookupBytesUsed ,CASE WHEN (CONVERT(bigint,SUBSTRING(MESSAGE, (PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1)) - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1)))))= 0 THEN 0 ELSE CONVERT(bigint,SUBSTRING(MESSAGE, (PATINDEX(@CacheUsedSearchString,MESSAGE) + LEN(@CacheUsedString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@CacheUsedSearchString,MESSAGE) + LEN(@CacheUsedString) + 1)) - (PATINDEX(@CacheUsedSearchString, MESSAGE) + LEN(@CacheUsedString) + 1)))) / CONVERT(bigint,SUBSTRING(MESSAGE, (PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1), ((CHARINDEX(' ', MESSAGE, PATINDEX(@StartOfIDSearchString,MESSAGE) + LEN(@StartOfIDString) + 1)) - (PATINDEX(@StartOfIDSearchString, MESSAGE) + LEN(@StartOfIDString) + 1)))) END AS LookupBytesPerRow FROM [catalog].[operation_messages] WHERE message_source_type = @MessageSourceType AND MESSAGE LIKE @StartOfIDSearchString GO Note that you have to set some parameter values: @MessageSourceType [int] – represents the message source type value from the following results: Value     Description 10           Entry APIs, such as T-SQL and CLR Stored procedures 20           External process used to run package (ISServerExec.exe) 30           Package-level objects 40           Control Flow tasks 50           Control Flow containers 60           Data Flow task 70           Custom execution message Note: Taken from Reza Rad’s (excellent!) helper.MessageSourceType table found here. @StartOfIDString [VarChar(100)] – use this to uniquely identify the message field value you wish to parse. In this case, the string ‘The Lookup processed ‘ identifies all the Lookup Transformation messages I desire to parse. @ProcessingTimeString [VarChar(100)] – this parameter is message-specific. I use this parameter to specifically search the message field value for the beginning of the Lookup Processing Time value. For this execution, I use the string ‘The processing time was ‘. @CacheUsedString [VarChar(100)] – this parameter is also message-specific. I use this parameter to specifically search the message field value for the beginning of the Lookup Cache  Used value. It returns the memory used, in bytes. For this execution, I use the string ‘The cache used ‘. The other parameters are built from variations of the parameters listed above. The query parses the values into text. The string values are converted to numeric values for ratio calculations; LookupRowsPerSecond and LookupBytesPerRow. Since ratios involve division, CASE statements check for denominators that equal 0. Here are the results in an SSMS grid: This is not the only way to retrieve this information. And much of the code lends itself to conversion to functions. If there is interest, I will share the functions in an upcoming post. If you want to get started with SSIS with the help of experts, read more over at Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SSIS
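The T-SQL above is essentially doing string surgery with PATINDEX, CHARINDEX and SUBSTRING. If you pull the messages out of catalog.operation_messages into client code instead, the same extraction is a short regular expression. The sketch below is purely illustrative (it is not part of the article's T-SQL approach) and assumes the default English Lookup message format quoted above.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class LookupMessageParser {
        // Matches the default Lookup message, e.g.
        // "The Lookup processed 4485 rows in the cache. The processing time was 0.015 seconds.
        //  The cache used 1376895 bytes of memory."
        private static final Pattern LOOKUP_MSG = Pattern.compile(
            "The Lookup processed (\\d+) rows in the cache\\. " +
            "The processing time was ([\\d.]+) seconds\\. " +
            "The cache used (\\d+) bytes of memory\\.");

        public static void main(String[] args) {
            String message = "The Lookup processed 4485 rows in the cache. "
                + "The processing time was 0.015 seconds. "
                + "The cache used 1376895 bytes of memory.";

            Matcher m = LOOKUP_MSG.matcher(message);
            if (m.find()) {
                long rows = Long.parseLong(m.group(1));
                double seconds = Double.parseDouble(m.group(2));
                long bytes = Long.parseLong(m.group(3));
                // Guard against division by zero, like the CASE expressions in the T-SQL.
                double rowsPerSecond = seconds == 0 ? 0 : rows / seconds;
                long bytesPerRow = rows == 0 ? 0 : bytes / rows;
                System.out.printf("rows=%d, rows/s=%.1f, bytes/row=%d%n",
                        rows, rowsPerSecond, bytesPerRow);
            }
        }
    }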

    Read the article

  • Understanding the 'High Performance' meaning in Extreme Transaction Processing

    - by kyap
Despite my previous blog entries on SOA/BPM and Identity Management, the domain I am most passionate about is definitely Extreme Transaction Processing, commonly called XTP. I came across XTP back in 2007 while I was still FMW Product Manager in EMEA. At that time Oracle acquired a company called Tangosol, which owned a unique product called Coherence that we renamed to Oracle Coherence. Beside this innovative renaming of the product, to be honest, I didn't know much about it, except that it was a "distributed in-memory cache for Extreme Transaction Processing"... still not very helpful. In general, when people don't fully understand a technology or a concept, they tend to find shortcuts, correct or not, to justify their lack of understanding... and of course I was part of this category of individuals. My shortcut was "Oracle Coherence Cache helps to improve Performance". Excellent marketing slogan... but not very meaningful either. By chance I was able to get away from that group quickly, in July 2007 at Thames Valley Park (UK), after I attended one of the most interesting workshops of my 10-year career at Oracle, delivered by Brian Oliver.

The biggest mistake I made was to assume that performance improvement with Coherence was about response time. That assumption could be considered legitimate at the time, because after all caches help to reduce latency on cached data access, and hence reduce response time. But like all caches, you need to define caching and expiration policies, think about the cache-miss strategy, and most of the time partially rewrite your application to work with the cache. As a result, the expected benefit vanishes... so, not very useful then?

The key mistake I made was my perception, or obsession, about how performance improvement should be driven, and I strongly believe this is still a common problem for most developers. We all know that the performance of a system is generally presented as Capacity (or Throughput), with two important dimensions: Speed (response time) and Volume (load):

Capacity (TPS) = Volume (T) / Speed (S)

To increase the Capacity, we can either reduce the Speed (in terms of response time) or increase the Volume. However, we tend to focus only on reducing the Speed dimension, perhaps because it is more concrete and tangible to measure, and nicer to present to our management because of the direct impact on the end-user experience. On the other hand, we assume the Volume can be addressed by the underlying hardware or software stack, so if we need more capacity (scale out), we just add more hardware or software. Unfortunately, reality proves that IT is never as ideal as we assume...

The challenge with the Speed improvement approach is that it is generally difficult and costly to make things that are already fast... faster. And adding Coherence will not necessarily help either. Even if we manage to do so, the Capacity cannot increase forever because... the Speed can be influenced by the Volume. For all systems, the performance profile always looks the same: in all traditional systems, increasing the Volume (Transactions) will also increase the Speed (Response Time) at some point. The reason is simple: most of the time the application logic was not designed to scale. As an example, if you have a while-loop in your application, it is natural to expect that parsing 200 entries will require double the execution time of 100 entries.
If you need to "speed up" the execution, you can only upgrade your hardware (scale up) with a faster CPU and/or network to reduce latency. That is technically limited and economically inefficient. And this is exactly where XTP and Coherence kick in. The primary objective of XTP is to design applications that can scale out to increase the Volume, by applying coding techniques that keep the execution time as constant as possible, independently of the amount of runtime data being manipulated. It is not just about having an application run as fast as possible, but about having a much more predictable system, with constant response time and linear scaling, so we can easily increase throughput by adding more hardware in parallel. It is generally combined with the Low Latency Programming model, where we try to optimize network usage as much as possible, either from the programmatic angle (fewer network hops to complete a task) and/or from the hardware angle (faster network equipment). In this picture, Oracle Coherence can be considered a software-level XTP enabler, via the Distributed Cache, because it can guarantee:
- Constant data object access time, independent of the number of objects and the Coherence cluster size
- Data object distribution by affinity, for in-memory data grouping
- In-place data processing, for parallel execution
To summarize, Oracle Coherence is indeed useful to improve your application performance, just not in the way we commonly think. It is not about the Speed itself, but about the overall Capacity under extreme load while keeping a consistent Speed. In the future I will keep adding new blog entries around this topic, with some sample code and experience sharing that I have captured over the last few years. In the meanwhile, if you want to know more about Oracle Coherence, I strongly suggest you start by checking how our worldwide customers are using Oracle Coherence, then start playing with the product through our tutorial. Have fun!
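To make the "constant access time" point a little more concrete, here is a minimal Java sketch of the basic Coherence API. The cache name and keys are made up for the example, and it assumes a Coherence cluster (or a single local member) is reachable with the default configuration.

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class CoherenceSketch {
        public static void main(String[] args) {
            // Joins (or starts) a cluster member and obtains a distributed cache.
            // "orders" is just an example cache name.
            NamedCache orders = CacheFactory.getCache("orders");

            // put/get behave like a Map, but each entry lives on whichever cluster
            // member owns the key - access cost stays flat as the cluster grows.
            orders.put(1001, "pending");
            String status = (String) orders.get(1001);
            System.out.println("order 1001 is " + status);

            // For in-place processing, an InvocableMap.EntryProcessor can be sent to
            // the member that owns the key instead of pulling the data to the client.

            CacheFactory.shutdown();
        }
    }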

    Read the article

  • Efficient way to render tile-based map in Java

    - by Lucius
    Some time ago I posted here because I was having some memory issues with a game I'm working on. That has been pretty much solved thanks to some suggestions here, so I decided to come back with another problem I'm having. Basically, I feel that too much of the CPU is being used when rendering the map. I have a Core i5-2500 processor and when running the game, the CPU usage is about 35% - and I can't accept that that's just how it has to be. This is how I'm going about rendering the map: I have the X and Y coordinates of the player, so I'm not drawing the whole map, just the visible portion of it; The number of visible tiles on screen varies according to the resolution chosen by the player (the CPU usage is 35% here when playing at a resolution of 1440x900); If the tile is "empty", I just skip drawing it (this didn't visibly lower the CPU usage, but reduced the drawing time in about 20ms); The map is composed of 5 layers - for more details; The tiles are 32x32 pixels; And just to be on the safe side, I'll post the code for drawing the game here, although it's as messy and unreadable as it can be T_T (I'll try to make it a little readable) private void drawGame(Graphics2D g2d){ //Width and Height of the visible portion of the map (not of the screen) int visionWidht = visibleCols * TILE_SIZE; int visionHeight = visibleRows * TILE_SIZE; //Since the map can be smaller than the screen, I center it just to be sure int xAdjust = (getWidth() - visionWidht) / 2; int yAdjust = (getHeight() - visionHeight) / 2; //This "deducedX" thing is to move the map a few pixels horizontally, since the player moves by pixels and not full tiles int playerDrawX = listOfCharacters.get(0).getX(); int deducedX = 0; if (listOfCharacters.get(0).currentCol() - visibleCols / 2 >= 0) { playerDrawX = visibleCols / 2 * TILE_SIZE; map_draw_col = listOfCharacters.get(0).currentCol() - visibleCols / 2; deducedX = listOfCharacters.get(0).getXCol(); } //"deducedY" is the same deal as "deducedX", but vertically int playerDrawY = listOfCharacters.get(0).getY(); int deducedY = 0; if (listOfCharacters.get(0).currentRow() - visibleRows / 2 >= 0) { playerDrawY = visibleRows / 2 * TILE_SIZE; map_draw_row = listOfCharacters.get(0).currentRow() - visibleRows / 2; deducedY = listOfCharacters.get(0).getYRow(); } int max_cols = visibleCols + map_draw_col; if (max_cols >= map.getCols()) { max_cols = map.getCols() - 1; deducedX = 0; map_draw_col = max_cols - visibleCols + 1; playerDrawX = listOfCharacters.get(0).getX() - map_draw_col * TILE_SIZE; } int max_rows = visibleRows + map_draw_row; if (max_rows >= map.getRows()) { max_rows = map.getRows() - 1; deducedY = 0; map_draw_row = max_rows - visibleRows + 1; playerDrawY = listOfCharacters.get(0).getY() - map_draw_row * TILE_SIZE; } //map_draw_row and map_draw_col representes the coordinate of the upper left tile on the screen //iterate through all the tiles on screen and draw them - this is what consumes most of the CPU for (int col = map_draw_col; col <= max_cols; col++) { for (int row = map_draw_row; row <= max_rows; row++) { Tile[] tiles = map.getTiles(col, row); for(int layer = 0; layer < tiles.length; layer++){ Tile currentTile = tiles[layer]; boolean shouldDraw = true; //I only draw the tile if it exists and is not empty (id=-1) if(currentTile != null && currentTile.getId() >= 0){ //The layers above 1 can be draw behing or infront of the player according to where it's standing if(layer > 1 && currentTile.getId() >= 0){ if(playerBehind(col, row, layer, listOfCharacters.get(0))){ 
behinds.get(0).add(new int[]{col, row}); //the tiles that are in front of the player won't be drawn right now shouldDraw = false; } } if(shouldDraw){ g2d.drawImage( tiles[layer].getImage(), (col-map_draw_col)*TILE_SIZE - deducedX + xAdjust, (row-map_draw_row)*TILE_SIZE - deducedY + yAdjust, null); } } } } } } There's some more code in this method but nothing relevant to this question. Basically, the biggest problem is that I iterate over around 5000 tiles (in this specific resolution) 60 times each second. I thought about rendering the visible portion of the map once and storing it in a BufferedImage, then, when the player moves, shifting the whole image the same amount in the opposite direction and drawing only the tiles that newly appear on screen, but if I do it like that, I won't be able to have animated tiles (at least I think). That being said, any suggestions?
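Regarding the BufferedImage idea mentioned above: one common compromise is to pre-render only the static layers of the visible window into an off-screen image, blit that image once per frame, and draw animated tiles and characters live on top, rebuilding the off-screen image only when the camera moves onto a new tile. Below is a rough sketch of that idea; the class, tile size, and the drawStaticTile helper are made up for illustration and are not drop-in code for the game above.

    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;

    class StaticLayerCache {
        private static final int TILE_SIZE = 32;
        private BufferedImage cached;              // pre-rendered static layers
        private int cachedCol = Integer.MIN_VALUE; // top-left tile the cache was built for
        private int cachedRow = Integer.MIN_VALUE;

        /** Returns the off-screen image, rebuilding it only when the visible tile window moves. */
        BufferedImage render(int firstCol, int firstRow, int visibleCols, int visibleRows) {
            if (cached == null || firstCol != cachedCol || firstRow != cachedRow) {
                if (cached == null) {
                    cached = new BufferedImage(visibleCols * TILE_SIZE, visibleRows * TILE_SIZE,
                                               BufferedImage.TYPE_INT_ARGB);
                }
                Graphics2D g = cached.createGraphics();
                // (clear the image first if tiles can be partially transparent)
                for (int col = firstCol; col < firstCol + visibleCols; col++) {
                    for (int row = firstRow; row < firstRow + visibleRows; row++) {
                        drawStaticTile(g, col, row,
                                       (col - firstCol) * TILE_SIZE,
                                       (row - firstRow) * TILE_SIZE);
                    }
                }
                g.dispose();
                cachedCol = firstCol;
                cachedRow = firstRow;
            }
            return cached; // blit this once per frame, then draw animated tiles and the player on top
        }

        // Hypothetical helper: draws only the non-animated layers of one tile,
        // i.e. the per-tile work the existing drawGame() loop already does.
        private void drawStaticTile(Graphics2D g, int col, int row, int x, int y) {
            // left empty in this sketch
        }
    }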

    Read the article

  • How and where to implement basic authentication in Kibana 3

    - by Jabb
    I have put my elasticsearch server behind a Apache reverse proxy that provides basic authentication. Authenticating to Apache directly from the browser works fine. However, when I use Kibana 3 to access the server, I receive authentication errors. Obviously because no auth headers are sent along with Kibana's Ajax calls. I added the below to elastic-angular-client.js in the Kibana vendor directory to implement authentication quick and dirty. But for some reason it does not work. $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); What is the best approach and place to implement basic authentication in Kibana? /*! elastic.js - v1.1.1 - 2013-05-24 * https://github.com/fullscale/elastic.js * Copyright (c) 2013 FullScale Labs, LLC; Licensed MIT */ /*jshint browser:true */ /*global angular:true */ 'use strict'; /* Angular.js service wrapping the elastic.js API. This module can simply be injected into your angular controllers. */ angular.module('elasticjs.service', []) .factory('ejsResource', ['$http', function ($http) { return function (config) { var // use existing ejs object if it exists ejs = window.ejs || {}, /* results are returned as a promise */ promiseThen = function (httpPromise, successcb, errorcb) { return httpPromise.then(function (response) { (successcb || angular.noop)(response.data); return response.data; }, function (response) { (errorcb || angular.noop)(response.data); return response.data; }); }; // check if we have a config object // if not, we have the server url so // we convert it to a config object if (config !== Object(config)) { config = {server: config}; } // set url to empty string if it was not specified if (config.server == null) { config.server = ''; } /* implement the elastic.js client interface for angular */ ejs.client = { server: function (s) { if (s == null) { return config.server; } config.server = s; return this; }, post: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); console.log($http.defaults.headers); path = config.server + path; var reqConfig = {url: path, data: data, method: 'POST'}; return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb); }, get: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); path = config.server + path; // no body on get request, data will be request params var reqConfig = {url: path, params: data, method: 'GET'}; return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb); }, put: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); path = config.server + path; var reqConfig = {url: path, data: data, method: 'PUT'}; return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb); }, del: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); path = config.server + path; var reqConfig = {url: path, data: data, method: 'DELETE'}; return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb); }, head: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); path = config.server + path; // no body on HEAD request, data will be request params var reqConfig = {url: path, params: data, method: 'HEAD'}; return 
$http(angular.extend(reqConfig, config)) .then(function (response) { (successcb || angular.noop)(response.headers()); return response.headers(); }, function (response) { (errorcb || angular.noop)(undefined); return undefined; }); } }; return ejs; }; }]); UPDATE 1: I implemented Matts suggestion. However, the server returns a weird response. It seems that the authorization header is not working. Could it have to do with the fact, that I am running Kibana on port 81 and elasticsearch on 8181? OPTIONS /solar_vendor/_search HTTP/1.1 Host: 46.252.46.173:8181 User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 Accept-Encoding: gzip, deflate Origin: http://46.252.46.173:81 Access-Control-Request-Method: POST Access-Control-Request-Headers: authorization,content-type Connection: keep-alive Pragma: no-cache Cache-Control: no-cache This is the response HTTP/1.1 401 Authorization Required Date: Fri, 08 Nov 2013 23:47:02 GMT WWW-Authenticate: Basic realm="Username/Password" Vary: Accept-Encoding Content-Encoding: gzip Content-Length: 346 Connection: close Content-Type: text/html; charset=iso-8859-1 UPDATE 2: Updated all instances with the modified headers in these Kibana files root@localhost:/var/www/kibana# grep -r 'ejsResource(' . ./src/app/controllers/dash.js: $scope.ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}}); ./src/app/services/querySrv.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}}); ./src/app/services/filterSrv.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}}); ./src/app/services/dashboard.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}}); And modified my vhost conf for the reverse proxy like this <VirtualHost *:8181> ProxyRequests Off ProxyPass / http://127.0.0.1:9200/ ProxyPassReverse / https://127.0.0.1:9200/ <Location /> Order deny,allow Allow from all AuthType Basic AuthName “Username/Password” AuthUserFile /var/www/cake2.2.4/.htpasswd Require valid-user Header always set Access-Control-Allow-Methods "GET, POST, DELETE, OPTIONS, PUT" Header always set Access-Control-Allow-Headers "Content-Type, X-Requested-With, X-HTTP-Method-Override, Origin, Accept, Authorization" Header always set Access-Control-Allow-Credentials "true" Header always set Cache-Control "max-age=0" Header always set Access-Control-Allow-Origin * </Location> ErrorLog ${APACHE_LOG_DIR}/error.log </VirtualHost> Apache sends back the new response headers but the request header still seems to be wrong somewhere. Authentication just doesn't work. 
Request Headers OPTIONS /solar_vendor/_search HTTP/1.1 Host: 46.252.26.173:8181 User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 Accept-Encoding: gzip, deflate Origin: http://46.252.26.173:81 Access-Control-Request-Method: POST Access-Control-Request-Headers: authorization,content-type Connection: keep-alive Pragma: no-cache Cache-Control: no-cache Response Headers HTTP/1.1 401 Authorization Required Date: Sat, 09 Nov 2013 08:48:48 GMT Access-Control-Allow-Methods: GET, POST, DELETE, OPTIONS, PUT Access-Control-Allow-Headers: Content-Type, X-Requested-With, X-HTTP-Method-Override, Origin, Accept, Authorization Access-Control-Allow-Credentials: true Cache-Control: max-age=0 Access-Control-Allow-Origin: * WWW-Authenticate: Basic realm="Username/Password" Vary: Accept-Encoding Content-Encoding: gzip Content-Length: 346 Connection: close Content-Type: text/html; charset=iso-8859-1 SOLUTION: After doing some more research, I found out that this is definitely a configuration issue with regard to CORS. There are quite a few posts available regarding that topic, but it appears that in order to solve my problem it would be necessary to make some very granular configurations on Apache and also make sure that the right stuff is sent from the browser. So I reconsidered the strategy and found a much simpler solution: just modify the vhost reverse proxy config to move the elasticsearch server AND Kibana onto the same HTTP port. This also adds even better security to Kibana. This is what I did: <VirtualHost *:8181> ProxyRequests Off ProxyPass /bigdatadesk/ http://127.0.0.1:81/bigdatadesk/src/ ProxyPassReverse /bigdatadesk/ http://127.0.0.1:81/bigdatadesk/src/ ProxyPass / http://127.0.0.1:9200/ ProxyPassReverse / https://127.0.0.1:9200/ <Location /> Order deny,allow Allow from all AuthType Basic AuthName “Username/Password” AuthUserFile /var/www/.htpasswd Require valid-user </Location> ErrorLog ${APACHE_LOG_DIR}/error.log </VirtualHost>
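    As a footnote to the solution above: the vhost relies on mod_proxy, mod_proxy_http and mod_headers being enabled, and on the .htpasswd file referenced by AuthUserFile already existing. A minimal sketch of those prerequisites, assuming a Debian/Ubuntu-style Apache layout (the paths and the user name are placeholders):

        # enable the modules the reverse proxy and the auth/CORS headers depend on
        a2enmod proxy proxy_http headers
        # create the password file referenced by AuthUserFile (-c creates it; drop -c to append more users)
        htpasswd -c /var/www/.htpasswd kibanauser
        # reload Apache so the vhost and modules take effect
        service apache2 reload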

    Read the article

  • What is correct HTTP status code when redirecting to a login page?

    - by PHP_Jedi
    When a user is not logged in and tries to access an page that requires login, what is the correct HTTP status code for a redirect to the login page? I don't feel that any of the 3xx fit that description. 10.3.1 300 Multiple Choices The requested resource corresponds to any one of a set of representations, each with its own specific location, and agent- driven negotiation information (section 12) is being provided so that the user (or user agent) can select a preferred representation and redirect its request to that location. Unless it was a HEAD request, the response SHOULD include an entity containing a list of resource characteristics and location(s) from which the user or user agent can choose the one most appropriate. The entity format is specified by the media type given in the Content- Type header field. Depending upon the format and the capabilities of the user agent, selection of the most appropriate choice MAY be performed automatically. However, this specification does not define any standard for such automatic selection. If the server has a preferred choice of representation, it SHOULD include the specific URI for that representation in the Location field; user agents MAY use the Location field value for automatic redirection. This response is cacheable unless indicated otherwise. 10.3.2 301 Moved Permanently The requested resource has been assigned a new permanent URI and any future references to this resource SHOULD use one of the returned URIs. Clients with link editing capabilities ought to automatically re-link references to the Request-URI to one or more of the new references returned by the server, where possible. This response is cacheable unless indicated otherwise. The new permanent URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s). If the 301 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued. Note: When automatically redirecting a POST request after receiving a 301 status code, some existing HTTP/1.0 user agents will erroneously change it into a GET request. 10.3.3 302 Found The requested resource resides temporarily under a different URI. Since the redirection might be altered on occasion, the client SHOULD continue to use the Request-URI for future requests. This response is only cacheable if indicated by a Cache-Control or Expires header field. The temporary URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s). If the 302 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued. Note: RFC 1945 and RFC 2068 specify that the client is not allowed to change the method on the redirected request. However, most existing user agent implementations treat 302 as if it were a 303 response, performing a GET on the Location field-value regardless of the original request method. 
The status codes 303 and 307 have been added for servers that wish to make unambiguously clear which kind of reaction is expected of the client. 10.3.4 303 See Other The response to the request can be found under a different URI and SHOULD be retrieved using a GET method on that resource. This method exists primarily to allow the output of a POST-activated script to redirect the user agent to a selected resource. The new URI is not a substitute reference for the originally requested resource. The 303 response MUST NOT be cached, but the response to the second (redirected) request might be cacheable. The different URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s). Note: Many pre-HTTP/1.1 user agents do not understand the 303 status. When interoperability with such clients is a concern, the 302 status code may be used instead, since most user agents react to a 302 response as described here for 303. 10.3.5 304 Not Modified If the client has performed a conditional GET request and access is allowed, but the document has not been modified, the server SHOULD respond with this status code. The 304 response MUST NOT contain a message-body, and thus is always terminated by the first empty line after the header fields. The response MUST include the following header fields: - Date, unless its omission is required by section 14.18.1 If a clockless origin server obeys these rules, and proxies and clients add their own Date to any response received without one (as already specified by [RFC 2068], section 14.19), caches will operate correctly. - ETag and/or Content-Location, if the header would have been sent in a 200 response to the same request - Expires, Cache-Control, and/or Vary, if the field-value might differ from that sent in any previous response for the same variant If the conditional GET used a strong cache validator (see section 13.3.3), the response SHOULD NOT include other entity-headers. Otherwise (i.e., the conditional GET used a weak validator), the response MUST NOT include other entity-headers; this prevents inconsistencies between cached entity-bodies and updated headers. If a 304 response indicates an entity not currently cached, then the cache MUST disregard the response and repeat the request without the conditional. If a cache uses a received 304 response to update a cache entry, the cache MUST update the entry to reflect any new field values given in the response. 10.3.6 305 Use Proxy The requested resource MUST be accessed through the proxy given by the Location field. The Location field gives the URI of the proxy. The recipient is expected to repeat this single request via the proxy. 305 responses MUST only be generated by origin servers. Note: RFC 2068 was not clear that 305 was intended to redirect a single request, and to be generated by origin servers only. Not observing these limitations has significant security consequences. 10.3.7 306 (Unused) The 306 status code was used in a previous version of the specification, is no longer used, and the code is reserved. 10.3.8 307 Temporary Redirect The requested resource resides temporarily under a different URI. Since the redirection MAY be altered on occasion, the client SHOULD continue to use the Request-URI for future requests. This response is only cacheable if indicated by a Cache-Control or Expires header field. 
The temporary URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s) , since many pre-HTTP/1.1 user agents do not understand the 307 status. Therefore, the note SHOULD contain the information necessary for a user to repeat the original request on the new URI. If the 307 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued. I'm using 302 for now, until I find THE correct answer.
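    For what it's worth, common practice matches what the question settles on: answer the request for the protected page with a 302 (or a 303, when only HTTP/1.1 clients matter) whose Location header points at the login page. A minimal PHP sketch, with /login.php and the session key as placeholder names:

        <?php
        session_start();
        // not logged in: send the browser to the login page instead of the protected content
        if (empty($_SESSION['user_id'])) {
            header('Location: /login.php', true, 302); // third argument sets the status code explicitly
            exit; // stop here so the protected page body is never emitted
        }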

    Read the article

  • Hibernate unable to instantiate default tuplizer - cannot find getter

    - by ZeldaPinwheel
    I'm trying to use Hibernate to persist a class that looks like this: public class Item implements Serializable, Comparable<Item> { // Item id private Integer id; // Description of item in inventory private String description; // Number of items described by this inventory item private int count; //Category item belongs to private String category; // Date item was purchased private GregorianCalendar purchaseDate; public Item() { } public Integer getId() { return id; } public void setId(Integer id) { this.id = id; } public String getDescription() { return description; } public void setDescription(String description) { this.description = description; } public int getCount() { return count; } public void setCount(int count) { this.count = count; } public String getCategory() { return category; } public void setCategory(String category) { this.category = category; } public GregorianCalendar getPurchaseDate() { return purchaseDate; } public void setPurchasedate(GregorianCalendar purchaseDate) { this.purchaseDate = purchaseDate; } My Hibernate mapping file contains the following: <property name="puchaseDate" type="java.util.GregorianCalendar"> <column name="purchase_date"></column> </property> When I try to run, I get error messages indicating there is no getter function for the purchaseDate attribute: 577 [main] INFO org.hibernate.connection.DriverManagerConnectionProvider - Using Hibernate built-in connection pool (not for production use!) 577 [main] INFO org.hibernate.connection.DriverManagerConnectionProvider - Hibernate connection pool size: 20 577 [main] INFO org.hibernate.connection.DriverManagerConnectionProvider - autocommit mode: false 592 [main] INFO org.hibernate.connection.DriverManagerConnectionProvider - using driver: com.mysql.jdbc.Driver at URL: jdbc:mysql://localhost:3306/home_inventory 592 [main] INFO org.hibernate.connection.DriverManagerConnectionProvider - connection properties: {user=root, password=****} 1078 [main] INFO org.hibernate.cfg.SettingsFactory - RDBMS: MySQL, version: 5.1.45 1078 [main] INFO org.hibernate.cfg.SettingsFactory - JDBC driver: MySQL-AB JDBC Driver, version: mysql-connector-java-5.1.12 ( Revision: ${bzr.revision-id} ) 1103 [main] INFO org.hibernate.dialect.Dialect - Using dialect: org.hibernate.dialect.MySQLDialect 1107 [main] INFO org.hibernate.engine.jdbc.JdbcSupportLoader - Disabling contextual LOB creation as JDBC driver reported JDBC version [3] less than 4 1109 [main] INFO org.hibernate.transaction.TransactionFactoryFactory - Using default transaction strategy (direct JDBC transactions) 1110 [main] INFO org.hibernate.transaction.TransactionManagerLookupFactory - No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended) 1110 [main] INFO org.hibernate.cfg.SettingsFactory - Automatic flush during beforeCompletion(): disabled 1110 [main] INFO org.hibernate.cfg.SettingsFactory - Automatic session close at end of transaction: disabled 1110 [main] INFO org.hibernate.cfg.SettingsFactory - JDBC batch size: 15 1110 [main] INFO org.hibernate.cfg.SettingsFactory - JDBC batch updates for versioned data: disabled 1111 [main] INFO org.hibernate.cfg.SettingsFactory - Scrollable result sets: enabled 1111 [main] INFO org.hibernate.cfg.SettingsFactory - JDBC3 getGeneratedKeys(): enabled 1111 [main] INFO org.hibernate.cfg.SettingsFactory - Connection release mode: auto 1111 [main] INFO org.hibernate.cfg.SettingsFactory - Maximum outer join fetch depth: 2 1111 [main] INFO 
org.hibernate.cfg.SettingsFactory - Default batch fetch size: 1 1111 [main] INFO org.hibernate.cfg.SettingsFactory - Generate SQL with comments: disabled 1111 [main] INFO org.hibernate.cfg.SettingsFactory - Order SQL updates by primary key: disabled 1111 [main] INFO org.hibernate.cfg.SettingsFactory - Order SQL inserts for batching: disabled 1112 [main] INFO org.hibernate.cfg.SettingsFactory - Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory 1113 [main] INFO org.hibernate.hql.ast.ASTQueryTranslatorFactory - Using ASTQueryTranslatorFactory 1113 [main] INFO org.hibernate.cfg.SettingsFactory - Query language substitutions: {} 1113 [main] INFO org.hibernate.cfg.SettingsFactory - JPA-QL strict compliance: disabled 1113 [main] INFO org.hibernate.cfg.SettingsFactory - Second-level cache: enabled 1113 [main] INFO org.hibernate.cfg.SettingsFactory - Query cache: disabled 1113 [main] INFO org.hibernate.cfg.SettingsFactory - Cache region factory : org.hibernate.cache.impl.NoCachingRegionFactory 1113 [main] INFO org.hibernate.cfg.SettingsFactory - Optimize cache for minimal puts: disabled 1114 [main] INFO org.hibernate.cfg.SettingsFactory - Structured second-level cache entries: disabled 1117 [main] INFO org.hibernate.cfg.SettingsFactory - Echoing all SQL to stdout 1118 [main] INFO org.hibernate.cfg.SettingsFactory - Statistics: disabled 1118 [main] INFO org.hibernate.cfg.SettingsFactory - Deleted entity synthetic identifier rollback: disabled 1118 [main] INFO org.hibernate.cfg.SettingsFactory - Default entity-mode: pojo 1118 [main] INFO org.hibernate.cfg.SettingsFactory - Named query checking : enabled 1118 [main] INFO org.hibernate.cfg.SettingsFactory - Check Nullability in Core (should be disabled when Bean Validation is on): enabled 1151 [main] INFO org.hibernate.impl.SessionFactoryImpl - building session factory org.hibernate.HibernateException: Unable to instantiate default tuplizer [org.hibernate.tuple.entity.PojoEntityTuplizer] at org.hibernate.tuple.entity.EntityTuplizerFactory.constructTuplizer(EntityTuplizerFactory.java:110) at org.hibernate.tuple.entity.EntityTuplizerFactory.constructDefaultTuplizer(EntityTuplizerFactory.java:135) at org.hibernate.tuple.entity.EntityEntityModeToTuplizerMapping.<init>(EntityEntityModeToTuplizerMapping.java:80) at org.hibernate.tuple.entity.EntityMetamodel.<init>(EntityMetamodel.java:323) at org.hibernate.persister.entity.AbstractEntityPersister.<init>(AbstractEntityPersister.java:475) at org.hibernate.persister.entity.SingleTableEntityPersister.<init>(SingleTableEntityPersister.java:133) at org.hibernate.persister.PersisterFactory.createClassPersister(PersisterFactory.java:84) at org.hibernate.impl.SessionFactoryImpl.<init>(SessionFactoryImpl.java:295) at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1385) at service.HibernateSessionFactory.currentSession(HibernateSessionFactory.java:53) at service.ItemSvcHibImpl.generateReport(ItemSvcHibImpl.java:78) at service.test.ItemSvcTest.testGenerateReport(ItemSvcTest.java:226) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at junit.framework.TestCase.runTest(TestCase.java:164) at junit.framework.TestCase.runBare(TestCase.java:130) at junit.framework.TestResult$1.protect(TestResult.java:106) at 
junit.framework.TestResult.runProtected(TestResult.java:124) at junit.framework.TestResult.run(TestResult.java:109) at junit.framework.TestCase.run(TestCase.java:120) at junit.framework.TestSuite.runTest(TestSuite.java:230) at junit.framework.TestSuite.run(TestSuite.java:225) at org.eclipse.jdt.internal.junit.runner.junit3.JUnit3TestReference.run(JUnit3TestReference.java:130) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197) Caused by: java.lang.reflect.InvocationTargetException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.hibernate.tuple.entity.EntityTuplizerFactory.constructTuplizer(EntityTuplizerFactory.java:107) ... 29 more Caused by: org.hibernate.PropertyNotFoundException: Could not find a getter for puchaseDate in class domain.Item at org.hibernate.property.BasicPropertyAccessor.createGetter(BasicPropertyAccessor.java:328) at org.hibernate.property.BasicPropertyAccessor.getGetter(BasicPropertyAccessor.java:321) at org.hibernate.mapping.Property.getGetter(Property.java:304) at org.hibernate.tuple.entity.PojoEntityTuplizer.buildPropertyGetter(PojoEntityTuplizer.java:299) at org.hibernate.tuple.entity.AbstractEntityTuplizer.<init>(AbstractEntityTuplizer.java:158) at org.hibernate.tuple.entity.PojoEntityTuplizer.<init>(PojoEntityTuplizer.java:77) ... 34 more I'm new to Hibernate, so I don't know all the ins and outs, but I do have the getter and setter for the purchaseDate attribute. I don't know what I'm missing here - does anyone else? Thanks!
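    Not an authoritative answer, but the stack trace itself points at a spelling mismatch: the mapping says puchaseDate (missing the "r") while the class exposes getPurchaseDate(), and the setter is written setPurchasedate with a lowercase "d". A sketch of the corrected pieces; type="calendar" is Hibernate's built-in java.util.Calendar mapping and is an assumption here, so keep the original type if it was intentional:

        <!-- hbm mapping: the property name must match the getPurchaseDate()/setPurchaseDate() pair -->
        <property name="purchaseDate" type="calendar">
            <column name="purchase_date"/>
        </property>

    and in Item.java:

        // keep the JavaBean accessor casing consistent so Hibernate's reflection lookup succeeds
        public GregorianCalendar getPurchaseDate() { return purchaseDate; }
        public void setPurchaseDate(GregorianCalendar purchaseDate) { this.purchaseDate = purchaseDate; }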

    Read the article

  • Hibernate : load and get

    - by Albert Kam
    According to the tutorial : http://jpa.ezhibernate.com/Javacode/learn.jsp?tutorial=27hibernateloadvshibernateget, If you initialize a JavaBean instance with a load method call, you can only access the properties of that JavaBean, for the first time, within the transactional context in which it was initialized. If you try to access the various properties of the JavaBean after the transaction that loaded it has been committed, you'll get an exception, a LazyInitializationException, as Hibernate no longer has a valid transactional context to use to hit the database. But with my experiment, using hibernate 3.6, and postgres 9, it doesnt throw any exception at all. Am i missing something ? Here's my code : import org.hibernate.Session; public class AppLoadingEntities { /** * @param args */ public static void main(String[] args) { HibernateUtil.beginTransaction(); Session session = HibernateUtil.getSession(); EntityUser userFromGet = get(session); EntityUser userFromLoad = load(session); // finish the transaction session.getTransaction().commit(); // try fetching field value from entity bean that is fetched via get outside transaction, and it'll be okay System.out.println("userFromGet.getId() : " + userFromGet.getId()); System.out.println("userFromGet.getName() : " + userFromGet.getName()); // fetching field from entity bean that is fetched via load outside transaction, and it'll be errornous // NOTE : but after testing, load seems to be okay, what gives ? ask forums try { System.out.println("userFromLoad.getId() : " + userFromLoad.getId()); System.out.println("userFromLoad.getName() : " + userFromLoad.getName()); } catch(Exception e) { System.out.println("error while fetching entity that is fetched from load : " + e.getMessage()); } } private static EntityUser load(Session session) { EntityUser user = (EntityUser) session.load(EntityUser.class, 1l); System.out.println("user fetched with 'load' inside transaction : " + user); return user; } private static EntityUser get(Session session) { // safe to set it to 1, coz the table got recreated at every run of this app EntityUser user = (EntityUser) session.get(EntityUser.class, 1l); System.out.println("user fetched with 'get' : " + user); return user; } } And here's the output : 88 [main] INFO org.hibernate.annotations.common.Version - Hibernate Commons Annotations 3.2.0.Final 93 [main] INFO org.hibernate.cfg.Environment - Hibernate 3.6.0.Final 94 [main] INFO org.hibernate.cfg.Environment - hibernate.properties not found 96 [main] INFO org.hibernate.cfg.Environment - Bytecode provider name : javassist 98 [main] INFO org.hibernate.cfg.Environment - using JDK 1.4 java.sql.Timestamp handling 139 [main] INFO org.hibernate.cfg.Configuration - configuring from resource: /hibernate.cfg.xml 139 [main] INFO org.hibernate.cfg.Configuration - Configuration resource: /hibernate.cfg.xml 172 [main] WARN org.hibernate.util.DTDEntityResolver - recognized obsolete hibernate namespace http://hibernate.sourceforge.net/. Use namespace http://www.hibernate.org/dtd/ instead. Refer to Hibernate 3.6 Migration Guide! 
191 [main] INFO org.hibernate.cfg.Configuration - Configured SessionFactory: null 237 [main] INFO org.hibernate.cfg.AnnotationBinder - Binding entity from annotated class: EntityUser 263 [main] INFO org.hibernate.cfg.annotations.EntityBinder - Bind entity EntityUser on table MstUser 293 [main] INFO org.hibernate.cfg.Configuration - Hibernate Validator not found: ignoring 296 [main] INFO org.hibernate.cfg.search.HibernateSearchEventListenerRegister - Unable to find org.hibernate.search.event.FullTextIndexEventListener on the classpath. Hibernate Search is not enabled. 300 [main] INFO org.hibernate.connection.DriverManagerConnectionProvider - Using Hibernate built-in connection pool (not for production use!) 300 [main] INFO org.hibernate.connection.DriverManagerConnectionProvider - Hibernate connection pool size: 20 300 [main] INFO org.hibernate.connection.DriverManagerConnectionProvider - autocommit mode: false 309 [main] INFO org.hibernate.connection.DriverManagerConnectionProvider - using driver: org.postgresql.Driver at URL: jdbc:postgresql://localhost:5432/hibernate 309 [main] INFO org.hibernate.connection.DriverManagerConnectionProvider - connection properties: {user=sofco, password=****} 354 [main] INFO org.hibernate.cfg.SettingsFactory - Database -> name : PostgreSQL version : 9.0.1 major : 9 minor : 0 354 [main] INFO org.hibernate.cfg.SettingsFactory - Driver -> name : PostgreSQL Native Driver version : PostgreSQL 9.0 JDBC4 (build 801) major : 9 minor : 0 372 [main] INFO org.hibernate.dialect.Dialect - Using dialect: org.hibernate.dialect.PostgreSQLDialect 382 [main] INFO org.hibernate.engine.jdbc.JdbcSupportLoader - Disabling contextual LOB creation as createClob() method threw error : java.lang.reflect.InvocationTargetException 383 [main] INFO org.hibernate.transaction.TransactionFactoryFactory - Transaction strategy: org.hibernate.transaction.JDBCTransactionFactory 384 [main] INFO org.hibernate.transaction.TransactionManagerLookupFactory - No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended) 384 [main] INFO org.hibernate.cfg.SettingsFactory - Automatic flush during beforeCompletion(): disabled 384 [main] INFO org.hibernate.cfg.SettingsFactory - Automatic session close at end of transaction: disabled 384 [main] INFO org.hibernate.cfg.SettingsFactory - JDBC batch size: 15 384 [main] INFO org.hibernate.cfg.SettingsFactory - JDBC batch updates for versioned data: disabled 384 [main] INFO org.hibernate.cfg.SettingsFactory - Scrollable result sets: enabled 384 [main] INFO org.hibernate.cfg.SettingsFactory - JDBC3 getGeneratedKeys(): enabled 384 [main] INFO org.hibernate.cfg.SettingsFactory - Connection release mode: auto 385 [main] INFO org.hibernate.cfg.SettingsFactory - Default batch fetch size: 1 385 [main] INFO org.hibernate.cfg.SettingsFactory - Generate SQL with comments: disabled 385 [main] INFO org.hibernate.cfg.SettingsFactory - Order SQL updates by primary key: disabled 385 [main] INFO org.hibernate.cfg.SettingsFactory - Order SQL inserts for batching: disabled 385 [main] INFO org.hibernate.cfg.SettingsFactory - Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory 385 [main] INFO org.hibernate.hql.ast.ASTQueryTranslatorFactory - Using ASTQueryTranslatorFactory 385 [main] INFO org.hibernate.cfg.SettingsFactory - Query language substitutions: {} 385 [main] INFO org.hibernate.cfg.SettingsFactory - JPA-QL strict compliance: disabled 385 [main] INFO org.hibernate.cfg.SettingsFactory - 
Second-level cache: enabled 385 [main] INFO org.hibernate.cfg.SettingsFactory - Query cache: disabled 385 [main] INFO org.hibernate.cfg.SettingsFactory - Cache region factory : org.hibernate.cache.impl.NoCachingRegionFactory 386 [main] INFO org.hibernate.cfg.SettingsFactory - Optimize cache for minimal puts: disabled 386 [main] INFO org.hibernate.cfg.SettingsFactory - Structured second-level cache entries: disabled 388 [main] INFO org.hibernate.cfg.SettingsFactory - Echoing all SQL to stdout 389 [main] INFO org.hibernate.cfg.SettingsFactory - Statistics: disabled 389 [main] INFO org.hibernate.cfg.SettingsFactory - Deleted entity synthetic identifier rollback: disabled 389 [main] INFO org.hibernate.cfg.SettingsFactory - Default entity-mode: pojo 389 [main] INFO org.hibernate.cfg.SettingsFactory - Named query checking : enabled 389 [main] INFO org.hibernate.cfg.SettingsFactory - Check Nullability in Core (should be disabled when Bean Validation is on): enabled 402 [main] INFO org.hibernate.impl.SessionFactoryImpl - building session factory 549 [main] INFO org.hibernate.impl.SessionFactoryObjectFactory - Not binding factory to JNDI, no JNDI name configured Hibernate: select entityuser0_.id as id0_0_, entityuser0_.name as name0_0_, entityuser0_.password as password0_0_ from MstUser entityuser0_ where entityuser0_.id=? user fetched with 'get' : 1:Albert Kam xzy:abc user fetched with 'load' inside transaction : 1:Albert Kam xzy:abc userFromGet.getId() : 1 userFromGet.getName() : Albert Kam xzy userFromLoad.getId() : 1 userFromLoad.getName() : Albert Kam xzy
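    One plausible explanation for the missing exception, based on the code shown: the println inside load() concatenates the proxy into a string while the session and transaction are still open, which initializes it on the spot, so the later property access outside the transaction hits an already-populated object. A sketch of the variant that should trigger the LazyInitializationException (no access to the proxy until after the session is closed):

        Session session = HibernateUtil.getSession();
        session.beginTransaction();
        // load() hands back an uninitialized proxy; do not touch it while the session is open
        EntityUser lazyUser = (EntityUser) session.load(EntityUser.class, 1L);
        session.getTransaction().commit();
        session.close(); // with no open session the proxy can no longer initialize itself
        System.out.println(lazyUser.getId());   // the identifier lives in the proxy, so this still works
        System.out.println(lazyUser.getName()); // expected to throw LazyInitializationException here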

    Read the article

  • Asp.Net MVC and ajax async callback execution order

    - by lrb
    I have been sorting through this issue all day and hope someone can help pinpoint my problem. I have created a "asynchronous progress callback" type functionality in my app using ajax. When I strip the functionality out into a test application I get the desired results. See image below: Desired Functionality When I tie the functionality into my single page application using the same code I get a sort of blocking issue where all requests are responded to only after the last task has completed. In the test app above all request are responded to in order. The server reports a ("pending") state for all requests until the controller method has completed. Can anyone give me a hint as to what could cause the change in behavior? Not Desired Desired Fiddler Request/Response GET http://localhost:12028/task/status?_=1383333945335 HTTP/1.1 X-ProgressBar-TaskId: 892183768 Accept: */* X-Requested-With: XMLHttpRequest Referer: http://localhost:12028/ Accept-Language: en-US Accept-Encoding: gzip, deflate User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; Trident/6.0) Connection: Keep-Alive DNT: 1 Host: localhost:12028 HTTP/1.1 200 OK Cache-Control: private Content-Type: text/html; charset=utf-8 Vary: Accept-Encoding Server: Microsoft-IIS/8.0 X-AspNetMvc-Version: 3.0 X-AspNet-Version: 4.0.30319 X-SourceFiles: =?UTF-8?B?QzpcUHJvamVjdHNcVEVNUFxQcm9ncmVzc0Jhclx0YXNrXHN0YXR1cw==?= X-Powered-By: ASP.NET Date: Fri, 01 Nov 2013 21:39:08 GMT Content-Length: 25 Iteration completed... Not Desired Fiddler Request/Response GET http://localhost:60171/_Test/status?_=1383341766884 HTTP/1.1 X-ProgressBar-TaskId: 838217998 Accept: */* X-Requested-With: XMLHttpRequest Referer: http://localhost:60171/Report/Index Accept-Language: en-US Accept-Encoding: gzip, deflate User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; Trident/6.0) Connection: Keep-Alive DNT: 1 Host: localhost:60171 Pragma: no-cache Cookie: ASP.NET_SessionId=rjli2jb0wyjrgxjqjsicdhdi; AspxAutoDetectCookieSupport=1; TTREPORTS_1_0=CC2A501EF499F9F...; __RequestVerificationToken=6klOoK6lSXR51zCVaDNhuaF6Blual0l8_JH1QTW9W6L-3LroNbyi6WvN6qiqv-PjqpCy7oEmNnAd9s0UONASmBQhUu8aechFYq7EXKzu7WSybObivq46djrE1lvkm6hNXgeLNLYmV0ORmGJeLWDyvA2 HTTP/1.1 200 OK Cache-Control: private Content-Type: text/html; charset=utf-8 Vary: Accept-Encoding Server: Microsoft-IIS/8.0 X-AspNetMvc-Version: 4.0 X-AspNet-Version: 4.0.30319 X-SourceFiles: =?UTF-8?B?QzpcUHJvamVjdHNcSUxlYXJuLlJlcG9ydHMuV2ViXHRydW5rXElMZWFybi5SZXBvcnRzLldlYlxfVGVzdFxzdGF0dXM=?= X-Powered-By: ASP.NET Date: Fri, 01 Nov 2013 21:37:48 GMT Content-Length: 25 Iteration completed... The only difference in the two requests headers besides the auth tokens is "Pragma: no-cache" in the request and the asp.net version in the response. 
Thanks Update - Code posted (I probably need to indicate this code originated from an article by Dino Esposito ) var ilProgressWorker = function () { var that = {}; that._xhr = null; that._taskId = 0; that._timerId = 0; that._progressUrl = ""; that._abortUrl = ""; that._interval = 500; that._userDefinedProgressCallback = null; that._taskCompletedCallback = null; that._taskAbortedCallback = null; that.createTaskId = function () { var _minNumber = 100, _maxNumber = 1000000000; return _minNumber + Math.floor(Math.random() * _maxNumber); }; // Set progress callback that.callback = function (userCallback, completedCallback, abortedCallback) { that._userDefinedProgressCallback = userCallback; that._taskCompletedCallback = completedCallback; that._taskAbortedCallback = abortedCallback; return this; }; // Set frequency of refresh that.setInterval = function (interval) { that._interval = interval; return this; }; // Abort the operation that.abort = function () { // if (_xhr !== null) // _xhr.abort(); if (that._abortUrl != null && that._abortUrl != "") { $.ajax({ url: that._abortUrl, cache: false, headers: { 'X-ProgressBar-TaskId': that._taskId } }); } }; // INTERNAL FUNCTION that._internalProgressCallback = function () { that._timerId = window.setTimeout(that._internalProgressCallback, that._interval); $.ajax({ url: that._progressUrl, cache: false, headers: { 'X-ProgressBar-TaskId': that._taskId }, success: function (status) { if (that._userDefinedProgressCallback != null) that._userDefinedProgressCallback(status); }, complete: function (data) { var i=0; }, }); }; // Invoke the URL and monitor its progress that.start = function (url, progressUrl, abortUrl) { that._taskId = that.createTaskId(); that._progressUrl = progressUrl; that._abortUrl = abortUrl; // Place the Ajax call _xhr = $.ajax({ url: url, cache: false, headers: { 'X-ProgressBar-TaskId': that._taskId }, complete: function () { if (_xhr.status != 0) return; if (that._taskAbortedCallback != null) that._taskAbortedCallback(); that.end(); }, success: function (data) { if (that._taskCompletedCallback != null) that._taskCompletedCallback(data); that.end(); } }); // Start the progress callback (if any) if (that._userDefinedProgressCallback == null || that._progressUrl === "") return this; that._timerId = window.setTimeout(that._internalProgressCallback, that._interval); }; // Finalize the task that.end = function () { that._taskId = 0; window.clearTimeout(that._timerId); } return that; };
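    A detail worth noting in the failing trace is the ASP.NET_SessionId cookie: ASP.NET serializes requests that need writable session state, so the status polls queue up behind the long-running action and are only answered once it finishes, which matches the "all responses arrive at the end" behaviour. A hedged sketch of the usual mitigation, assuming the status endpoint can live on its own controller (ProgressStore is a hypothetical store shared with the worker action):

        using System.Web.Mvc;
        using System.Web.SessionState;

        // ReadOnly (or Disabled) session state lets the progress polls run concurrently
        // with the long-running action instead of waiting on its session lock.
        [SessionState(SessionStateBehavior.ReadOnly)]
        public class TaskStatusController : Controller
        {
            [HttpGet]
            public ContentResult Status()
            {
                // the task id travels in the X-ProgressBar-TaskId header set by ilProgressWorker
                string taskId = Request.Headers["X-ProgressBar-TaskId"];
                string message = ProgressStore.Get(taskId); // hypothetical shared progress store
                return Content(message ?? string.Empty);
            }
        }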

    Read the article

  • Help to edit the Recent Posts Wordpress widget to display in all 3 languages at once

    - by CreativEliza
    Site link: http://nuestrafrontera.org/wordpress/ I want the feed of recent post titles to show in the sidebar for all 3 languages, separated by language. So, for example, under Recent Posts the sidebar would have "English" and then the latest 3 posts in English, then "Español" and the latest 3 in Spanish and then French. All in a list in the column and appearing on all pages with the sidebar in all languages. I am using the most current version of Wordpress with the WPML plugin. I believe the Wordpress widget for Recent Posts needs to be tweaked to do this. Here is the code (from wp-includes/default-widgets.php): class WP_Widget_Recent_Posts extends WP_Widget { function WP_Widget_Recent_Posts() { $widget_ops = array('classname' => 'widget_recent_entries', 'description' => __( "The most recent posts on your blog") ); $this->WP_Widget('recent-posts', __('Recent Posts'), $widget_ops); $this->alt_option_name = 'widget_recent_entries'; add_action( 'save_post', array(&$this, 'flush_widget_cache') ); add_action( 'deleted_post', array(&$this, 'flush_widget_cache') ); add_action( 'switch_theme', array(&$this, 'flush_widget_cache') ); } function widget($args, $instance) { $cache = wp_cache_get('widget_recent_posts', 'widget'); if ( !is_array($cache) ) $cache = array(); if ( isset($cache[$args['widget_id']]) ) { echo $cache[$args['widget_id']]; return; } ob_start(); extract($args); $title = apply_filters('widget_title', empty($instance['title']) ? __('Recent Posts') : $instance['title']); if ( !$number = (int) $instance['number'] ) $number = 10; else if ( $number < 1 ) $number = 1; else if ( $number > 15 ) $number = 15; $r = new WP_Query(array('showposts' => $number, 'nopaging' => 0, 'post_status' => 'publish', 'caller_get_posts' => 1)); if ($r->have_posts()) : ?> <?php echo $before_widget; ?> <?php if ( $title ) echo $before_title . $title . $after_title; ?> <ul> <?php while ($r->have_posts()) : $r->the_post(); ?> <li><a href="<?php the_permalink() ?>" title="<?php echo esc_attr(get_the_title() ? get_the_title() : get_the_ID()); ?>"><?php if ( get_the_title() ) the_title(); else the_ID(); ?> </a></li> <?php endwhile; ?> </ul> <?php echo $after_widget; ?> <?php wp_reset_query(); // Restore global post data stomped by the_post(). 
endif; $cache[$args['widget_id']] = ob_get_flush(); wp_cache_add('widget_recent_posts', $cache, 'widget'); } function update( $new_instance, $old_instance ) { $instance = $old_instance; $instance['title'] = strip_tags($new_instance['title']); $instance['number'] = (int) $new_instance['number']; $this->flush_widget_cache(); $alloptions = wp_cache_get( 'alloptions', 'options' ); if ( isset($alloptions['widget_recent_entries']) ) delete_option('widget_recent_entries'); return $instance; } function flush_widget_cache() { wp_cache_delete('widget_recent_posts', 'widget'); } function form( $instance ) { $title = esc_attr($instance['title']); if ( !$number = (int) $instance['number'] ) $number = 5; ?> <p><label for="<?php echo $this->get_field_id('title'); ?>"><?php _e('Title:'); ?></label> <input class="widefat" id="<?php echo $this->get_field_id('title'); ?>" name="<?php echo $this->get_field_name('title'); ?>" type="text" value="<?php echo $title; ?>" /></p> <p><label for="<?php echo $this->get_field_id('number'); ?>"><?php _e('Number of posts to show:'); ?></label> <input id="<?php echo $this->get_field_id('number'); ?>" name="<?php echo $this->get_field_name('number'); ?>" type="text" value="<?php echo $number; ?>" size="3" /><br /> <small><?php _e('(at most 15)'); ?></small></p> <?php } }
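    Rather than patching the core widget, one option is a small sidebar template snippet that loops over the three language codes and runs a three-post query for each. This is only a sketch and assumes WPML's global $sitepress object and its switch_lang()/get_current_language() methods, which the WPML versions of this era expose:

        <?php
        // sketch: latest 3 posts per language in the sidebar, grouped under a language heading
        global $sitepress;
        $languages    = array('en' => 'English', 'es' => 'Español', 'fr' => 'Français');
        $visitor_lang = $sitepress->get_current_language(); // remember the visitor's language
        foreach ($languages as $code => $label) {
            $sitepress->switch_lang($code); // make the query run against this language
            echo '<h4>' . $label . '</h4><ul>';
            $recent = new WP_Query(array('showposts' => 3, 'post_status' => 'publish', 'caller_get_posts' => 1));
            while ($recent->have_posts()) {
                $recent->the_post();
                echo '<li><a href="' . get_permalink() . '">' . get_the_title() . '</a></li>';
            }
            echo '</ul>';
            wp_reset_query(); // restore global post data, as the widget itself does
        }
        $sitepress->switch_lang($visitor_lang); // put the visitor's language back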

    Read the article

  • "power limit notification" clobbering on 12G Dell servers with RHEL6

    - by Andrew B
    Server: Poweredge r620 OS: RHEL 6.4 Kernel: 2.6.32-358.18.1.el6.x86_64 I'm experiencing application alarms in my production environment. Critical CPU hungry processes are being starved of resources and causing a processing backlog. The problem is happening on all the 12th Generation Dell servers (r620s) in a recently deployed cluster. As near as I can tell, instances of this happening are matching up to peak CPU utilization, accompanied by massive amounts of "power limit notification" spam in dmesg. An excerpt of one of these events: Nov 7 10:15:15 someserver [.crit] CPU12: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU0: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU6: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU14: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU18: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU2: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU4: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU16: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU0: Package power limit notification (total events = 11) Nov 7 10:15:15 someserver [.crit] CPU6: Package power limit notification (total events = 13) Nov 7 10:15:15 someserver [.crit] CPU14: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU18: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU20: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU8: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU2: Package power limit notification (total events = 12) Nov 7 10:15:15 someserver [.crit] CPU10: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU22: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU4: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU16: Package power limit notification (total events = 13) Nov 7 10:15:15 someserver [.crit] CPU20: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU8: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU10: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU22: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU15: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU3: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU1: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU5: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU17: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU13: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU15: Package power limit notification (total events = 375) Nov 7 10:15:15 someserver [.crit] CPU3: Package power limit notification (total events = 374) Nov 7 10:15:15 someserver [.crit] CPU1: Package power limit notification (total events = 376) Nov 7 10:15:15 someserver [.crit] CPU5: Package power limit 
notification (total events = 376) Nov 7 10:15:15 someserver [.crit] CPU7: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU19: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU17: Package power limit notification (total events = 377) Nov 7 10:15:15 someserver [.crit] CPU9: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU21: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU23: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU11: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU13: Package power limit notification (total events = 376) Nov 7 10:15:15 someserver [.crit] CPU7: Package power limit notification (total events = 375) Nov 7 10:15:15 someserver [.crit] CPU19: Package power limit notification (total events = 375) Nov 7 10:15:15 someserver [.crit] CPU9: Package power limit notification (total events = 374) Nov 7 10:15:15 someserver [.crit] CPU21: Package power limit notification (total events = 375) Nov 7 10:15:15 someserver [.crit] CPU23: Package power limit notification (total events = 374) A little Google Fu reveals that this is typically associated with the CPU running hot, or voltage regulation kicking in. I don't think that's what is happening though. Temperature sensors for all servers in the cluster are running fine, Power Cap Policy is disabled in the iDRAC, and my System Profile is set to "Performance" on all of these servers: # omreport chassis biossetup | grep -A10 'System Profile' System Profile Settings ------------------------------------------ System Profile : Performance CPU Power Management : Maximum Performance Memory Frequency : Maximum Performance Turbo Boost : Enabled C1E : Disabled C States : Disabled Monitor/Mwait : Enabled Memory Patrol Scrub : Standard Memory Refresh Rate : 1x Memory Operating Voltage : Auto Collaborative CPU Performance Control : Disabled A Dell mailing list post describes the symptoms almost perfectly. Dell suggested that the author try using the Performance profile, but that didn't help. He ended up applying some settings in Dell's guide for configuring a server for low latency environments and one of those settings (or a combination thereof) seems to have fixed the problem. Kernel.org bug #36182 notes that power-limit interrupt debugging was enabled by default, which is causing performance degradation in scenarios where CPU voltage regulation is kicking in. A RHN KB article (RHN login required) mentions a problem impacting PE r620 and r720 servers not running the Performance profile, and recommends an update to a kernel released two weeks ago. ...Except we are running the Performance profile... Everything I can find online is running me in circles here. What's the heck is going on?
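    One cheap check before digging further into BIOS profiles is whether the notification spam coincides with a thermal/threshold interrupt storm on the starved cores; the TRM and THR counters below exist in /proc/interrupts on RHEL 6, and this is only a diagnostic sketch:

        # how many power limit notifications are already in the log
        grep -c 'power limit notification' /var/log/messages
        # snapshot the thermal (TRM) and threshold (THR) interrupt counters, wait, and compare
        egrep 'TRM|THR' /proc/interrupts > /tmp/irq.before
        sleep 60
        egrep 'TRM|THR' /proc/interrupts > /tmp/irq.after
        diff /tmp/irq.before /tmp/irq.after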

    Read the article

< Previous Page | 128 129 130 131 132 133 134 135 136 137 138 139  | Next Page >