Search Results

Search found 21089 results on 844 pages for 'virtual memory'.


  • Cisco: unable to negotiate IP using IPCP with Windows server

    - by lnk
    I am connecting to Windows server using PPP (for vpn), I establish connection but server does not respond me for my address requests: *Mar 23 00:40:06.055: Vi1 MS-CHAP-V2: I CHALLENGE id 0 len 25 from "MSDC" *Mar 23 00:40:06.063: Vi1 MS CHAP V2: Using hostname from interface CHAP *Mar 23 00:40:06.063: Vi1 MS CHAP V2: Using password from interface CHAP *Mar 23 00:40:06.067: Vi1 MS-CHAP-V2: O RESPONSE id 0 len 69 from "XXX" *Mar 23 00:40:06.087: Vi1 PPP: I pkt type 0xC223, datagramsize 50 link[ppp] *Mar 23 00:40:06.087: Vi1 MS-CHAP-V2: I SUCCESS id 0 len 46 msg is "S=XXX" *Mar 23 00:40:06.087: Vi1 MS CHAP V2 No Password found for : XXX *Mar 23 00:40:06.091: Vi1 MS CHAP V2 Check AuthenticatorResponse Success for : XXX *Mar 23 00:40:06.091: Vi1 IPCP: O CONFREQ [Closed] id 1 len 20 *Mar 23 00:40:06.091: Vi1 IPCP: VSO OUI 0x00000C kind 1 (0x000A00000C0100000000) *Mar 23 00:40:06.091: Vi1 IPCP: Address 0.0.0.0 (0x030600000000) *Mar 23 00:40:07.091: %LINEPROTO-5-UPDOWN: Line protocol on Interface Virtual-Access1, changed state to up *Mar 23 00:40:07.091: Vi1 LCP: O ECHOREQ [Open] id 1 len 12 magic 0x194CAFCF *Mar 23 00:40:07.103: Vi1 LCP-FS: I ECHOREP [Open] id 1 len 12 magic 0x361B62E5 *Mar 23 00:40:07.103: Vi1 LCP-FS: Received id 1, sent id 1, line up *Mar 23 00:40:08.083: Vi1 IPCP: TIMEout: State REQsent *Mar 23 00:40:08.083: Vi1 IPCP: O CONFREQ [REQsent] id 2 len 20 *Mar 23 00:40:08.083: Vi1 IPCP: VSO OUI 0x00000C kind 1 (0x000A00000C0100000000) *Mar 23 00:40:08.083: Vi1 IPCP: Address 0.0.0.0 (0x030600000000) *Mar 23 00:40:10.099: Vi1 IPCP: TIMEout: State REQsent *Mar 23 00:40:10.099: Vi1 IPCP: O CONFREQ [REQsent] id 3 len 20 *Mar 23 00:40:10.099: Vi1 IPCP: VSO OUI 0x00000C kind 1 (0x000A00000C0100000000) *Mar 23 00:40:10.099: Vi1 IPCP: Address 0.0.0.0 (0x030600000000) *Mar 23 00:40:12.115: Vi1 IPCP: TIMEout: State REQsent *Mar 23 00:40:12.115: Vi1 IPCP: O CONFREQ [REQsent] id 4 len 20 *Mar 23 00:40:12.115: Vi1 IPCP: VSO OUI 0x00000C kind 1 (0x000A00000C0100000000) *Mar 23 00:40:12.115: Vi1 IPCP: Address 0.0.0.0 (0x030600000000) *Mar 23 00:40:12.211: Vi1 LCP: O ECHOREQ [Open] id 2 len 12 magic 0x194CAFCF *Mar 23 00:40:12.219: Vi1 LCP-FS: I ECHOREP [Open] id 2 len 12 magic 0x361B62E5 *Mar 23 00:40:12.219: Vi1 LCP-FS: Received id 2, sent id 2, line up *Mar 23 00:40:14.131: Vi1 IPCP: TIMEout: State REQsent *Mar 23 00:40:14.131: Vi1 IPCP: O CONFREQ [REQsent] id 5 len 20 *Mar 23 00:40:14.131: Vi1 IPCP: VSO OUI 0x00000C kind 1 (0x000A00000C0100000000) *Mar 23 00:40:14.131: Vi1 IPCP: Address 0.0.0.0 (0x030600000000) *Mar 23 00:40:16.147: Vi1 IPCP: TIMEout: State REQsent *Mar 23 00:40:16.147: Vi1 IPCP: O CONFREQ [REQsent] id 6 len 20 *Mar 23 00:40:16.147: Vi1 IPCP: VSO OUI 0x00000C kind 1 (0x000A00000C0100000000) *Mar 23 00:40:16.147: Vi1 IPCP: Address 0.0.0.0 (0x030600000000) *Mar 23 00:40:17.331: Vi1 LCP: O ECHOREQ [Open] id 3 len 12 magic 0x194CAFCF *Mar 23 00:40:17.343: Vi1 LCP-FS: I ECHOREP [Open] id 3 len 12 magic 0x361B62E5 *Mar 23 00:40:17.343: Vi1 LCP-FS: Received id 3, sent id 3, line up You see: My router asks for address, but only keepalives are on line. But the same server works with windows client!! ! version 12.4 no service pad service timestamps debug datetime msec service timestamps log datetime msec no service password-encryption service internal ! hostname Router ! boot-start-marker boot-end-marker ! ! no aaa new-model ! resource policy ! ip subnet-zero ! ! ip cef vpdn enable ! vpdn-group pptp request-dialin protocol pptp pool-member 1 initiate-to ip XXXX ! ! ! ! ! ! ! 
bridge irb ! ! interface ATM0 no ip address shutdown no atm ilmi-keepalive dsl operating-mode auto ! interface FastEthernet0 ! interface FastEthernet1 ! interface FastEthernet2 ! interface FastEthernet3 ! interface Dot11Radio0 no ip address shutdown speed basic-1.0 basic-2.0 basic-5.5 6.0 9.0 basic-11.0 12.0 18.0 24.0 36.0 48.0 54.0 station-role root ! interface Vlan1 no ip address bridge-group 1 ! interface Dialer0 ip address negotiated encapsulation ppp dialer pool 1 dialer idle-timeout 0 dialer string XXX dialer persistent dialer vpdn dialer-group 1 keepalive 5 3 no cdp enable ppp authentication ms-chap-v2 optional ppp eap refuse ppp chap hostname XXX ppp chap password 0 XXX ppp ipcp mask request ppp ipcp ignore-map ppp ipcp address accept ! interface BVI1 mac-address XXX.XXX.XXX ip address dhcp ! ip classless ip route 172.0.0.0 255.0.0.0 Dialer0 ! no ip http server no ip http secure-server ! dialer-list 1 protocol ip permit ! control-plane ! bridge 1 protocol vlan-bridge bridge 1 route ip ! line con 0 no modem enable line aux 0 line vty 0 4 login ! scheduler max-task-time 5000 end
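    A minimal diagnostic sketch, assuming exec access on the router (standard IOS commands, not taken from the question): turn on PPP negotiation debugging and watch whether the Windows side ever answers the IPCP CONFREQ. If it never replies at all, the usual culprit is the RRAS/VPN server end (no IPv4 address pool configured for dial-in clients) rather than the dialer configuration shown above.

      debug ppp negotiation
      debug ppp error
      show interfaces Virtual-Access1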


  • How to invalidate nginx reverse proxy cache in front of other nginx servers?

    - by Olivier Lance
    I'm running a Proxmox server on a single IP address, that will dispatch HTTP requests to containers depending on the requested host. I am using nginx on the Proxmox side to listen to HTTP requests and I am using the proxy_pass directive in my different server blocks to dispatch requests according to the server_name. My containers run on Ubuntu and are also running a nginx instance. I'm having troubles with caching on a particular website that is fully static: nginx keeps on serving me stale content after files updates, until I: Clear /var/cache/nginx/ and restart nginx or set proxy_cache off for this server and reload the config Here's the detail of my configuration: On the server (proxmox): /etc/nginx/nginx.conf: user www-data; worker_processes 8; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; use epoll; } http { ## # Basic Settings ## sendfile on; #tcp_nopush on; tcp_nodelay on; #keepalive_timeout 65; types_hash_max_size 2048; server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; client_body_buffer_size 1k; client_max_body_size 8m; large_client_header_buffers 1 1K; ignore_invalid_headers on; client_body_timeout 5; client_header_timeout 5; keepalive_timeout 5 5; send_timeout 5; server_name_in_redirect off; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; gzip_vary on; gzip_proxied any; gzip_comp_level 6; # gzip_buffers 16 8k; gzip_http_version 1.1; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; limit_conn_zone $binary_remote_addr zone=gulag:1m; limit_conn gulag 50; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } /etc/nginx/conf.d/proxy.conf: proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_hide_header X-Powered-By; proxy_intercept_errors on; proxy_buffering on; proxy_cache_key "$scheme://$host$request_uri"; proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m inactive=7d max_size=700m; /etc/nginx/sites-available/my-domain.conf: server { listen 80; server_name .my-domain.com; access_log off; location / { proxy_pass http://my-domain.local:80/; proxy_cache cache; proxy_cache_valid 12h; expires 30d; proxy_cache_use_stale error timeout invalid_header updating; } } On the container (my-domain.local): nginx.conf: (everything is inside the main config file -- it's been done quickly...) user www-data; worker_processes 1; error_log logs/error.log; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; #tcp_nopush on; keepalive_timeout 65; gzip off; server { listen 80; server_name .my-domain.com; root /var/www; access_log logs/host.access.log; } } I've read many blog posts and answers before resolving to posting my own questions... most answers I can see suggest setting sendfile off; but that didn't work for me. I have tried many other things, double checked my settings and all seems fine. So I'm wondering whether I am not expecting nginx's cache to do something it's not meant to...? 
    Basically, I thought that if one of the static files in my container was updated, the cache in my reverse proxy would be invalidated and my browser would get the new version of the file on its next request... But I now suspect I have misunderstood several things. Above all, I wonder how nginx on the server could even know that a file in the container has changed. I have seen a directive called proxy_header_pass (or something similar); should I use it to let the nginx instance in the container somehow inform the one on Proxmox about updated files? Is this expectation just a dream, or can I achieve it with nginx on my current architecture?
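    For what it's worth, the front-end nginx never asks the container whether a file changed; it simply serves the cached copy until proxy_cache_valid 12h (and the 30d expires header) runs out. A blunt but workable sketch for a deploy hook on the Proxmox host, assuming the cache path from proxy.conf above: wipe the cached objects and reload.

      # run on the Proxmox host after pushing new static files into the container
      find /var/cache/nginx -type f -delete
      nginx -s reload

    Less drastic alternatives are lowering proxy_cache_valid for this vhost or versioning the static asset URLs so the cache key changes on every deploy.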


  • Various problems with software raid1 array built with Samsung 840 Pro SSDs

    - by Andy B
    I am bringing to ServerFault a problem that is tormenting me for 6+ months. I have a CentOS 6 (64bit) server with an md software raid-1 array with 2 x Samsung 840 Pro SSDs (512GB). Problems: Serious write speed problems: root [~]# time dd if=arch.tar.gz of=test4 bs=2M oflag=sync 146+1 records in 146+1 records out 307191761 bytes (307 MB) copied, 23.6788 s, 13.0 MB/s real 0m23.680s user 0m0.000s sys 0m0.932s When doing the above (or any other larger copy) the load spikes to unbelievable values (even over 100) going up from ~ 1. When doing the above I've also noticed very weird iostat results: Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sda 0.00 1589.50 0.00 54.00 0.00 13148.00 243.48 0.60 11.17 0.46 2.50 sdb 0.00 1627.50 0.00 16.50 0.00 9524.00 577.21 144.25 1439.33 60.61 100.00 md1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 md2 0.00 0.00 0.00 1602.00 0.00 12816.00 8.00 0.00 0.00 0.00 0.00 md0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 And it keeps it this way until it actually writes the file to the device (out from swap/cache/memory). The problem is that the second SSD in the array has svctm and await roughly 100 times larger than the second. For some reason the wear is different between the 2 members of the array root [~]# smartctl --attributes /dev/sda | grep -i wear 177 Wear_Leveling_Count 0x0013 094% 094 000 Pre-fail Always - 180 root [~]# smartctl --attributes /dev/sdb | grep -i wear 177 Wear_Leveling_Count 0x0013 070% 070 000 Pre-fail Always - 1005 The first SSD has a wear of 6% while the second SSD has a wear of 30%!! It's like the second SSD in the array works at least 5 times as hard as the first one as proven by the first iteration of iostat (the averages since reboot): Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sda 10.44 51.06 790.39 125.41 8803.98 1633.11 11.40 0.33 0.37 0.06 5.64 sdb 9.53 58.35 322.37 118.11 4835.59 1633.11 14.69 0.33 0.76 0.29 12.97 md1 0.00 0.00 1.88 1.33 15.07 10.68 8.00 0.00 0.00 0.00 0.00 md2 0.00 0.00 1109.02 173.12 10881.59 1620.39 9.75 0.00 0.00 0.00 0.00 md0 0.00 0.00 0.41 0.01 3.10 0.02 7.42 0.00 0.00 0.00 0.00 What I've tried: I've updated the firmware to DXM05B0Q (following reports of dramatic improvements for 840Ps after this update). I have looked for "hard resetting link" in dmesg to check for cable/backplane issues but nothing. I have checked the alignment and I believe they are aligned correctly (1MB boundary, listing below) I have checked /proc/mdstat and the array is Optimal (second listing below). root [~]# fdisk -ul /dev/sda Disk /dev/sda: 512.1 GB, 512110190592 bytes 255 heads, 63 sectors/track, 62260 cylinders, total 1000215216 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00026d59 Device Boot Start End Blocks Id System /dev/sda1 2048 4196351 2097152 fd Linux raid autodetect Partition 1 does not end on cylinder boundary. /dev/sda2 * 4196352 4605951 204800 fd Linux raid autodetect Partition 2 does not end on cylinder boundary. 
/dev/sda3 4605952 814106623 404750336 fd Linux raid autodetect root [~]# fdisk -ul /dev/sdb Disk /dev/sdb: 512.1 GB, 512110190592 bytes 255 heads, 63 sectors/track, 62260 cylinders, total 1000215216 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0003dede Device Boot Start End Blocks Id System /dev/sdb1 2048 4196351 2097152 fd Linux raid autodetect Partition 1 does not end on cylinder boundary. /dev/sdb2 * 4196352 4605951 204800 fd Linux raid autodetect Partition 2 does not end on cylinder boundary. /dev/sdb3 4605952 814106623 404750336 fd Linux raid autodetect /proc/mdstat root # cat /proc/mdstat Personalities : [raid1] md0 : active raid1 sdb2[1] sda2[0] 204736 blocks super 1.0 [2/2] [UU] md2 : active raid1 sdb3[1] sda3[0] 404750144 blocks super 1.0 [2/2] [UU] md1 : active raid1 sdb1[1] sda1[0] 2096064 blocks super 1.1 [2/2] [UU] unused devices: Running a read test with hdparm root [~]# hdparm -t /dev/sda /dev/sda: Timing buffered disk reads: 664 MB in 3.00 seconds = 221.33 MB/sec root [~]# hdparm -t /dev/sdb /dev/sdb: Timing buffered disk reads: 288 MB in 3.01 seconds = 95.77 MB/sec But look what happens if I add --direct root [~]# hdparm --direct -t /dev/sda /dev/sda: Timing O_DIRECT disk reads: 788 MB in 3.01 seconds = 262.08 MB/sec root [~]# hdparm --direct -t /dev/sdb /dev/sdb: Timing O_DIRECT disk reads: 534 MB in 3.02 seconds = 176.90 MB/sec Both tests increase but /dev/sdb doubles while /dev/sda increases maybe 20%. I just don't know what to make of this. As suggested by Mr. Wagner I've done another read test with dd this time and it confirms the hdparm test: root [/home2]# dd if=/dev/sda of=/dev/null bs=1G count=10 10+0 records in 10+0 records out 10737418240 bytes (11 GB) copied, 38.0855 s, 282 MB/s root [/home2]# dd if=/dev/sdb of=/dev/null bs=1G count=10 10+0 records in 10+0 records out 10737418240 bytes (11 GB) copied, 115.24 s, 93.2 MB/s So sda is 3 times faster than sdb. Or maybe sdb is doing also something else besides what sda does. Is there some way to find out if sdb is doing more than what sda does? UPDATE Again, as suggested by Mr. Wagner, I have swapped the 2 SSDs. And as he thought it would happen, the problem moved from sdb to sda. So I guess I'll RMA one of the SSDs. I wonder if the cage might be problematic. What is wrong with this array? Please help!
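    A small comparison sketch, assuming sysstat and smartmontools are installed: watch both members live while repeating the dd write test, compare their SMART error counters, and confirm md is not quietly running a resync or check in the background.

      # per-device latency while the write test runs
      iostat -xd sda sdb 1 5
      # error/wear counters for both members
      smartctl -a /dev/sda | egrep -i 'wear|realloc|program_fail|uncorrect'
      smartctl -a /dev/sdb | egrep -i 'wear|realloc|program_fail|uncorrect'
      # should report "idle" if no resync/check is in progress
      cat /sys/block/md2/md/sync_action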


  • Firefox and Opera show 'The connection was reset' on a few POST requests on Windows and Ubuntu

    - by Gopalakrishnan Subramani
    my website works well with GET method, also few POST methods. Some pages with POST method doesn't work. Some pages with POST work. For example, login page uses POST that works fine. When I post the data on webpage, firefox says "Connecting..." and finally report connection timed out error. The same behavior happens with Opera as well. However Google Chrome works fine. At the server side, I use nginx 1.2.4 with HTTPS and uwsgi for python (flask framework) app. I use geotrust certificate. The same behavior happens with Windows 7 and Ubuntu 12.04 on firefox. I tried firefox in safemode, but no luck. Set auto-detect proxy settings. no luck. Cleared all cookies. no luck Anyone help me to fix this issue? I am posting ngix config. shame on me. I use root, I know which is not advised. need to fix soon. user root; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; # gzip_vary on; # gzip_proxied any; # gzip_comp_level 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; ## # nginx-naxsi config ## # Uncomment it if you installed nginx-naxsi ## #include /etc/nginx/naxsi_core.rules; ## # nginx-passenger config ## # Uncomment it if you installed nginx-passenger ## #passenger_root /usr; #passenger_ruby /usr/bin/ruby; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; server { listen 80; server_name www.example.com; rewrite ^(.*) https://example.com$1 permanent; } server { listen 80; server_name example.com; rewrite ^ https://$server_name$request_uri? permanent; } server { listen 443; server_name example.com; keepalive_timeout 70; ssl on; ssl_certificate /root/cc.cert; ssl_certificate_key /root/cc.key; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; #ssl_ciphers HIGH:!aNULL:!MD5; ssl_ciphers RC4:HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; location / { try_files $uri @app; } location @app { include uwsgi_params; uwsgi_pass unix:/tmp/uwsgi.sock; } } } #mail { # # See sample authentication script at: # # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript # # # auth_http localhost/auth.php; # # pop3_capabilities "TOP" "USER"; # # imap_capabilities "IMAP4rev1" "UIDPLUS"; # # server { # listen localhost:110; # protocol pop3; # proxy on; # } # # server { # listen localhost:143; # protocol imap; # proxy on; # } #}
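    One way to take the browsers out of the picture, as a sketch (the URL and form fields are placeholders for one of the failing pages): replay the POST with curl over HTTPS while tailing the nginx error log; a body-size limit or a uwsgi timeout usually shows up there even when the browser only reports a reset.

      curl -vk https://example.com/failing-page -d 'field1=value1&field2=value2'
      tail -f /var/log/nginx/error.log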


  • How to use CLEAR USB WiMax in Ubuntu (host) and Windows XP (guest) using VirtualBox

    - by bithacker
    I'm trying to use CLEAR Motorola WiMax USB in Ubuntu as there is no support for Linux as yet. I've installed Windows XP as guest in Ubuntu and the version I'm using is 3.2.2. USB is connecting fine in Windows XP but I can't use internet in Ubuntu. Can you please tell me how to do it. Here is the configuration that could help you guys. Thanks in advance. I'm using Two Network Adapters. Network Adapter 1: PCnet-FAST III (NAT) Adapter 2: PCnet-FAST III (Host-only adapter, 'vboxnet0') ipconfig [on Guest windowsXP] Windows IP Configuration Ethernet adapter Local Area Connection: PCnet-FAST III (NAT) Connection-specific DNS Suffix . : IP Address. . . . . . . . . . . . : 10.0.2.15 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : 10.0.2.2 Ethernet adapter Local Area Connection 3: PCnet-FAST III (Host-only adapter, 'vboxnet0') Connection-specific DNS Suffix . : IP Address. . . . . . . . . . . . : 192.168.56.101 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : Ethernet adapter Local Area Connection 2: Connection-specific DNS Suffix . : CLEAR Motorola USB IP Address. . . . . . . . . . . . : 10.168.242.33 Subnet Mask . . . . . . . . . . . : 255.255.192.0 Default Gateway . . . . . . . . . : 10.168.192.2 IFCONFIG [on Host Ubuntu] (Ethernet) eth0 Link encap:Ethernet HWaddr 00:14:22:b9:9d:76 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:16 eth1 (Wireless) Link encap:Ethernet HWaddr 00:13:ce:f0:9b:0d inet6 addr: fe80::213:ceff:fef0:9b0d/64 Scope:Link UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:1 errors:0 dropped:5 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:84 (84.0 B) Interrupt:17 Base address:0xe000 Memory:dfcff000-dfcfffff lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:2292 errors:0 dropped:0 overruns:0 frame:0 TX packets:2292 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:171952 (171.9 KB) TX bytes:171952 (171.9 KB) vboxnet0 Link encap:Ethernet HWaddr 0a:00:27:00:00:00 inet addr:192.168.56.1 Bcast:192.168.56.255 Mask:255.255.255.0 inet6 addr: fe80::800:27ff:fe00:0/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:137 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:21174 (21.1 KB)
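    A rough sketch of how this is usually wired up, assuming Internet Connection Sharing is enabled inside the XP guest on the CLEAR adapter and shared towards the host-only adapter (ICS may renumber that adapter, so the 192.168.56.101 gateway below, taken from the ipconfig output, is an assumption):

      # on the Ubuntu host: send default traffic to the guest over vboxnet0
      sudo ip route replace default via 192.168.56.101 dev vboxnet0
      # use a public resolver, since DNS now has to come via the guest side
      echo 'nameserver 8.8.8.8' | sudo tee -a /etc/resolv.conf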


  • Screen Casting using ffmpeg (too fast)

    - by rowman
    I can use ffmpeg to make screen casts: ffmpeg -f x11grab -s 1280x800 -i :0.0 -c:v libx264 -framerate 30 -r 30 -crf 18 out.mkv However the output comes out to be too fast paced. It also happens with GTK RecordMyDesktop if I enable the encode on the fly. So, the questions is how to get a normal video pace. Also in order to capture the sound with ffmpeg what option should be used? FFmpeg Output: ffmpeg -f x11grab -s 1280x800 -r 30 -i :0.0 -c:v libx264 -framerate 30 -r 30 -crf 18 out.mkv ffmpeg version N-35162-g87244c8 Copyright (c) 2000-2012 the FFmpeg developers built on Oct 7 2012 15:56:19 with gcc 4.6 (Ubuntu/Linaro 4.6.3-1ubuntu5) configuration: --enable-gpl --enable-libfaac --enable-libfdk-aac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-librtmp --enable-libtheora --enable-libvorbis --enable-libvpx --enable-x11grab --enable-libx264 --enable-nonfree --enable-version3 libavutil 51. 73.102 / 51. 73.102 libavcodec 54. 64.100 / 54. 64.100 libavformat 54. 29.105 / 54. 29.105 libavdevice 54. 3.100 / 54. 3.100 libavfilter 3. 19.102 / 3. 19.102 libswscale 2. 1.101 / 2. 1.101 libswresample 0. 16.100 / 0. 16.100 libpostproc 52. 1.100 / 52. 1.100 [x11grab @ 0xab896a0] device: :0.0 -> display: :0.0 x: 0 y: 0 width: 1280 height: 800 [x11grab @ 0xab896a0] shared memory extension found [x11grab @ 0xab896a0] Estimating duration from bitrate, this may be inaccurate Input #0, x11grab, from ':0.0': Duration: N/A, start: 1350136942.608988, bitrate: 983040 kb/s Stream #0:0: Video: rawvideo (BGR[0] / 0x524742), bgr0, 1280x800, 983040 kb/s, 30 tbr, 1000k tbn, 30 tbc [libx264 @ 0xab87320] using cpu capabilities: MMX2 SSE2Fast SSSE3 Cache64 SlowCTZ SlowAtom [libx264 @ 0xab87320] profile High 4:4:4 Predictive, level 3.2, 4:4:4 8-bit [libx264 @ 0xab87320] 264 - core 128 r2 198a7ea - H.264/MPEG-4 AVC codec - Copyleft 2003-2012 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=4 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=18.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00 Output #0, matroska, to 'out.mkv': Metadata: encoder : Lavf54.29.105 Stream #0:0: Video: h264, yuv444p, 1280x800, q=-1--1, 1k tbn, 30 tbc Stream mapping: Stream #0:0 -> #0:0 (rawvideo -> libx264) Press [q] to stop, [?] 
for help frame= 10 fps=0.0 q=0.0 size= 1kB time=00:00:00.00 bitrate= 0.0kbits/sframe= 19 fps= 17 q=0.0 size= 1kB time=00:00:00.00 bitrate= 0.0kbits/sframe= 28 fps= 17 q=0.0 size= 1kB time=00:00:00.00 bitrate= 0.0kbits/sframe= 37 fps= 17 q=0.0 size= 1kB time=00:00:00.00 bitrate= 0.0kbits/sframe= 45 fps= 16 q=0.0 size= 1kB time=00:00:00.00 bitrate= 0.0kbits/sframe= 47 fps= 14 q=0.0 size= 1kB time=00:00:00.00 bitrate= 0.0kbits/sframe= 52 fps= 13 q=24.0 size= 257kB time=00:00:00.00 bitrate=2101632.0kbiframe= 55 fps= 12 q=24.0 size= 257kB time=00:00:00.10 bitrate=20808.2kbitsframe= 59 fps= 11 q=24.0 size= 289kB time=00:00:00.23 bitrate=10145.0kbitsframe= 64 fps= 11 q=24.0 size= 289kB time=00:00:00.40 bitrate=5894.7kbits/frame= 70 fps= 11 q=24.0 size= 289kB time=00:00:00.60 bitrate=3933.1kbits/frame= 72 fps= 10 q=24.0 size= 289kB time=00:00:00.66 bitrate=3549.2kbits/frame= 77 fps=9.8 q=24.0 size= 289kB time=00:00:00.83 bitrate=2837.7kbits/frame= 80 fps=9.6 q=24.0 size= 289kB time=00:00:00.93 bitrate=2533.5kbits/frame= 85 fps=9.3 q=24.0 size= 289kB time=00:00:01.10 bitrate=2146.9kbits/frame= 89 fps=9.3 q=24.0 size= 289kB time=00:00:01.23 bitrate=1917.1kbits/frame= 92 fps=9.1 q=24.0 size= 289kB time=00:00:01.33 bitrate=1773.3kbits/frame= 96 fps=9.0 q=24.0 size= 289kB time=00:00:01.46 bitrate=1612.4kbits/frame= 99 fps=8.8 q=24.0 size= 321kB time=00:00:01.56 bitrate=1676.8kbits/frame= 104 fps=8.7 q=24.0 size= 321kB time=00:00:01.73 bitrate=1515.2kbits/frame= 109 fps=5.3 q=24.0 Lsize= 1093kB time=00:00:03.56 bitrate=2511.5kbits/s video:1092kB audio:0kB subtitle:0 global headers:0kB muxing overhead 0.120198% [libx264 @ 0xab87320] frame I:3 Avg QP:18.93 size:142610 [libx264 @ 0xab87320] frame P:43 Avg QP:20.79 size: 15751 [libx264 @ 0xab87320] frame B:63 Avg QP:23.75 size: 195 [libx264 @ 0xab87320] consecutive B-frames: 21.1% 1.8% 11.0% 66.1% [libx264 @ 0xab87320] mb I I16..4: 50.0% 21.1% 28.9% [libx264 @ 0xab87320] mb P I16..4: 6.1% 0.9% 3.2% P16..4: 5.5% 1.2% 0.6% 0.0% 0.0% skip:82.5% [libx264 @ 0xab87320] mb B I16..4: 0.4% 0.1% 0.0% B16..8: 2.9% 0.1% 0.0% direct: 0.0% skip:96.5% L0:40.7% L1:57.0% BI: 2.3% [libx264 @ 0xab87320] 8x8 transform intra:14.5% inter:46.1% [libx264 @ 0xab87320] coded y,u,v intra: 33.5% 24.1% 25.4% inter: 0.9% 0.4% 0.4% [libx264 @ 0xab87320] i16 v,h,dc,p: 70% 26% 1% 3% [libx264 @ 0xab87320] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 11% 21% 30% 5% 7% 5% 7% 4% 10% [libx264 @ 0xab87320] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 32% 35% 12% 2% 4% 3% 4% 3% 5% [libx264 @ 0xab87320] Weighted P-Frames: Y:0.0% UV:0.0% [libx264 @ 0xab87320] ref P L0: 57.0% 5.6% 26.8% 10.6% [libx264 @ 0xab87320] ref B L0: 69.4% 22.6% 8.0% [libx264 @ 0xab87320] ref B L1: 93.7% 6.3% [libx264 @ 0xab87320] kb/s:2460.40
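    The fps column in that log never reaches 30, so far fewer frames are captured than the 30 fps the file is muxed at, which is what makes playback look fast-forwarded. A sketch that captures at a rate the machine can sustain and records audio too, assuming the build has the ALSA input device (the pulse device name is an assumption; hw:0 is the usual fallback) and using encoders this build already lists (libx264, libmp3lame):

      ffmpeg -f alsa -i pulse \
             -f x11grab -r 15 -s 1280x800 -i :0.0 \
             -c:v libx264 -preset ultrafast -crf 18 -c:a libmp3lame out.mkv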


  • I added some options to stop spam with Postfix, but now it won't send email to remote domains

    - by willdanceforfun
    I had a working Postfix server, but added a few lines to my main.cf in a hope to block some common spam. Those lines I added were: smtpd_helo_required = yes smtpd_recipient_restrictions = reject_invalid_hostname, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination, reject_rbl_client multi.uribl.com, reject_rbl_client dsn.rfc-ignorant.org, reject_rbl_client dul.dnsbl.sorbs.net, reject_rbl_client list.dsbl.org, reject_rbl_client sbl-xbl.spamhaus.org, reject_rbl_client bl.spamcop.net, reject_rbl_client dnsbl.sorbs.net, reject_rbl_client cbl.abuseat.org, reject_rbl_client ix.dnsbl.manitu.net, reject_rbl_client combined.rbl.msrbl.net, reject_rbl_client rabl.nuclearelephant.com, permit It appears my postfix is now receiving normal emails fine, and blocking spam emails. But when I now try to use this server myself to send to a remote domain (an email not on my server) I get bounced, with maillog saying something like this: Nov 12 06:19:36 srv postfix/smtpd[11756]: NOQUEUE: reject: RCPT from unknown[xx.xx.x.xxx]: 450 4.1.2 <[email protected]>: Recipient address rejected: Domain not found; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<[192.168.1.100]> Is that saying 'domain not found' for gmail.com? Why is that recipient address rejected? An output of my postconf-n is: alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases broken_sasl_auth_clients = yes command_directory = /usr/sbin config_directory = /etc/postfix daemon_directory = /usr/libexec/postfix data_directory = /var/lib/postfix debug_peer_level = 2 html_directory = no inet_interfaces = all inet_protocols = all mail_owner = postfix mailbox_size_limit = 0 mailq_path = /usr/bin/mailq.postfix manpage_directory = /usr/share/man mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain mydomain = primarydomain.net myhostname = mail.primarydomain.net myorigin = $myhostname newaliases_path = /usr/bin/newaliases.postfix queue_directory = /var/spool/postfix readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES relay_domains = $mydestination, primarydomain.net, secondarydomain.org sample_directory = /usr/share/doc/postfix-2.6.6/samples sendmail_path = /usr/sbin/sendmail.postfix setgid_group = postdrop smtpd_client_restrictions = permit_sasl_authenticated smtpd_helo_required = yes smtpd_recipient_restrictions = reject_invalid_hostname, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination, reject_rbl_client multi.uribl.com, reject_rbl_client dsn.rfc-ignorant.org, reject_rbl_client dul.dnsbl.sorbs.net, reject_rbl_client list.dsbl.org, reject_rbl_client sbl-xbl.spamhaus.org, reject_rbl_client bl.spamcop.net, reject_rbl_client dnsbl.sorbs.net, reject_rbl_client cbl.abuseat.org, reject_rbl_client ix.dnsbl.manitu.net, reject_rbl_client combined.rbl.msrbl.net, reject_rbl_client rabl.nuclearelephant.com, permit smtpd_sasl_auth_enable = yes smtpd_sasl_path = private/auth smtpd_sasl_type = dovecot smtpd_sender_restrictions = reject_unknown_sender_domain soft_bounce = no unknown_local_recipient_reject_code = 550 virtual_alias_domains = mail.secondarydomain.org virtual_alias_maps = hash:/etc/postfix/virtual Any insight greatly appreciated. 
Edit: here is the dig mx gmail.com from the server: ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6_4.4 <<>> mx gmail.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 31766 ;; flags: qr rd ra; QUERY: 1, ANSWER: 5, AUTHORITY: 4, ADDITIONAL: 14 ;; QUESTION SECTION: ;gmail.com. IN MX ;; ANSWER SECTION: gmail.com. 1207 IN MX 5 gmail-smtp-in.l.google.com. gmail.com. 1207 IN MX 30 alt3.gmail-smtp-in.l.google.com. gmail.com. 1207 IN MX 20 alt2.gmail-smtp-in.l.google.com. gmail.com. 1207 IN MX 40 alt4.gmail-smtp-in.l.google.com. gmail.com. 1207 IN MX 10 alt1.gmail-smtp-in.l.google.com. ;; AUTHORITY SECTION: gmail.com. 109168 IN NS ns1.google.com. gmail.com. 109168 IN NS ns4.google.com. gmail.com. 109168 IN NS ns3.google.com. gmail.com. 109168 IN NS ns2.google.com. ;; ADDITIONAL SECTION: alt1.gmail-smtp-in.l.google.com. 207 IN A 173.194.70.27 alt1.gmail-smtp-in.l.google.com. 248 IN AAAA 2a00:1450:4001:c02::1b gmail-smtp-in.l.google.com. 200 IN A 173.194.67.26 gmail-smtp-in.l.google.com. 248 IN AAAA 2a00:1450:400c:c05::1b alt3.gmail-smtp-in.l.google.com. 207 IN A 74.125.143.27 alt3.gmail-smtp-in.l.google.com. 249 IN AAAA 2a00:1450:400c:c05::1b alt2.gmail-smtp-in.l.google.com. 207 IN A 173.194.69.27 alt2.gmail-smtp-in.l.google.com. 248 IN AAAA 2a00:1450:4008:c01::1b alt4.gmail-smtp-in.l.google.com. 207 IN A 173.194.79.27 alt4.gmail-smtp-in.l.google.com. 249 IN AAAA 2607:f8b0:400e:c01::1a ns2.google.com. 281970 IN A 216.239.34.10 ns3.google.com. 281970 IN A 216.239.36.10 ns4.google.com. 281970 IN A 216.239.38.10 ns1.google.com. 281970 IN A 216.239.32.10
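    The 450 comes from reject_unknown_recipient_domain, which means the smtpd process itself failed to resolve gmail.com even though dig on the host succeeds. One common cause, sketched below, is an smtpd running chrooted (see the chroot column in master.cf): it then reads the copy of resolv.conf under the Postfix queue directory instead of /etc/resolv.conf.

      postconf -h queue_directory
      grep smtpd /etc/postfix/master.cf
      # only needed if the smtpd entries are chrooted
      cp /etc/resolv.conf /var/spool/postfix/etc/resolv.conf
      postfix reload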


  • GitLab on a fresh Ubuntu 13 EC2 instance

    - by Polly
    I've spun up a fresh Amazon EC2 instance for a micro Ubuntu 13 server to be used as a GitLab server. I know the specs are a little low, but it should serve well for my purposes. It has an elastic (static) IP address that I have created an A record for git.mydomain.com. The first thing I did to the instance was add 1GB of swap to keep it happy from a memory perspective. I then set the hostname of the box to be git.mydomain.com and followed https://github.com/gitlabhq/gitlabhq/blob/6-2-stable/doc/install/installation.md to the letter. Everything seems to have worked, except for the web server side of things. Doing a gitlab:check shows the following: Checking Environment ... Git configured for git user? ... yes Has python2? ... yes python2 is supported version? ... yes Checking Environment ... Finished Checking GitLab Shell ... GitLab Shell version >= 1.7.4 ? ... OK (1.7.4) Repo base directory exists? ... yes Repo base directory is a symlink? ... no Repo base owned by git:git? ... yes Repo base access is drwxrws---? ... yes update hook up-to-date? ... yes update hooks in repos are links: ... can't check, you have no projects Running /home/git/gitlab-shell/bin/check Check GitLab API access: /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `initialize': Connection refused - connect(2) (Errno::ECONNREFUSED) from /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `open' from /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `block in connect' from /usr/local/lib/ruby/2.0.0/timeout.rb:52:in `timeout' from /usr/local/lib/ruby/2.0.0/net/http.rb:877:in `connect' from /usr/local/lib/ruby/2.0.0/net/http.rb:862:in `do_start' from /usr/local/lib/ruby/2.0.0/net/http.rb:851:in `start' from /home/git/gitlab-shell/lib/gitlab_net.rb:62:in `get' from /home/git/gitlab-shell/lib/gitlab_net.rb:29:in `check' from /home/git/gitlab-shell/bin/check:11:in `<main>' gitlab-shell self-check failed Try fixing it: Make sure GitLab is running; Check the gitlab-shell configuration file: sudo -u git -H editor /home/git/gitlab-shell/config.yml Please fix the error above and rerun the checks. Checking GitLab Shell ... Finished Checking Sidekiq ... Running? ... yes Number of Sidekiq processes ... 1 Checking Sidekiq ... Finished Checking GitLab ... Database config exists? ... yes Database is SQLite ... no All migrations up? ... yes GitLab config exists? ... yes GitLab config outdated? ... no Log directory writable? ... yes Tmp directory writable? ... yes Init script exists? ... yes Init script up-to-date? ... yes projects have namespace: ... can't check, you have no projects Projects have satellites? ... can't check, you have no projects Redis version >= 2.0.0? ... yes Your git bin path is "/usr/bin/git" Git version >= 1.7.10 ? ... yes (1.8.3) Checking GitLab ... Finished It seems like I'm very nearly there. Searching on this error I have only found advice that unfortunately hasn't helped. I'm not using any kind of SSL setup, which a lot of the posts I found were about. I have tried appending 127.0.0.1 git.mydomain.com to /etc/hosts and giving the instance a reboot but there was no change. My config/gitlab.yml file has host: git.mydomain.com in it, and my gitlab-shell/config.yml has gitlab_url: "http://git.mydomain.com/" in it. I'm sure I'm missing something simple, but I've been through every relevant link I can find and have had no positive results; thank you in advance for any help!
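    The failing step is gitlab-shell calling back into the GitLab API at its configured gitlab_url; ECONNREFUSED means nothing answered on that address from the box itself. A sketch, assuming the config.yml path from the check output: see what is listening locally and, if the web front end is up, point gitlab-shell at the loopback address instead of the public name.

      sudo netstat -tlnp | grep -E ':80|:8080'
      curl -I http://127.0.0.1/
      sudo -u git -H sed -i 's|^gitlab_url:.*|gitlab_url: "http://127.0.0.1/"|' /home/git/gitlab-shell/config.yml
      sudo service gitlab restart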


  • LXC Container Networking

    - by digitaladdictions
    I just started to experiment with LXC containers. I was able to create a container and start it up but I cannot get dhcp to assign the container an IP address. If I assign a static address the container can ping the host IP but not outside the host IP. The host is CentOS 6.5 and the guest is Ubuntu 14.04LTS. I used the template downloaded by lxc-create -t download -n cn-01 command. If I am trying to get an IP address on the same subnet as the host I don't believe I should need the IP tables rule for masquerading but I added it anyways. Same with IP forwarding. I compiled LXC by hand from the following source https://linuxcontainers.org/downloads/lxc-1.0.4.tar.gz Host Operating System Version #> cat /etc/redhat-release CentOS release 6.5 (Final) #> uname -a Linux localhost.localdomain 2.6.32-431.20.3.el6.x86_64 #1 SMP Thu Jun 19 21:14:45 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux Container Config #> cat /usr/local/var/lib/lxc/cn-01/config # Template used to create this container: /usr/local/share/lxc/templates/lxc-download # Parameters passed to the template: # For additional config options, please look at lxc.container.conf(5) # Distribution configuration lxc.include = /usr/local/share/lxc/config/ubuntu.common.conf lxc.arch = x86_64 # Container specific configuration lxc.rootfs = /usr/local/var/lib/lxc/cn-01/rootfs lxc.utsname = cn-01 # Network configuration lxc.network.type = veth lxc.network.flags = up lxc.network.link = br0 LXC default.confu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:0c:29:12:30:f2 brd ff:ff:ff:ff:f #> cat /usr/local/etc/lxc/default.conf lxc.network.type = veth lxc.network.link = br0 lxc.network.flags = up #> lxc-checkconfig Kernel configuration not found at /proc/config.gz; searching... Kernel configuration found at /boot/config-2.6.32-431.20.3.el6.x86_64 --- Namespaces --- Namespaces: enabled Utsname namespace: enabled Ipc namespace: enabled Pid namespace: enabled User namespace: enabled Network namespace: enabled Multiple /dev/pts instances: enabled --- Control groups --- Cgroup: enabled Cgroup namespace: enabled Cgroup device: enabled Cgroup sched: enabled Cgroup cpu account: enabled Cgroup memory controller: /usr/local/bin/lxc-checkconfig: line 103: [: too many arguments enabled Cgroup cpuset: enabled --- Misc --- Veth pair device: enabled Macvlan: enabled Vlan: enabled File capabilities: /usr/local/bin/lxc-checkconfig: line 118: [: -gt: unary operator expected Note : Before booting a new kernel, you can check its configuration usage : CONFIG=/path/to/config /usr/local/bin/lxc-checkconfig Network Config (HOST) #> cat /etc/sysconfig/network-scripts/ifcfg-br0 DEVICE=br0 TYPE=Bridge BOOTPROTO=dhcp ONBOOT=yes #> cat /etc/sysconfig/network-scripts/ifcfg-eth0 DEVICE=eth0 ONBOOT=yes TYPE=Ethernet IPV6INIT=no USERCTL=no BRIDGE=br0 #> cat /etc/networks default 0.0.0.0 loopback 127.0.0.0 link-local 169.254.0.0 #> ip a s 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:0c:29:12:30:f2 brd ff:ff:ff:ff:ff:ff inet6 fe80::20c:29ff:fe12:30f2/64 scope link valid_lft forever preferred_lft forever 3: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN link/ether 42:7e:43:b3:61:c5 brd ff:ff:ff:ff:ff:ff 4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 
00:0c:29:12:30:f2 brd ff:ff:ff:ff:ff:ff inet 10.60.70.121/24 brd 10.60.70.255 scope global br0 inet6 fe80::20c:29ff:fe12:30f2/64 scope link valid_lft forever preferred_lft forever 12: vethT6BGL2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether fe:a1:69:af:50:17 brd ff:ff:ff:ff:ff:ff inet6 fe80::fca1:69ff:feaf:5017/64 scope link valid_lft forever preferred_lft forever #> brctl show bridge name bridge id STP enabled interfaces br0 8000.000c291230f2 no eth0 vethT6BGL2 pan0 8000.000000000000 no #> cat /proc/sys/net/ipv4/ip_forward 1 # Generated by iptables-save v1.4.7 on Fri Jul 11 15:11:36 2014 *nat :PREROUTING ACCEPT [34:6287] :POSTROUTING ACCEPT [0:0] :OUTPUT ACCEPT [0:0] -A POSTROUTING -o eth0 -j MASQUERADE COMMIT # Completed on Fri Jul 11 15:11:36 2014 Network Config (Container) #> cat /etc/network/interfaces # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback auto eth0 iface eth0 inet dhcp #> ip a s 11: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 02:69:fb:42:ee:d7 brd ff:ff:ff:ff:ff:ff inet6 fe80::69:fbff:fe42:eed7/64 scope link valid_lft forever preferred_lft forever 13: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever
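    A few checks from the host, as a sketch, to see whether the container's DHCP broadcasts ever cross br0 and leave via eth0; if they appear on the bridge but get no reply, the problem is the LAN DHCP server or an iptables FORWARD rule rather than LXC itself.

      brctl show br0
      iptables -L FORWARD -n -v
      # watch DHCP traffic on the bridge, then on the physical uplink
      tcpdump -ni br0 'port 67 or port 68'
      tcpdump -ni eth0 'port 67 or port 68'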


  • Unable to list contents/remove directory (linux ext3)

    - by RedKrieg
    System is CentOS5 x86_64, completely up to date. I've got a folder that can't be listed (ls just hangs, eating memory until it is killed). The directory size is nearly 500k: root@server [/home/user/public_html/domain.com/wp-content/uploads/2010/03]# stat . File: `.' Size: 458752 Blocks: 904 IO Block: 4096 directory Device: 812h/2066d Inode: 44499071 Links: 2 Access: (0755/drwxr-xr-x) Uid: ( 3292/ user) Gid: ( 3287/ user) Access: 2012-06-29 17:31:47.000000000 -0400 Modify: 2012-10-23 14:41:58.000000000 -0400 Change: 2012-10-23 14:41:58.000000000 -0400 I can see the file names if I use ls -1f, but it just repeats the same 48 files ad infinitum, all of which have non-ascii characters somewhere in the file name: La-critic\363-al-servicio-la-privacidad-300x160.jpg When I try to access the files (say to copy them or remove them) I get messages like the following: lstat("/home/user/public_html/domain.com/wp-content/uploads/2010/03/Sebast\355an-Pi\361era-el-balc\363n-150x120.jpg", 0x7fff364c52c0) = -1 ENOENT (No such file or directory) I tried altering the code found on this man page and modified the code to call unlink for each file. I get the same ENOENT error from the unlink call: unlink("/home/user/public_html/domain.com/wp-content/uploads/2010/03/Marca-naci\363n-Madrid-150x120.jpg") = -1 ENOENT (No such file or directory) I also straced a "touch", grabbed the syscalls it makes and replicated them, then tried to unlink the resulting file by name. This works fine, but the folder still contains an entry by the same name after the operation completes and the program runs for an arbitrarily long time (strace output ended up at 20GB after 5 minutes and I stopped the process). I'm stumped on this one, I'd really prefer not to have to take this production machine (hundreds of customers) offline to fsck the filesystem, but I'm leaning toward that being the only option at this point. If anyone's had success using other methods for removing files (by inode number, I can get those with the getdents code) I'd love to hear them. (Yes, I've tried find . -inum <inode> -exec rm -fv {} \; and it still has the problem with unlink returning ENOENT) For those interested, here's the diff between that man page's code and mine. I didn't bother with error checking on mallocs, etc because I'm lazy and this is a one-off: root@server [~]# diff -u listdir-orig.c listdir.c --- listdir-orig.c 2012-10-23 15:10:02.000000000 -0400 +++ listdir.c 2012-10-23 14:59:47.000000000 -0400 @@ -6,6 +6,7 @@ #include <stdlib.h> #include <sys/stat.h> #include <sys/syscall.h> +#include <string.h> #define handle_error(msg) \ do { perror(msg); exit(EXIT_FAILURE); } while (0) @@ -17,7 +18,7 @@ char d_name[]; }; -#define BUF_SIZE 1024 +#define BUF_SIZE 1024*1024*5 int main(int argc, char *argv[]) { @@ -26,11 +27,16 @@ struct linux_dirent *d; int bpos; char d_type; + int deleted; + int file_descriptor; fd = open(argc > 1 ? argv[1] : ".", O_RDONLY | O_DIRECTORY); if (fd == -1) handle_error("open"); + char* full_path; + char* fd_path; + for ( ; ; ) { nread = syscall(SYS_getdents, fd, buf, BUF_SIZE); if (nread == -1) @@ -55,7 +61,24 @@ printf("%4d %10lld %s\n", d->d_reclen, (long long) d->d_off, (char *) d->d_name); bpos += d->d_reclen; + if ( d_type == DT_REG ) + { + full_path = malloc(strlen((char *) d->d_name) + strlen(argv[1]) + 2); //One for the /, one for the \0 + strcpy(full_path, argv[1]); + strcat(full_path, (char *) d->d_name); + + //We're going to try to "touch" the file. 
+ //file_descriptor = open(full_path, O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK, 0666); + //fd_path = malloc(32); //Lazy, only really needs 16 + //sprintf(fd_path, "/proc/self/fd/%d", file_descriptor); + //utimes(fd_path, NULL); + //close(file_descriptor); + deleted = unlink(full_path); + if ( deleted == -1 ) printf("Error unlinking file\n"); + break; //Break on first try + } } + break; //Break on first try } exit(EXIT_SUCCESS);
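    Another read-only angle before resorting to an offline fsck, based on the stat output above (device 812h = major 8, minor 18, which would normally be /dev/sdb2; treat that mapping as an assumption): let debugfs walk the raw directory entries, which bypasses the VFS name lookups that keep returning ENOENT.

      # the path may need to be given relative to that filesystem's own root
      debugfs -R 'ls -l /home/user/public_html/domain.com/wp-content/uploads/2010/03' /dev/sdb2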


  • Linux/Apache performance very slow even on local network

    - by klausch
    I have an Ubuntu server machine running Apache and MYSQL. System and version info is as follows: Linux kernel 3.0.0.-12 Apache/2.2.20 MySQL Ver 14.14.Distrib 5.1.58 I am running a few websites on this server, some HTML only, some PHP/MySQL. THe [problem is that response time is very slow, both on static as well as the dynamic sites. Sometimes it takes more than 10 seconds before a response is given, this makes the sites very slow and almost unusable. The problem occurs even when requesting from the local network. I have added the involved subdomains to my /etc/hosts file, and abolve all the problem is not solved by using IP numbers instead of URL's. So there is no DNS lookup issue. I have modified the log format by showing the response times and sometimes a files takes 12 seconds to be served, see the jquery~.js file in the example screenshot. I have no explanation for this extremely long response time, but is is not even the only issue here, some other files takes a long time to be served too, but do not show a long response time in the log file. So probably different tissues are involved here. I cannot find a solution until now, any suggestions??? THanx in advance, Klaas link to screenshot picture from access logfile Some extra configuration info: apache2.conf (comment is removed) LockFile ${APACHE_LOCK_DIR}/accept.lock PidFile ${APACHE_PID_FILE} Timeout 300 KeepAlive On MaxKeepAliveRequests 100 KeepAliveTimeout 5 <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 150 MaxRequestsPerChild 0 </IfModule> <IfModule mpm_worker_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> <IfModule mpm_event_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} AccessFileName .htaccess <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy all </Files> DefaultType text/plain HostnameLookups Off ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel warn Include mods-enabled/*.load Include mods-enabled/*.conf Include httpd.conf Include ports.conf LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\" %T/%D" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent Include conf.d/ Include sites-enabled/ And the virtual hostfile for one of the slow sites, in fact it is pretty straightforward... <VirtualHost *:80> ServerAdmin [email protected] ServerSignature EMail ServerName toenjoy.drsklaus.nl DocumentRoot /var/www/toenjoy.drsklaus.nl <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/toenjoy.drsklaus.nl/> Options Indexes FollowSymLinks MultiViews AllowOverride AuthConfig AuthType Basic AuthName "To Enjoy" AuthUserFile /etc/.htpasswd Require user petraaa Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. 
LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> And the output of free -m: klaas@ubuntu-server:/etc/apache2$ free -m total used free shared buffers cached Mem: 1997 1401 595 0 144 1017 -/+ buffers/cache: 238 1758 Swap: 2035 0 2035 and I have no indication that swapping occurs on the moments the site is slow. I have runned top and it does not appear to be a CPU issue. I have the impression that the spawning of a apache thread could maybe be the bottleneck but it is just a suggestion. Maybe this gives some extra information! EDIT: The problem seemed to be gone for some time but occurs again! And not only with Apache, also connecting using SSH takes a tremendous time, sometimes it takes up to 15 seconds before the keyphrase is asked for. Also scp works very slowly. The behavious is really unpredoctable and makes the server very hard to use. Any ideas...?
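    Since SSH logins and scp crawl as well, this looks more like the server stalling on name lookups than like Apache itself. A quick sketch to test that theory (the hostname is a placeholder for a LAN client): time a lookup from the server, check the configured resolvers, and see whether disabling sshd's reverse DNS makes logins instant.

      time nslookup some-lan-client
      cat /etc/resolv.conf
      # quick test for the SSH symptom only
      echo 'UseDNS no' | sudo tee -a /etc/ssh/sshd_config
      sudo service ssh restart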


  • Wireshark does not see interfaces (winXP)

    - by bua
    Short story: Wireshark is working....on my winXP-32b ... usage .... Long long time later Wireshark does not work It can't find any usefull interface (just VPN) ipconfig /all Ethernet adapter Wireless Network Connection: Media State . . . . . . . . . . . : Media disconnected Description . . . . . . . . . . . : Dell Wireless 1490 Dual Band WLAN Mini-Card Physical Address. . . . . . . . . : SOME VALID MAC Ethernet adapter eth0: Connection-specific DNS Suffix . : xxxx Description . . . . . . . . . . . : Broadcom 440x 10/100 Integrated Controller Physical Address. . . . . . . . . : SOME VALID MAC Dhcp Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes IP Address. . . . . . . . . . . . : 192.168.12.68 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : 192.168..... ..... Ethernet adapter Local Area Connection: Media State . . . . . . . . . . . : Media disconnected Description . . . . . . . . . . . : Fortinet virtual adapter Physical Address. . . . . . . . . : SOME VALID MAC Following steps didn't help: Several Wireshark re-installation Several LIBPCAP re installation SP3 for winXP Any ideas welcome.
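    A minimal check from an elevated Command Prompt, assuming WinPcap is still installed: Wireshark lists no interfaces when the WinPcap NPF driver is not running, and starting it by hand (or reinstalling WinPcap with "start automatically at boot" enabled) usually brings the adapters back.

      sc query npf
      net start npf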


  • Uncompiled WCF on IIS7: The type could not be found

    - by Jimmy
    Hello, I've been trying to follow this tutorial for deploying a WCF sample to IIS . I can't get it to work. This is a hosted site, but I do have IIS Manager access to the server. However, in step 2 of the tutorial, I can't "create a new IIS application that is physically located in this application directory". I can't seem to find a menu item, context menu item, or what not to create a new application. I've been right-clicking everywhere like crazy and still can't figure out how to create a new app. I suppose that's probably the root issue, but I tried a few other things (described below) just in case that actually is not the issue. This is "deployed" at http://test.com.cws1.my-hosting-panel.com/IISHostedCalcService/Service.svc . The error says: The type 'Microsoft.ServiceModel.Samples.CalculatorService', provided as the Service attribute value in the ServiceHost directive, or provided in the configuration element system.serviceModel/serviceHostingEnvironment/serviceActivations could not be found. I also tried to create a virtual dir (IISHostedCalc) in dotnetpanel that points to IISHostedCalcService . When I navigate to http://test.com.cws1.my-hosting-panel.com/IISHostedCalc/Service.svc , then there is a different error: This collection already contains an address with scheme http. There can be at most one address per scheme in this collection. As per the tutorial, there was no compiling involved; I just dropped the files on the server as follow inside the folder IISHostedCalcService: service.svc Web.config Service.cs service.svc contains: <%@ServiceHost language=c# Debug="true" Service="Microsoft.ServiceModel.Samples.CalculatorService"%> (I tried with quotes around the c# attribute, as this looks a little strange without quotes, but it made no difference) Web.config contains: <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.serviceModel> <services> <service name="Microsoft.ServiceModel.Samples.CalculatorService"> <!-- This endpoint is exposed at the base address provided by host: http://localhost/servicemodelsamples/service.svc --> <endpoint address="" binding="wsHttpBinding" contract="Microsoft.ServiceModel.Samples.ICalculator" /> <!-- The mex endpoint is explosed at http://localhost/servicemodelsamples/service.svc/mex --> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> </service> </services> </system.serviceModel> <system.web> <customErrors mode="Off"/> </system.web> </configuration> Service.cs contains: using System; using System.ServiceModel; namespace Microsoft.ServiceModel.Samples { [ServiceContract] public interface ICalculator { [OperationContract] double Add(double n1, double n2); [OperationContract] double Subtract(double n1, double n2); [OperationContract] double Multiply(double n1, double n2); [OperationContract] double Divide(double n1, double n2); } public class CalculatorService : ICalculator { public double Add(double n1, double n2) { return n1 + n2; } public double Subtract(double n1, double n2) { return n1 - n2; } public double Multiply(double n1, double n2) { return n1 * n2; } public double Divide(double n1, double n2) { return n1 / n2; } } }
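    One thing to check besides the missing IIS application: in the no-compile layout this tutorial uses, ASP.NET only compiles source files placed under App_Code, so a Service.cs sitting next to service.svc is never built and the CalculatorService type genuinely cannot be found. A sketch of the expected layout:

      IISHostedCalcService\
          service.svc
          Web.config
          App_Code\
              Service.cs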


  • How to run Spring 3.0 PetClinic in tomcat with Hibernate backed JPA

    - by Zwei Steinen
    OK, this probably is supposed to be the easiest thing in the world, but I've been trying for the entire day, and it's still not working.. Any help is highly appreciated! What I did: Downloaded Tomcat 6.0.26 & Spring 3.0.1 Downloaded PetClinic from https://src.springframework.org/svn/spring-samples/petclinic Built & deployed petclinic.war. Ran fine with default TopLink persistence. Edited webapps/WEB-INF/spring/applicationContext-jpa.xml and changed jpaVendorAdaptor from TopLink to Hibernate. Edited webapps/WEB-INF/web.xml and changed context-param from applicationContext-jdbc.xml to applicationContext-jpa.xml Copied everything in the Spring 3.0.1 distribution to TOMCAT_HOME/lib. Launched tomcat. Saw Caused by: java.lang.IllegalStateException: ClassLoader [org.apache.catalina.loader.WebappClassLoader] does NOT provide an 'addTransformer(ClassFileTransformer)' method. Specify a custom LoadTimeWeaver or start your Java virtual machine with Spring's agent: -javaagent:spring-agent.jar Uncommented line <Loader loaderClass="org.springframework.instrument.classloading.tomcat.TomcatInstrumentableClassLoader"/> in webapps/META-INF/context.xml. Same error. Added that line to TOMCAT_HOME/context.xml Deployed without error. However, when I do something it will issue an error saying java.lang.NoClassDefFoundError: javax/transaction/SystemException at org.hibernate.ejb.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:39) at org.hibernate.ejb.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:34) at org.springframework.orm.jpa.JpaTransactionManager.createEntityManagerForTransaction(JpaTransactionManager.java:400) at org.springframework.orm.jpa.JpaTransactionManager.doBegin(JpaTransactionManager.java:321) at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:371) at org.springframework.transaction.interceptor.TransactionAspectSupport.createTransactionIfNecessary(TransactionAspectSupport.java:336) at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:102) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:202) at $Proxy34.findOwners(Unknown Source) at org.springframework.samples.petclinic.web.FindOwnersForm.processSubmit(FindOwnersForm.java:56) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.springframework.web.bind.annotation.support.HandlerMethodInvoker.doInvokeMethod(HandlerMethodInvoker.java:710) at org.springframework.web.bind.annotation.support.HandlerMethodInvoker.invokeHandlerMethod(HandlerMethodInvoker.java:167) at org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.invokeHandlerMethod(AnnotationMethodHandlerAdapter.java:414) at org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.handle(AnnotationMethodHandlerAdapter.java:402) at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:771) at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:716) at 
org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:647) at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:552) at javax.servlet.http.HttpServlet.service(HttpServlet.java:617) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:71) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:76) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) at java.lang.Thread.run(Thread.java:619) Caused by: java.lang.ClassNotFoundException: javax.transaction.SystemException at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1516) at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1361) at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320) ... 41 more I feel silly.. What am I missing?
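    The root failure is the ClassNotFoundException for javax.transaction.SystemException: Hibernate's JPA provider needs the JTA API classes and Tomcat does not ship them. A sketch of the usual fix (the jar location is an assumption; any JTA 1.1 API jar, such as the one bundled with the Hibernate distribution, will do):

      cp /path/to/jta-1.1.jar $CATALINA_HOME/lib/
      $CATALINA_HOME/bin/shutdown.sh && $CATALINA_HOME/bin/startup.sh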

    Read the article

  • System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse request failed with HTTP status 404

    - by John Galt
    I am trying to make some enhancements to a production web app. After quite a bit of unit testing on my WinXP IIS 5.1 development machine, everything works on my localhost so I used the Visual Studio 2008 PUBLISH dialog on my Dev PC to push the following projects to a staging server: the primary web app the "primary" webservice (the home page tries to invoke this WS) a "secondary" webservice (not yet a problem because home page does not invoke this WS) I get the following when I try to browse to the home page of the web app typing this into my browser: link text Server Error in '/zVersion2' Application. The request failed with HTTP status 404: Not Found. Description: An unhandled exception occurred during the execution of the current web request.Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Net.WebException: The request failed with HTTP status 404: Not Found. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [WebException: The request failed with HTTP status 404: Not Found.] System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall) +431289 System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) +204 ProxyZipeeeService.WSZipeee.Zipeee.GetMessageByType(Int32 iMsgType) in C:\Documents and Settings\johna\My Documents\Visual Studio 2008\Projects\ProxyZipeeeService\ProxyZipeeeService\Web References\WSZipeee\Reference.vb:2168 Zipeee.frmZipeee.LoadMessage() in C:\Documents and Settings\johna\My Documents\Visual Studio 2008\Projects\Zipeee\frmZipeee.aspx.vb:43 Zipeee.frmZipeee.Page_Load(Object sender, EventArgs e) in C:\Documents and Settings\johna\My Documents\Visual Studio 2008\Projects\Zipeee\frmZipeee.aspx.vb:33 System.Web.UI.Control.OnLoad(EventArgs e) +99 System.Web.UI.Control.LoadRecursive() +50 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +627 Version Information: Microsoft .NET Framework Version:2.0.50727.3607; ASP.NET Version:2.0.50727.3082 Here is a bit of the corresponding source code: Public wsZipeee As New ProxyZipeeeService.WSZipeee.Zipeee Dim dsStandardMsg As DataSet Private Sub Page_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load If Not Page.IsPostBack Then LoadMessage() End If End Sub Private Sub LoadMessage() Dim iCnt As Integer Dim iValue As Integer dsStandardMsg = wsZipeee.GetMessageByType(BizConstants.MsgType.Standard) End Sub I suspect I may have configured things incorrectly on the staging server. The staging server is Win Server 2003 ServicePack 2 running IIS 6.0. When I published the primary site and the 2 webservices on the staging server called MOJITO I created the physical directories for each on the D drive. Then using INETMGR, I configured the following virtual directories: zVersion2 zVersion2wsSQL zVersion2wsEmergency All of the above are configured to use a new application pool I setup and named zVersion2aspNet20. The default web site for this machine MOJITO is configured to use ASP.NET 1.1 and the IP address is set to (All Unassigned). 
The production versions of the latter 2 webservices run on the MOJITO machine (named ZipeeeService and EmergencyService respectively). Can my staging versions of the above webservices (named zVersion2wsSQL and zVersion2wsEmergency respectively) co-exist on the same web server with the same IP address? Please note that when I test the zVersion2wsSQL webservice independently (from INETMGR right-mouse and click Browse) it works as expected (i.e. presenting all the methods of the webservice) like this snippet: GetMessageByType MessageName="Get_x0020_Message_x0020_By_x0020_Type" I can test this webmethod by clicking on it and it presents the Test dialog (because it takes a simple datatype and I am invoking it on localhost (i.e. MOJITO): **Get Message By Type** **Test** To test the operation using the HTTP POST protocol, click the 'Invoke' button. Parameter Value iMsgType: _______ [INVOKE button] SOAP 1.1 ....etc. I fear I may have rambled with too much information so I will stop but I hope someone can help me as I cannot understand why this request results in a "not found". Thanks.
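    One hedged guess rather than a verified cause: the Web Reference proxy bakes the URL it was generated against into Reference.vb, so the copy deployed to MOJITO may still be calling the old development path and getting IIS's 404 back. Setting the proxy's Url explicitly at runtime rules that out; the appSettings key and the .asmx path below are made-up examples:

    ' Sketch only: point the generated proxy at the staging web service before the first call.
    Public wsZipeee As New ProxyZipeeeService.WSZipeee.Zipeee

    Private Sub InitProxy()
        Dim serviceUrl As String = System.Configuration.ConfigurationManager.AppSettings("ZipeeeServiceUrl")
        If Not String.IsNullOrEmpty(serviceUrl) Then
            ' e.g. http://mojito/zVersion2wsSQL/Zipeee.asmx
            wsZipeee.Url = serviceUrl
        End If
    End Sub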

    Read the article

  • Delphi - WndProc() in thread never called

    - by Robert Oschler
    I had code that worked fine when running in the context of the main VCL thread. This code allocated it's own WndProc() in order to handle SendMessage() calls. I am now trying to move it to a background thread because I am concerned that the SendMessage() traffic is affecting the main VCL thread adversely. So I created a worker thread with the sole purpose of allocating the WndProc() in its thread Execute() method to ensure that the WndProc() existed in the thread's execution context. The WndProc() handles the SendMessage() calls as they come in. The problem is that the worker thread's WndProc() method is never triggered. Note, doExecute() is part of a template method that is called by my TThreadExtended class which is a descendant of Delphi's TThread. TThreadExtended implements the thread Execute() method and calls doExecute() in a loop. I triple-checked and doExecute() is being called repeatedly. Also note that I call PeekMessage() right after I create the WndProc() in order to make sure that Windows creates a message queue for the thread. However something I am doing is wrong since the WndProc() method is never triggered. Here's the code below: // ========= BEGIN: CLASS - TWorkerThread ======================== constructor TWorkerThread.Create; begin FWndProcHandle := 0; inherited Create(false); end; // --------------------------------------------------------------- // This call is the thread's Execute() method. procedure TWorkerThread.doExecute; var Msg: TMsg; begin // Create the WndProc() in our thread's context. if FWndProcHandle = 0 then begin FWndProcHandle := AllocateHWND(WndProc); // Call PeekMessage() to make sure we have a window queue. PeekMessage(Msg, FWndProcHandle, 0, 0, PM_NOREMOVE); end; if Self.Terminated then begin // Get rid of the WndProc(). myDeallocateHWnd(FWndProcHandle); end; // Sleep a bit to avoid hogging the CPU. Sleep(5); end; // --------------------------------------------------------------- procedure TWorkerThread.WndProc(Var Msg: TMessage); begin // THIS CODE IS NEVER CALLED. try if Msg.Msg = WM_COPYDATA then begin // Is LParam assigned? if (Msg.LParam > 0) then begin // Yes. Treat it as a copy data structure. with PCopyDataStruct(Msg.LParam)^ do begin ... // Here is where I do my work. end; end; // if Assigned(Msg.LParam) then end; // if Msg.Msg = WM_COPYDATA then finally Msg.Result := 1; end; // try() end; // --------------------------------------------------------------- procedure TWorkerThread.myDeallocateHWnd(Wnd: HWND); var Instance: Pointer; begin Instance := Pointer(GetWindowLong(Wnd, GWL_WNDPROC)); if Instance <> @DefWindowProc then begin // Restore the default windows procedure before freeing memory. SetWindowLong(Wnd, GWL_WNDPROC, Longint(@DefWindowProc)); FreeObjectInstance(Instance); end; DestroyWindow(Wnd); end; // --------------------------------------------------------------- // ========= END : CLASS - TWorkerThread ======================== Thanks, Robert
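    One hedged observation: a cross-thread SendMessage() is only delivered while the window's owning thread is inside a message-retrieval call (GetMessage/PeekMessage), and in the code above PeekMessage runs exactly once, so nothing is ever pumped to WndProc(). A rough sketch of a plain Execute override with a pump; it bypasses the TThreadExtended/doExecute template and ignores the separate caveat that AllocateHWnd itself is not thread-safe, so treat it as an illustration only (assumes Windows, Messages and Classes in the uses clause):

    // Rough sketch: the worker thread owns the handle and keeps retrieving
    // messages, which is what lets Windows deliver SendMessage() calls to WndProc().
    procedure TWorkerThread.Execute;
    var
      Msg: TMsg;
    begin
      FWndProcHandle := AllocateHWnd(WndProc);
      try
        while not Terminated do
        begin
          if PeekMessage(Msg, 0, 0, 0, PM_REMOVE) then
          begin
            TranslateMessage(Msg);
            DispatchMessage(Msg);
          end
          else
            Sleep(5); // avoid hogging the CPU between messages
        end;
      finally
        DeallocateHWnd(FWndProcHandle);
      end;
    end;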

    Read the article

  • Loading PNGs into OpenGL performance issues - Java & JOGL much slower than C# & Tao.OpenGL

    - by Edward Cresswell
    I am noticing a large performance difference between Java & JOGL and C# & Tao.OpenGL when both loading PNGs from storage into memory, and when loading that BufferedImage (java) or Bitmap (C# - both are PNGs on hard drive) 'into' OpenGL. This difference is quite large, so I assumed I was doing something wrong, however after quite a lot of searching and trying different loading techniques I've been unable to reduce this difference. With Java I get an image loaded in 248ms and loaded into OpenGL in 728ms The same on C# takes 54ms to load the image, and 34ms to load/create texture. The image in question above is a PNG containing transparency, sized 7200x255, used for a 2D animated sprite. I realise the size is really quite ridiculous and am considering cutting up the sprite, however the large difference is still there (and confusing). On the Java side the code looks like this: BufferedImage image = ImageIO.read(new File(fileName)); texture = TextureIO.newTexture(image, false); texture.setTexParameteri(GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR); texture.setTexParameteri(GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR); The C# code uses: Bitmap t = new Bitmap(fileName); t.RotateFlip(RotateFlipType.RotateNoneFlipY); Rectangle r = new Rectangle(0, 0, t.Width, t.Height); BitmapData bd = t.LockBits(r, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb); Gl.glBindTexture(Gl.GL_TEXTURE_2D, tID); Gl.glTexImage2D(Gl.GL_TEXTURE_2D, 0, Gl.GL_RGBA, t.Width, t.Height, 0, Gl.GL_BGRA, Gl.GL_UNSIGNED_BYTE, bd.Scan0); Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_MIN_FILTER, Gl.GL_LINEAR); Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_MAG_FILTER, Gl.GL_LINEAR); t.UnlockBits(bd); t.Dispose(); After quite a lot of testing I can only come to the conclusion that Java/JOGL is just slower here - PNG reading might not be as quick, or that I'm still doing something wrong. Thanks. Edit2: I have found that creating a new BufferedImage with format TYPE_INT_ARGB_PRE decreases OpenGL texture load time by almost half - this includes having to create the new BufferedImage, getting the Graphics2D from it and then rendering the previously loaded image to it. Edit3: Benchmark results for 5 variations. I wrote a small benchmarking tool, the following results come from loading a set of 33 pngs, most are very wide, 5 times. testStart: ImageIO.read(file) -> TextureIO.newTexture(image) result: avg = 10250ms, total = 51251 testStart: ImageIO.read(bis) -> TextureIO.newTexture(image) result: avg = 10029ms, total = 50147 testStart: ImageIO.read(file) -> TextureIO.newTexture(argbImage) result: avg = 5343ms, total = 26717 testStart: ImageIO.read(bis) -> TextureIO.newTexture(argbImage) result: avg = 5534ms, total = 27673 testStart: TextureIO.newTexture(file) result: avg = 10395ms, total = 51979 ImageIO.read(bis) refers to the technique described in James Branigan's answer below. argbImage refers to the technique described in my previous edit: img = ImageIO.read(file); argbImg = new BufferedImage(img.getWidth(), img.getHeight(), TYPE_INT_ARGB_PRE); g = argbImg.createGraphics(); g.drawImage(img, 0, 0, null); texture = TextureIO.newTexture(argbImg, false); Any more methods of loading (either images from file, or images to OpenGL) would be appreciated, I will update these benchmarks.
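    For reference, a small self-contained sketch of the TYPE_INT_ARGB_PRE conversion described in Edit2/Edit3; the class and method names here are made up, and the TextureIO call it feeds is the same one already used above:

    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.IOException;
    import javax.imageio.ImageIO;

    public final class TextureLoading {
        // Re-draws the decoded PNG into a premultiplied-alpha image, which cuts the
        // per-pixel conversion work done when the texture is handed to OpenGL.
        static BufferedImage toArgbPre(File file) throws IOException {
            BufferedImage img = ImageIO.read(file);
            BufferedImage argbImg = new BufferedImage(
                    img.getWidth(), img.getHeight(), BufferedImage.TYPE_INT_ARGB_PRE);
            Graphics2D g = argbImg.createGraphics();
            g.drawImage(img, 0, 0, null);
            g.dispose();
            return argbImg; // then, as in the post: TextureIO.newTexture(argbImg, false)
        }
    }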

    Read the article

  • WCF Data Service BeginSaveChanges not saving changes in Silverlight app

    - by Enigmativity
    I'm having a hell of a time getting WCF Data Services to work within Silverlight. I'm using the VS2010 RC. I've struggled with the cross domain issue requiring the use of clientaccesspolicy.xml & crossdomain.xml files in the web server root folder, but I just couldn't get this to work. I've resorted to putting both the Silverlight Web App & the WCF Data Service in the same project to get past this issue, but any advice here would be good. But now that I can actually see my data coming from the database and being displayed in a data grid within Silverlight I thought my troubles were over - but no. I can edit the data and the in-memory entity is changing, but when I call BeginSaveChanges (with the appropriate async EndSaveChangescall) I get no errors, but no data updates in the database. Here's my WCF Data Services code: public class MyDataService : DataService<MyEntities> { public static void InitializeService(DataServiceConfiguration config) { config.SetEntitySetAccessRule("*", EntitySetRights.All); config.SetServiceOperationAccessRule("*", ServiceOperationRights.All); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } protected override void OnStartProcessingRequest(ProcessRequestArgs args) { base.OnStartProcessingRequest(args); HttpContext context = HttpContext.Current; HttpCachePolicy c = HttpContext.Current.Response.Cache; c.SetCacheability(HttpCacheability.ServerAndPrivate); c.SetExpires(HttpContext.Current.Timestamp.AddSeconds(60)); c.VaryByHeaders["Accept"] = true; c.VaryByHeaders["Accept-Charset"] = true; c.VaryByHeaders["Accept-Encoding"] = true; c.VaryByParams["*"] = true; } } I've pinched the OnStartProcessingRequest code from Scott Hanselman's article Creating an OData API for StackOverflow including XML and JSON in 30 minutes. Here's my code from my Silverlight app: private MyEntities _wcfDataServicesEntities; private CollectionViewSource _customersViewSource; private ObservableCollection<Customer> _customers; private void UserControl_Loaded(object sender, RoutedEventArgs e) { if (!System.ComponentModel.DesignerProperties.GetIsInDesignMode(this)) { _wcfDataServicesEntities = new MyEntities(new Uri("http://localhost:7156/MyDataService.svc/")); _customersViewSource = this.Resources["customersViewSource"] as CollectionViewSource; DataServiceQuery<Customer> query = _wcfDataServicesEntities.Customer; query.BeginExecute(result => { _customers = new ObservableCollection<Customer>(); Array.ForEach(query.EndExecute(result).ToArray(), _customers.Add); Dispatcher.BeginInvoke(() => { _customersViewSource.Source = _customers; }); }, null); } } private void button1_Click(object sender, RoutedEventArgs e) { _wcfDataServicesEntities.BeginSaveChanges(r => { var response = _wcfDataServicesEntities.EndSaveChanges(r); string[] results = new[] { response.BatchStatusCode.ToString(), response.IsBatchResponse.ToString() }; _customers[0].FinAssistCompanyName = String.Join("|", results); }, null); } The response string I get back data binds to my grid OK and shows "-1|False". My intent is to get a proof-of-concept working here and then do the appropriate separation of concerns to turn this into a simple line-of-business app. I've spent hours and hours on this. I'm being driven insane. Any ideas how to get this working?
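    One hedged guess about the silent no-op: the generated DataServiceContext does not track property changes on entities sitting in a plain ObservableCollection, so BeginSaveChanges sends an empty change set unless each edited entity is flagged with UpdateObject first (or the collection is replaced by a change-tracking DataServiceCollection<T>). A sketch of the button handler with that one extra step; flagging every customer is just for brevity, ideally only the ones actually edited would be passed:

    private void button1_Click(object sender, RoutedEventArgs e)
    {
        // Tell the context which entities changed; without this there is nothing to save.
        foreach (Customer customer in _customers)
        {
            _wcfDataServicesEntities.UpdateObject(customer);
        }

        _wcfDataServicesEntities.BeginSaveChanges(r =>
        {
            var response = _wcfDataServicesEntities.EndSaveChanges(r);
            // Inspect response.BatchStatusCode and the individual operation responses here.
        }, null);
    }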

    Read the article

  • Class Issue (The type XXX is already defined)

    - by Android Stack
    I have listview app exploring cities each row point to diffrent city , in each city activity include button when clicked open new activity which is infinite gallery contains pics of that city , i add infinite gallery to first city and work fine , when i want to add it to the second city , it gave me red mark error in the class as follow : 1- The type InfiniteGalleryAdapter is already defined. 2-The type InfiniteGallery is already defined. i tried to change class name with the same result ,i delete R.jave and eclipse rebuild it with same result also i uncheck the java builder from project properties ,get same red mark error. please any help and advice will be appreciated thanks My Code : public class SecondCity extends Activity { /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); Boolean customTitleSupported = requestWindowFeature(Window.FEATURE_CUSTOM_TITLE); // Set the layout to use setContentView(R.layout.main); if (customTitleSupported) { getWindow().setFeatureInt(Window.FEATURE_CUSTOM_TITLE,R.layout.custom_title); TextView tv = (TextView) findViewById(R.id.tv); Typeface face=Typeface.createFromAsset(getAssets(),"BFantezy.ttf"); tv.setTypeface(face); tv.setText("MY PICTURES"); } InfiniteGallery galleryOne = (InfiniteGallery) findViewById(R.id.galleryOne); galleryOne.setAdapter(new InfiniteGalleryAdapter(this)); } } class InfiniteGalleryAdapter extends BaseAdapter { **//red mark here (InfiniteGalleryAdapter)** private Context mContext; public InfiniteGalleryAdapter(Context c, int[] imageIds) { this.mContext = c; } public int getCount() { return Integer.MAX_VALUE; } public Object getItem(int position) { return position; } public long getItemId(int position) { return position; } private LayoutInflater inflater=null; public InfiniteGalleryAdapter(Context a) { this.mContext = a; inflater = (LayoutInflater)mContext.getSystemService ( Context.LAYOUT_INFLATER_SERVICE); } public class ViewHolder{ public TextView text; public ImageView image; } private int[] images = { R.drawable.pic_1, R.drawable.pic_2, R.drawable.pic_3, R.drawable.pic_4, R.drawable.pic_5 }; private String[] name = { "This is first picture (1) " , "This is second picture (2)", "This is third picture (3)", "This is fourth picture (4)", " This is fifth picture (5)", }; public View getView(int position, View convertView, ViewGroup parent) { ImageView i = getImageView(); int itemPos = (position % images.length); try { i.setImageResource(images[itemPos]); ((BitmapDrawable) i.getDrawable()). setAntiAlias (true); } catch (OutOfMemoryError e) { Log.e("InfiniteGalleryAdapter", "Out of memory creating imageview. 
Using empty view.", e); } View vi=convertView; ViewHolder holder; if(convertView==null){ vi = inflater.inflate(R.layout.gallery_items, null); holder=new ViewHolder(); holder.text=(TextView)vi.findViewById(R.id.textView1); holder.image=(ImageView)vi.findViewById(R.id.image); vi.setTag(holder); } else holder=(ViewHolder)vi.getTag(); holder.text.setText(name[itemPos]); final int stub_id=images[itemPos]; holder.image.setImageResource(stub_id); return vi; } private ImageView getImageView() { ImageView i = new ImageView(mContext); return i; } } class InfiniteGallery extends Gallery { **//red mark here (InfiniteGallery)** public InfiniteGallery(Context context) { super(context); init(); } public InfiniteGallery(Context context, AttributeSet attrs) { super(context, attrs); init(); } public InfiniteGallery(Context context, AttributeSet attrs, int defStyle) { super(context, attrs, defStyle); init(); } private void init(){ // These are just to make it look pretty setSpacing(25); setHorizontalFadingEdgeEnabled(false); } public void setResourceImages(int[] name){ setAdapter(new InfiniteGalleryAdapter(getContext(), name)); setSelection((getCount() / 2)); } }
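    A hedged reading of the error: the first city's activity file presumably already declares top-level classes named InfiniteGalleryAdapter and InfiniteGallery, and pasting the same declarations into SecondCity.java defines them a second time in the same package, which is exactly what "The type X is already defined" reports. The usual layout is one shared .java file per class, with every city activity only using them; a minimal sketch (the package name is illustrative, and the gallery classes keep the bodies shown above but live in their own InfiniteGallery.java and InfiniteGalleryAdapter.java files):

    // File: SecondCity.java - declares only the activity; the gallery classes are
    // defined once, in their own files, and shared by FirstCity, SecondCity, etc.
    package com.example.cities;

    import android.app.Activity;
    import android.os.Bundle;

    public class SecondCity extends Activity {
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);
            InfiniteGallery galleryOne = (InfiniteGallery) findViewById(R.id.galleryOne);
            galleryOne.setAdapter(new InfiniteGalleryAdapter(this));
        }
    }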

    Read the article

  • I created a custom (WPF) DataGridBoundColumn and get unexpected behaviour, what am I missing?

    - by aspic
    Hi, I am using a DataGrid (from Microsoft.Windows.Controls.DataGrid) to display items on and on this DataGrid I use a custom Column which extends DataGridBoundColumn. I have bound an ObservableCollection to the ItemSource of the DataGrid. Conversation is one of my own custom datatypes which a (among other things) has a boolean called active. I bound this boolean to the DataGrid as follows: DataGridActiveImageColumn test = new DataGridActiveImageColumn(); test.Header = "Active"; Binding binding1 = new Binding("Active"); test.Binding = binding1; ConversationsDataGrid.Columns.Add(test); My custom DataGridBoundColumn DataGridActiveImageColumn overrides the GenerateElement method to let it return an Image depending on whether the conversation it is called for is active or not. The code for this is: namespace Microsoft.Windows.Controls { class DataGridActiveImageColumn : DataGridBoundColumn { protected override FrameworkElement GenerateElement(DataGridCell cell, object dataItem) { // Create Image Element Image myImage = new Image(); myImage.Width = 10; bool active=false; if (dataItem is Conversation) { Conversation c = (Conversation)dataItem; active = c.Active; } BitmapImage myBitmapImage = new BitmapImage(); // BitmapImage.UriSource must be in a BeginInit/EndInit block myBitmapImage.BeginInit(); if (active) { myBitmapImage.UriSource = new Uri(@"images\active.png", UriKind.Relative); } else { myBitmapImage.UriSource = new Uri(@"images\inactive.png", UriKind.Relative); } // To save significant application memory, set the DecodePixelWidth or // DecodePixelHeight of the BitmapImage value of the image source to the desired // height or width of the rendered image. If you don't do this, the application will // cache the image as though it were rendered as its normal size rather then just // the size that is displayed. // Note: In order to preserve aspect ratio, set DecodePixelWidth // or DecodePixelHeight but not both. myBitmapImage.DecodePixelWidth = 10; myBitmapImage.EndInit(); myImage.Source = myBitmapImage; return myImage; } protected override FrameworkElement GenerateEditingElement(DataGridCell cell, object dataItem) { throw new NotImplementedException(); } } } All this works as expected, and when during the running of the program the active boolean of conversations changes, this is automatically updated in the DataGrid. However: When there are more entries on the DataGrid then fit at any one time (and vertical scrollbars are added) the behavior with respect to the column for all the conversations is strange. The conversations that are initially loaded are correct, but when I use the scrollbar of the DataGrid conversations that enter the view seems to have a random status (although more inactive than active ones, which corresponds to the actual ratio). When I scroll back up, the active images of the Conversations initially shown (before scrolling) are not correct anymore as well. When I replace my custom DataGridBoundColumn class with (for instance) DataGridCheckBoxColumn it works as intended so my extension of the DataGridBoundColumn class must be incomplete. Personally I think it has something to do with the following: From the MSDN page on the GenerateElement method (http://msdn.microsoft.com/en-us/library/system.windows.controls.datagridcolumn.generateelement%28VS.95%29.aspx): Return Value Type: System.Windows. FrameworkElement A new, read-only element that is bound to the column's Binding property value. I do return a new element (the image) but it is not bound to anything. 
I am not quite sure what I should do. Should I bind the Image to something? To what exactly? And why? (I have been experimenting, but was unsuccessful thus far, hence this post) Thanks in advance.
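    The scrolling symptom looks like container recycling: the DataGrid reuses cell elements while virtualizing, and because Image.Source is assigned once from dataItem inside GenerateElement, a recycled cell keeps whatever picture it was first given. Binding the Image to the row data instead keeps it in sync when the cell is reused. A sketch under that assumption; the "Active" path mirrors the binding already set on the column, the converter name is made up, and the usual System.Windows.Data / System.Windows.Media.Imaging namespaces are assumed:

    protected override FrameworkElement GenerateElement(DataGridCell cell, object dataItem)
    {
        Image image = new Image { Width = 10 };

        // Bind Source to the row's Active flag so recycled cells update automatically.
        Binding binding = new Binding("Active") { Converter = new ActiveToImageSourceConverter() };
        BindingOperations.SetBinding(image, Image.SourceProperty, binding);
        return image;
    }

    public class ActiveToImageSourceConverter : IValueConverter
    {
        public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
        {
            bool active = (value is bool) && (bool)value;
            BitmapImage bmp = new BitmapImage();
            bmp.BeginInit();
            bmp.UriSource = active
                ? new Uri(@"images\active.png", UriKind.Relative)
                : new Uri(@"images\inactive.png", UriKind.Relative);
            bmp.DecodePixelWidth = 10;
            bmp.EndInit();
            return bmp;
        }

        public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
        {
            throw new NotSupportedException();
        }
    }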

    Read the article

  • .NET SerialPort DataReceived event not firing

    - by Klay
    I have a WPF test app for evaluating event-based serial port communication (vs. polling the serial port). The problem is that the DataReceived event doesn't seem to be firing at all. I have a very basic WPF form with a TextBox for user input, a TextBlock for output, and a button to write the input to the serial port. Here's the code: public partial class Window1 : Window { SerialPort port; public Window1() { InitializeComponent(); port = new SerialPort("COM2", 9600, Parity.None, 8, StopBits.One); port.DataReceived += new SerialDataReceivedEventHandler(port_DataReceived); port.Open(); } void port_DataReceived(object sender, SerialDataReceivedEventArgs e) { Debug.Print("receiving!"); string data = port.ReadExisting(); Debug.Print(data); outputText.Text = data; } private void Button_Click(object sender, RoutedEventArgs e) { Debug.Print("sending: " + inputText.Text); port.WriteLine(inputText.Text); } } Now, here are the complicating factors: The laptop I'm working on has no serial ports, so I'm using a piece of software called Virtual Serial Port Emulator to setup a COM2. VSPE has worked admirably in the past, and it's not clear why it would only malfunction with .NET's SerialPort class, but I mention it just in case. When I hit the button on my form to send the data, my Hyperterminal window (connected on COM2) shows that the data is getting through. Yes, I disconnect Hyperterminal when I want to test my form's ability to read the port. I've tried opening the port before wiring up the event. No change. I've read through another post here where someone else is having a similar problem. None of that info has helped me in this case. EDIT: Here's the console version (modified from http://mark.michaelis.net/Blog/TheBasicsOfSystemIOPortsSerialPort.aspx): class Program { static SerialPort port; static void Main(string[] args) { port = new SerialPort("COM2", 9600, Parity.None, 8, StopBits.One); port.DataReceived += new SerialDataReceivedEventHandler(port_DataReceived); port.Open(); string text; do { text = Console.ReadLine(); port.Write(text + "\r\n"); } while (text.ToLower() != "q"); } public static void port_DataReceived(object sender, SerialDataReceivedEventArgs args) { string text = port.ReadExisting(); Console.WriteLine("received: " + text); } } This should eliminate any concern that it's a Threading issue (I think). This doesn't work either. Again, Hyperterminal reports the data sent through the port, but the console app doesn't seem to fire the DataReceived event. EDIT #2: I realized that I had two separate apps that should both send and receive from the serial port, so I decided to try running them simultaneously... If I type into the console app, the WPF app DataReceived event fires, with the expected threading error (which I know how to deal with). If I type into the WPF app, the console app DataReceived event fires, and it echoes the data. I'm guessing the issue is somewhere in my use of the VSPE software, which is set up to treat one serial port as both input and output. And through some weirdness of the SerialPort class, one instance of a serial port can't be both the sender and receiver. Anyway, I think it's solved.
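    For what it's worth, a small sketch of the arrangement edit #2 describes, under the assumption that the emulator exposes a connected pair (COM2 bridged to COM3 here; both names are just examples). One process can then write on one end and watch DataReceived fire on the other:

    using System;
    using System.IO.Ports;

    class PairedPortLoopback
    {
        static void Main()
        {
            // Two ends of one emulated pair: bytes written on COM2 arrive on COM3.
            SerialPort sender = new SerialPort("COM2", 9600, Parity.None, 8, StopBits.One);
            SerialPort receiver = new SerialPort("COM3", 9600, Parity.None, 8, StopBits.One);

            receiver.DataReceived += delegate(object s, SerialDataReceivedEventArgs args)
            {
                Console.WriteLine("received: " + receiver.ReadExisting());
            };

            sender.Open();
            receiver.Open();
            sender.WriteLine("ping");

            Console.ReadLine(); // DataReceived fires on a thread-pool thread, so keep the process alive
            receiver.Close();
            sender.Close();
        }
    }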

    Read the article

  • iPhone crash on CoreData save

    - by davetron5000
    This is a different situation than this question, as the solution provided doesn't work and the stack is different. Periodical crash when I save data using coredata. The stack trace isn't 100% clear on where this is happening, but I'm certain it's this routine that's being called. It's either the save: in this method or the one following. Code: -(void)saveWine { if ([self validInfo]) { Wine *wine = (Wine *)wineToEdit; if (wine == nil) { wine = (Wine *)[NSEntityDescription insertNewObjectForEntityForName:@"Wine" inManagedObjectContext:self.managedObjectContext]; } wine.uuid = [Utils createUUID]; wine.name = self.wineNameField.text; wine.vineyard = self.vineyardField.text; wine.vintage = [[self numberFormatter] numberFromString:self.vintageField.text]; wine.timeStamp = [NSDate date]; wine.rating = [NSNumber numberWithInt:self.ratingControl.selectedSegmentIndex]; wine.partnerRating = [NSNumber numberWithInt:self.partnerRatingControl.selectedSegmentIndex]; wine.varietal = self.currentVarietal; wine.tastingNotes = self.currentTastingNotes; wine.dateTasted = self.currentDateTasted; wine.tastingLocation = [[ReferenceDataAccessor defaultReferenceDataAccessor] addEntityForType:TASTING_LOCATION withName:self.currentWhereTasted]; id type = [[ReferenceDataAccessor defaultReferenceDataAccessor] entityForType:WINE_TYPE withOrdinal:self.typeControl.selectedSegmentIndex]; wine.type = type; NSError *error; NSLog(@"Saving %@",wine); if (![self.managedObjectContext save:&error]) { [Utils showAlertMessage:@"There was a problem saving your wine; try restarting the app" withTitle:@"Problem saving"]; NSLog(@"Error while saving new wine %@, %@", error, [error userInfo]); } } else { NSLog(@"ERROR - someone is calling saveWine with invalid info!!"); } } Code for addEntityForType:withName:: -(id)addEntityForType:(NSString *)type withName:(NSString *)name { if ([Utils isStringBlank:name]) { return nil; } id existing = [[ReferenceDataAccessor defaultReferenceDataAccessor] entityForType:type withName:name]; if (existing != nil) { NSLog(@"%@ with name %@ already exists",type,name); return existing; } NSManagedObject *newEntity = [NSEntityDescription insertNewObjectForEntityForName:type inManagedObjectContext:self.managedObjectContext]; [newEntity setValue:name forKey:@"name"]; NSError *error; if (![self.managedObjectContext save:&error]) { [Utils showAlertMessage:[NSString stringWithFormat:@"There was a problem saving a %@",type] withTitle:@"Database Probem"]; [Utils logErrorFully:error forOperation:[NSString stringWithFormat:@"saving new %@",type ]]; } return newEntity; } Stack trace: Thread 0 Crashed: 0 libSystem.B.dylib 0x311de2d4 __kill + 8 1 libSystem.B.dylib 0x311de2c4 kill + 4 2 libSystem.B.dylib 0x311de2b6 raise + 10 3 libSystem.B.dylib 0x311f2d72 abort + 50 4 libstdc++.6.dylib 0x301dea20 __gnu_cxx::__verbose_terminate_handler() + 376 5 libobjc.A.dylib 0x319a2594 _objc_terminate + 104 6 libstdc++.6.dylib 0x301dcdf2 __cxxabiv1::__terminate(void (*)()) + 46 7 libstdc++.6.dylib 0x301dce46 std::terminate() + 10 8 libstdc++.6.dylib 0x301dcf16 __cxa_throw + 78 9 libobjc.A.dylib 0x319a14c4 objc_exception_throw + 64 10 CoreData 0x3526e83e -[NSManagedObjectContext save:] + 1098 11 Wine Brain 0x0000651e 0x1000 + 21790 12 Wine Brain 0x0000525c 0x1000 + 16988 13 Wine Brain 0x00004894 0x1000 + 14484 14 Wine Brain 0x00008716 0x1000 + 30486 15 CoreFoundation 0x31477fe6 -[NSObject(NSObject) performSelector:withObject:withObject:] + 18 16 UIKit 0x338c14a6 -[UIApplication sendAction:to:from:forEvent:] + 78 17 UIKit 0x3395c7ae 
-[UIBarButtonItem(UIInternal) _sendAction:withEvent:] + 86 18 CoreFoundation 0x31477fe6 -[NSObject(NSObject) performSelector:withObject:withObject:] + 18 19 UIKit 0x338c14a6 -[UIApplication sendAction:to:from:forEvent:] + 78 20 UIKit 0x338c1446 -[UIApplication sendAction:toTarget:fromSender:forEvent:] + 26 21 UIKit 0x338c1418 -[UIControl sendAction:to:forEvent:] + 32 22 UIKit 0x338c116a -[UIControl(Internal) _sendActionsForEvents:withEvent:] + 350 23 UIKit 0x338c19c8 -[UIControl touchesEnded:withEvent:] + 336 24 UIKit 0x338b734e -[UIWindow _sendTouchesForEvent:] + 362 25 UIKit 0x338b6cc8 -[UIWindow sendEvent:] + 256 26 UIKit 0x338a1fc0 -[UIApplication sendEvent:] + 292 27 UIKit 0x338a1900 _UIApplicationHandleEvent + 5084 28 GraphicsServices 0x35d66efc PurpleEventCallback + 660 29 CoreFoundation 0x314656f8 __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__ + 20 30 CoreFoundation 0x314656bc __CFRunLoopDoSource1 + 160 31 CoreFoundation 0x31457f76 __CFRunLoopRun + 514 32 CoreFoundation 0x31457c80 CFRunLoopRunSpecific + 224 33 CoreFoundation 0x31457b88 CFRunLoopRunInMode + 52 34 GraphicsServices 0x35d664a4 GSEventRunModal + 108 35 GraphicsServices 0x35d66550 GSEventRun + 56 36 UIKit 0x338d5322 -[UIApplication _run] + 406 37 UIKit 0x338d2e8c UIApplicationMain + 664 38 Wine Brain 0x000021ba 0x1000 + 4538 39 Wine Brain 0x00002184 0x1000 + 4484 I have no idea why my app's memory locations aren't being symbolocated, but the code paths lead to only two manavedObjectContext save: calls. The time this happend, addEntityForType was called all the way through, creating a new object for the "whereTasted" entity, before the final save: on the entire wine object. When I go through the same procedure again, it works fine. This leads me to believe it's something to do with the app having been run for a while when adding a new location, but I'm not sure. Any thoughts on how I can debug this and get more info the next time this happens?
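    Given that the trace shows objc_exception_throw coming out of -[NSManagedObjectContext save:] and reaching abort(), the context is raising an Objective-C exception rather than returning NO with an NSError, so the existing error branch never runs. A throwaway diagnostic sketch for the next repro, wrapping just this one save to capture the exception's name and reason (an exception breakpoint on objc_exception_throw gives the same information):

    // Diagnostic only: log the underlying exception instead of letting it hit abort().
    NSError *error = nil;
    @try {
        if (![self.managedObjectContext save:&error]) {
            NSLog(@"Error while saving new wine %@, %@", error, [error userInfo]);
        }
    }
    @catch (NSException *exception) {
        NSLog(@"save: raised %@: %@ (userInfo: %@)",
              [exception name], [exception reason], [exception userInfo]);
    }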

    Read the article

  • Grails - Simple hasMany Problem - Using CheckBoxes rather than HTML Select in create.gsp

    - by gav
    My problem is this: I want to create a grails domain instance, defining the 'Many' instances of another domain that it has. I have the actual source in a Google Code Project but the following should illustrate the problem. class Person { String name static hasMany[skills:Skill] static constraints = { id (visible:false) skills (nullable:false, blank:false) } } class Skill { String name String description static constraints = { id (visible:false) name (nullable:false, blank:false) description (nullable:false, blank:false) } } If you use this model and def scaffold for the two Controllers then you end up with a form like this that doesn't work; My own attempt to get this to work enumerates the Skills as checkboxes and looks like this; But when I save the Volunteer the skills are null! This is the code for my save method; def save = { log.info "Saving: " + params.toString() def skills = params.skills log.info "Skills: " + skills def volunteerInstance = new Volunteer(params) log.info volunteerInstance if (volunteerInstance.save(flush: true)) { flash.message = "${message(code: 'default.created.message', args: [message(code: 'volunteer.label', default: 'Volunteer'), volunteerInstance.id])}" redirect(action: "show", id: volunteerInstance.id) log.info volunteerInstance } else { render(view: "create", model: [volunteerInstance: volunteerInstance]) } } This is my log output (I have custom toString() methods); 2010-05-10 21:06:41,494 [http-8080-3] INFO bumbumtrain.VolunteerController - Saving: ["skills":["1", "2"], "name":"Ian", "_skills":["", ""], "create":"Create", "action":"save", "controller":"volunteer"] 2010-05-10 21:06:41,495 [http-8080-3] INFO bumbumtrain.VolunteerController - Skills: [1, 2] 2010-05-10 21:06:41,508 [http-8080-3] INFO bumbumtrain.VolunteerController - Volunteer[ id: null | Name: Ian | Skills [Skill[ id: 1 | Name: Carpenter ] , Skill[ id: 2 | Name: Sound Engineer ] ]] Note that in the final log line the right Skills have been picked up and are part of the object instance. When the volunteer is saved the 'Skills' are ignored and not commited to the database despite the in memory version created clearly does have the items. Is it not possible to pass the Skills at construction time? There must be a way round this? I need a single form to allow a person to register but I want to normalise the data so that I can add more skills at a later time. If you think this should 'just work' then a link to a working example would be great. If I use the HTML Select then it works fine! 
Such as the following to make the Create page; <tr class="prop"> <td valign="top" class="name"> <label for="skills"><g:message code="volunteer.skills.label" default="Skills" /></label> </td> <td valign="top" class="value ${hasErrors(bean: volunteerInstance, field: 'skills', 'errors')}"> <g:select name="skills" from="${uk.co.bumbumtrain.Skill.list()}" multiple="yes" optionKey="id" size="5" value="${volunteerInstance?.skills}" /> </td> </tr> But I need it to work with checkboxes like this; <tr class="prop"> <td valign="top" class="name"> <label for="skills"><g:message code="volunteer.skills.label" default="Skills" /></label> </td> <td valign="top" class="value ${hasErrors(bean: volunteerInstance, field: 'skills', 'errors')}"> <g:each in="${skillInstanceList}" status="i" var="skillInstance"> <label for="${skillInstance?.name}"><g:message code="${skillInstance?.name}.label" default="${skillInstance?.name}" /></label> <g:checkBox name="skills" value="${skillInstance?.id.toString()}"/> </g:each> </td> </tr> The log output is exactly the same! With both style of form the Volunteer instance is created with the Skills correctly referenced in the 'Skills' variable. When saving, the latter fails with a null reference exception as shown at the top of this question. Hope this makes sense, thanks in advance! Gav
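    A hedged sketch of one common way to handle the checkbox case: resolve the submitted ids in the save action instead of relying on new Volunteer(params) to wire up the association (this assumes the domain really declares static hasMany = [skills: Skill]; the bracket form in the snippet above would not compile):

    def save = {
        def volunteerInstance = new Volunteer(params)

        // params.list() copes with zero, one or many checked boxes.
        def skillIds = params.list('skills').collect { it.toLong() }
        Skill.getAll(skillIds).each { volunteerInstance.addToSkills(it) }

        if (volunteerInstance.save(flush: true)) {
            flash.message = "Volunteer ${volunteerInstance.id} created"
            redirect(action: "show", id: volunteerInstance.id)
        } else {
            render(view: "create", model: [volunteerInstance: volunteerInstance])
        }
    }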

    Read the article

  • no namenode error in pseudo-mode

    - by Anshu Basia
    I'm new to hadoop and is in learning phase. As per Hadoop Definitve guide, i have set up my hadoop in pseudo distributed mode and everything was working fine. I was even able to execute all the examples from chapter 3 yesterday. Today, when i rebooted my unix and tried to run start-dfs.sh and then tried http://localhost/50070....it is showing error and when i try to stop dfs (stop-dfs.sh) it says no namenode to stop. I have been googling the issue but no result. Also, when i format my namenode again...everything starts working fine and i'm able to connect to the localhost/50070 and even replicate files and directories in hdfs but as soon as i restart my linux and try to connect to hdfs the same problem comes up. Below is the error log: ************************************************************/ 2011-06-22 15:45:55,249 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG: /************************************************************ STARTUP_MSG: Starting NameNode STARTUP_MSG: host = ubuntu/127.0.1.1 STARTUP_MSG: args = [] STARTUP_MSG: version = 0.20.203.0 STARTUP_MSG: build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 -r 1099333; compiled by 'oom' on Wed May 4 07:57:50 PDT 2011 ************************************************************/ 2011-06-22 15:45:56,383 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties 2011-06-22 15:45:56,455 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered. 2011-06-22 15:45:56,494 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s). 2011-06-22 15:45:56,494 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started 2011-06-22 15:45:57,007 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered. 2011-06-22 15:45:57,031 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists! 2011-06-22 15:45:57,059 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered. 2011-06-22 15:45:57,070 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered. 
2011-06-22 15:45:57,374 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 32-bit 2011-06-22 15:45:57,374 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 19.33375 MB 2011-06-22 15:45:57,374 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^22 = 4194304 entries 2011-06-22 15:45:57,374 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304 2011-06-22 15:45:57,854 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=anshu 2011-06-22 15:45:57,854 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup 2011-06-22 15:45:57,854 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true 2011-06-22 15:45:57,868 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100 2011-06-22 15:45:57,869 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s) 2011-06-22 15:45:58,769 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean 2011-06-22 15:45:58,809 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times **2011-06-22 15:45:58,825 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /tmp/hadoop-anshu/dfs/name does not exist. 2011-06-22 15:45:58,827 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed. org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-anshu/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible. at org.apache.h**adoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:291) at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:353) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:254) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:434) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1153) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1162) 2011-06-22 15:45:58,828 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-anshu/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible. 
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:291) at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:353) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:254) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:434) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1153) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1162) 2011-06-22 15:45:58,829 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: /************************************************************ SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1 ************************************************************/ Any help is appreciated Thank-you
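    What the log seems to point at: the NameNode image lives under /tmp/hadoop-anshu/dfs/name, the default ${hadoop.tmp.dir}/dfs/name, and many Linux setups clear /tmp on reboot, which is why a fresh format "fixes" things until the next restart. A hedged sketch of the usual remedy for 0.20.x, moving the storage directories somewhere persistent (the /home/anshu paths are examples), followed by one final format of the empty new location:

    <!-- core-site.xml: keep everything Hadoop writes out of /tmp -->
    <property>
      <name>hadoop.tmp.dir</name>
      <value>/home/anshu/hadoop-data</value>
    </property>

    <!-- or pin the HDFS directories explicitly in hdfs-site.xml -->
    <property>
      <name>dfs.name.dir</name>
      <value>/home/anshu/hadoop-data/dfs/name</value>
    </property>
    <property>
      <name>dfs.data.dir</name>
      <value>/home/anshu/hadoop-data/dfs/data</value>
    </property>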

    Read the article

  • Trouble determining proper decoding of a REST response from an ArcGIS REST service using IHttpModule

    - by Ryan Taylor
    First a little background on what I am trying to achieve. I have an application that is utilizing REST services served by ArcGIS Server and IIS7. The REST services return data in one of several different formats. I am requesting a JSON response. I want to be able to modify the response (remove or add parameters) before the response is sent to the client. However, I am having difficulty converting the stream to a string that I can modify. To that end, I have implemented the following code in order to try to inspect the stream. SecureModule.cs using System; using System.Web; namespace SecureModuleTest { public class SecureModule : IHttpModule { public void Init(HttpApplication context) { context.BeginRequest += new EventHandler(OnBeginRequest); } public void Dispose() { } public void OnBeginRequest(object sender, EventArgs e) { HttpApplication application = (HttpApplication) sender; HttpContext context = application.Context; HttpRequest request = context.Request; HttpResponse response = context.Response; response.Filter = new ServicesFilter(response.Filter); } } } ServicesFilter.cs using System; using System.IO; using System.Text; namespace SecureModuleTest { class ServicesFilter : MemoryStream { private readonly Stream _outputStream; private StringBuilder _content; public ServicesFilter(Stream output) { _outputStream = output; _content = new StringBuilder(); } public override void Write(byte[] buffer, int offset, int count) { _content.Append(Encoding.UTF8.GetString(buffer, offset, count)); using (TextWriter textWriter = new StreamWriter(@"C:\temp\content.txt", true)) { textWriter.WriteLine(String.Format("Buffer: {0}", _content.ToString())); textWriter.WriteLine(String.Format("Length: {0}", buffer.Length)); textWriter.WriteLine(String.Format("Offset: {0}", offset)); textWriter.WriteLine(String.Format("Count: {0}", count)); textWriter.WriteLine(""); textWriter.Close(); } // Modify response _outputStream.Write(buffer, offset, count); } } } The module is installed in the /ArcGIS/rest/ virtual directory and is executed via the following GET request. http://localhost/ArcGIS/rest/services/?f=json&pretty=true The web page displays the expected response, however, the text file tells a very different (encoded?) story. Expect Response {"currentVersion" : "10.0", "folders" : [], "services" : [ ] } Text File Contents Buffer: ? ?`I?%&/m?{J?J??t??`$?@??????iG#)?*??eVe]f@????{???{???;?N'????\fdl??J??!????~|?"~?G?u]???'?)??G?????G??7N????W??{?????,??|?OR????q? Length: 4096 Offset: 0 Count: 168 Buffer: ? ?`I?%&/m?{J?J??t??`$?@??????iG#)?*??eVe]f@????{???{???;?N'????\fdl??J??!????~|?"~?G?u]???'?)??G?????G??7N????W??{?????,??|?OR????q?K???!P Length: 4096 Offset: 0 Count: 11 Interestingly, Fiddler depicts a similar picture. Fiddler Request GET http://localhost/ArcGIS/rest/services/?f=json&pretty=true HTTP/1.1 Host: localhost Connection: keep-alive User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/533.4 (KHTML, like Gecko) Chrome/5.0.375.70 Safari/533.4 Referer: http://localhost/ArcGIS/rest/services Cache-Control: no-cache Pragma: no-cache Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 Cookie: a=mWz_JFOusuGPnS3w5xx1BSUuyKGB3YZo92Dy2SUntP2MFWa8MaVq6a4I_IYBLKuefXDZANQMeqvxdGBgQoqTKz__V5EQLHwxmKlUNsaK7do. 
Fiddler Response - Before Clicking Decode HTTP/1.1 200 OK Content-Type: text/plain;charset=utf-8 Content-Encoding: gzip ETag: 719143506 Server: Microsoft-IIS/7.5 X-Powered-By: ASP.NET Date: Thu, 10 Jun 2010 01:08:43 GMT Content-Length: 179 ????????`I?%&/m?{J?J??t??`$?@??????iG#)?*??eVe]f@????{???{???;?N'????\fdl??J??!????~|?"~?G?u]???'?)??G?????G??7N????W??{?????,??|?OR????q?K???! P??? Fiddler Response - After Clicking Decode HTTP/1.1 200 OK Content-Type: text/plain;charset=utf-8 ETag: 719143506 Server: Microsoft-IIS/7.5 X-Powered-By: ASP.NET Date: Thu, 10 Jun 2010 01:08:43 GMT Content-Length: 80 {"currentVersion" : "10.0", "folders" : [], "services" : [ ] } I think that the problem may be a result of compression and/or chunking of data (this might be why I am receiving two calls to ServicesFilter.Write(...), however, I have not yet been able to solve the issue. How might I decode, unzip, and otherwise convert the byte stream into the string I know it should be for modification by my filter?

    Read the article
