Search Results

Search found 21369 results on 855 pages for '128 bit'.

  • Why is my Linux box dropping network connection? [closed]

    - by Robo
    I have a Debian server in the form of a Raspberry Pi running Raspbian. It has a USB Wi-Fi connection. Sometimes it would not respond when I SSH to it, and would require a reboot. I found something in syslog that may indicate what the problem is; can someone help with what this means?

        Dec 16 15:34:17 raspberrypi wpa_supplicant[1501]: wlan0: WPA: Group rekeying completed with 00:21:29:6c:5c:3d [GTK=CCMP]
        Dec 16 16:17:01 raspberrypi /USR/SBIN/CRON[2109]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Dec 16 16:34:17 raspberrypi wpa_supplicant[1501]: wlan0: WPA: Group rekeying completed with 00:21:29:6c:5c:3d [GTK=CCMP]
        Dec 16 17:17:01 raspberrypi /USR/SBIN/CRON[2127]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Dec 16 17:34:17 raspberrypi wpa_supplicant[1501]: wlan0: WPA: Group rekeying completed with 00:21:29:6c:5c:3d [GTK=CCMP]
        Dec 16 18:17:01 raspberrypi /USR/SBIN/CRON[2142]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Dec 16 18:34:17 raspberrypi wpa_supplicant[1501]: wlan0: WPA: Group rekeying completed with 00:21:29:6c:5c:3d [GTK=CCMP]
        Dec 16 19:17:01 raspberrypi /USR/SBIN/CRON[2161]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Dec 16 19:31:29 raspberrypi kernel: [16615.391509] ieee80211 phy0: wlan0: No probe response from AP 00:21:29:6c:5c:3d after 500ms, disconnecting.
        Dec 16 19:31:29 raspberrypi wpa_supplicant[1501]: wlan0: CTRL-EVENT-DISCONNECTED bssid=00:21:29:6c:5c:3d reason=4
        Dec 16 19:31:29 raspberrypi kernel: [16615.416189] cfg80211: Calling CRDA to update world regulatory domain
        Dec 16 19:31:30 raspberrypi ifplugd(wlan0)[1444]: Link beat lost.
        Dec 16 19:31:40 raspberrypi ifplugd(wlan0)[1444]: Executing '/etc/ifplugd/ifplugd.action wlan0 down'.
        Dec 16 19:31:40 raspberrypi wpa_supplicant[1501]: wlan0: CTRL-EVENT-TERMINATING - signal 15 received
        Dec 16 19:31:40 raspberrypi ifplugd(wlan0)[1444]: Program executed successfully.
        Dec 16 19:31:42 raspberrypi ntpd[1928]: Deleting interface #2 wlan0, 192.168.1.10#123, interface stats: received=321, sent=327, dropped=0, active_time=16596 secs
        Dec 16 19:31:42 raspberrypi ntpd[1928]: 202.6.116.123 interface 192.168.1.10 -> (none)
        Dec 16 19:31:42 raspberrypi ntpd[1928]: 203.99.128.34 interface 192.168.1.10 -> (none)
        Dec 16 19:31:42 raspberrypi ntpd[1928]: 203.118.148.40 interface 192.168.1.10 -> (none)
        Dec 16 19:31:42 raspberrypi ntpd[1928]: 202.89.49.65 interface 192.168.1.10 -> (none)
        Dec 16 19:31:42 raspberrypi ntpd[1928]: peers refreshed

  • Convert mkv/h264 video so it can be played on a "mid-range" Sony Ericsson phone (using Ubuntu)

    - by Johan
    Hi. As a little experiment I'm thinking of converting some video/movies/TV series into a format that could be playable on my K850, but to be a little bit more generic in this question, let's say a "mid-range Sony Ericsson" phone, since they all more or less behave the same and have the same screen resolution (240 x 320). I am looking for command-line based tools (for Ubuntu), since I am thinking about writing a "convert and move" script later if it is successful. A lot of the video I have is encoded in mkv/h264, but since that is not supported by the phone, I guess that I need to convert it into some mp4/mpeg4 low-quality video. After some googling it seems like a good candidate for the job is ffmpeg, but that seems to be a very versatile tool with a lot of magic tricks. Am I on the right track? And if so, how do I use ffmpeg to do this? Thanks, Johan

    Update: After playing a little bit with ffmpeg I noticed that it only uses 1 of my 4 cores, so the transcoding takes forever. I found an arg called -threads but that did not change much; maybe I got it wrong. I also found that something like this plays in the phone:

        ffmpeg -i Mythbusters\ S1D1_1.mkv -threads 4 -t 180 -vcodec mpeg4 -r 15 -s 320x240 Mythbusters\ S1D1_1_mini.mp4

    It was possible to use 3gp/h263, but the quality was really useless:

        ffmpeg -i Mythbusters\ S1D1_1.mkv -t 180 -vcodec h263 -acodec libfaac -s cif Mythbusters\ S1D1_1_cif.3gp

    And it seems like mp4/h264 is also possible and the result is OK, thanks to this question; this one seems to use more than one core as well, so it was a little bit faster for me:

        ffmpeg -i Mythbusters_S1D1_1.mkv -t 180 -acodec libfaac -ab 60k -s 320x240 -vcodec libx264 -b 500k -flags +loop -cmp +chroma -partitions +parti4x4+partp8x8+partb8x8 -flags2 +mixed_refs -me_method umh -subq 6 -trellis 1 -refs 5 -coder 0 -me_range 16 -g 250 -keyint_min 25 -sc_threshold 40 -i_qfactor 0.71 -bt 500k -maxrate 768k -bufsize 2M -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -level 13 -threads 0 -f mp4 Mythbusters_S1D1_1_qvga.mp4

    Update: I have tried to use HandBrakeCLI and it is no problem creating a new file that seems to be the same as the one created with ffmpeg, with something like this:

        HandBrakeCLI -i Mythbusters_S1D1_1.mkv --size 100 -E faac -B 60 --maxHeight 240 -r 15 -e x264 -o Mythbusters_S1D1_1_hand.mp4

    But that one did not play in the phone... I found this in the official manual: "If you transfer video clips using another program than Media Go™, we recommend that you select H.264 Baseline profile video, up to QVGA at 30 fps, VBR 384 kbps (max 768 kbps) with AAC+ audio at 128 kbps (max 255 kbps), 48 kHz and stereo audio in mp4 file format." So the idea to use H264 seems to be correct.
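    Since the goal is a "convert and move" script, here is a minimal sketch of what that could look like, assuming the libx264 command above is the one that works on the phone; the mount point /media/K850 is a hypothetical placeholder:

        #!/bin/bash
        # Sketch: transcode every .mkv in the current directory for a
        # 240x320 phone, then move the result to the phone's mount point.
        DEST=/media/K850/video   # assumption: wherever the phone is mounted

        for src in *.mkv; do
            out="${src%.mkv}_qvga.mp4"
            # Trimmed-down version of the libx264 command that played on the phone.
            ffmpeg -i "$src" -acodec libfaac -ab 60k -s 320x240 \
                   -vcodec libx264 -b 500k -threads 0 -f mp4 "$out" || continue
            mv "$out" "$DEST/"
        done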

  • Ubuntu eth0 not reconnecting after cable unplugged

    - by Alex
    I'm running Kubuntu 9.10 w/ GNOME, and I have a static IP defined in /etc/network/interfaces. When I unplugged my network cable and rebooted, then reconnected the network cable, I was not able to connect. I tried using sudo ifup eth0, and then ifconfig, and it seemed as though the IP address had been assigned and I was connected, but I wasn't. I then did ifdown eth0, and again ifup eth0. For some reason I'm not able to access the network. Furthermore, I also attempted to connect via wlan, and was able to connect to the wireless network, but cannot "see" the network. I can't transfer data or access the internet or anything on the network, including the router. How do I resolve this?

        topsy@monolyth:~$ ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:1c:25:1c:df:70
                  inet addr:192.168.1.145  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::21c:25ff:fe1c:df70/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:5720 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:565 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:100
                  RX bytes:378035 (378.0 KB)  TX bytes:46832 (46.8 KB)
                  Memory:fe000000-fe020000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:4 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:240 (240.0 B)  TX bytes:240 (240.0 B)

    By "access the network" I mean the local network as well as the internet.

        topsy@monolyth:~$ ping 192.168.1.1
        PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
        64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=9.14 ms
        64 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=1.24 ms
        64 bytes from 192.168.1.1: icmp_seq=3 ttl=64 time=1.01 ms
        64 bytes from 192.168.1.1: icmp_seq=4 ttl=64 time=1.00 ms
        [snip... all OK, icmp_seq from 5-30, time between 0.981-1.25ms]
        ^C
        --- 192.168.1.1 ping statistics ---
        30 packets transmitted, 30 received, 0% packet loss, time 29035ms
        rtt min/avg/max/mdev = 0.971/1.300/9.140/1.458 ms

        topsy@monolyth:~$ route
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.1.0     *               255.255.255.0   U     0      0        0 eth0
        link-local      *               255.255.0.0     U     1000   0        0 eth0
        default         192.168.1.1     0.0.0.0         UG    100    0        0 eth0

        root@monolyth:~# cat /etc/resolv.conf
        # Generated by NetworkManager
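    For reference, a typical static-IP stanza in /etc/network/interfaces looks like the sketch below; the addresses are assumptions inferred from the 192.168.1.x outputs above, not the asker's actual file:

        auto eth0
        iface eth0 inet static
            address 192.168.1.145
            netmask 255.255.255.0
            gateway 192.168.1.1

    After editing the file, sudo ifdown eth0 && sudo ifup eth0 reapplies the configuration.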

  • RHEL 6.x on Rackspace Cloud and Dedicated hardware experiencing Redis Timeouts

    - by zhallett
    I just recently set up a mixture of RHEL 6.1 Rackspace cloud hosts and RHEL 6.2 dedicated hosts using RackConnect. I am experiencing intermittent Redis timeouts from within our Rails 3.2.8 app with Redis 2.4.16 running on the RHEL 6.2 dedicated hosts. There is no network latency or packet loss. Also, there are no errors on any interfaces on our cloud or dedicated servers, or on the managed firewall from Rackspace. When Redis times out, there is nothing logged within Redis, even though it is set up to do debug logging. The only error we receive is from Airbrake saying there was a Redis timeout.

    Network topology:

        RHEL 6.1 cloud hosts  <-->  Alert Logic IDS  <-->  Cisco ASA 5510  <-->  RHEL 6.2 dedicated hosts
        (web nodes)                                        (two-way NAT)         (db hosts running redis)

    Ping from db host to web host:

        64 bytes from 10.181.230.180: icmp_seq=998 ttl=64 time=0.520 ms
        64 bytes from 10.181.230.180: icmp_seq=999 ttl=64 time=0.579 ms
        64 bytes from 10.181.230.180: icmp_seq=1000 ttl=64 time=0.482 ms
        --- web1.xxxxxx.com ping statistics ---
        1000 packets transmitted, 1000 received, 0% packet loss, time 999007ms
        rtt min/avg/max/mdev = 0.359/0.535/5.684/0.200 ms

    Ping from web host to db host:

        64 bytes from 192.168.100.26: icmp_seq=998 ttl=64 time=0.544 ms
        64 bytes from 192.168.100.26: icmp_seq=999 ttl=64 time=0.452 ms
        64 bytes from 192.168.100.26: icmp_seq=1000 ttl=64 time=0.529 ms
        --- data1.xxxxxx.com ping statistics ---
        1000 packets transmitted, 1000 received, 0% packet loss, time 999017ms
        rtt min/avg/max/mdev = 0.358/0.499/6.120/0.201 ms

    Redis config:

        daemonize yes
        pidfile /var/run/redis/6379/redis_6379.pid
        port 6379
        timeout 0
        loglevel debug
        logfile /var/lib/redis/log
        syslog-enabled yes
        syslog-ident redis-6379
        syslog-facility local0
        databases 16
        save 900 1
        save 300 10
        save 60 10000
        rdbcompression yes
        dbfilename dump-6379.rdb
        dir /var/lib/redis
        maxclients 10000
        maxmemory-policy volatile-lru
        maxmemory-samples 3
        appendfilename appendonly-6379.aof
        appendfsync everysec
        no-appendfsync-on-rewrite no
        auto-aof-rewrite-percentage 100
        auto-aof-rewrite-min-size 64mb
        slowlog-log-slower-than 10000
        slowlog-max-len 1024
        vm-enabled no
        vm-swap-file /tmp/redis.swap
        vm-max-memory 0
        vm-page-size 32
        vm-pages 134217728
        vm-max-threads 4
        hash-max-zipmap-entries 512
        hash-max-zipmap-value 64
        list-max-ziplist-entries 512
        list-max-ziplist-value 64
        set-max-intset-entries 512
        zset-max-ziplist-entries 128
        zset-max-ziplist-value 64
        activerehashing yes

    redis-cli info:

        redis_version:2.4.16
        redis_git_sha1:00000000
        redis_git_dirty:0
        arch_bits:64
        multiplexing_api:epoll
        gcc_version:4.4.6
        process_id:4174
        uptime_in_seconds:79346
        uptime_in_days:0
        lru_clock:1064644
        used_cpu_sys:13.08
        used_cpu_user:19.81
        used_cpu_sys_children:1.56
        used_cpu_user_children:7.69
        connected_clients:167
        connected_slaves:0
        client_longest_output_list:0
        client_biggest_input_buf:0
        blocked_clients:6
        used_memory:15060312
        used_memory_human:14.36M
        used_memory_rss:22061056
        used_memory_peak:15265928
        used_memory_peak_human:14.56M
        mem_fragmentation_ratio:1.46
        mem_allocator:jemalloc-3.0.0
        loading:0
        aof_enabled:0
        changes_since_last_save:166
        bgsave_in_progress:0
        last_save_time:1352823542
        bgrewriteaof_in_progress:0
        total_connections_received:286
        total_commands_processed:507254
        expired_keys:0
        evicted_keys:0
        keyspace_hits:1509
        keyspace_misses:65167
        pubsub_channels:0
        pubsub_patterns:0
        latest_fork_usec:690
        vm_enabled:0
        role:master
        db0:keys=6,expires=0

    Edit 1: added redis-cli info output.

  • Postfix: Relay access denied

    - by Joseph Silvashy
    When I telnet to my server that's running Postfix and try to send an email:

        MAIL FROM:<[email protected]>
        #=> 250 2.1.0 Ok
        RCPT TO:<[email protected]>
        #=> 554 5.7.1 <[email protected]>: Relay access denied

    I couldn't really find the answer on the site or by looking at other users' questions/answers; I'm not sure where to start. Ideas?

    Update: So basically, looking at the docs: http://www.postfix.org/SMTPD_ACCESS_README.html (section: Getting selective with SMTP access restriction lists), I don't seem to have any of those directives in /etc/postfix/main.cf, like

        smtpd_client_restrictions = permit_mynetworks, reject

    or any of the other ones, so I'm quite confused. But really I'm going to have a Rails app connect to the server and send the emails, so I'm not sure how to handle it. Here is what my config file looks like:

        # See /usr/share/postfix/main.cf.dist for a commented, more complete version

        # Debian specific: Specifying a file name will cause the first
        # line of that file to be used as the name. The Debian default
        # is /etc/mailname.
        #myorigin = /etc/mailname

        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no

        # appending .domain is the MUA's job.
        append_dot_mydomain = no

        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h

        readme_directory = no

        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.

        myhostname = rerecipe-utils
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = $myhostname, localhost.$mydomain, localhost, mail.rerecipe.com, rerecipe.com
        relayhost =
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        inet_protocols = all
        mynetworks = 127.0.0.0/8 204.232.207.0/24 10.177.64.0/19 [::1]/128 [fe80::%eth0]/64 [fe80::%eth1]/64

    Something to note is that relayhost is blank; this is the default configuration file that was created when I installed Postfix. When testing the connection with openssl I get this:

        ~% openssl s_client -connect mail.myhostname.com:25 -starttls smtp
        CONNECTED(00000003)
        depth=0 /CN=myhostname
        verify error:num=18:self signed certificate
        verify return:1
        depth=0 /CN=myhostname
        verify return:1
        ---
        Certificate chain
         0 s:/CN=myhostname
           i:/CN=myhostname
        ---
        Server certificate
        -----BEGIN CERTIFICATE-----
        MIIBqTCCARICCQDDxVr+420qvjANBgkqhkiG9w0BAQUFADAZMRcwFQYDVQQDEw5y
        ZXJlY2lwZS11dGlsczAeFw0xMDEwMTMwNjU1MTVaFw0yMDEwMTAwNjU1MTVaMBkx
        FzAVBgNVBAMTDnJlcmVjaXBlLXV0aWxzMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCB
        iQKBgQDODh2w4A1k0qiPNPhkrPj8sfkxpKPTk28AuZhgOEBYBLeHacTKNH0jXxPv
        P3TyhINijvvdDPzyuPJoTTliR2EHR/nL4DLhr5FzhV+PB4PsIFUER7arx+1sMjz6
        5l/Ubu1ppMzW9U0IFNbaPm2AiiGBQRCQN8L0bLUjzVzwoSRMOQIDAQABMA0GCSqG
        SIb3DQEBBQUAA4GBALi2vvk9TGKJubXYJbU0PKmVmsfzFK35yLqr0keiDBhK2Leg
        274sWxEH3ds8mUaRftuFlXb7RYAGNlVyTuMTY3CEcnqIsH7F2McCUTpjMzu/o1mZ
        O/B21CelKetBd1u79Gkrv2vWyN7Csft6uTx5NIGG2+pGi3r0gX2r0Hbu2K94
        -----END CERTIFICATE-----
        subject=/CN=myhostname
        issuer=/CN=myhostname
        ---
        No client certificate CA names sent
        ---
        SSL handshake has read 1203 bytes and written 360 bytes
        ---
        New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA
        Server public key is 1024 bit
        Compression: NONE
        Expansion: NONE
        SSL-Session:
            Protocol  : TLSv1
            Cipher    : DHE-RSA-AES256-SHA
            Session-ID: 1AA4B8BFAAA85DA9ED4755194C50311670E57C35B8C51F9C2749936DA11918E4
            Session-ID-ctx:
            Master-Key: 9B432F1DE9F3580DCC6208C76F96631DC5A4BC517BDBADD5F514414DCF34AC526C30687B96C5C4742E9583555A118232
            Key-Arg   : None
            Start Time: 1292985376
            Timeout   : 300 (sec)
            Verify return code: 18 (self signed certificate)
        ---
        250 DSN

    Oddly enough, when I try to send an email from the machine itself it does work:

        echo test | mail -s "test subject" [email protected]
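    For comparison, the README cited above describes restriction lists along these lines; this is a minimal sketch, not the poster's config, and the SASL line is an assumption that only applies if SASL authentication is set up for the Rails app:

        # Allow relaying only for local networks and authenticated clients;
        # reject mail for destinations this server is not responsible for.
        smtpd_recipient_restrictions =
            permit_mynetworks,
            permit_sasl_authenticated,
            reject_unauth_destination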

  • Why does the .NET tab in the 'Add Reference' dialog in Visual Studio not list the contents of the GAC?

    - by abroun
    Duplicate of: Getting assemblies to show in the .NET tab of Add Reference

    So, I'm using Visual C# 2008 Express Edition and I've just been on a bit of a detour, as I found out that my assumption that the .NET tab of the 'Add Reference' dialog lists the contents of the GAC was incorrect. This was a bit of a problem for me, as the assembly that I wanted to reference from my project was only available in the GAC. (It was Microsoft.XNA.Framework v2.0, obtained from the XNA 2.0 redistributable, and as far as I could see it installed only into the GAC.)

    I worked round the problem by setting the reference to Microsoft.XNA.Framework manually in the .csproj file and then getting a copy of the dll out of the cache. I was then able to create a directory for the DLL, add it to Visual Studio's list of assembly directories in the registry and then voila! I could see it in the .NET tab.

    This all seems like a bit of a faff to me, and I don't think that my initial assumption (that the .NET tab shows the contents of the GAC) was that unreasonable or would be that uncommon. Can someone who knows more than me tell me why the contents of the GAC aren't shown? The documentation just says that they are not, but is there a good reason? Is there actually a way to get the entire contents of the GAC to be listed? A tick box I've missed somewhere? Any info much appreciated.
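    For anyone reproducing the registry step described above, the mechanism Visual Studio scans is the AssemblyFoldersEx key; a sketch as a .reg file, where the subkey name and directory path are hypothetical:

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v2.0.50727\AssemblyFoldersEx\MyXnaAssemblies]
        @="C:\\Libraries\\Xna"

    Assemblies placed in that directory should then appear in the .NET tab after restarting Visual Studio.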

  • How do I use Perl's WWW::Facebook::API to publish to a user's newsfeed?

    - by Russell C.
    We use Facebook Connect on our site in conjunction with the WWW::Facebook::API CPAN module to publish to our users' newsfeeds when requested by the user. So far we've been able to successfully update the user's status using the following code:

        use WWW::Facebook::API;

        my $facebook = WWW::Facebook::API->new(
            desktop         => 0,
            api_key         => $fb_api_key,
            secret          => $fb_secret,
            session_key     => $query->cookie($fb_api_key.'_session_key'),
            session_expires => $query->cookie($fb_api_key.'_expires'),
            session_uid     => $query->cookie($fb_api_key.'_user')
        );

        my $response = $facebook->stream->publish(
            message => qq|Test status message|,
        );

    However, when we try to update the code above so we can publish newsfeed stories that include attachments and action links, as specified in the Facebook API documentation for Stream.Publish, we have tried about 100 different ways without any success. According to the CPAN documentation, all we should have to do is update our code to something like the following and pass the attachments & action links appropriately, which doesn't seem to work:

        my $response = $facebook->stream->publish(
            message      => qq|Test status message|,
            attachment   => $json,
            action_links => [@links],
        );

    For example, we are passing the above arguments as follows:

        $json = qq|{
            'name': 'i\'m bursting with joy',
            'href': ' http://bit.ly/187gO1',
            'caption': '{*actor*} rated the lolcat 5 stars',
            'description': 'a funny looking cat',
            'properties': {
                'category': { 'text': 'humor', 'href': 'http://bit.ly/KYbaN'},
                'ratings': '5 stars'
            },
            'media': [{
                'type': 'image',
                'src': 'http://icanhascheezburger.files.wordpress.com/2009/03/funny-pictures-your-cat-is-bursting-with-joy1.jpg',
                'href': 'http://bit.ly/187gO1'}]
        }|;

        @links = ["{'text':'Link 1', 'href':'http://www.link1.com'}","{'text':'Link 2', 'href':'http://www.link2.com'}"];

    Neither the above nor any of the other representations we tried seems to work. I'm hoping some other Perl developer out there has this working and can explain how to create the attachment and action_links variables appropriately in Perl for posting to the Facebook news feed through WWW::Facebook::API. Thanks in advance for your help!
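    One avenue worth trying (a sketch, not verified against this module version): build the attachment as a native Perl data structure and serialize it with the JSON module, so the result is valid JSON rather than a hand-quoted string with single quotes:

        use JSON;

        # Build the attachment as a Perl structure, then encode it to real JSON.
        my $attachment = {
            name  => "i'm bursting with joy",
            href  => 'http://bit.ly/187gO1',
            media => [{
                type => 'image',
                src  => 'http://icanhascheezburger.files.wordpress.com/2009/03/funny-pictures-your-cat-is-bursting-with-joy1.jpg',
                href => 'http://bit.ly/187gO1',
            }],
        };

        my $response = $facebook->stream->publish(
            message      => 'Test status message',
            attachment   => encode_json($attachment),
            action_links => encode_json([
                { text => 'Link 1', href => 'http://www.link1.com' },
            ]),
        );

    Whether action_links expects a JSON string or an array reference is exactly the kind of detail that varies between module versions, so treat both encodings here as assumptions to test.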

  • UDP expected behaviour not matching test results

    - by ernst
    I have a local network topology that is structured as follows: three hosts and a switch in the middle. I am using a switch that supports 10/100/1000 Mbit/s full/half duplex connections. I have configured the hosts with static IPs 172.16.0.1-2-3/25. This is the output of ifconfig:

        eth0      Link encap: Ethernet  HWaddr *****
                  inet addr:172.16.0.3  Bcast:172.16.0.127  Mask:255.255.255.128
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:16

    The output on H1 and H2 is perfectly matchable. They are mutually reachable, since I have tested the network with ping. I have forced the ethernet interface to work at 10M with

        ethtool -s eth0 speed 10 duplex full autoneg on

    and this is the output of ethtool eth0:

        Supported ports: [ TP ]
        Supported link modes: 10baseT/Half 10baseT/Full
                              100baseT/Half 100baseT/Full
                              1000baseT/Half 1000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Advertised link modes: 10baseT/Full
        Advertised pause frame use: Symmetric
        Advertised auto-negotiation: Yes
        Speed: 10Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        MDI-X: Unknown
        Supports Wake-on: g
        Wake-on: d
        Current message level: 0x000000ff (255)
                               drv probe link timer ifdown ifup rx_err tx_err
        Link detected: yes

    I am doing an experimental test using nttcp to calculate the GOODPUT in the case that H1 and H2 send data to H3 at the same time. Since the three links have the same forced capability, and the arriving data rate is 10 from H1 + 10 from H2 --> 20M to H3, a bottleneck effect would be expected and, due to the non-reliable nature of UDP, packet loss. But this doesn't happen, since the output of the nttcp application shows the same number of bytes sent and received. This is the output of nttcp on H3:

        nttcp -T -r -u 172.16.0.2 & nttcp -T -r -u 172.16.0.1
        [1] 4071
             Bytes  Real s   CPU s Real-MBit/s CPU-MBit/s   Calls  Real-C/s CPU-C/s
        l  8388608   13.74    0.05      4.8848  1398.0140    2049    149.14 42684.8
             Bytes  Real s   CPU s Real-MBit/s CPU-MBit/s   Calls  Real-C/s CPU-C/s
        l  8388608   14.02    0.05      4.7872  1398.0140    2049    146.17 42684.8
        1  8388608   13.56    0.06      4.9500  1118.4065    2051    151.28 34181.1
        1  8388608   13.89    0.06      4.8310  1198.3084    2051    147.65 36623.0

    How is this possible? Am I missing something? Any help will be gratefully appreciated. Best regards

  • Pseudo code for instruction description

    - by Claus
    Hi, I am just trying to fiddle around with what is the best and shortest way to describe two simple instructions with C-like pseudo code. The extract instruction is defined as follows:

        extract rd, rs, imm

    This instruction extracts the appropriate byte from the 32-bit source register rs and right-justifies it in the destination register. The byte is specified by imm and thus can take the values 0 (for the least-significant byte) and 3 (for the most-significant byte).

        rd = 0x0; // zero-extend result, ie to make sure bits 31 to 8 are set to zero in the result
        rd = (rs && (0xff << imm)) >> imm; // this extracts the appropriate byte and stores it in rd

    The insert instruction can be regarded as the inverse operation: it takes a right-justified byte from the source register rs and deposits it in the appropriate byte of the destination register rd; again, this byte is determined by the value of imm.

        tmp = 0x0 XOR (rs << imm)) // shift the byte to the appropriate byte determined by imm
        rd = (rd && (0x00 << imm)) // set appropriate byte to zero in rd
        rd = rd XOR tmp // XOR the byte into the destination register

    This all looks a bit horrible, so I wonder if there is a little bit more elegant way to describe this behaviour in C-like style ;) Many thanks, Claus
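    For what it's worth, a compact C sketch of both operations, assuming imm selects a byte (so the shift distance is 8*imm) and that bitwise & rather than logical && is what's intended for masking:

        #include <stdint.h>

        /* extract rd, rs, imm: right-justify byte imm of rs (0 = LSB, 3 = MSB). */
        uint32_t extract(uint32_t rs, unsigned imm) {
            return (rs >> (8 * imm)) & 0xFF;
        }

        /* insert rd, rs, imm: deposit the low byte of rs into byte imm of rd. */
        uint32_t insert(uint32_t rd, uint32_t rs, unsigned imm) {
            uint32_t mask = 0xFFu << (8 * imm);  /* the target byte position */
            return (rd & ~mask) | ((rs & 0xFF) << (8 * imm));
        }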

  • Apache2 & .htaccess : Apache ignoring AccessFile

    - by Elyx0
    Hi there, here is my server configuration: Debian 32 bits / PHP 5 / Apache, Server version: Apache/2.2.3, Server built: Mar 22 2008 09:29:10.

    The AccessFiles:

        grep -ni AccessFileName *
        apache2.conf:134:AccessFileName .htaccess
        apache2.conf:667:AccessFileName .httpdoverride

    All the AllowOverride statements in my apache2/ folder:

        mods-available/userdir.conf:6:  AllowOverride Indexes AuthConfig Limit
        mods-available/userdir.conf:16: AllowOverride FileInfo AuthConfig Limit
        mods-enabled/userdir.conf:6:    AllowOverride Indexes AuthConfig Limit
        mods-enabled/userdir.conf:16:   AllowOverride FileInfo AuthConfig Limit
        sites-enabled/default:8:        AllowOverride All
        sites-enabled/default:14:       AllowOverride All
        sites-enabled/default:19:       AllowOverride All
        sites-enabled/default:24:       AllowOverride All
        sites-enabled/default:42:       AllowOverride All

    The sites-enabled/default file:

        1  <VirtualHost *>
        2      ServerAdmin [email protected]
        3      ServerName mysite.com
        4      ServerAlias mysite.com
        5      DocumentRoot /var/www/mysite.com/
        6      <Directory />
        7          Options FollowSymLinks
        8          AllowOverride All
        9          Order Deny,Allow
        10         Deny from all
        11     </Directory>
        12     <Directory /var/www/mysite.com/>
        13         Options Indexes FollowSymLinks MultiViews
        14         AllowOverride All
        15         Order allow,deny
        16         allow from all
        17     </Directory>
        18     <Directory /var/www/mysite.com/test/>
        19         AllowOverride All
        20     </Directory>
        21
        22     ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        23     <Directory "/usr/lib/cgi-bin">
        24         AllowOverride All
        25         Options ExecCGI -MultiViews +SymLinksIfOwnerMatch
        26         Order allow,deny
        27         Allow from all
        28     </Directory>
        29
        30     ErrorLog /var/log/apache2/error.log
        31
        32     # Possible values include: debug, info, notice, warn, error, crit,
        33     # alert, emerg.
        34     LogLevel warn
        35
        36     CustomLog /var/log/apache2/access.log combined
        37     ServerSignature Off
        38
        39     Alias /doc/ "/usr/share/doc/"
        40     <Directory "/usr/share/doc/">
        41         Options Indexes MultiViews FollowSymLinks
        42         AllowOverride All
        43         Order deny,allow
        44         Deny from all
        45         Allow from 127.0.0.0/255.0.0.0 ::1/128
        46     </Directory>
        47
        48
        49
        50
        51
        52
        53
        54 </VirtualHost>

    If I change any "Allow from all" into "Deny from all", it works wherever I put it. I've got one .htaccess at /mysite.com/.htaccess and one at /mysite.com/test/.htaccess, with:

        Order Deny,Allow
        Deny from all

    Neither of them works; I can still see my website. I've got mod_rewrite enabled, but I don't think it does anything here. I've tried almost everything :/ It works on my local environment (MAMP) but fails on my Debian server.

  • How to make a local drive available in Apache localhost

    - by Ronald Allan
    How can I make my "Drive D:"/"Drive E:" available in localhost? I'm running Apache on my BackTrack machine. My default is /var/www/. Every directory I create inside /var/www/ is available and all working fine. Let's say I created /var/www/PENTEST/; the contents of that PENTEST directory can be accessed through:

        localhost/PENTEST/

    How can I make this work:

        localhost/media/DATA/

    The /media/DATA/ is my Drive D:. I edited this:

        ServerAdmin webmaster@localhost
        DocumentRoot /media/DATA/
        <Directory />
            Options FollowSymLinks
            AllowOverride None
        </Directory>
        <Directory /media/DATA/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>
        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
            AllowOverride None
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            Order allow,deny
            Allow from all
        </Directory>
        ErrorLog /var/log/apache2/error.log
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn
        CustomLog /var/log/apache2/access.log combined
        Alias /doc/ "/usr/share/doc/"
        <Directory "/usr/share/doc/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride None
            Order deny,allow
            Deny from all
            Allow from 127.0.0.0/255.0.0.0 ::1/128
        </Directory>

    Still not working; I'm getting 404.

    EDIT: I figured it out, thanks to the post by "RiggsFolly" which can be found here: http://forum.wampserver.com/read.php?2,89163. I just have to change this:

        ServerAdmin webmaster@localhost
        DocumentRoot /media/DATA/
        <Directory />
            Options FollowSymLinks
            AllowOverride None
        </Directory>
        <Directory /media/DATA/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>

    into this:

        ServerAdmin webmaster@localhost
        DocumentRoot D:/media/DATA/
        <Directory />
            Options FollowSymLinks
            AllowOverride None
        </Directory>
        <Directory D:/media/DATA/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>

  • Authenticated WCF: Getting the Current Security Context

    - by bradhe
    I have the following scenario: I have various users' data stored in my database. This data was entered via a web app. We'd like to expose this data back to the users over a web service so that they can integrate their data with their applications. We would also like to expose some business logic over these services; as such, we do not want to use OData. This is a multi-tenant application, so I only want to expose their data back to them and not to other users. Likewise, the business logic we expose should be relative to the authenticated user.

    I would like to let the user use an OASIS scheme to authenticate with the web service -- WCF already allows for this out of the box, as far as I understand -- or perhaps we can issue them certificates to authenticate with. That bit hasn't really been worked out yet. Here is a bit of pseudo-code of how I envision this would work within the service:

        function GetUsersData(id)
            var user := Lookup User based on Username from Auth Context
            var data := Get Data From Repository based on "user"
            return data
        end function

    For the business logic scenario, I think it would look something like this:

        function PerformBusinessLogic(someData)
            var user := Lookup User based on Username from Auth Context
            var returnValue := Perform some logic based on supplied data
            return returnValue
        end function

    The hard bit here is getting the current username (or cert info in the cert scenario) that the user authenticated with! Does WCF even enable this scenario? If not, would WSE3 enable this? Thanks,
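    One concrete way to express the "Lookup User based on Username from Auth Context" step in WCF is through ServiceSecurityContext; a minimal sketch of just that step:

        using System.ServiceModel;

        public string GetCallerName()
        {
            // Identity of the authenticated caller, as established by the
            // configured credential type (username, certificate, Windows, ...).
            return ServiceSecurityContext.Current.PrimaryIdentity.Name;
        }

    The service method would then use this name as the lookup key into the user table. In the certificate scenario, PrimaryIdentity.Name typically carries the certificate subject rather than a username, so the lookup key would differ.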

  • Working with mongodb from Java

    - by demas
    I have launched the mongodb server:

        [[email protected]][~]% mongod --dbpath /home/demas/temp/
        Mon Apr 19 09:44:18 Mongo DB : starting : pid = 4538 port = 27017 dbpath = /home/demas/temp/ master = 0 slave = 0 32-bit
        ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
        ** see http://blog.mongodb.org/post/137788967/32-bit-limitations for more
        Mon Apr 19 09:44:18 db version v1.4.0, pdfile version 4.5
        Mon Apr 19 09:44:18 git version: nogitversion
        Mon Apr 19 09:44:18 sys info: Linux arch.local.net 2.6.33-ARCH #1 SMP PREEMPT Mon Apr 5 05:57:38 UTC 2010 i686 BOOST_LIB_VERSION=1_41
        Mon Apr 19 09:44:18 waiting for connections on port 27017
        Mon Apr 19 09:44:18 web admin interface listening on port 28017

    I have created documents via the console client:

        [[email protected]][~]% mongo
        MongoDB shell version: 1.4.0
        url: test
        connecting to: test
        type "help" for help
        > db.some.find();
        { "_id" : ObjectId("4bcbef3c3be43e9b7e04ef3d"), "name" : "mongo" }
        { "_id" : ObjectId("4bcbef423be43e9b7e04ef3e"), "x" : 3 }

    Now I am trying to work with MongoDB from Java:

        import com.mongodb.*;
        import java.net.UnknownHostException;

        public class test1 {
            public static void main(String[] args) {
                System.out.println("Start");
                try {
                    Mongo m = new Mongo("localhost", 27017);
                    DB db = m.getDB("test");
                    DBCollection coll = db.getCollection("some");
                    coll.insert(makeDocument(10, "James", "male"));
                    System.out.println("Finish");
                } catch (UnknownHostException ex) {
                    ex.printStackTrace();
                } catch (MongoException ex) {
                    ex.printStackTrace();
                }
            }

            public static BasicDBObject makeDocument(int id, String name, String gender) {
                BasicDBObject doc = new BasicDBObject();
                doc.put("id", id);
                doc.put("name", name);
                doc.put("gender", gender);
                return doc;
            }
        }

    But execution stops on the line coll.insert():

        [[email protected]][~/dev/study/java/mongodb]% javac test1.java
        [[email protected]][~/dev/study/java/mongodb]% java test1
        Start

    There are no messages from the mongodb server regarding an accepted connection. Why?

  • Why is unsigned int 0xFFFFFFFF equal to int -1?

    - by conejoroy
    Perhaps it's a very stupid question but I'm having a hard time figuring this out =)

    In C or C++ it is said that the maximum number a size_t (an unsigned int data type) can hold is the same as casting -1 to that data type. For example, see http://stackoverflow.com/questions/1420982/invalid-value-for-sizet

    Why?? I'm confused... I mean, (talking about 32-bit ints) AFAIK the most significant bit holds the sign in a signed data type (that is, bit 0x80000000 to form a negative number). Then, 1 is 0x00000001... 0x7FFFFFFF is the greatest positive number an int data type can hold. Then, AFAIK the binary representation of -1 int should be 0x80000001 (perhaps I'm wrong). Why/how is this binary value converted to something completely different (0xFFFFFFFF) when casting ints to unsigned? Or... how is it possible to form a binary -1 out of 0xFFFFFFFF? I have no doubt that in C ((unsigned int)-1) == 0xFFFFFFFF or ((int)0xFFFFFFFF) == -1 is equally as true as 1 + 1 == 2; I'm just wondering why. Thanks!
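    For context, the piece the question is missing is that negative integers are stored in two's complement, not sign-and-magnitude: negation is defined as -x = ~x + 1, so -1 = ~0x00000001 + 1 = 0xFFFFFFFE + 1 = 0xFFFFFFFF, while 0x80000001 is actually -2147483647. A short C check:

        #include <stdio.h>

        int main(void) {
            unsigned int u = (unsigned int)-1; /* conversion is modulo 2^32 */
            printf("%X\n", u);                 /* prints FFFFFFFF with 32-bit int */
            return 0;
        }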

  • Help with animating an element in jQuery

    - by alex
    I have an unordered list with a few list elements.

        #tags {
            width: 300px;
            height: 300px;
            position: relative;
            border: 1px solid red;
            list-style: none none;
        }

        #tags li {
            position: absolute;
            background: gray;
        }

    I have also started writing a jQuery plugin to animate the list elements. So far, I place the list elements randomly in the parent container, and choose a random font size for the text. My next step (which I am a bit stuck on) is to animate the list elements... essentially, I want the list elements to do something like this: slide from left to right whilst getting slightly larger up to the middle, and then dropping in size back to normal when it hits the right border. Then, it should do the same in reverse; however, it should also set a negative 'z-index' and maybe fade in opacity a bit.

    The first bit I'm really stuck on is how to determine if the element is near the middle, in a way that I can have a value that starts at 0.1 on the far left-hand side, is 1 in the middle, and then goes back to 0.1 on the far right-hand side. Basically, I want them to appear as if they are going around in a faux 3D circle into the page. Then I could do something like this:

        $(this).css({
            fontSize: percentageTowardsMiddle * 14,
        });

    Do you know how I could do this? Thanks
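    For the 0.1 -> 1 -> 0.1 value, one approach (a sketch; the function name and selectors are assumptions) is to normalise the element's distance from the container's horizontal centre:

        // Returns ~0.1 at either edge of #tags and 1.0 at the horizontal centre.
        function middleFactor($el, $container) {
            var mid = $container.width() / 2;
            var x = $el.position().left + $el.width() / 2; // element centre
            var t = 1 - Math.abs(x - mid) / mid;           // 1 at centre, 0 at edges
            return 0.1 + 0.9 * Math.max(0, t);
        }

        // e.g. inside the animation step:
        // $(this).css({ fontSize: middleFactor($(this), $('#tags')) * 14 });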

  • Fast sign for C++ floats... are there any platform dependencies in this code?

    - by Patrick Niedzielski
    Searching online, I have found the following routine for calculating the sign of a float in IEEE format. This could easily be extended to a double, too.

        // returns 1.0f for positive floats, -1.0f for negative floats, 0.0f for zero
        inline float fast_sign(float f) {
            if (((int&)f & 0x7FFFFFFF) == 0) return 0.f; // test exponent & mantissa bits: is input zero?
            else {
                float r = 1.0f;
                (int&)r |= ((int&)f & 0x80000000); // mask sign bit in f, set it in r if necessary
                return r;
            }
        }

    (Source: "Fast sign for 32 bit floats", Peter Schoffhauzer)

    I am wary of using this routine, though, because of the binary bit operations. I need my code to work on machines with different byte orders, but I am not sure how much of this the IEEE standard specifies, as I couldn't find the most recent version, published this year. Can someone tell me if this will work, regardless of the byte order of the machine? Thanks, Patrick
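    On the portability question, a sketch of an alternative that sidesteps the bit operations entirely by using std::copysign from <cmath> (so no assumptions about byte order or integer aliasing), at the cost of not being the bit-twiddling fast path:

        #include <cmath>

        // Portable: no type punning, no IEEE bit-layout or endianness assumptions.
        inline float sign_portable(float f) {
            if (f == 0.0f) return 0.0f;     // covers both +0.0f and -0.0f
            return std::copysign(1.0f, f);  // +1.0f or -1.0f
        }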

  • Notepad++: Adding color highlighting for PHP functions?

    - by Jebego
    I've recently begun using Notepad++, and have found a part of its styling functionality that confuses me. I'm currently attempting to color all of PHP's defined functions (such as count(), strlen(), etc.). In the Settings > Style Configurator, you cannot add a new style for such a function list. Instead, I have begun editing stylers.xml and langs.xml. To add the new coloring, in langs.xml, I've modified the php section to the following:

        <Language name="php" ext="php php3 phtml" commentLine="//" commentStart="/*" commentEnd="*/">
            <Keywords name="instre1">[default keywords]</Keywords>
            <Keywords name="instre2">[my function list]</Keywords>
        </Language>

    The [default keywords] and [my function list] are replaced with word lists. I've also edited the php section in stylers.xml to look like the following:

        <LexerType name="php" desc="php" ext="">
            <WordsStyle name="QUESTION MARK" styleID="18" fgColor="FF0000" bgColor="FDF8E3" fontName="" fontStyle="0" fontSize="" />
            <WordsStyle name="DEFAULT" styleID="118" fgColor="000000" bgColor="FEFCF5" fontName="" fontStyle="0" fontSize="" />
            <WordsStyle name="STRING" styleID="119" fgColor="FF0000" bgColor="FEFCF5" fontName="" fontStyle="0" fontSize="" />
            <WordsStyle name="STRING VARIABLE" styleID="126" fgColor="FF0000" bgColor="FEFCF5" fontName="" fontStyle="1" fontSize="" />
            <WordsStyle name="SIMPLESTRING" styleID="120" fgColor="FF0000" bgColor="FEFCF5" fontName="" fontStyle="0" fontSize="" />
            <WordsStyle name="WORD" styleID="121" fgColor="008040" bgColor="FEFCF5" fontName="" fontStyle="1" fontSize="" keywordClass="instre1">True False</WordsStyle>
            <WordsStyle name="NUMBER" styleID="122" fgColor="FF0000" bgColor="FEFCF5" fontName="" fontStyle="0" fontSize="" />
            <WordsStyle name="VARIABLE" styleID="123" fgColor="0080FF" bgColor="FEFCF5" fontName="" fontStyle="0" fontSize="" />
            <WordsStyle name="COMMENT" styleID="124" fgColor="FF8040" bgColor="FEFCF5" fontName="" fontStyle="0" fontSize="" />
            <WordsStyle name="COMMENTLINE" styleID="125" fgColor="FF8040" bgColor="FEFCF5" fontName="" fontStyle="0" fontSize="" />
            <WordsStyle name="OPERATOR" styleID="127" fgColor="8000FF" bgColor="FEFCF5" fontName="" fontStyle="0" fontSize="" />
            <WordsStyle name="FUNCTIONS" styleID="128" fgColor="000080" bgColor="FEFCF5" fontName="" fontStyle="1" fontSize="" keywordClass="instre2"></WordsStyle>
        </LexerType>

    The changed part is the last "FUNCTIONS" line. When I restart Notepad++ and go into the Settings > Style Configurator section, under the php language, the FUNCTIONS style exists. I can change the style's color, and can see the entire keyword list under 'Default Keywords'. However, it is not changing the coloring of the words in my code. When I edit the WORD style, which contains stuff like 'if', 'and', and 'true', things change accordingly in my code. Any ideas on how to make this work?

  • Command-line connect to wired network for Ubuntu

    - by Tim
    I'd like to know how to use the command line to connect to a wired network, in general, for Ubuntu 8.10. In my case, I connect a cable to my laptop but it doesn't work with my WICD, so I'd like to try a command-line method. Here is the ifconfig of my network adapters:

        $ ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:c0:9f:8d:23:74
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:19 Base address:0x1800

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:4457 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:4457 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:493002 (493.0 KB)  TX bytes:493002 (493.0 KB)

        wlan0     Link encap:Ethernet  HWaddr 00:0e:9b:ab:56:19
                  UP BROADCAST NOTRAILERS PROMISC ALLMULTI  MTU:576  Metric:1
                  RX packets:1508929 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:768144 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:806027375 (806.0 MB)  TX bytes:78834873 (78.8 MB)

        wlan0:avahi Link encap:Ethernet  HWaddr 00:0e:9b:ab:56:19
                  inet addr:169.254.5.92  Bcast:169.254.255.255  Mask:255.255.0.0
                  UP BROADCAST NOTRAILERS PROMISC ALLMULTI  MTU:576  Metric:1

        wmaster0  Link encap:UNSPEC  HWaddr 00-0E-9B-AB-56-19-00-00-00-00-00-00-00-00-00-00
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    Thanks and regards!

    UPDATE: Tried what oyvindio suggested. Here is the failing message:

        $ sudo dhclient3 eth0
        There is already a pid file /var/run/dhclient.pid with pid 18279
        killed old client process, removed PID file
        Internet Systems Consortium DHCP Client V3.1.1
        Copyright 2004-2008 Internet Systems Consortium.
        All rights reserved.
        For info, please visit http://www.isc.org/sw/dhcp/
        mon0: unknown hardware address type 803
        wmaster0: unknown hardware address type 801
        mon0: unknown hardware address type 803
        wmaster0: unknown hardware address type 801
        Listening on LPF/eth0/00:c0:9f:8d:23:74
        Sending on   LPF/eth0/00:c0:9f:8d:23:74
        Sending on   Socket/fallback
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 5
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 10
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 12
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 15
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 11
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 8
        No DHCPOFFERS received.
        No working leases in persistent database - sleeping.
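    Since the DHCP attempt above received no offers, a static fallback is worth testing; a sketch with assumed addresses for a typical 192.168.1.x router (substitute your own):

        sudo ifconfig eth0 192.168.1.50 netmask 255.255.255.0 up
        sudo route add default gw 192.168.1.1
        echo "nameserver 192.168.1.1" | sudo tee /etc/resolv.conf

    If the link works with a static address but not with DHCP, that points at the DHCP server or the cable/switch port rather than the interface configuration.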

  • How to convert a printer driver to a stand-alone console application which can generate a printer file

    - by Thorbjørn Ravn Andersen
    I have a situation where the only way to generate a certain datafile is to print it manually to FILE: under Windows and save it in a file for further processing. I would really like to have a small stand-alone program which embeds this binary printer driver so I can run it from a batch file and have it generate that binary file for me, as we can then fully automate the "save file in Visio, 'print' it and upload it to the final destination and trigger a remote test". Is this possible with a suitable Windows SDK? I am a Java programmer, so I do not know Visual Studio and the possibilities with MSDN - yet! - but I'd appreciate pointers.

    EDIT: I have the installation files for that printer driver, both 32 and 64 bit. Older versions may include a 16 bit driver.

    EDIT: The "print to FILE:" functionality is just what was recommended by the documentation. I have played a little bit with using the LPR protocol to see what it can do. I'd still prefer the "invoke small binary" approach.

  • Show image from clipboard in the default image viewer of Windows using C#.NET

    - by Rajesh Rolen- DotNet Developer
    I am using the function below to make an image of the current form and set it in the clipboard:

        Image bit = new Bitmap(this.Width, this.Height);
        Graphics gs = Graphics.FromImage(bit);
        gs.CopyFromScreen(this.Location, new Point(0, 0), bit.Size);
        Guid guid = System.Guid.NewGuid();
        string FileName = guid.ToString();

        // Copy that image to the clipboard.
        Image imgToCopy = Image.FromFile(Path.Combine(Environment.CurrentDirectory, FileName + ".jpg"));
        Clipboard.SetImage(imgToCopy);

    Now my image is in the clipboard and I am able to show it in a picture box on another form using the code below:

        mypicturebox.Image = Clipboard.GetImage();

    Now the problem is that I want to show it in the default image viewer of the system. I think we can do that using "System.Diagnostics.Process.Start", but I don't know how to find the default image viewer and how to set the clipboard's image in it... Please help me out. If I find a solution that's good; otherwise I am thinking to save that file from the clipboard to the hard disk and then view it in Windows' default image viewer. Please help me to resolve my problem. I am using C#.NET.
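    For the default-viewer part, Windows resolves the associated application itself when Process.Start is handed a document path rather than an executable, so one sketch is to save the clipboard image to a temporary file first (the .jpg choice and method name are assumptions):

        using System;
        using System.Diagnostics;
        using System.Drawing;
        using System.Drawing.Imaging;
        using System.IO;
        using System.Windows.Forms;

        static void ShowClipboardImage()
        {
            if (!Clipboard.ContainsImage()) return;

            // Save the clipboard image to a temp file and open it with whatever
            // application is associated with .jpg on this machine.
            string path = Path.Combine(Path.GetTempPath(), Guid.NewGuid() + ".jpg");
            using (Image img = Clipboard.GetImage())
            {
                img.Save(path, ImageFormat.Jpeg);
            }
            Process.Start(path); // ShellExecute resolves the default viewer
        }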

  • How to determine if a registry key is redirected by WOW64?

    - by Luke
    Is it possible to determine whether or not a given registry key is redirected? My problem is that I want to enumerate registry keys in both the 32-bit and 64-bit registry views in a generic manner from a 32-bit application. I could simply open each key twice, first with KEY_WOW64_64KEY and then with KEY_WOW64_32KEY. However, if the key is not redirected this gives you exactly the same key and you end up enumerating the exact same content twice; this is what I am trying to avoid. I did find some documentation on it, but it looks like the only way is to examine the hive and do a bunch of string comparisons on the key. Another possibility I thought of is to try to open Wow6432Node on each subkey; if it exists then the key must be redirected. I.e. if I am trying to open HKCU\Software\Microsoft\Windows I would try to open the following keys: HKCU\Wow6432Node, HKCU\Software\Wow6432Node, HKCU\Software\Microsoft\Wow6432Node, and HKCU\Software\Microsoft\Windows\Wow6432Node. Unfortunately, the documentation seems to imply that a child of a redirected key is not necessarily redirected so that route also has issues. So, what are my options here?

  • Intel Assembly Programming

    - by Kay
        class MyString {
            char buf[100];
            int len;

            boolean append(MyString str) {
                int k;
                if (this.len + str.len > 100) {
                    for (k = 0; k < str.len; k++) {
                        this.buf[this.len] = str.buf[k];
                        this.len++;
                    }
                    return false;
                }
                return true;
            }
        }

    Does the above translate to:

        start:
            push ebp             ; save calling ebp
            mov ebp, esp         ; setup new ebp
            push esi             ;
            push ebx             ;
            mov esi, [ebp + 8]   ; esi = 'this'
            mov ebx, [ebp + 14]  ; ebx = str
            mov ecx, 0           ; k=0
            mov edx, [esi + 200] ; edx = this.len
        append:
            cmp edx + [ebx + 200], 100
            jle ret_true         ; if (this.len + str.len) <= 100 then ret_true
            cmp ecx, edx
            jge ret_false        ; if k >= str.len then ret_false
            mov [esi + edx], [ebx + 2*ecx] ; this.buf[this.len] = str.buf[k]
            inc edx              ; this.len++
        aux:
            inc ecx              ; k++
            jmp append
        ret_true:
            pop ebx              ; restore ebx
            pop esi              ; restore esi
            pop ebp              ; restore ebp
            ret true
        ret_false:
            pop ebx              ; restore ebx
            pop esi              ; restore esi
            pop ebp              ; restore ebp
            ret false

    My greatest difficulty here is figuring out what to push onto the stack and the math for pointers. NOTE: I'm not allowed to use global variables and I must assume 32-bit ints, 16-bit chars and 8-bit booleans.

  • With CentOS 6 and LXC, "ifconfig" is unable to see network interface (but busybox "ifconfig" works fine)

    - by larsks
    I've just started working with LXC under CentOS 6 (via the libvirt adapter). If I create an LXC container, I'm unable to see any network interfaces when using the native system tools:

        # ifconfig -a
        #

    The behavior is very odd; specifying an interface by name yields neither the expected output nor an error message. This is true even for clearly invalid interface names, like this:

        # ifconfig foo
        #

    The ip command exhibits the same behavior. On the other hand, if I use the "ifconfig" provided by busybox, everything works as expected:

        # busybox ifconfig -a
        eth0      Link encap:Ethernet  HWaddr 52:54:00:E0:12:C8
                  inet6 addr: fe80::5054:ff:fee0:12c8/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:268 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:17814 (17.3 KiB)  TX bytes:552 (552.0 B)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    So... what does busybox know that the native tools don't? The libvirt config for this environment is pretty standard; the network definition looks like this:

        <interface type='network'>
            <mac address='52:54:00:e0:12:c8'/>
            <source network='default'/>
            <target dev='veth0'/>
        </interface>

    The full configuration is here if you think it might help. I'm running:

        lxc-0.7.2-2.el6.x86_64
        kernel-2.6.32-71.29.1.el6.x86_64

    EDIT: Weirder and weirder... it's a display issue, not a functionality issue. I can see the output of ifconfig if I pipe it into anything, so for example:

        # ifconfig eth0 | cat
        eth0      Link encap:Ethernet  HWaddr 52:54:00:E0:12:C8
                  inet addr:192.168.10.10  Bcast:192.168.10.255  Mask:255.255.255.0
                  inet6 addr: fe80::5054:ff:fee0:12c8/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:573 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:37914 (37.0 KiB)  TX bytes:552 (552.0 b)

    And in fact, even when not piping the output, strace shows that ifconfig is in fact writing the output to file descriptor 1 (aka stdout), so it's not clear why no output is actually showing up. This could be either an LXC or a virsh issue, I guess.

  • OpenSL ES decode 24bit FLAC

    - by yano
    I am trying to decode a FLAC file with a 24-bit sample format using OpenSL ES on Android. Originally, I had my SLDataFormat_PCM for the SLDataSink set up like this:

        _pcm.formatType    = SL_DATAFORMAT_PCM;
        _pcm.numChannels   = 2;
        _pcm.samplesPerSec = SL_SAMPLINGRATE_44_1;
        _pcm.bitsPerSample = SL_PCMSAMPLEFORMAT_FIXED_16;
        _pcm.containerSize = SL_PCMSAMPLEFORMAT_FIXED_16;
        _pcm.channelMask   = SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT;
        _pcm.endianness    = SL_BYTEORDER_LITTLEENDIAN;

    This is working well for basically any data format. Luckily the samplesPerSec is not respected (I don't want resampling). Now I want to support the full bit depth of a FLAC file with 24-bit samples. When using this format, it apparently performs a bit-depth conversion, because once I load the file and then check the ANDROID_KEY_PCMFORMAT_BITSPERSAMPLE info, it is 16. When I put bitsPerSample = SL_PCMSAMPLEFORMAT_FIXED_24; or SL_PCMSAMPLEFORMAT_FIXED_32, then OpenSL ES rejects it:

        E/libOpenSLES(22706): pAudioSnk: bitsPerSample=32
        W/libOpenSLES(22706): Leaving Engine::CreateAudioPlayer (SL_RESULT_CONTENT_UNSUPPORTED)

    Any idea how this is meant to work? Is Android currently restricted to 16-bit int only? I would also accept 32-bit float, but I don't suppose that will work either.

  • flashcache with mdadm and LVM

    - by Backtogeek
    I am having trouble setting up flashcache on a system with LVM and mdadm. I suspect I am either just missing an obvious step or getting some mapping wrong, and hoped someone could point me in the right direction?

    System info: CentOS 6.4, 64 bit.

    mdadm config:

        md0 : active raid1 sdd3[2] sde3[3] sdf3[4] sdg3[5] sdh3[1] sda3[0]
              204736 blocks super 1.0 [6/6] [UUUUUU]
        md2 : active raid6 sdd5[2] sde5[3] sdf5[4] sdg5[5] sdh5[1] sda5[0]
              3794905088 blocks super 1.1 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
        md3 : active raid0 sdc1[1] sdb1[0]
              250065920 blocks super 1.1 512k chunks
        md1 : active raid10 sdh1[1] sda1[0] sdd1[2] sdf1[4] sdg1[5] sde1[3]
              76749312 blocks super 1.1 512K chunks 2 near-copies [6/6] [UUUUUU]

    pvscan:

        PV /dev/mapper/ssdcache   VG Xenvol   lvm2 [3.53 TiB / 3.53 TiB free]
        Total: 1 [3.53 TiB] / in use: 1 [3.53 TiB] / in no VG: 0 [0 ]

    flashcache create command used:

        flashcache_create -p back ssdcache /dev/md3 /dev/md2

    pvdisplay:

        --- Physical volume ---
        PV Name               /dev/mapper/ssdcache
        VG Name               Xenvol
        PV Size               3.53 TiB / not usable 106.00 MiB
        Allocatable           yes
        PE Size               128.00 MiB
        Total PE              28952
        Free PE               28912
        Allocated PE          40
        PV UUID               w0ENVR-EjvO-gAZ8-TQA1-5wYu-ISOk-pJv7LV

    vgdisplay:

        --- Volume group ---
        VG Name               Xenvol
        System ID
        Format                lvm2
        Metadata Areas        1
        Metadata Sequence No  2
        VG Access             read/write
        VG Status             resizable
        MAX LV                0
        Cur LV                1
        Open LV               1
        Max PV                0
        Cur PV                1
        Act PV                1
        VG Size               3.53 TiB
        PE Size               128.00 MiB
        Total PE              28952
        Alloc PE / Size       40 / 5.00 GiB
        Free  PE / Size       28912 / 3.53 TiB
        VG UUID               7vfKWh-ENPb-P8dV-jVlb-kP0o-1dDd-N8zzYj

    So that is where I am at. I thought that was the job done; however, when creating a logical volume called test and mounting it at /mnt/test, the sequential write is pathetic, 60-ish MB/s. /dev/md3 has 2 x SSDs in RAID 0, which alone performs at around 800 MB/s sequential write, and I am trying to cache /dev/md2, which is 6 x 1TB drives in RAID 6. I have read a number of pages through the day, and some of them here; it is obvious from the results that the cache is not functioning, but I am unsure why. I have added the filter line in lvm.conf:

        filter = [ "r|/dev/sdb|", "r|/dev/sdc|", "r|/dev/md3|" ]

    It is probably something silly, but the cache is clearly performing no writes, so I suspect I am not mapping it or have not mounted the cache correctly.

        dmsetup status
        ssdcache: 0 7589810176 flashcache stats:
            reads(142), writes(0)
            read hits(133), read hit percent(93)
            write hits(0) write hit percent(0)
            dirty write hits(0) dirty write hit percent(0)
            replacement(0), write replacement(0)
            write invalidates(0), read invalidates(0)
            pending enqueues(0), pending inval(0)
            metadata dirties(0), metadata cleans(0)
            metadata batch(0) metadata ssd writes(0)
            cleanings(0) fallow cleanings(0)
            no room(0) front merge(0) back merge(0)
            force_clean_block(0)
            disk reads(9), disk writes(0) ssd reads(133) ssd writes(9)
            uncached reads(0), uncached writes(0), uncached IO requeue(0)
            disk read errors(0), disk write errors(0) ssd read errors(0) ssd write errors(0)
            uncached sequential reads(0), uncached sequential writes(0)
            pid_adds(0), pid_dels(0), pid_drops(0) pid_expiry(0)
            lru hot blocks(31136000), lru warm blocks(31136000)
            lru promotions(0), lru demotions(0)
        Xenvol-test: 0 10485760 linear

    I have included as much info as I can think of; I look forward to any replies.
