Search Results

Search found 38064 results on 1523 pages for 'oracle linux'.

Page 653/1523 | < Previous Page | 649 650 651 652 653 654 655 656 657 658 659 660  | Next Page >

  • Editing .bash_profile file not taking effect

    - by Sandeepan Nath
    I need to put export PATH=$PATH:/opt/lampp/bin in my ~/.bash_profile file so that mysql works from the command line on my system. Please see "mysql command line not working" for further details on that. I am working on a Fedora system and logged in as the root user. If I run locate .bash_profile, I get these:

        /etc/skel/.bash_profile
        /home/sam/.bash_profile
        /home/sohil/.bash_profile
        /home/windows/.bash_profile
        /root/.bash_profile

    So, I modified the /root/.bash_profile file from this:

        PATH=$PATH:$HOME/bin
        export PATH

    to this:

        PATH=$PATH:/opt/lampp/bin
        export PATH

    But the change is still not taking effect - opening a new console and running mysql again says bash: mysql: command not found. However, running export PATH=$PATH:/opt/lampp/bin in the console makes it work for that session. So, I am doing something wrong with the .bash_profile file - maybe editing the wrong one, or doing the edit incorrectly.
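
    One likely cause, as a hedged sketch: ~/.bash_profile is only read by login shells, while a new terminal window usually starts an interactive non-login shell that reads ~/.bashrc instead. Assuming that is what is happening here, appending the export to /root/.bashrc as well (or applying the profile to the current session) should make the change stick:

        # Append the export to /root/.bashrc, which non-login shells do read
        echo 'export PATH=$PATH:/opt/lampp/bin' >> /root/.bashrc

        # Or apply the edited profile to the current session without reopening the console
        source /root/.bash_profile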

    Read the article

  • KVM Guest not reachable from host

    - by Paul
    Hello, I'm running Ubuntu Server 9.10 with KVM installed. I created the bridge network following the instructions at help.ubuntu.com/community/KVM/Networking, then created a Windows 2008 guest using the virt-install command line (using the virt-manager GUI from a remote Ubuntu desktop would not let me select the ISO location). I can, however, use a remote virt-manager to connect to the guest and complete the Windows install. Within Windows 2008 I changed the IP address, but I cannot ping it from the outside world. The bridge network appears fine - I'm not sure what else to look at! Here is the interfaces file:

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        iface eth0 inet manual

        auto br0
        iface br0 inet static
            address 60.234.64.50
            netmask 255.255.255.248
            network 60.234.0.0
            broadcast 60.234.0.255
            gateway 60.234.64.49
            bridge_ports eth0
            bridge_stp off
            bridge_fd 0
            bridge_maxwait 0

        auto eth1
        iface eth1 inet static
            address 192.168.12.2
            netmask 255.255.255.0
            broadcast 192.168.12.255

    The IP of the Windows server is 60.234.64.52. What else should I check? Regards, Paul.
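
    A hedged troubleshooting sketch - the device names and IPs come from the question, everything else is an assumption: confirm the guest's tap interface is actually attached to the bridge, then watch whether the ping requests reach the bridge at all:

        # List bridge members; the guest's vnet0/tap device should appear alongside eth0
        brctl show br0

        # Watch for the guest's ICMP traffic on the bridge while pinging 60.234.64.52 from outside
        tcpdump -n -i br0 icmp

    If the requests arrive on br0 but the guest never answers, the Windows 2008 firewall (which blocks inbound ICMP echo by default) is a likely suspect.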

    Read the article

  • Ubuntu: Multiple NICs, one used only for Wake-On-LAN

    - by jcwx86
    This is similar to some other questions, but I have a specific need which is not covered in the other questions. I have an Ubuntu server (11.10) with two NICs. One is built into the motherboard and the other is a PCI Express card. I want my server connected to the internet via my NAT router and also able to wake from suspend using a Magic Packet (henceforth referred to as Wake-On-LAN, WOL). I can't do this with just one of the NICs because each has an issue - the built-in NIC will crash the system if it is placed under heavy load (typically downloading data), whilst the PCI Express NIC will crash the system if it is used for WOL. I have spent some time investigating these individual problems, to no avail. My plan is thus: use the built-in NIC solely for WOL, and use the PCI Express card for all other network communication except WOL. Since I send the WOL Magic Packet to a specific MAC address, there is no danger of hitting the wrong NIC, but there is a danger of using the built-in NIC for general network access, overloading it and crashing the system. Both NICs are wired to the same LAN with address space 192.168.0.0/24. The built-in ethernet card is set to have interface name eth1 and the PCI Express card is eth0 in Ubuntu's udev persistent rules (so they stay the same upon reboot). I have been trying to set this up with the /etc/network/interfaces file. Here is where I am currently:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 192.168.0.3
            netmask 255.255.255.0
            network 192.168.0.0
            broadcast 192.168.0.255
            gateway 192.168.0.1

        auto eth1
        iface eth1 inet static
            address 192.168.0.254
            netmask 255.255.255.0

    I think that by not specifying a gateway for eth1, I prevent it being used for outgoing requests. I don't mind if it can be reached on 192.168.0.254 on the LAN, i.e. via SSH - its IP is irrelevant to WOL, which is based on MAC addresses - I just don't want it to be used to access internet resources. My kernel routing table (from route -n) is:

        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        0.0.0.0         192.168.0.1     0.0.0.0         UG    100    0        0 eth0
        169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 eth0
        192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
        192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1

    My question is this: is this sufficient for what I want to achieve? My research has thrown up the idea of using static routing to specify that eth1 should only be used for WOL on the local network, but I'm not sure this is necessary. I have been monitoring the activity of the interfaces using iptraf, and it seems like eth0 takes the vast majority of the packets, though I am not sure this will be consistent based on my configuration. Given that if I mess up the configuration my system will likely crash, it is important to me to have this set up correctly!
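
    As a hedged sketch of the static-route idea (interface names from the question; the metric value is an assumption): WOL only needs the NIC to keep listening for the Magic Packet, so eth1 does not need a usable route at all. Arming WOL explicitly and demoting eth1's duplicate LAN route would look something like:

        # Arm Wake-on-LAN ("g" = wake on Magic Packet) on the built-in NIC
        ethtool -s eth1 wol g

        # Demote eth1's duplicate LAN route so eth0 is always preferred
        ip route del 192.168.0.0/24 dev eth1
        ip route add 192.168.0.0/24 dev eth1 metric 1000

    With both 192.168.0.0/24 routes at metric 0, the kernel normally uses the first match (eth0 here), so this mainly guards against the ordering changing across reboots; post-up lines in /etc/network/interfaces would make both settings persistent.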

    Read the article

  • What is the difference between sar -B versus sar -W

    - by Mark
    I am trying to understand why my system is running slowly. I found the sar command, but wanted to know the difference between sar -B and sar -W. I read the man page, and I understand that -B gives me the paging statistics and -W gives me the swapping statistics. What I would like to understand is the following: What is the correlation between the two sets of statistics? When should I be concerned about -B and when about -W, i.e. what values from each command should I be concerned with? Which statistic is more closely related to system performance? Thanks
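
    For reference, a quick hedged illustration of the two reports (column names as documented in sysstat's man page; the sampling interval is arbitrary):

        # Paging: pgpgin/s and pgpgout/s are KB paged in/out from disk;
        # majflt/s counts major faults that had to go to disk
        sar -B 1 5

        # Swapping: pswpin/s and pswpout/s are whole pages swapped in/out
        sar -W 1 5

    Roughly speaking, paging covers all page traffic including ordinary file I/O, while sustained nonzero pswpin/s and pswpout/s means the system is short of RAM - the -W numbers are usually the stronger signal that memory pressure is hurting performance.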

    Read the article

  • Post-compile PHP 5.4 cURL installation

    - by user140657
    I recently compiled PHP 5.4 from source. I have CentOS 6. I used this configuration:

        # ./configure --with-apxs2=/usr/local/apache2/bin/apxs --with-mysql
        # make
        # make install
        # cp php.ini-dist /usr/local/lib/php.ini

    I realize now that I do not have cURL installed, and I don't know how to install cURL after a compiled installation of PHP. Using yum install php-curl installs cURL for PHP 5.3. I tried this already with an Apache restart and it did not show up in my phpinfo file. How do I install cURL under these circumstances?
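
    A hedged sketch of the usual route for a source build: cURL support is a compile-time option, so the fix is to rebuild PHP with --with-curl rather than pull in the distribution package (which targets the packaged PHP 5.3). Assuming the libcurl headers are available:

        # Install the cURL development headers the configure script needs
        yum install libcurl-devel

        # Re-run configure with the original flags plus cURL, then rebuild
        ./configure --with-apxs2=/usr/local/apache2/bin/apxs --with-mysql --with-curl
        make
        make install

        # Restart Apache so the new module is loaded
        /usr/local/apache2/bin/apachectl restart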

    Read the article

  • Disable mouse wakeup from suspend on Ubuntu

    - by Shadyabhi
    When I suspend Ubuntu, I can wake the computer up just by moving the mouse. But I don't want the computer to wake when I move my mouse. How can I do that? My /proc/acpi/wakeup file:

        shadyabhi@shadyabhi-desktop:~$ cat /proc/acpi/wakeup
        Device  S-state   Status     Sysfs node
        SLPB      S4    *enabled
        P32       S4    disabled   pci:0000:00:1e.0
        UAR1      S4    disabled   pnp:00:09
        ILAN      S4    disabled   pci:0000:00:19.0
        PEGP      S4    disabled
        PEX0      S4    disabled   pci:0000:00:1c.0
        PEX1      S4    disabled   pci:0000:00:1c.1
        PEX2      S4    disabled   pci:0000:00:1c.2
        PEX3      S4    disabled   pci:0000:00:1c.3
        PEX4      S4    disabled   pci:0000:00:1c.4
        PEX5      S4    disabled
        UHC1      S3    disabled   pci:0000:00:1d.0
        UHC2      S3    disabled   pci:0000:00:1d.1
        UHC3      S3    disabled   pci:0000:00:1d.2
        UHC4      S3    disabled
        EHCI      S3    disabled   pci:0000:00:1d.7
        EHC2      S3    disabled   pci:0000:00:1a.7
        UH42      S3    disabled   pci:0000:00:1a.0
        UHC5      S3    disabled   pci:0000:00:1a.1
        UHC6      S3    disabled   pci:0000:00:1a.2
        AZAL      S3    disabled   pci:0000:00:1b.0
        shadyabhi@shadyabhi-desktop:~$
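
    Since the USB controllers above are already disabled for ACPI wakeup, the mouse wake may be armed at the USB device level instead. A hedged sketch (the device path is hypothetical and must be matched to the actual mouse, e.g. by reading the idVendor/idProduct files next to it):

        # Find USB devices currently allowed to wake the machine
        grep -r . /sys/bus/usb/devices/*/power/wakeup 2>/dev/null

        # Disable wakeup for the device identified as the mouse (path is hypothetical)
        echo disabled > /sys/bus/usb/devices/2-1/power/wakeup

    Note this setting does not survive a reboot; a udev rule or an rc.local entry would be needed to make it permanent.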

    Read the article

  • RAID 5 GPT Partitioning

    - by user39325
    I have a Dell PowerEdge R710 server with five 1 TB disks, all of them in RAID 5. I was trying to install CentOS, but it says "Your boot partition is on a disk using the GPT partitioning scheme...". I read somewhere that CentOS can't install on a 2 TB disk, so I made some partitions smaller, but it's not working. Any ideas? PS: I am going to install Proxmox on that, but Proxmox likewise doesn't accept 2 TB disks...
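
    A hedged sketch of one common workaround, assuming the BIOS boots this array in legacy (non-UEFI) mode: rather than shrinking data partitions, either carve a small (under 2 TB) virtual disk out of the array in the PERC controller to hold /boot, or give GRUB the tiny "BIOS boot" partition it needs to boot from a GPT disk:

        # Hypothetical layout on the RAID virtual disk /dev/sda
        parted -s /dev/sda mklabel gpt
        parted -s /dev/sda mkpart primary 1MiB 3MiB
        parted -s /dev/sda set 1 bios_grub on                  # 2 MiB BIOS boot partition for GRUB
        parted -s /dev/sda mkpart primary ext3 3MiB 500MiB     # /boot

    Whether the installer in question supports booting this way depends on its GRUB build, so the controller-level split is often the safer option.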

    Read the article

  • Is Vserver still interesting?

    - by Kartoch
    I was using Vserver a long time ago, but since the last stable release (2007), BSD jails have come to offer many more features. It is not clear whether Vserver still has a future (there is still some development on it). Have you dropped (or kept) Vserver for your production servers? For which reasons?

    Read the article

  • How do you install php-devel?

    - by user962449
    I keep getting dependency issues when I try to run yum install php-devel:

        yum install --skip-broken php-devel
        ....
        --> Finished Dependency Resolution
        php-5.1.6-32.el5.i386 from base has depsolving problems
          --> Missing Dependency: php-common = 5.1.6-32.el5 is needed by package php-5.1.6-32.el5.i386 (base)
        php-cli-5.1.6-32.el5.i386 from base has depsolving problems
          --> Missing Dependency: php-common = 5.1.6-32.el5 is needed by package php-cli-5.1.6-32.el5.i386 (base)
        --> Running transaction check
        ---> Package php.i386 0:5.1.6-32.el5 set to be updated
        --> Processing Dependency: php = 5.1.6-32.el5 for package: php-devel
        ---> Package php-cli.i386 0:5.1.6-32.el5 set to be updated
        --> Finished Dependency Resolution
        php-devel-5.1.6-32.el5.i386 from base has depsolving problems
          --> Missing Dependency: php = 5.1.6-32.el5 is needed by package php-devel-5.1.6-32.el5.i386 (base)
        Packages skipped because of dependency problems:
            autoconf-2.59-12.noarch from base
            automake-1.9.6-2.3.el5.noarch from base
            imake-1.0.2-3.i386 from base
            php-5.1.6-32.el5.i386 from base
            php-cli-5.1.6-32.el5.i386 from base
            php-devel-5.1.6-32.el5.i386 from base

    Any ideas?
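
    A hedged diagnostic sketch: errors like these usually mean the installed php-common comes from a different source (a newer version or a third-party repo) than the base packages yum is trying to match. Comparing what is installed against what the repos offer normally points at the culprit:

        # What PHP packages are installed, and from which vendor?
        rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE} %{VENDOR}\n' | grep php

        # Which php-devel and php-common versions do the enabled repos provide?
        yum list php-devel php-common

    If the installed php-common is newer than 5.1.6-32.el5, installing the matching php-devel from that same repository (enabling it with --enablerepo if needed) should resolve cleanly.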

    Read the article

  • Long connection times from PHP to MySQL on EC2

    - by Erik Giberti
    I'm having an intermittent issue connecting to a database slave with InnoDB. Intermittently I get connections taking longer than 2 seconds. These servers are hosted on Amazon's EC2. The app server is PHP 5.2/Apache running on Ubuntu. The DB slave is running Percona's XtraDB 5.1 on Ubuntu 9.10. It's using an EBS RAID array for the data storage. We already use skip-name-resolve and bind to address 0.0.0.0. This is a stub of the PHP code that's failing:

        $tmp = mysqli_init();
        $start_time = microtime(true);
        $tmp->options(MYSQLI_OPT_CONNECT_TIMEOUT, 2);
        $tmp->real_connect($DB_SERVERS[$server]['server'],
                           $DB_SERVERS[$server]['username'],
                           $DB_SERVERS[$server]['password'],
                           $DB_SERVERS[$server]['schema'],
                           $DB_SERVERS[$server]['port']);
        if (mysqli_connect_errno()) {
            $timer = microtime(true) - $start_time;
            mail($errors_to, 'DB connection error', $timer);
        }

    There's more than 300MB available on the DB server for new connections, and the server is nowhere near the max allowed (60 of 1,200). Loading on both servers is < 2 on 4-core m1.xlarge instances. Some highlights from the mysql config:

        max_connections = 1200
        thread_stack = 512K
        thread_cache_size = 1024
        thread_concurrency = 16
        innodb-file-per-table
        innodb_additional_mem_pool_size = 16M
        innodb_buffer_pool_size = 13G

    Any help on tracing the source of the slowdown is appreciated.

    [EDIT] I have been updating the sysctl values for the network, but they don't seem to be fixing the problem. I made the following adjustments on both the database and application servers:

        net.ipv4.tcp_window_scaling = 1
        net.ipv4.tcp_sack = 0
        net.ipv4.tcp_timestamps = 0
        net.ipv4.tcp_fin_timeout = 20
        net.ipv4.tcp_keepalive_time = 180
        net.ipv4.tcp_max_syn_backlog = 1280
        net.ipv4.tcp_synack_retries = 1
        net.core.rmem_max = 16777216
        net.core.wmem_max = 16777216
        net.ipv4.tcp_rmem = 4096 87380 16777216
        net.ipv4.tcp_wmem = 4096 87380 16777216

    [EDIT] Per jaimieb's suggestion, I added some tracing and captured the following data using time. This server handles about 51 queries/second at this time of day. The connection error was raised once (at 13:06:36) during the 3-minute window outlined below. Since there was 1 failure and roughly 9,200 successful connections, I think this isn't going to produce anything meaningful in terms of reporting. Script:

        date >> /root/database_server.txt
        (time mysql -h database_Server -D schema_name -u appuser -papppassword -e '') > /dev/null 2>> /root/database_server.txt

    Results:

        === Application Server 1 ===
        Mon Feb 22 13:05:01 EST 2010    real 0m0.008s  user 0m0.001s  sys 0m0.000s
        Mon Feb 22 13:06:01 EST 2010    real 0m0.007s  user 0m0.002s  sys 0m0.000s
        Mon Feb 22 13:07:01 EST 2010    real 0m0.008s  user 0m0.000s  sys 0m0.001s

        === Application Server 2 ===
        Mon Feb 22 13:05:01 EST 2010    real 0m0.009s  user 0m0.000s  sys 0m0.002s
        Mon Feb 22 13:06:01 EST 2010    real 0m0.009s  user 0m0.001s  sys 0m0.003s
        Mon Feb 22 13:07:01 EST 2010    real 0m0.008s  user 0m0.000s  sys 0m0.001s

        === Database Server ===
        Mon Feb 22 13:05:01 EST 2010    real 0m0.016s  user 0m0.000s  sys 0m0.010s
        Mon Feb 22 13:06:01 EST 2010    real 0m0.006s  user 0m0.010s  sys 0m0.000s
        Mon Feb 22 13:07:01 EST 2010    real 0m0.016s  user 0m0.000s  sys 0m0.010s

    [EDIT] Per a suggestion received on a LinkedIn question, I tried setting the back_log value higher. We had been running the default value (50) and increased it to 150. We also raised the kernel value /proc/sys/net/core/somaxconn (maximum socket connections) to 256 on both the application and database server from the default 128. We did see some elevation in processor utilization as a result, but still received connection timeouts.

    Read the article

  • Is there a way to do something like LVM over NFS?

    - by warren
    I realize that since NFS is not block-level, LVM can't be used directly. However: is there a way to combine multiple NFS exports (from, say, 3 servers) into one mount point on a different server? Specifically, I'd like to be able to do this on RHEL 4 (or 5, and re-export the combined mount to my RHEL 4 server). Expansion: the reason I pegged LVM is that I want a bunch of exported mounts (servera:/mnt/export, serverb:/mnt/export, serverc:/mnt/export, etc.) to all mount at /mnt/space, so that /mnt/space on this server (serverx) appears as one large filesystem. Yes, I know that re-exporting is generally a Bad Thing™, but I thought it might work if there was a way to accomplish this on a newer release as opposed to an older one. From reading the unionfs docs, it appears that I can't use it over a remote connection - have I misread them? More accurately, since UnionFS merges the contents of multiple branches but makes them appear as one, it doesn't seem to go in reverse: I'm trying to mount a bunch of NFS points in a merged fashion, then write to them - not caring where data goes, a la LVM.
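
    For what it's worth, a hedged sketch of the union-mount idea (paths from the question; the availability of a unionfs kernel module or FUSE build is an assumption, since neither ships with stock RHEL):

        mount -t nfs servera:/mnt/export /mnt/a
        mount -t nfs serverb:/mnt/export /mnt/b
        mount -t nfs serverc:/mnt/export /mnt/c

        # Merge the three NFS mounts into one view; writes land on the first rw branch
        mount -t unionfs -o dirs=/mnt/a=rw:/mnt/b=rw:/mnt/c=rw none /mnt/space

    Note this still differs from LVM: a union mount overlays namespaces, so new files go to the highest-priority branch rather than being striped or balanced across the three servers.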

    Read the article

  • Mobile Identity Management at SuperValu

    - by Tanu Sood
    While organizations are fast embracing the BYOD (Bring Your Own Device) culture to attract and retain the best talent, improve productivity, bring agility and drive down costs, SuperValu coined its own term (and trend): TYDH - Take Your Device Home. Yes, SuperValu, a Minnesota-based food retailer with 18,000 employees, handed out 2,200 iPads to store directors at locations across the country. The motivation behind this reverse trend? Phillip Black, Director of Identity & Access Management at SuperValu, shared the reasoning behind it in his talk at last week's Oracle OpenWorld 2012. "It gives them productivity tools to better manage their store," says Black. Intrigued? Find out more in this recently published news article, and learn more about Oracle Identity Management 11gR2 mobile- and social-ready sign-on features today. Additional Resources: Press Release: Oracle announces Identity Management 11g Release 2; On-Demand webcast: Identity Management 11gR2 Launch; Oracle Magazine: Security on the Move; Website: Oracle Identity Management; Blog Post: Mobile and Social Sign-on with Oracle Access Management

    Read the article

  • Will these instructions work for turning off journaling on an ext4 SSD?

    - by snowlord
    I have an Acer Aspire One with an SSD for storage. I recently installed Ubuntu on it and chose ext4 for my filesystem. Then I read that journaling on an SSD isn't the best idea, so I will try to disable journaling. I have found these instructions (from http://fenidik.blogspot.com/2010/03/ext4-disable-journal.html):

        # Create ext4 fs on /dev/sda10 disk
        mkfs.ext4 /dev/sda10
        # Enable writeback mode. This mode will typically provide the best ext4 performance.
        tune2fs -o journal_data_writeback /dev/sda10
        # Delete has_journal option
        tune2fs -O ^has_journal /dev/sda10
        # Required fsck
        e2fsck -f /dev/sda10
        # Check fs options
        dumpe2fs /dev/sda10 | more

    For more performance, add the fstab options data=writeback,noatime,nodiratime, i.e.:

        /dev/sda10 /opt ext4 defaults,data=writeback,noatime,nodiratime 0 0

    I will use them on my boot partition. Are there any particularly bad parts here, or are there any missing steps? Will my boot partition be fit for being on an SSD after this? Or should I consider switching to ext2, or even reinstalling it all and choosing ext2 at partitioning time (I'd rather not, though, since I've configured quite a lot of stuff already)?

    Read the article

  • Using public interfaces on a server connected through a GRE tunnel

    - by Evan
    I'm pretty new to networking, so please forgive any terminology mistakes. I have 2 servers connected with a GRE tunnel:

        Server1 (10.0.0.1) ---- Server2 (10.0.0.2)

    I want to be able to bind to Server2's public IPs using Server1. To do this, I set up virtual interfaces with Server2's public IPs on Server1 and then used routing rules on Server1 to route the packets through the GRE tunnel. On Server1:

        ip rule add from [Server2's first public IP] table gre
        ip rule add from [Server2's second public IP] table gre
        ip route add default via 10.0.0.2 dev gre1 table gre

    This works great and I can see the packets arriving via GRE on Server2. I can see the packet exiting the tunnel on Server2's gre1 device as shown. From Server1:

        ping -I [Server2's public ip] google.com

    tcpdump from Server2's GRE tunnel device:

        12:07:17.029160 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84)
        [Server2's public ip] > 74.125.225.38: ICMP echo request, id 6378, seq 50, length 64

    This is exactly the packet I want. However, I'm not seeing it go out at all on eth0:0 (where Server2's public IP is bound). I've tried to use routing rules to get packets coming from Server2's public IP (which would be coming out of dev gre1) to go through dev eth0 on the public default gateway, and that doesn't work either. I'm at a loss - thank you to anyone who can help.
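
    One hedged guess at the missing piece: by default, the kernel's reverse-path filter drops packets that arrive on an interface (gre1) the kernel would not itself use to reach their source address, which matches these symptoms. Loosening it on Server2 is a quick test (interface names from the question):

        # On Server2: relax reverse-path filtering so traffic entering via gre1
        # with a local public source address can be forwarded out eth0
        sysctl -w net.ipv4.conf.all.rp_filter=0
        sysctl -w net.ipv4.conf.gre1.rp_filter=0

        # Forwarding must also be on for Server2 to pass the packets along
        sysctl -w net.ipv4.ip_forward=1

    If that helps, rp_filter=2 (loose mode) is a less drastic permanent setting than 0.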

    Read the article

  • Nagios Wouldn't Start, Now Won't Stop!

    - by Bart B
    I ran an update on a CentOS server running Nagios. After the update, Nagios failed to start. The error in the logs was:

        Failed to obtain lock on file /var/run/nagios.pid: Permission denied

    So I checked, and there was no PID file for Nagios in /var/run. I created one and gave it the following permissions:

        -rwxr--r-- 1 nagios nagios 6 May 31 11:58 nagios.pid

    Nagios then started and seems to be running normally. The only problem is, it now refuses to stop, so I can't restart it to add new servers and services to be monitored! When I issue the command "service nagios stop", I get [FAILED], but nothing at all gets outputted to the log and the service remains up. Any ideas on how I can get the service to stop? I'm running the RPM version, which was installed via yum from the RPMForge repositories. The server is CentOS 5.5.
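
    A hedged sketch of the likely situation: the init script stops Nagios by killing whatever PID is recorded in /var/run/nagios.pid, and a hand-created file probably doesn't contain the PID of the daemon that is actually running. Checking and correcting that by hand should let the init script work again:

        # Compare the recorded PID with the actual process
        cat /var/run/nagios.pid
        pgrep -u nagios nagios

        # Write the real PID into the file, then stop normally...
        pgrep -o -u nagios nagios > /var/run/nagios.pid
        service nagios stop

        # ...or, bluntly, kill the daemon directly
        kill $(pgrep -u nagios nagios)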

    Read the article

  • Video desktop recording and multiple WM displays, capturing nonactive display

    - by okobaka
    Two WMs running on one local machine (the WM is Fluxbox), using ffmpeg to record the desktop:

        ffmpeg -an -f x11grab -s 1920x1080 -r 25 -i :1.0 -sameq /tmp/video.mkv

    On one display everything works great, but not when I have another WM display (startx -- :1). What I am doing right now is to switch (ctrl+alt+f8) to display :1.0 and start recording with ffmpeg. Everything is fine until I switch back (ctrl+alt+f7) to display :0.0 - the WM and the captured video image freeze. When I switch back (ctrl+alt+f8) to display :1.0, it unfreezes and continues recording. So, how do I keep display :1.0 from freezing while I am on display :0.0? Tested some more: opening display :0.0, then opening display :0.1 from :0.0, then opening display :0.2 - same problem. For different users and for the same user, the results are the same: ffmpeg keeps recording the paused image. It looks like the WM root window needs to be active to be recorded.

    Read the article

  • Is there any wiki that supports ACL, ADI and API? [closed]

    - by goutham
    Possible Duplicate: Which wiki satisfies ACL, ADI and API? Is there any wiki that supports ACL, ADI and API? My requirement is a wiki that does three things: 1. Uses ACLs (access control lists - who can access what pages). 2. Has AD (Active Directory) integration. 3. Is scriptable via an API (meaning I can create a wiki page through an API in a program instead of logging in and manually typing in the page). Your help is appreciated. Thanks in advance, Goutham

    Read the article

  • How to speed up Apache

    - by Zen_silence
    We have a server with 8 cores, 16GB of RAM and RAID 0 SAS 10K drives. Our goal is to use this to serve a fairly simple PHP application quickly. We have tested all the other components, and we think we have narrowed the bottleneck down to Apache. I am no Apache guru. I have done some research and tested a couple of things, but when I test with JMeter launching 100 concurrent connections against the server, the first 10 - 20 come back quickly (30 - 100ms) but the rest take between 1000ms and 3000ms. Anyone have any ideas on what to change in our Apache config to make this faster? Right now it's a vanilla install of Apache.
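
    A hedged starting point rather than a definitive tuning: on a vanilla prefork build, the symptom above (the first requests fast, the rest queueing) often just means too few worker processes are allowed to start. Raising the prefork limits in httpd.conf to something this hardware can carry might look like:

        <IfModule prefork.c>
            StartServers         20
            MinSpareServers      20
            MaxSpareServers      50
            ServerLimit         256
            MaxClients          256
            MaxRequestsPerChild 4000
        </IfModule>

        # Short keep-alives stop idle clients from pinning workers
        KeepAlive On
        KeepAliveTimeout 2

    The exact numbers are guesses to be validated against memory use: under prefork, each MaxClients slot costs one full PHP-loaded Apache process.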

    Read the article

  • FFMPEG Segfault Solutions

    - by Brentley_11
    I'm trying to convert a bunch of movies into h.264 MP4s using FFmpeg. These movies are sourced from various portable camcorders such as the Flip Mino HD and the Kodak Zi8. One issue I'm having with video from the Zi8 is that it seems to be causing FFmpeg to segfault. Here is my command:

        ffmpeg -i 'XmasSailor720p60fps.MOV' -threads 2 -acodec libfaac -ab 96kb -vcodec libx264 -vpre hq -b 500kb -s 484x272 XmasSailor.mp4

    Here is the output:

        FFmpeg version SVN-r20668, Copyright (c) 2000-2009 Fabrice Bellard, et al.
          built on Dec 2 2009 18:37:34 with gcc 4.2.4 (Ubuntu 4.2.4-1ubuntu4)
          configuration: --enable-libfaac --enable-libfaad --enable-libmp3lame --enable-libx264 --enable-gpl --enable-nonfree --enable-postproc --enable-pthreads --enable-shared
          libavutil     50. 5. 1 / 50. 5. 1
          libavcodec    52.42. 0 / 52.42. 0
          libavformat   52.39. 2 / 52.39. 2
          libavdevice   52. 2. 0 / 52. 2. 0
          libswscale     0. 7. 2 /  0. 7. 2
          libpostproc   51. 2. 0 / 51. 2. 0
        Seems stream 0 codec frame rate differs from container frame rate: 59.94 (60000/1001) -> 29.97 (30000/1001)
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'XmasSailor720p60fps.MOV':
          Duration: 00:00:05.37, start: 0.000000, bitrate: 12021 kb/s
            Stream #0.0(eng): Video: h264, yuv420p, 1280x720 [PAR 1:1 DAR 16:9], 11994 kb/s, 29.97 tbr, 90k tbn, 59.94 tbc
            Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16, 128 kb/s
          Metadata
            major_brand : qt
            minor_version : 0
            compatible_brands: qt
            comment : KODAK Zi8 Pocket Video Camera
            comment-eng : KODAK Zi8 Pocket Video Camera
        [libx264 @ 0x99e1020]using SAR=1/1
        [libx264 @ 0x99e1020]using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.1 Cache64
        [libx264 @ 0x99e1020]profile High, level 2.1
        Output #0, mp4, to 'XmasSailor.mp4':
            Stream #0.0(eng): Video: libx264, yuv420p, 484x272 [PAR 1:1 DAR 121:68], q=10-51, 500 kb/s, 30k tbn, 29.97 tbc
            Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16, 96 kb/s
          Metadata
            comment : Encoded with the Statusfirm Video Transcoder
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.1 -> #0.1
        Press [q] to stop encoding
        [h264 @ 0x99de950]B picture before any references, skipping
        [h264 @ 0x99de950]decode_slice_header error
        [h264 @ 0x99de950]no frame!
        Error while decoding stream #0.0
        [h264 @ 0x99de950]B picture before any references, skipping
        [h264 @ 0x99de950]decode_slice_header error
        [h264 @ 0x99de950]no frame!
        Error while decoding stream #0.0
        frame=  20 fps=  0 q=13797729.0 size=       0kB time=0.66 bitrate=   0.6kbits/s
        frame=  39 fps= 37 q=13797729.0 size=       0kB time=1.30 bitrate=   0.3kbits/s
        frame=  48 fps= 30 q=33.0 size=      11kB time=0.10 bitrate= 903.0kbits/s
        frame=  58 fps= 27 q=31.0 size=      22kB time=0.43 bitrate= 421.0kbits/s
        frame=  67 fps= 25 q=29.0 size=      41kB time=0.73 bitrate= 462.6kbits/s
        frame=  75 fps= 23 q=29.0 size=      59kB time=1.00 bitrate= 486.7kbits/s
        frame=  83 fps= 22 q=29.0 size=      81kB time=1.27 bitrate= 521.9kbits/s
        frame=  90 fps= 21 q=29.0 size=      97kB time=1.50 bitrate= 530.1kbits/s
        frame=  98 fps= 20 q=29.0 size=     114kB time=1.77 bitrate= 526.9kbits/s
        frame= 106 fps= 20 q=29.0 size=     134kB time=2.04 bitrate= 537.7kbits/s
        frame= 114 fps= 19 q=29.0 size=     150kB time=2.30 bitrate= 533.7kbits/s
        frame= 122 fps= 19 q=29.0 size=     172kB time=2.57 bitrate= 547.8kbits/s
        frame= 130 fps= 19 q=29.0 size=     193kB time=2.84 bitrate= 557.5kbits/s
        frame= 136 fps= 18 q=29.0 size=     211kB time=3.04 bitrate= 570.0kbits/s
        frame= 144 fps= 18 q=29.0 size=     242kB time=3.30 bitrate= 599.5kbits/s
        frame= 152 fps= 17 q=30.0 size=     261kB time=3.57 bitrate= 598.6kbits/s
        frame= 157 fps= 15 q=-1.0 Lsize=     368kB time=5.21 bitrate= 579.3kbits/s
        video:302kB audio:61kB global headers:0kB muxing overhead 1.416371%
        [libx264 @ 0x99e1020]frame I:1   Avg QP:27.22 size: 8720
        [libx264 @ 0x99e1020]frame P:48  Avg QP:25.15 size: 3759
        [libx264 @ 0x99e1020]frame B:108 Avg QP:30.10 size: 1105
        [libx264 @ 0x99e1020]consecutive B-frames: 0.6% 11.5% 28.8% 59.0%
        [libx264 @ 0x99e1020]mb I  I16..4: 28.5% 47.6% 23.9%
        [libx264 @ 0x99e1020]mb P  I16..4: 0.8% 1.3% 0.5%  P16..4: 50.6% 17.7% 13.1% 0.0% 0.0%  skip:15.9%
        [libx264 @ 0x99e1020]mb B  I16..4: 0.2% 0.3% 0.1%  B16..8: 44.0% 1.2% 2.6%  direct: 5.1%  skip:46.5%  L0:45.5% L1:51.0% BI: 3.5%
        [libx264 @ 0x99e1020]final ratefactor: 23.51
        [libx264 @ 0x99e1020]8x8 transform intra:49.9% inter:67.9%
        [libx264 @ 0x99e1020]direct mvs  spatial:98.1% temporal:1.9%
        [libx264 @ 0x99e1020]coded y,uvDC,uvAC intra: 54.7% 76.1% 41.4% inter: 17.1% 24.4% 7.8%
        [libx264 @ 0x99e1020]i16 v,h,dc,p: 18% 52% 5% 25%
        [libx264 @ 0x99e1020]i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 12% 22% 9% 7% 10% 10% 9% 8% 13%
        [libx264 @ 0x99e1020]i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 13% 18% 8% 8% 10% 13% 10% 9% 12%
        [libx264 @ 0x99e1020]Weighted P-Frames: Y:10.4%
        [libx264 @ 0x99e1020]ref P L0: 60.2% 15.3% 11.0% 7.6% 5.2% 0.7%
        [libx264 @ 0x99e1020]ref B L0: 72.6% 15.6% 11.8%
        [libx264 @ 0x99e1020]kb/s:471.17
        Segmentation fault

    I'm wondering if anyone else has run into similar issues; I wasn't able to find anything helpful via Google. Another question I have is whether anyone knows of a company that offers paid support for FFmpeg. Thank you for your time.

    Read the article

  • Using iptables to change a destination port but keep the IP the same

    - by Scott Chamberlain
    I am playing around with transparent proxies. The current way I am doing things is that the program makes a request to a computer on port 80, and I use

        iptables -t nat -A OUTPUT -p tcp --destination-port 80 -j REDIRECT --to-port 1234

    to redirect to the proxy that I am playing with. The proxy will send out a request on port 81 (as all outbound port 80 traffic is being fed back into the proxy), so I want to do something like

        iptables -t nat -A OUTPUT -p tcp --destination-port 81 -j DNAT --to-destination xxxx:80

    The problem lies with the xxxx part. How do I change the destination port without changing the destination IP? Or am I doing this setup completely wrong? I am learning, after all, and constructive criticism is definitely appreciated.
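
    If I recall the netfilter syntax correctly (worth verifying against the iptables man page for the version in use), DNAT accepts a port-only target, which sidesteps the xxxx problem entirely:

        # Rewrite only the destination port; the destination IP is left untouched
        iptables -t nat -A OUTPUT -p tcp --dport 81 -j DNAT --to-destination :80

    The same effect on locally generated traffic can also be had with REDIRECT, though REDIRECT points the packet at the local machine rather than preserving the original destination address.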

    Read the article

  • Apache is running the wrong version of PHP.

    - by The Rook
    I am trying to enable cURL in PHP, and possibly update PHP as well. I have run into a roadblock where Apache seems to be running the wrong version of PHP. Here is some evidence:

        # lsof | grep php
        httpd 18397 nobody 135w REG 8,3 242 528537 /usr/local/apache/modules/libphp5.so

    I downloaded the latest PHP 5.2.13 and built it from source. I run service httpd stop, do a make install, and then manually overwrite /usr/local/apache/modules/libphp5.so just to be safe. Using ldd I can see that it's my fresh binary, compiled with cURL (the old one doesn't have cURL):

        # ldd /usr/local/apache/modules/libphp5.so | grep curl
        libcurl.so.4 => /usr/local/lib/libcurl.so.4 (0x0064e000)

    I execute a phpinfo() and 5.2.3 is being executed instead of my new 5.2.13, and "curl" is nowhere to be found. What am I doing wrong? Why is cURL disabled even though it's statically linked? This is a RHEL 5.5 system.
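
    A hedged checklist for this kind of mismatch (paths from the question; the packaged-Apache paths are assumptions): phpinfo() reporting 5.2.3 suggests the running Apache is loading a different libphp5.so than the one being overwritten, quite possibly the RHEL-packaged module.

        # Which PHP modules exist on the system?
        find / -name 'libphp5.so' 2>/dev/null

        # Which one does the running Apache actually have mapped?
        lsof -p $(pgrep -o httpd) | grep libphp5

        # Which config file and LoadModule line is this httpd using?
        /usr/local/apache/bin/httpd -V | grep SERVER_CONFIG_FILE
        grep -r 'libphp5' /usr/local/apache/conf /etc/httpd 2>/dev/null

    If two Apache installs coexist (a packaged /etc/httpd and the source-built /usr/local/apache), the init script may be starting the packaged one, which would explain both the old version and the missing cURL.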

    Read the article

  • Accessing the JSESSIONID from JSF

    - by Frank Nimphius
    The following code attempts to access and print the user session ID from ADF Faces, using the session cookie that is automatically set by the server and the HttpSession object itself.

        FacesContext fctx = FacesContext.getCurrentInstance();
        ExternalContext ectx = fctx.getExternalContext();
        HttpSession session = (HttpSession) ectx.getSession(false);
        String sessionId = session.getId();
        System.out.println("Session Id = " + sessionId);

        Cookie[] cookies = ((HttpServletRequest) ectx.getRequest()).getCookies();
        //reset session string
        sessionId = null;
        if (cookies != null) {
            for (Cookie brezel : cookies) {
                if (brezel.getName().equalsIgnoreCase("JSESSIONID")) {
                    sessionId = brezel.getValue();
                    break;
                }
            }
        }
        System.out.println("JSESSIONID cookie = " + sessionId);

    Though apparently both approaches do the same thing, they differ in the value they return and the conditions under which they work. The getId method, for example, returns a session value as shown below:

        grLFTNzJhhnQTqVwxHMGl0WDZPGhZFl2m0JS5SyYVmZqvrfghFxy!-1834097692!1322120041091

    Reading the cookie returns a value like this:

        grLFTNzJhhnQTqVwxHMGl0WDZPGhZFl2m0JS5SyYVmZqvrfghFxy!-1834097692

    Though both seem to be identical, the difference is the "!1322120041091" appended to the ID when reading it directly from the HttpSession object. Depending on the use case the session ID is looked up for, the difference may not be important. Another difference, however, is of importance: the cookie reading only works if the session ID is added as a cookie to the request, which is configurable for applications in the weblogic-application.xml file. If cookies are disabled, the server appends the session ID to the request URL (actually to the end of the URI, right after the view Id reference). In this case no cookie is set, so the lookup returns empty. In both cases, however, the getId variant works.

    Read the article

  • Apache only transferring partial content from a Samba share

    - by thaBadDawg
    I have an Apache server running on CentOS 5.3. It currently hosts 12 sites with no known issues. (I say this to point out that up to this point my Apache installation has performed flawlessly.) I'm adding a new site where the DocumentRoot of the new VirtualHost is a Samba share. When at the command line of the server I can do cp video.m4v ~ and the whole file is copied properly to my home directory. But when I try to access the file from IE/Firefox/Safari/Chrome, it only passes back a partial result of 33k. The same thing is happening with my image and audio files. If I make the files local to the server by copying them from the share and then serving them up, the files transfer. Any ideas?
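
    A hedged guess worth testing: Apache's sendfile and mmap optimizations are known to misbehave on network filesystems such as SMB/CIFS and NFS, typically producing exactly this kind of truncated transfer. Disabling them for the share-backed site (the directives are standard Apache core; the path is a placeholder):

        <Directory "/path/to/samba/docroot">
            EnableSendfile Off
            EnableMMAP Off
        </Directory>

    After a graceful restart, files should be read through the normal buffered path, which tolerates network filesystems much better.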

    Read the article

  • Monitor open files limits, etc

    - by marcog
    We've been hitting the max open files limit on a few services recently, and there are a bunch of other limits in place too. Is there a way to monitor how close processes are to these limits, so we can be alerted when it's time to either raise the limits or fix the root cause? On the same note, is it possible to view a log of these events, so that when a crash occurs we know whether it was caused by hitting one of these limits?
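
    As a hedged sketch of the monitoring side (the process name is a placeholder), the kernel already exposes everything needed under /proc: the per-process limit lives in /proc/PID/limits and the open descriptors are the entries in /proc/PID/fd:

        #!/bin/bash
        # Report open-file usage versus the soft limit for each matching process
        for pid in $(pgrep myservice); do
            used=$(ls /proc/$pid/fd 2>/dev/null | wc -l)
            soft=$(awk '/Max open files/ {print $4}' /proc/$pid/limits)
            echo "pid $pid: $used of $soft file descriptors in use"
        done

    Wrapped in cron or a Nagios/Zabbix check with a threshold (say 80% of the soft limit), this gives the early warning; the "too many open files" (EMFILE) errors themselves normally only show up in each application's own logs, not in a central system log.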

    Read the article
