Daily Archives

Articles indexed Thursday, December 20, 2012

  • `sh` access denied over ssh connection

    - by inspectorG4dget
    I have an ubuntu server and a windows XP client running Cygwin. The server ssh's into the client and tries to execute a shell script with some params, with the following command: ssh user@IP_ADDR 'sh /home/user/project/clientside 2 5 7 6 9 5 7 IP_ADDR' where IP_ADDR is the IP address of client. However, while doing so, I get the following error: Access is denied. Thinking this might be a user permissions error, I tried running sh /home/user/project/clientside 2 5 7 6 9 5 7 IP_ADDR on the client, on Cygwin, while logged in as user. This works as expected. Then I thought that this might be an error with the login that I use when I ssh into the client. So I executed this instead: ssh user@IP_ADDR 'whoami' and got back user. This happened even after I did chmod -R 777 /home/user/project on the client, in Cygwin. For kicks, I got on Cygwin on the client and did ssh localhost and manually executed sh /home/user/project/clientside 2 5 7 6 9 5 7 IP_ADDR. This worked as expected. However, when I did ssh IP_ADDR from Cygwin and did ssh localhost and manually executed sh /home/user/project/clientside 2 5 7 6 9 5 7 IP_ADDR, I get the same Access is denied. error. Why is this happening? How can I fix this? By the way, both the server and the client have each other's rsa public key for passwordless ssh
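
    A hedged first diagnostic (not a confirmed fix): non-interactive ssh sessions under Cygwin often get a different PATH and environment than an interactive login, so the sh that runs, or one of the commands inside the script, may not be the one that works when logged in. Comparing the two environments and tracing the script narrows down which command raises "Access is denied.":

        # what does the non-interactive session actually see?
        ssh user@IP_ADDR 'echo "$PATH"; which sh; uname -a'

        # trace the script to find the exact command that fails
        ssh user@IP_ADDR 'sh -x /home/user/project/clientside 2 5 7 6 9 5 7 IP_ADDR'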

    Read the article

  • /usr/bin/python (Python 2.4) was deleted on CentOS 5. I compiled from source but yum is still broken. How can I get everything back to the way it was?

    - by Maxwell
    I saw a lot of other questions like this but none of them answered the exact part I am having trouble with (actually installing the Python RPM). Someone on my system deleted /usr/bin/python and /usr/bin/python2.4 on my 64 bit CentOS 5.8 installation. I recompiled Python 2.4 from source, but now whenever I try to yum install anything I get the following error: [root@cerulean-OW1 ~]# yum install httpd There was a problem importing one of the Python modules required to run yum. The error leading to this problem was: No module named yum Please install a package which provides this module, or verify that the module is installed correctly. It's possible that the above module doesn't match the current version of Python, which is: 2.4 (#1, Dec 16 2012, 09:16:56) [GCC 4.1.2 20080704 (Red Hat 4.1.2-52)] If you cannot solve this problem yourself, please go to the yum faq at: http://wiki.linux.duke.edu/YumFaq I checked http://wiki.linux.duke.edu/YumFaq and it said the following: If you are getting a message that yum itself is the missing module then you probably installed it incorreclty (or installed the source rpm using make/make install). If possible, find a prebuilt rpm that will work for your system like one from Fedora or CentOS. Or, you can download the srpm and do a rpmbuild --rebuild yum*.src.rpm I tried going to http://rpm.pbone.net/index.php3/stat/4/idpl/17838875/dir/centos_5/com/python-2.4.3-46.el5.x86_64.rpm.html to install Python, which resulted in the following error: [root@cerulean-OW1 ~]# rpm -Uvh python-2.4.3-46.el5.x86_64.rpm error: Failed dependencies: python-libs-x86_64 = 2.4.3-46.el5 is needed by python-2.4.3-46.el5.x86_64 So I tried installing python-libs-x86_64, which resulted in the following: [root@cerulean-OW1 ~]# rpm -Uvh python-libs-2.4.3-46.el5_8.2.x86_64.rpm warning: python-libs-2.4.3-46.el5_8.2.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 192a7d7d Preparing... ########################################### [100%] package python-libs-2.4.3-46.el5_8.2.x86_64 is already installed file /usr/lib64/libpython2.4.so.1.0 from install of python-libs-2.4.3-46.el5_8.2.x86_64 conflicts with file from package python-libs-2.4.3-46.el5_8.2.x86_64
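
    Since only the files were deleted, the RPM database most likely still lists the packages as installed (hence the "already installed" and mismatched-release dependency errors). One hedged approach is to reinstall the distribution's own python and python-libs packages over the top with rpm's --replacepkgs/--replacefiles flags; the file names below are examples matching the release shown in the output (2.4.3-46.el5_8.2), and the exact names on a mirror may differ. Note also that a source-built python in /usr/local can shadow /usr/bin/python in PATH.

        # re-lay down the files of the matching distro packages (example file names)
        rpm -Uvh --replacepkgs --replacefiles \
            python-2.4.3-46.el5_8.2.x86_64.rpm \
            python-libs-2.4.3-46.el5_8.2.x86_64.rpm

        # sanity check: the interpreter and yum's module should both resolve again
        /usr/bin/python -c "import yum; print yum.__file__"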

    Read the article

  • Whitelist IP from google-authenticator in sshd pam

    - by spudwaffle
    My Ubuntu 12.04 server uses the google-authenticator pam module to provide two step authentication for ssh. I need to make it so that a certain IP does not need to type the verification code. The /etc/pam.d/sshd file is below: # PAM configuration for the Secure Shell service # Read environment variables from /etc/environment and # /etc/security/pam_env.conf. auth required pam_env.so # [1] # In Debian 4.0 (etch), locale-related environment variables were moved to # /etc/default/locale, so read that as well. auth required pam_env.so envfile=/etc/default/locale # Standard Un*x authentication. @include common-auth # Disallow non-root logins when /etc/nologin exists. account required pam_nologin.so # Uncomment and edit /etc/security/access.conf if you need to set complex # access limits that are hard to express in sshd_config. # account required pam_access.so # Standard Un*x authorization. @include common-account # Standard Un*x session setup and teardown. @include common-session # Print the message of the day upon successful login. session optional pam_motd.so # [1] # Print the status of the user's mailbox upon successful login. session optional pam_mail.so standard noenv # [1] # Set up user limits from /etc/security/limits.conf. session required pam_limits.so # Set up SELinux capabilities (need modified pam) # session required pam_selinux.so multiple # Standard Un*x password updating. @include common-password auth required pam_google_authenticator.so I've already tried adding a auth sufficient pam_exec.so /etc/pam.d/ip.sh line above the google-authenticator line, but I can't understand how to check an IP adress in the bash script.
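
    A commonly suggested pattern for this (a hedged sketch, untested here) is to put pam_access in front of the authenticator line: with the [success=1 default=ignore] control, a match in the access file skips the next module, so listed addresses never get asked for the code. The access file path below is an arbitrary choice, and 192.0.2.10 is a placeholder address.

        # /etc/pam.d/sshd: place immediately before the google-authenticator line
        auth [success=1 default=ignore] pam_access.so accessfile=/etc/security/access-local.conf
        auth required pam_google_authenticator.so

        # /etc/security/access-local.conf
        # permit (i.e. skip the code for) this address, fall through for everyone else
        + : ALL : 192.0.2.10
        - : ALL : ALL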

    Read the article

  • Why does storage's performance change at various queue depths?

    - by Mxx
    I'm in the market for a storage upgrade for our servers. I'm looking at benchmarks of various PCIe SSD devices, and in the comparisons I see that IOPS change at various queue depths. How can that be, and why is it happening? The way I understand things is: I have a device with a (theoretical) maximum of 100k IOPS. If my workload consistently produces 100,001 IOPS, I'll have a queue depth of 1, am I correct? However, from what I see in benchmarks, some devices run slower at lower queue depths, then speed up at depths of 4-64, and then slow down again at even larger depths. Isn't queue depth a property of the OS (or perhaps the storage controller), so why would it affect IOPS?
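
    A hedged aside: queue depth is largely a property of the workload rather than of the device; it is how many I/O requests the benchmark (or the OS on behalf of applications) keeps outstanding at once, and SSDs need many outstanding requests to keep their flash channels busy, which is why IOPS climb with depth and can fall again once per-request latency balloons. As an illustration, fio exposes it directly as a parameter (the device path below is a placeholder; the flags are standard fio options):

        # 4K random reads at queue depth 1 versus 32; the IOPS figures usually differ a lot
        fio --name=qd1  --filename=/dev/sdX --direct=1 --ioengine=libaio --rw=randread --bs=4k --iodepth=1  --runtime=30 --time_based
        fio --name=qd32 --filename=/dev/sdX --direct=1 --ioengine=libaio --rw=randread --bs=4k --iodepth=32 --runtime=30 --time_based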

    Read the article

  • How can I pass environment variables to a WSGI script, using uWSGI?

    - by orokusaki
    I've added the following line to /etc/environment: FOO_DEPLOYMENT_ENV="vbox" Upon logging in via SSH, I can echo $FOO_DEPLOYMENT_ENV and, of course, see vbox output to the shell. If I open a Python shell and run os.getenv('FOO_DEPLOYMENT_ENV'), it will return 'vbox', but the same code in my Python application, when run by uWSGI (as the www-data user), does not see the environment variable. Clearly, this isn't a problem with uWSGI; rather, it is a problem with my understanding of environment variables, how they're properly set, and the contexts in which they can be retrieved. What am I doing or understanding incorrectly?
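
    A hedged sketch of the usual workaround: /etc/environment is read by PAM for login sessions, which a uWSGI worker never goes through, so the variable has to be handed to uWSGI itself. Both the ini key and the command-line flag below are standard uWSGI options; the file path is only an example.

        # in the app's uWSGI ini file
        [uwsgi]
        env = FOO_DEPLOYMENT_ENV=vbox

        # or on the command line
        uwsgi --ini /etc/uwsgi/apps-enabled/myapp.ini --env FOO_DEPLOYMENT_ENV=vbox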

    Read the article

  • rsyslog - regex trouble

    - by benmccann
    I'm trying to setup the logentries service. If a log entry has a token in it then I would like to send it to api.logentries.com:10000. The token is a guid in the format aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee. Right now I'm doing: # If there's a logentries token then send it directly to logentries :msg, regex, ".*[a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12}.*" & @@api.logentries.com:10000 I checked the rsyslog debug logs and my regex is not matching, but I can't figure out why or how to fix it: 5245.961161378:7fb79b514700: Filter: check for property 'msg' (value ' fb1c507f-2ede-4d7f-a140-2bd8d56e133 - application - [play-akka.actor.default-dispatcher-1] - Found user: 4fb11ea5e4b00a1aeebe2800') regex '.*[a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12}.*': FALSE
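
    A likely culprit (hedged): rsyslog's plain "regex" comparison uses POSIX basic regular expressions, in which {8}-style interval counts are not special, so the filter never matches; the "ereregex" comparison selects extended regex instead. Note also that the sample message's last group (2bd8d56e133) has only 11 characters, so that particular line would still fail a strict {12} match. A sketch of the ERE form:

        # extended-regex comparison instead of the BRE one
        :msg, ereregex, "[a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12}"
        & @@api.logentries.com:10000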

    Read the article

  • Nginx + PHP - No input file specified for 1 server block. Other server block works fine

    - by F21
    I am running Ubuntu Desktop 12.04 with nginx 1.2.6. PHP is PHP-FPM 5.4.9. This is the relevant part of my nginx.conf: http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; server { server_name testapp.com; root /www/app/www/; index index.php index.html index.htm; location ~ \.php$ { fastcgi_intercept_errors on; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } } server { listen 80 default_server; root /www index index.html index.php; location ~ \.php$ { fastcgi_intercept_errors on; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } } } Relevant bits from php-fpm.conf: ; Chroot to this directory at the start. This value must be defined as an ; absolute path. When this value is not set, chroot is not used. ; Note: you can prefix with '$prefix' to chroot to the pool prefix or one ; of its subdirectories. If the pool prefix is not set, the global prefix ; will be used instead. ; Note: chrooting is a great security feature and should be used whenever ; possible. However, all PHP paths will be relative to the chroot ; (error_log, sessions.save_path, ...). ; Default Value: not set ;chroot = ; Chdir to this directory at the start. ; Note: relative path can be used. ; Default Value: current directory or / when chroot chdir = /www In my hosts file, I redirect 2 domains: testapp.com and test.com to 127.0.0.1. My web files are all stored in /www. From the above settings, if I visit test.com/phpinfo.php and test.com/app/www, everything works as expected and I get output from PHP. However, if I visit testapp.com, I get the dreaded No input file specified. error. So, at this point, I pull out the log files and have a look: 2012/12/19 16:00:53 [error] 12183#0: *17 FastCGI sent in stderr: "Unable to open primary script: /www/app/www/index.php (No such file or directory)" while reading response header from upstream, client: 127.0.0.1, server: testapp.com, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "testapp.com" This baffles me because I have checked again and again and /www/app/www/index.php definitely exists! This is also validated by the fact that test.com/app/www/index.php works which means the file exists and the permissions are correct. Why is this happening and what are the root causes of things breaking for just the testapp.com v-host? Just an update to my investigation: I have commented out chroot and chdir in php-fpm.conf to narrow down the problem If I remove the location ~ \.php$ block for testapp.com, then nginx will send me a bin file which contains the PHP code. This means that on nginx's side, things are fine. The problem is that something must be mangling the file paths when passing it to PHP-FPM. Having said that, it is quite strange that the default_server v-host works fine because its root is /www, where as things just won't work for the testapp.com v-host because the root is /www/app/www.
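
    One hedged way to narrow this down is to take nginx out of the picture and hand PHP-FPM the exact path it claims it cannot open, using the cgi-fcgi tool (from the fcgi utilities package). If this direct request also fails, the problem is on the FPM side (chroot/chdir, pool user permissions); if it succeeds, the mangling happens in the nginx fastcgi parameters for that vhost.

        # ask the FPM pool directly for the file nginx says is missing
        SCRIPT_NAME=/index.php \
        SCRIPT_FILENAME=/www/app/www/index.php \
        REQUEST_METHOD=GET \
        cgi-fcgi -bind -connect 127.0.0.1:9000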

    Read the article

  • Dynamic nginx domain root path based on hostname?

    - by Xeoncross
    I am trying to setup my development nginx/PHP server with a basic master/catch-all vhost config so that I can created unlimited ___.framework.loc domains as needed. server { listen 80; index index.html index.htm index.php; # Test 1 server_name ~^(.+)\.frameworks\.loc$; set $file_path $1; root /var/www/frameworks/$file_path/public; include /etc/nginx/php.conf; } However, nginx responds with a 404 error for this setup. I know nginx and PHP are working and have permission because the localhost config I'm using works fine. server { listen 80 default; server_name localhost; root /var/www/localhost; index index.html index.htm index.php; include /etc/nginx/php.conf; } What should I be checking to find the problem? Here is a copy of that php.conf they are both loading. location / { try_files $uri $uri/ /index.php$is_args$args; } location ~ \.php$ { try_files $uri =404; include fastcgi_params; fastcgi_index index.php; # Keep these parameters for compatibility with old PHP scripts using them. fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; # Some default config fastcgi_connect_timeout 20; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 4 256k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_intercept_errors on; fastcgi_ignore_client_abort off; fastcgi_pass 127.0.0.1:9000; }
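
    A hedged guess at the cause: numbered captures such as $1 from a server_name regex can be overwritten by any later regex evaluation (the location ~ \.php$ match, for instance), so by the time root is resolved the value may be empty. Named captures keep their value, which is the fix usually suggested for this pattern; nginx needs PCRE with named-group support for the syntax below.

        server {
            listen 80;
            # a named capture survives later regex matches, unlike $1
            server_name ~^(?<project>.+)\.frameworks\.loc$;
            root /var/www/frameworks/$project/public;
            index index.html index.htm index.php;
            include /etc/nginx/php.conf;
        }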

    Read the article

  • InnoDB recovery from .ibd files

    - by mr heLL
    My website crashed a few days ago. The hosting company says an InnoDB database crashed, and they sent me the MySQL data folder. I tried to restore the database, but phpMyAdmin only shows the MyISAM tables. I checked the database with Navicat; when I click an InnoDB table, I get the error: table 'xyz.wp_posts' doesn't exist. Is there any way to fix this on Windows? Feel free to download the db: www.degisimanaliz.com/xyzdb.tar.gz Very old backup: www.degisimanaliz.com/29_Ocak_Yedek_deganaliz.sql.gz
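
    Recovering InnoDB tables needs more than the .ibd files: the shared tablespace (ibdata1) that should be in the same data folder has to be used with them, and the data dictionary must still know the tables. A hedged first step on Windows is to point a MySQL instance at that data folder, start it with forced recovery, and dump whatever is readable (levels above 4 risk further damage, and the server should be treated as read-only while this option is set):

        # my.ini
        [mysqld]
        innodb_force_recovery = 4

        # then, from a command prompt, dump what can still be read
        mysqldump -u root -p --all-databases > rescue_dump.sql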

    Read the article

  • IIS7: large worker process request queue creating process blocking; aspnet.config & machine.config amended (bottleneck)

    - by scott_lotus
    ASP.NET 2.0 app, .NET 2.0 framework, IIS7. I am seeing a large queue of "requests" appear under the "worker process" option. The states recorded appear to be Authenticate Request and Execute Request Handler more than anything else. I have amended aspnet.config in C:\Windows\Microsoft.NET\Framework64\v2.0.50727 (32 bit path and 64 bit path) to include: maxConcurrentRequestsPerCPU="50000" maxConcurrentThreadsPerCPU="0" requestQueueLimit="50000" I have amended machine.config in C:\Windows\Microsoft.NET\Framework64\v2.0.50727\CONFIG (32 bit and 64 bit path) to include: autoConfig="true" maxIoThreads="100" maxWorkerThreads="100" minIoThreads="50" minWorkerThreads="50" minFreeThreads="176" minLocalRequestFreeThreads="152" Still I get the issue. The issue manifests itself as a large number of processes in the Worker Process queue. The number of current connections to the website shows 500 when this issue occurs; I don't think I have seen concurrent connections over 500 without this issue occurring. The web application slows as the requests block. Refreshing the app pool resolves it for a while (as expected) as the load is spread between the two pools. The application pool in question has its fixed-request limit set to refresh at 50000. Thank you for any help. Scott. Quick edit: my developers are telling me the project was built with the .NET 3.5 framework. Looking at C:\Windows\Microsoft.NET\Framework64\v3.5 there does not appear to be an ASPNET.CONFIG or a MACHINE.CONFIG... is there a 3.5 equivalent? After a little searching: apparently 3.5 uses the 2.0 framework files for those it is missing. So back to the original question: where is my bottleneck?

    Read the article

  • How can I configure MediaWiki to display numbers in titles?

    - by Wikis
    We would like to display numbers in the titles in our MediaWiki wiki. Specifically: in the table of contents numbers are shown like this: 1 Title 1.1 Subtitle 2 Another title However, on the page the titles (that map to the table of contents) appear like this: Title text Subtitle text Another title text That is, no numbers are shown. How can we show numbers in the page contents? There are three possible ways that this could be answered (that I can think of). In our order of preference, they are: Setting per user Setting per page Global setting (for all users)
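
    A hedged pointer: heading numbers on the page are controlled by the "Auto-number headings" user preference (internally "numberheadings"), so the per-user setting is built in, and the global setting can be had by changing the default for all accounts in LocalSettings.php. A per-page switch does not appear to exist in core MediaWiki.

        # LocalSettings.php: make auto-numbered headings the default for every user
        $wgDefaultUserOptions['numberheadings'] = 1;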

    Read the article

  • Exchange 2003: Unrestrict send mail size for specific users / groups?

    - by Kip
    Good (insert appropriate time of day here) SF folks, I have the following situation: we have a message size limit for sending set at 20 MB in Global Settings | Message Delivery, and a limit of 50 MB set at an external third-party spam vendor. I need to enable some users to send messages that are upwards of around 40 MB in size. However, when I set the sending message size maximum to 50 MB within the delivery restrictions of a user's Exchange properties, it would appear that this does not win; it seems that the lowest value wins in this situation. I need to allow certain users to send messages larger than the 20 MB limit, but keep everyone else at the 20 MB limit. How can I do this? The only way I could see was to raise the limit set in Global Settings | Message Delivery to 50 MB and then set everyone else's (bar the people who need the increased limit) delivery restrictions max size down, but I cannot see an easy way to do that last bit, hence my post here looking for advice. There are valid reasons we need to send mail this size, and whilst we are putting together other mechanisms for delivering this data, we still need to get this put in place. Thanks in advance, Kip

    Read the article

  • Downmix ALL SYSTEM audio to mono - Windows 7

    - by Mike K.
    I'm deaf in one ear and want to use my headphones when playing a game and talking with my friends on Skype/TS/Mumble/etc while also sometimes listening to music. I need ALL my system audio to be downmixed to mono so that my ONE hearing ear gets ALL audio channels instead of split stereo audio. No, none of the other similar questions on superuser have a solution. My headphone properties does not have a 'Mono' option, I don't have a 'Headphone Virtualization' option, and my Realtek HD audio driver software doesn't have these options either (driver was updated 11/14/2012). Don't even talk about setting the balance of one side of the headphones to 0. You're not paying attention if you suggest that. JACK and Virtual Audio Cable didn't work. It's possible I configured them wrong, but I followed the steps I found in related questions and still got split stereo out. TL;DR I need a viable, working, software solution (I say software because I have a USB headset) for forcing ALL system audio to mono so that I can hear literally everything through the one earpiece. Thanks!

    Read the article

  • "A driver (service) for this device has been disabled" is how the Code 32 error starts off

    - by E S
    A driver (service) for this device has been disabled. An alternate driver may be providing this functionality. (Code 32) No drive letter shows in Device Manager, and the DVD/CD drive is now not usable because it is not seen. This all happened when I started using a new external USB hard drive from Buffalo. I have Windows 7 64-bit. Everything else looks to be working fine. Out of desperation I even tried to hook up an external DVD drive that had worked fine in the past (just too slow and it ate up memory, so I never used it). It tries to use the same drivers, and when you click to update drivers, it says this is the best one. HELP... even if I wanted (WHICH I DON'T) to use the factory Windows 7 re-installation DVD, how? There is no drive to install it from in this situation. I am at a loss here, and Buffalo tech support was of no help at all; they just said they could not help. Any help would be much appreciated.
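
    The suggestion that comes up most often for a vanished CD/DVD drive with Code 32 (offered here as a hedged sketch, not a guaranteed fix) is stale UpperFilters/LowerFilters values on the CD-ROM device class in the registry, sometimes left behind by other drive software. The GUID below is the standard CD/DVD class; export the key first so the change can be undone, then reboot after deleting the values.

        :: from an elevated command prompt: back up, then remove the filter values
        reg export "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E965-E325-11CE-BFC1-08002BE10318}" cdrom-class-backup.reg
        reg delete "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E965-E325-11CE-BFC1-08002BE10318}" /v UpperFilters /f
        reg delete "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E965-E325-11CE-BFC1-08002BE10318}" /v LowerFilters /f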

    Read the article

  • How can I tell whether an interrupted rm -r removed any files?

    - by Jake Petroules
    I installed sshfs on a Linux box and then mounted my Mac home directory. In the middle of troubleshooting a configuration issue, I did an ls -l on the mount directory (as a normal user), receiving: total 0 d????????? ? ? ? ? ? sl I then ran sudo rm -r on that directory but pressed Ctrl+C to terminate it immediately, before it looks like the command did anything. I notice no files missing, but I want to be sure: is there a way I can somehow inspect the filesystem log on my Mac to see whether any files were actually removed?

    Read the article

  • I am trying to zip files individually, but the file type is unknown

    - by Jason Mander
    I am trying to zip some files with an unknown file type individually. I am using the following code in a batch script to do that: @ECHO OFF FOR %%A IN (bestbuy*nat*component.cpi*) DO "C:\Program Files\7-Zip\7z.exe" a -mx9 -m0=lzma2:d256m "%%~nA.7z" "%%A" The code will compress files individually ONLY if the file has an extension. Unfortunately, the files that I have don't have any extension. In the code I am trying to zip files by doing a pattern match, but the files are getting compressed into ONE file (which I do not want; I want each file compressed individually). Why does this code create separate zip files when the files have an extension (for example, if I add .txt to the end of the files), but create one zipped file when there is no extension? Can anyone please help me with the code to compress files of an unknown file type so that each file gets compressed individually? Your help would be greatly appreciated. Jason
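
    A hedged reading of the behaviour: %%~nA strips everything after the last dot, so extension-less names like bestbuy_nat_component.cpi20121217 (file names here are only illustrative) all collapse to the same archive name, and 7-Zip keeps adding each file to that one archive; with .txt appended the names stay distinct, hence separate archives. Building the archive name from the full file name avoids the collision:

        @ECHO OFF
        FOR %%A IN (bestbuy*nat*component.cpi*) DO "C:\Program Files\7-Zip\7z.exe" a -mx9 -m0=lzma2:d256m "%%A.7z" "%%A"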

    Read the article

  • Excel: is there a way to delete everything EXCEPT what is selected?

    - by yesmaybe
    I have an Excel template with 20 tabs (worksheets) and plenty of data in each sheet. When a user opens a copy of the template, he will only need to use one tab. Is there a sneaky way to select that tab, or part of the contents of that tab, and then 'delete all except selected'? That way the file size will be much reduced and the excess clutter gone. Basic Excel users will be adjusting this file, so the smaller and easier to manage, the better. Thanks, Andy
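
    There is no built-in "delete all except selected", but a short VBA macro can remove every sheet other than the active one. A hedged sketch; run it on a copy of the workbook first, since deleted sheets cannot be undone:

        Sub DeleteOtherSheets()
            Dim ws As Worksheet
            Application.DisplayAlerts = False          ' suppress the per-sheet confirmation prompt
            For Each ws In ThisWorkbook.Worksheets
                If ws.Name <> ActiveSheet.Name Then ws.Delete
            Next ws
            Application.DisplayAlerts = True
        End Sub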

    Read the article

  • DriverPack Solution adds a whole bunch of programs

    - by Markasoftware
    I am using Driver Pack solution 12.3 to update my drivers. I used it and installed 55 drivers. I noticed that there are now several dozen new entries in add/remove programs under the name Windows Driver Package - Intel hdc, Windows Driver Package - Intel System, and Windows Driver Package - Intel USB with the driverpack solution logo next to them. When I hit uninstall for any of these, it says all devices using this driver will be removed, and I am scared of uninstalling anything I do not know about. I have no system restore points before I updated my driver. Will these extra driver packages slow down my computer or take up extra disk space? Can I uninstall them safely? I am using Windows 8, so refreshing the computer is an option.

    Read the article

  • Access folder right-click from inside the folder in Windows 7

    - by BrenBarn
    In Windows XP, if you had a folder open in Explorer, you could access the right-click context menu of a folder by right-clicking the folder icon in the titlebar of the Explorer window. In Win7 this no longer works. Right-clicking the background of the open folder, or right-clicking the folder icon in the info pane at the bottom, does not give the same context menu; there are items that are not in these menus, but are in the menu when I right-click on a folder from outside it. Given that I have a folder open in Explorer, how can I, without navigating out of the folder, access the same right-click context menu that I would get if I navigated to the parent folder and right-clicked my target folder?

    Read the article

  • Does static damage computer speakers?

    - by incarna
    I recently got a new pair of Klipsch ProMedia 2.1s for my laptop. I unplug my laptop a lot to take it around, but today the audio plug touched the plug for my monitor and a bit of static came out of the speakers. I've heard rumors that static can damage speakers, but I've never investigated this myself since I previously used a desktop and never unplugged them. The volume was at a normal level; am I just being paranoid? Or could having the speaker plug touch other bits of metal damage my speakers?

    Read the article

  • SFTP only works occasionally

    - by 82din
    I suddenly get this error using SFTP: Status: Connecting to example.com... Response: fzSftp started Command: open "[email protected]" 22 Command: Pass: ********* Status: Connected to example.com Status: Retrieving directory listing... Command: pwd Response: Current directory is: "/root" Command: ls Status: Listing directory /root Error: Connection timed out Error: Failed to retrieve directory listing I tried using FileZila, Cyberduck, Shell (Terminal), same result. However, it worked fine today (just a few seconds) in Passive mode. I guess something changed in my network, so I have tried both: Active and Passive mode: Connecting to probe.filezilla-project.org Response: 220 FZ router and firewall tester ready USER FileZilla Response: 331 Give any password. PASS 3.6.0.2 Response: 230 logged on. Checking for correct external IP address Retrieving external IP address from http://checkip.dyndns.org:8245/ Checking for correct external IP address IP <external IP> big-bf-ccc-f Response: 200 OK PREP 49565 Response: 200 Using port 49565, data token 380352881 PORT 186,15,222,5,193,157 Response: 200 PORT command successful LIST Response: 150 opening data connection Response: 503 Failure of data connection. Server sent unexpected reply. Connection closed Because I'm working behind a router, I get my external IP from http://checkip.dyndns.org:8245/ I also tested different range of ports.

    Read the article

  • Thunderbird 17.0 message filters destroying my emails

    - by Adrien
    I have been using Thunderbird for years in the following manner (now it's Thunderbird 17.0 on Windows 7): I offload my IMAP emails to my local inbox weekly, then apply hundreds of message filters to said emails in order to move and store them into a few hundred subfolders. It's always worked like a charm - until recently. Now, after I apply the message filters, the emails get moved but they are destroyed in the process - bodies are scrambled up, with bits and pieces of other emails or they are simply blank! How do I fix this?

    Read the article

  • Why doesn't this for loop work?

    - by evilsoup
    This is on Ubuntu 12.04 I'm trying to figure out how to get ffmpeg to do a batch conversion of FLACs to MP3, recursively. If I cd into a directory and use for f in *.flac; do ffmpeg -i "$f" -c:a libmp3lame -q:a 2 "${f/%flac/mp3}"; done that works perfectly fine. However, when I try this, it doesn't work: for f in "$(find . -type f -name *.flac)"; do ffmpeg -i "$f" -c:a libmp3lame -q:a 2 "${f/%flac/mp3}"; done It doesn't even throw up any useful errors (but here is the output anyway, no need to complain): evilsoup@enchantment:~/Music/Jean Sibelius$ for f in "$(find . -type f -name *.flac)"; do ffmpeg -i "$f" -c:a libmp3lame -q:a 2 "${f/%flac/mp3}"; done ffmpeg version git-2012-12-18-b7e085a Copyright (c) 2000-2012 the FFmpeg developers built on Dec 18 2012 19:23:11 with gcc 4.6 (Ubuntu/Linaro 4.6.3-1ubuntu5) configuration: --enable-gpl --enable-libfaac --enable-libfdk-aac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-librtmp --enable-libtheora --enable-libvorbis --enable-libvpx --enable-x11grab --enable-libx264 --enable-nonfree --enable-version3 libavutil 52. 12.100 / 52. 12.100 libavcodec 54. 80.100 / 54. 80.100 libavformat 54. 49.102 / 54. 49.102 libavdevice 54. 3.102 / 54. 3.102 libavfilter 3. 28.100 / 3. 28.100 libswscale 2. 1.103 / 2. 1.103 libswresample 0. 17.102 / 0. 17.102 libpostproc 52. 2.100 / 52. 2.100 ./Symphonies 1, 2, 3 & 5 (Oslo Philharmonic Orchestra Conducted by Mariss Jansons) Disc 1/02. Symphony No.1.flac ./Symphonies 1, 2, 3 & 5 (Oslo Philharmonic Orchestra Conducted by Mariss Jansons) Disc 1/03. Symphony No.1.flac ./Symphonies 1, 2, 3 & 5 (Oslo Philharmonic Orchestra Conducted by Mariss Jansons) Disc 1/stripped2.flac ./Symphonies 1, 2, 3 & 5 (Oslo Philharmonic Orchestra Conducted by Mariss Jansons) Disc 1/05. Symphony No.1.flac ./Symphonies 1, 2, 3 & 5 (Oslo Philharmonic Orchestra Conducted by Mariss Jansons) Disc 1/stripped3.flac ./Symphonies 1, 2, 3 & 5 (Oslo Philharmonic Orchestra Conducted by Mariss Jansons) Disc 1/09. Andante festivo.flac ./Symphonies 1, 2, 3 & 5 (Oslo Philharmonic Orchestra Conducted by Mariss Jansons) Disc 1/08. Symphony No.3.flac ./Symphonies 1, 2, 3 & 5 (Oslo Philharmonic Orchestra Conducted by Mariss Jansons) Disc 1/01. Finlandia.flac ./Symphonies 1, 2, 3 & 5 (Oslo Philharmonic Orchestra Conducted by Mariss Jansons) Disc 1/07. Symphony No.3.flac ./Symphonies 1, 2, 3 & 5 I've tested the find command on its own, and it works as expected, so the problem has to be something to do with the interaction between find and for. I'm aware that I could do something with find's -exec option, but I can't find any way to do string substitution as I can with a bash for loop, and I'd rather not have a bunch of file.flac.mp3s to deal with, even if they could be fixed with a simple rename.
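
    A hedged explanation: the quotes around $(find ...) turn the whole find output into a single word, so the loop body runs once with one giant multi-line "filename" (and the unquoted *.flac in -name can also be expanded by the shell before find sees it). A sketch that handles spaces in names, with ffmpeg's stdin redirected so it cannot swallow the piped file list:

        find . -type f -name '*.flac' -print0 |
        while IFS= read -r -d '' f; do
            ffmpeg -i "$f" -c:a libmp3lame -q:a 2 "${f%.flac}.mp3" < /dev/null
        done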

    Read the article

  • Excel chart dynamic range based on values

    - by andrewk
    I'm trying to create a chart that auto-updates itself from the data provided. The range of the chart is always fixed/locked. The issue I'm facing is that when the value for a certain month is 0, I want the chart to skip to the last non-zero month, meaning the ranges selected for the chart should exclude the month with the value zero, which in most cases is the top month. The image below should clear it up. Is there a way to have the chart range be dynamic based on certain values?

    Read the article

  • Does extra hard drive cache make a difference for streaming video?

    - by johnny
    I am looking at the following two drives for a RAID device, which will be streaming normal things but also a lot of video: Seagate Constellation ES.3 ST4000NM0033 - hard drive - 4 TB - SATA-600 TOSHIBA DT01ACA300 3TB 7200 RPM 64MB Cache SATA 6.0Gb/s 3.5" Will the 128 MB cache on the Seagate have an effect in my described scenario, compared to the 64 MB on the Toshiba? If so, what sort of difference can I expect? I'm using a qnap device, if that matters.

    Read the article
