Search Results

Search found 21343 results on 854 pages for 'pass by reference'.

Page 173 of 854

  • Change Audio title from English to Sinhalese using ffmpeg

    - by user330461
    I inserted an extra sound track into my video file and it works well:

        ffmpeg -i news.mov -i news.wav -map 0:0 -map 0:1 -map 1:0 -pass 1 -vcodec libx264 -preset fast \
            -b 512k -minrate 512k -maxrate 512k -bufsize 512k -threads 0 -f mp4 -an -y /dev/null && \
        ffmpeg -i news.mov -i news.wav -map 0:0 -map 0:1 -map 1:0 -pass 2 -acodec libfaac -ab 128k -ac 2 \
            -vcodec libx264 -preset fast -b 512k -minrate 512k -maxrate 512k -bufsize 512k -threads 0 -f mp4 news.mp4

    The default audio track comes with the label "English" and I would like to give it the label "Sinhalese". The second audio track comes up without a label, as "track#1", and I would like to give it the label "Tamil". How do I do that?
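
    For what it's worth, a hedged sketch of labelling the tracks in a separate remux pass using ffmpeg's per-stream metadata options (the -metadata:s:a:N specifiers and the ISO 639-2 codes sin/tam are assumptions on my part, not something stated in the question):

        ffmpeg -i news.mp4 -map 0 -c copy \
            -metadata:s:a:0 language=sin -metadata:s:a:0 title="Sinhalese" \
            -metadata:s:a:1 language=tam -metadata:s:a:1 title="Tamil" \
            news-labeled.mp4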

    Read the article

  • Use DNS suffixes only on certain wireless networks?

    - by eidylon
    Hello all, quick question. I'm a software guy and networking is all black magic to me! I have a laptop which I use at home and at the office. In order to be able to more easily reference our servers at work, I have our domain name in the DNS suffixes in the TCP/IP settings of my wireless connection. This all works beautifully and I can reference our servers by name only. Now the problem... When I go home, it still has those suffixes in there, and I cannot access other servers because it appends the DNS suffixes to the server names. Is there a way I can set up DNS suffixes so that they are only applied when connected to a certain wireless network (I'm thinking by SSID)?
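
    One hedged idea, assuming a Windows version where netsh wlan is available (Vista or later): a small batch script, run on connect (for example from a Task Scheduler network-change event), that checks the current SSID and sets or clears the global DNS suffix search list in the registry. The SSID "CorpWifi" and the suffix corp.example.com are placeholders.

        @echo off
        rem apply the work DNS suffix search list only when on the work SSID (placeholder names)
        netsh wlan show interfaces | findstr /C:"CorpWifi" >nul
        if %errorlevel%==0 (
            reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v SearchList /t REG_SZ /d "corp.example.com" /f
        ) else (
            reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v SearchList /t REG_SZ /d "" /f
        )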

    Read the article

  • Apache ProxyPass ignore static files

    - by virtualeyes
    Having an issue with Apache front server connecting to a Jetty application server. I thought that ProxyPass ! in a Location block was supposed to NOT pass on processing to the application server, but for some reason that is not happening in my case; Jetty shows a 404 on the missing statics (js, css, etc.). Here's my Apache (v 2.4, BTW) virtual host block:

        DocumentRoot /path/to/foo
        ServerName foo.com
        ServerAdmin [email protected]
        RewriteEngine On

        <Directory /path/to/foo>
            AllowOverride None
            Require all granted
        </Directory>

        ProxyRequests Off
        ProxyVia Off
        ProxyPreserveHost On

        <Proxy *>
            AddDefaultCharset off
            Order deny,allow
            Allow from all
        </Proxy>

        # don't pass through requests for statics (image, js, css, etc.)
        <Location /static/>
            ProxyPass !
        </Location>

        <Location />
            ProxyPass http://localhost:8081/
            ProxyPassReverse http://localhost:8081/
            SetEnv proxy-sendchunks 1
        </Location>
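
    For reference, a hedged sketch of the fix that usually works here: exclusions have to be declared before the catch-all mapping, so declaring both as plain ProxyPass directives in order (paths and port taken from the config above) keeps /static/ out of the proxy:

        # exclusions must come before the general mapping
        ProxyPass /static/ !
        ProxyPass / http://localhost:8081/
        ProxyPassReverse / http://localhost:8081/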

    Read the article

  • Is there any software or hardware which lets you stop, slow down, speed up or even reverse time?

    - by tjrobinson
    Obviously I'm talking about time in terms of the PC clock rather than real time. We were testing an application we've developed at work by setting the clock forward and back to simulate different scenarios, and I started thinking how useful it would be if you could adjust the rate(?) of the system clock with finer control, so you could make a minute pass in a second, or a day pass in 30 seconds, and watch how the program you're developing copes with changes in date and time. I'd be interested to hear if anyone knows of any software or hardware which can let you do some or all of the above.
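
    On the software side, one hedged possibility on Linux/Unix is libfaketime, which can offset and also speed up or slow down the clock seen by a single process; the rate syntax below is from memory and worth checking against its README before relying on it:

        # run the app with a clock that starts at "now" but runs 60x faster
        faketime -f "+0 x60" ./myapp

        # or start it two days in the future at normal speed
        faketime -f "+2d" ./myapp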

    Read the article

  • SASL and TLS with DNS load balancing

    - by achal tomar
    I am using DNS load balancing on my CentOS 5 server. The mail sent to the load-balancer server is balanced by sending it to 4 more servers, which then pass the mail on to its destinations on the network. The mail is generated by a PHP script which hands it all to the load-balancer server. Now I want SASL and TLS authentication on the load-balancer server so that I can protect the mail server from spammers. Can anyone tell me how to do this? The load balancer passes the mail to the other servers based on equal MX record preference, so I want SASL authentication together with the DNS load balancing.
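
    A hedged sketch, assuming the load balancer runs Postfix (the question doesn't name the MTA): roughly the main.cf settings that turn on SASL authentication and opportunistic TLS, with certificate paths as placeholders and a working Cyrus or Dovecot SASL backend assumed to exist.

        smtpd_sasl_auth_enable = yes
        smtpd_tls_security_level = may
        smtpd_tls_cert_file = /etc/pki/tls/certs/mail.pem
        smtpd_tls_key_file  = /etc/pki/tls/private/mail.key
        smtpd_recipient_restrictions =
            permit_mynetworks,
            permit_sasl_authenticated,
            reject_unauth_destination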

    Read the article

  • Parse java console output with awk

    - by Bob Rivers
    Hi, I'm trying to use awk to parse output generated by a Java application, but it isn't working. It seems that the command after the pipe isn't able to get/see the data written by the Java app. I'm executing the following command (with the output it generates):

        [root@localhost]# java -jar jmxclient.jar usr:pass host:port java.lang:type=Threading ThreadCount
        06/11/2010 15:46:37 -0300 org.archive.jmx.Client ThreadCount: 103

    What I need is only the last part of the string, so I'm trying to use awk, with a pipe at the end of the line:

        [root@localhost]# java -jar jmxclient.jar usr:pass host:port java.lang:type=Threading ThreadCount | awk -F ':' '{print $4}'

    But the output isn't being parsed; it prints the entire string:

        06/11/2010 15:46:37 -0300 org.archive.jmx.Client ThreadCount: 103

    I also tried | cut -f4 -d":" with the same result: the string isn't parsed. So my question is, how do I parse the output in order to get just the number at the end of the string? TIA, Bob
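
    A hedged guess at the cause: that line looks like Java logging output, which typically goes to stderr rather than stdout, so the pipe never sees it and it lands on the terminal unparsed. Redirecting stderr into the pipe should let awk (or cut) work on it:

        java -jar jmxclient.jar usr:pass host:port java.lang:type=Threading ThreadCount 2>&1 | awk '{print $NF}'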

    Read the article

  • redundant/multi-site terminal server

    - by Adam
    Hi. We have a Hyper-V cluster running 5 virtual terminal servers using HA. We need to be able to make this system redundant, so that if this site were to fail, our users could log into the backup system at another location and access their data via the terminal servers. Any ideas? We were thinking of maybe using a NAS which replicates the data to the other location in real time (pass-through disks), and having a similar Hyper-V cluster set up in the backup location. However, we would need to create the users in both locations and create a virtual mirror without the data, i.e. applications, directories, settings, etc. Is this the best way to achieve this? We have read that using Hyper-V pass-through disks causes a big performance degradation.

    Read the article

  • How can I change exim's DKIM and SPF for emails sent?

    - by 0pt1m1z3
    I've now spent 2 hours trying to figure out this issue and I am about to give up and go to bed. I've been having issues with Gmail rejecting emails from my VPS server because of false spam alerts (probably caused by lfd sending too many emails), so I changed my Exim config to send emails from a different IP (my VPS comes with 3) and that fixed the issue. I also enabled DKIM and SPF on my domains for good measure. But now all my emails appear as "From: Sender Name via server.domain1.com", where server.domain1.com is my VPS hostname. I previously had the same issue in Outlook, and turning off "Set SMTP Sender: headers" solved that problem, but I believe adding DKIM and SPF now makes Gmail add "via server.domain1.com" to my messages. How do I fix this? This is a typical header for a message (as it appears at Gmail):

        Delivered-To: [email protected]
        Received: by 10.60.44.163 with SMTP id f3csp248622oem; Thu, 29 Mar 2012 21:23:18 -0700 (PDT)
        Received: by 10.50.106.200 with SMTP id gw8mr452788igb.10.1333081398523; Thu, 29 Mar 2012 21:23:18 -0700 (PDT)
        Return-Path: <[email protected]>
        Received: from domain2.com ([X.X.X.X]) by mx.google.com with ESMTPS id y1si810998igb.3.2012.03.29.21.23.18 (version=TLSv1/SSLv3 cipher=OTHER); Thu, 29 Mar 2012 21:23:18 -0700 (PDT)
        Received-SPF: pass (google.com: domain of [email protected] designates X.X.X.X as permitted sender) client-ip=X.X.X.X;
        Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates X.X.X.X as permitted sender) [email protected]; dkim=pass [email protected]
        DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=server.domain1.com; s=default; h=Date:Message-Id:From:Content-type:MIME-Version:Subject:To; bh=wF8bBRgh01EYg4t5DAeVPv1Ps906UVIeRnQCb/HvSYw=; b=k/Pg7lnrO+Ud/z1mOTv+O/3DiJzzQgyBhfIizIaFHM8tF/eNJt5P2k+9yQB224sxYstZIWwVRBJmiqvcM1QhARv1HWqWma0crppZ3JOn+LRHANan634OBi+58SIRA+gu;
        Received: (Exim 4.77) id 1SDTVE-0005HA-9Y for [email protected]; Fri, 30 Mar 2012 00:31:56 -0400
        To: [email protected]
        Subject: Password Reset Request
        MIME-Version: 1.0
        Content-type: text/html; charset=iso-8859-1
        From: Sender Name <[email protected]>
        Message-Id: <[email protected]>
        Date: Fri, 30 Mar 2012 00:31:56 -0400
        X-AntiAbuse: This header was added to track abuse, please include it with any abuse report
        X-AntiAbuse: Primary Hostname - server.domain1.com
        X-AntiAbuse: Original Domain - domain2.com
        X-AntiAbuse: Originator/Caller UID/GID - [507 504] / [47 12]
        X-AntiAbuse: Sender Address Domain - server.domain1.com
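
    For what it's worth, Gmail typically shows "via ..." when the DKIM d= domain (here server.domain1.com) differs from the From: domain (domain2.com). A hedged sketch of signing with the sending domain instead, assuming a stock Exim 4.7x remote_smtp transport and one key file per domain (the selector and paths are placeholders):

        dkim_domain      = ${lc:${domain:$h_from:}}
        dkim_selector    = default
        dkim_private_key = /etc/exim/dkim/${dkim_domain}.key
        dkim_canon       = relaxed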

    Read the article

  • kvm -net only passing broadcast, multicast, and guest destination traffic

    - by user52874
    Figured this out just last week, but I can't find it now. Even printed it out; can't find that either. Frustrating... so... help! I configured a 'monitoring' NIC on a KVM guest (running Security Onion, if it matters). I read (somewhere) that the default NIC configuration for a KVM guest is to only pass broadcast traffic, multicast traffic, and traffic with the guest's MAC as a destination. There is an option to override this behaviour and pass all traffic. It's something like --mac-filtering=no, or --mac-restriction=no, or something like that. It worked beautifully. Does this look at all familiar to anyone who can clue me in to the exact option syntax? Thx.

    Read the article

  • Is it possible to define a virtual directory in IIS and make the files relative to the physical dir

    - by Mikey John
    Is it possible to define a virtual directory in IIS and somehow make the files in that directory relative to the physical directory and not to the virtual directory? For instance, on my server I have the following folders: D:\WebSite\Css\myTheme.css and D:\WebSite\Images\image1.jpg. I created a virtual directory in IIS, resources.mysite. Inside my website I reference the style sheet like this: resources.mysite/myTheme.css. But inside myTheme.css I reference pictures from ../Images/images1.jpg. So the problem is that image1.jpg is not found, because it is relative to the physical folder and not the virtual folder in IIS. Can I solve this problem without modifying the style sheet?

    Read the article

  • Recover a deleted webpage

    - by rc
    Suppose a blog or a nice article was hosted on a website and it got deleted, or worse, the website was brought down. How do you view that web page? I tried searching for the cached version in Google, but it looks like the content was deleted long ago and is not listed in the search results directly. There are annotations to the link from many other sites, but still the actual content is not fully available. Now, can anybody help me see this page... I am actually looking for http://effectize.com/become-coolest-programmer :) And, moreover, in addition to bookmarking a favorite link, is it possible to cache the content of the link as well for later reference in case it gets deleted? EDIT: Looks like a URL can be cached for future reference. Try: http://backupurl.com/
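
    Beyond Google's cache, the Internet Archive's Wayback Machine is the usual fallback; a quick hedged check from the command line (the availability endpoint returns JSON pointing at the closest snapshot, if one exists):

        curl "https://archive.org/wayback/available?url=effectize.com/become-coolest-programmer"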

    Read the article

  • Read/Write/Verify disk diagnostic tool for Mac OS X?

    - by Spiff
    It seems that there are many tools out there for Mac OS X that test a hard drive for bad blocks by doing a Read/Verify pass. That is, they read a block, then read it a second time, and verify that both reads yielded the same results. I need a tool that does a non-destructive Read/Write/Verify pass. It should read each block, write those same contents back out, and then read it again to verify. That way every block gets written, giving the hard drive a chance to spare out bad blocks. But since the same contents that were just read get written back out, it doesn't destroy data that wasn't already lost. I'm aware of several tools that can do Read/Verify, but I'm not aware of any that do Read/Write/Verify. Are there any tools that do what I want? Unix / open source tools that compile and run on Mac OS X are fair game too.
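
    One hedged candidate (I haven't built it on OS X myself) is badblocks from e2fsprogs, whose -n mode is a non-destructive read-write test: each block is read, overwritten with test patterns, verified, and then restored from the saved original data, which still forces the drive to remap failing sectors. The device name below is an OS X-style placeholder, and the volume must be unmounted first:

        badblocks -nsv /dev/disk2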

    Read the article

  • How to parse pipe with multiple commands independently?

    - by yarun can
    How can I parse the output of a single command with multiple commands without truncating it at each step? For example, ls -al | grep -i something will pass every line that has "something" in it to the next pipe, which is fine, but that also means every other line in the pipe is discarded, since it won't match the condition. What I want is to be able to operate on a single pipe with many commands independently. In this case it's a pipe from Mutt which passes the whole message body. I want to grep, sed, delete, and maybe assign each of these to bash variables. Initially what I want is to be able to assign the "message id" to one variable, the "subject" to another variable, etc., then pass those into the proper commands' arguments. Here is how it would look:

        MessageBodyFromMutt | grep something -Ax -Bx | grep another thing from the original message | sed some stuff from the original message | cut from here to there

    Obviously the above line does not do what I want. I want all these commands to operate on the original message body. I hope that makes sense.
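
    A hedged sketch of two common approaches in bash (the grep/sed expressions and temp files are placeholders): tee with process substitution fans the same stream out to several independent commands, each of which sees the full, untruncated message; capturing the body once into a variable is simpler when the end goal is shell variables.

        # fan the same stream out to several independent commands
        MessageBodyFromMutt | tee >(grep -i '^subject:' > /tmp/subject) \
                                  >(sed -n '/^$/,$p'    > /tmp/body) \
                                  > /dev/null

        # or capture once and reuse, if the goal is shell variables
        msg=$(MessageBodyFromMutt)
        msgid=$(grep -i '^message-id:' <<< "$msg")
        subject=$(grep -i '^subject:'  <<< "$msg")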

    Read the article

  • putty pageant - forget keys after period of inactivity

    - by pQd
    In an environment where Windows client computers are used to run PuTTY to connect to multiple Linux servers, I'm considering moving away from password-based authentication and using public/private key pairs with pass-phrases. Using ssh-agent would be nice, but at the same time I'd like it to 'forget' the pass-phrases after a given period of inactivity. It seems that PuTTY's Pageant does not provide such a feature; what would you suggest as an alternative? Solutions that I'm considering: patching the Pageant code [might be tricky; the code is probably quite rusty and the project, sadly, stagnant]; writing a small custom application using GetLastInputInfo and killing Pageant if the machine was idle for more than, let's say, 15 minutes [yes, there'll be a separate policy for locking the desktops as well]; using an alternative SSH client and SSH agent. Any suggestions? Thanks!
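
    If an alternative agent is acceptable, OpenSSH's ssh-agent (e.g. under Cygwin) at least supports a fixed key lifetime, which is close to, though not quite the same as, an inactivity timeout; a hedged sketch:

        eval $(ssh-agent)
        ssh-add -t 30m ~/.ssh/id_rsa    # the key is forgotten 30 minutes after being added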

    Read the article

  • Connecting to Google SMTP with Konica Minolta Printers

    - by VictorKilo
    I have spent the better portion of two days trying to get a number of Bizhub MFCs to connect to Google's SMTP service. Our company recently switched from an Exchange server, which handled SMTP requests, to Gmail. We have 20 branches, each with different MFCs. I was able to get the Canons connected, but the Konicas are giving me major problems. The three models that are giving me issues are the C203, C250 and the C280. I have used the following:

        smtp.gmail.com, port 465, Gmail username/pass
        aspmx.l.google.com, port 25, no authentication
        aspmx.l.google.com, port 25, Gmail username/pass

    None of these methods is working, despite the fact that all of them have worked on different makes/models. Any help would be greatly appreciated; I'm at my wit's end.

    Read the article

  • What's a good way to share a value in multiple places in a Word document?

    - by jcollum
    Let's say I have a value: \\myServer\dir1\dir2\dir3. I'd like this value to appear in multiple places in an MSWord document. However I only want to write it down once. What's a good way to do this? Fields seem like the answer but I can't get it to work; maybe it's not the answer. I'd like to be able to do this without any macros; it adds too much complexity. I need something more like Excel -- write a cell value here, reference it there, change the original value and the reference gets updated too. Edit: ideally I'd have the value updated automatically (fields don't seem to want to do that!).
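
    One field-based pattern that may fit (hedged; the property name is made up): define the path once as a custom document property (File > Properties > Custom in older Word), then insert a DOCPROPERTY field everywhere the value is needed, using Ctrl+F9 to create the field braces rather than typing them. Changing the property and pressing Ctrl+A then F9 refreshes every copy, so the value only ever gets written down once.

        { DOCPROPERTY "ServerPath" }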

    Read the article

  • Error with Internationalization extension while compiling php 5.4.8

    - by Umakant Patil
    I downloaded the latest PHP version from php.net, i.e. PHP 5.4.8, and configured it with the following command:

        ./configure --with-apxs2=/usr/local/apache2/bin/apxs --with-mysql --enable-intl --with-libdir=lib64 \
            --with-pear --with-mcrypt --with-mhash --enable-mbstring --with-iconv --with-icu-dir=/usr \
            --with-gettext --with-curl --with-mysqli --with-freetype --with-gd --with-curlwrappers \
            --with-jpeg-dir=/usr --with-png-dir=/usr

    After this I ran 'make', which starts building/compiling PHP. After some time it throws this error:

        ext/intl/.libs/php_intl.o: In function `zm_startup_intl':
        php-5.4.8/ext/intl/php_intl.c:651: undefined reference to `spoofchecker_register_Spoofchecker_class'
        php-5.4.8/ext/intl/php_intl.c:654: undefined reference to `spoofchecker_register_constants'
        collect2: ld returned 1 exit status
        make: *** [sapi/cli/php] Error 1

    I've spent lots of hours looking for solutions but can't come up with any. Does anyone know what this error exactly means? How do I get rid of it?
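
    Two hedged things to try, since that symbol comes from intl's spoof-checker sources, which are only built against a sufficiently new ICU: first make sure no stale objects from an earlier configure run are being linked, and if the error persists, build a newer ICU in its own prefix and point --with-icu-dir at it (the version and /opt/icu prefix are placeholders):

        make clean && make

        # if it still fails, build a newer ICU (4.2+) from an unpacked source tarball
        cd icu/source && ./configure --prefix=/opt/icu && make && make install
        # then re-run PHP's configure with: --enable-intl --with-icu-dir=/opt/icu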

    Read the article

  • Port Redirection on Mac OS X Lion

    - by Andreas
    I have tried to solve this issue using pf but with no luck. Basically, I am trying to redirect incoming port 443 traffic to port 22. I have tried to set up a rule in a file and load it in pf, but I get a syntax error. Can anyone with more experience with pf provide some insight? Here's what I've attempted:

        pass in on en1 proto tcp from any to any port 443 rdr-to 127.0.0.1 port 22

    and

        pass in quick proto tcp to port 443 rdr-to 127.0.0.1 port 22

    I was able to do this in Mac OS X Snow Leopard with ipfw:

        sudo ipfw add 1443 forward 127.0.0.1,22 ip from any to any 443 in

    but it doesn't work in Lion (it gives me an Invalid Argument error).
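
    A hedged sketch using the older rdr grammar, which is what Lion's pf build appears to expect (rdr-to belongs to newer OpenBSD releases); the interface name is taken from the question and the file path is a placeholder:

        rdr pass on en1 inet proto tcp from any to any port 443 -> 127.0.0.1 port 22

    loaded with:

        sudo pfctl -ef /etc/pf.443to22.conf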

    Read the article

  • Security of a free public VPN service

    - by Mark Belli
    I just started using VPNBOOK, which is a (very efficient) free VPN solution. I have a question: the VPNBOOK username and password used to connect to their VPN network are publicly available on their homepage; everybody uses them to connect to the VPN. Can a user intercept my wifi traffic and: 1) understand that my connections are directed to VPNBOOK's servers, and 2) if point 1 is successful, use VPNBOOK's public username and password to decrypt my traffic? I hope I am missing something; otherwise it would be a very big weakness and I would revert to a paid service (with a private account).

    Read the article

  • How can I delete a specific file from a set of results using the find command in Linux?

    - by PeanutsMonkey
    I have the following command that lists all files with the extension doc, docx, etc.:

        find . -maxdepth 1 -iname \*.doc\*

    The command returns numerous files, some of which I would like to delete. So, for example, the results returned are:

        Example.docx
        Dummydata.doc
        Sample.doc

    I would like to delete Sample.doc and Dummydata.docx. How do I delete the files using the -exec option? Am I able to pass in the names of the files, e.g. rm Dummydata.docx Sample.doc, so that the command would look as follows?

        find . -maxdepth 1 -iname \*.doc\* -exec rm Dummydata.docx Sample.doc

    Can I pass the names of the files within {} after rm? e.g.

        find . -maxdepth 1 -iname \*.doc\* -exec rm {Dummydata.docx} Sample.doc

    Is there a better way of doing it?
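
    A hedged sketch of two ways this is usually handled; -exec runs the command once per matched file with {} substituted for the name, so the trick is to narrow the match rather than list extra names after rm (the file names below are the ones from the question):

        # delete only the two unwanted files
        find . -maxdepth 1 \( -iname 'Sample.doc' -o -iname 'Dummydata.doc*' \) -exec rm {} \;

        # or keep the wide match and confirm each deletion interactively
        find . -maxdepth 1 -iname '*.doc*' -exec rm -i {} \;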

    Read the article

  • In Windows XP, is it possible to disable user credential caching for particular users?

    - by kdt
    I understand that when Windows caches user credentials, these can sometimes be used by malicious parties to access other machines once a machine containing cached credentials is compromised, a method known as "pass the hash" [1]. For this reason I would like to get control over what's cached, to reduce the risk of cached credentials being used maliciously. It is possible to prevent all caching by zeroing HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\CachedLogonsCount, but this is too indiscriminate: laptop users need to be able to log in when away from the network. What I would like to do is prevent the caching of credentials of certain users, such as administrators -- is there any way to do that in Windows XP?

    [1] http://www.lbl.gov/cyber/systems/pass-the-hash.html

    Read the article

  • GMail detecting mail as spam

    - by Petru Toader
    I've been trying for a long time to get our company's mail server to send mail that will get accepted by the GMail spam filter. I have managed to make it work for Yahoo Mail and Hotmail; sadly, GMail is still marking our mail as spam. I have configured DKIM, SPF and DMARC, and verified our mail server IP address against blacklists. I have also pasted here the headers GMail gets when we send a mail.

        Delivered-To: [email protected]
        Received: by 10.42.215.6 with SMTP id hc6csp107427icb; Wed, 20 Aug 2014 07:34:26 -0700 (PDT)
        X-Received: by 10.194.100.34 with SMTP id ev2mr59101019wjb.76.1408545265402; Wed, 20 Aug 2014 07:34:25 -0700 (PDT)
        Return-Path: <[email protected]>
        Received: from mail.phyramid.com (mail.phyramid.com. [178.157.82.23]) by mx.google.com with ESMTPS id dj10si4827754wib.79.2014.08.20.07.34.24 for <[email protected]> (version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Wed, 20 Aug 2014 07:34:25 -0700 (PDT)
        Received-SPF: pass (google.com: domain of [email protected] designates 178.157.82.23 as permitted sender) client-ip=178.157.82.23;
        Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 178.157.82.23 as permitted sender) [email protected]; dkim=pass [email protected]
        Received: from localhost (localhost [127.0.0.1]) by mail.phyramid.com (Postfix) with ESMTP id ED2BB2017AC for <[email protected]>; Wed, 20 Aug 2014 17:33:23 +0300 (EEST)
        DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=phyramid.com; h=content-type:content-type:mime-version:x-mailer:subject:subject:message-id:to:from:from:date:date; s=dkim; t=1408545197; x=1409409197; bh=e04RtoyF7G39lfCvA9LLhTz4nF64siZtN5IYmC18Xsc=; b=o+6mO8Uz4Uf1G4U2q6tKUiEy2N2n/5R2VtPPwIvBE5xzK/hEd2sDGMxVzQVgIDCsKQ0Xh+auPaQpxldQ+AEcL2XSZMrk/g0mJONjkpI19I5AwGIJCR1SVvxdecohTn9iRbCHzrGi2wAicfDBzOH6lUBNfh2thri79aubdCYc97U=
        X-Amavis-Modified: Mail body modified (using disclaimer) - mail.phyramid.com
        X-Virus-Scanned: Debian amavisd-new at mail.phyramid.com
        Received: from mail.phyramid.com ([127.0.0.1]) by localhost (mail.phyramid.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 3JcgXZAXeFtX for <[email protected]>; Wed, 20 Aug 2014 17:33:17 +0300 (EEST)
        Received: from whiterock.local (unknown [109.98.21.30]) by mail.phyramid.com (Postfix) with ESMTPSA id 05CAE200280 for <[email protected]>; Wed, 20 Aug 2014 17:33:15 +0300 (EEST)
        Date: Wed, 20 Aug 2014 17:34:15 +0300
        From: Company Mail <[email protected]>
        To: [email protected]
        Message-ID: <[email protected]>
        Subject: hey there!
        X-Mailer: Airmail (247)
        MIME-Version: 1.0
        Content-Type: text/plain; charset="utf-8"
        Content-Transfer-Encoding: 7bit
        Content-Disposition: inline

        How was your summer?

        ----
        Thanks a lot!

    Read the article

  • AWS autoscaling. Launch Config/Auto Scaling Group and VPC instance with two ifaces

    - by icalvete
    I want to create a Launch Config/Auto Scaling Group to build instances inside a VPC with two subnets ("frontend" and "backend"). I need these instances to have two ifaces: one in the "frontend" subnet and one in the "backend" subnet. I can't see how to do that. It's not possible from the AWS console, and neither with the aws cli:
        http://docs.aws.amazon.com/cli/latest/reference/autoscaling/create-launch-configuration.html
        http://docs.aws.amazon.com/cli/latest/reference/autoscaling/create-auto-scaling-group.html
    The Launch Config docs don't say anything about this either:
        http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/create-lc-with-instanceID.html
    Ideas? Thanks!!!
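
    As far as I know a launch configuration describes only a single interface, so the usual workaround is to let each instance attach a second ENI to itself from user data at boot. A hedged sketch (region, subnet and security-group IDs are placeholders, and it assumes the instance profile is allowed to call ec2:CreateNetworkInterface and ec2:AttachNetworkInterface):

        #!/bin/bash
        # user-data sketch: create an ENI in the backend subnet and attach it as eth1
        REGION=eu-west-1
        BACKEND_SUBNET=subnet-xxxxxxxx
        SG=sg-xxxxxxxx
        INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

        ENI_ID=$(aws ec2 create-network-interface --region "$REGION" \
                   --subnet-id "$BACKEND_SUBNET" --groups "$SG" \
                   --query 'NetworkInterface.NetworkInterfaceId' --output text)

        aws ec2 attach-network-interface --region "$REGION" \
            --network-interface-id "$ENI_ID" --instance-id "$INSTANCE_ID" --device-index 1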

    Read the article

  • Filter data in sheets from a master sheet

    - by sam
    I have a 'master sheet' with lots of furniture data in it; in column A are the suppliers' names. What I would like is to have my master sheet with all the info and then sub-sheets named by supplier; in these sub-sheets I would like to reference the master sheet and pull out all of the items that are from that supplier. For example: I would have a sheet called 'Ikea' which would look in the master sheet, search column A for all entries of 'Ikea' and, if present, copy or reference that row (columns 1:12) into the 'Ikea' sheet. I would like to do it all dynamically using references rather than copying the data. Also, I would like it to auto-update rather than having to run a macro to recalculate it each time. Can this be done with formulas rather than macros?
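
    A hedged sketch of the usual formula-only approach (the sheet name 'Master' and the row range are assumptions): entered as an array formula with Ctrl+Shift+Enter in the top-left cell of the 'Ikea' sheet, then filled down and across, changing the B column reference for the other columns.

        =IFERROR(INDEX(Master!B$2:B$500,
                 SMALL(IF(Master!$A$2:$A$500="Ikea",
                          ROW(Master!$A$2:$A$500)-ROW(Master!$A$2)+1),
                       ROWS($1:1))),
                 "")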

    Read the article
