Search Results

Search found 26427 results on 1058 pages for 'google scripts'.

Page 392/1058 | < Previous Page | 388 389 390 391 392 393 394 395 396 397 398 399  | Next Page >

  • Different routing rules for a particular user using firewall mark and ip rule

    - by Paul Crowley
    Running Ubuntu 12.10 on amd64. I'm trying to set up different routing rules for a particular user. I understand that the right way to do this is to create a firewall rule that marks the packets for that user, and to add a routing rule for that mark. Just to get testing going, I've added a rule that discards all marked packets as unreachable:

        # ip rule
        0:      from all lookup local
        32765:  from all fwmark 0x1 unreachable
        32766:  from all lookup main
        32767:  from all lookup default

    With this rule in place, all firewall chains in all tables empty, and policy ACCEPT, I can still ping remote hosts just fine as any user. If I then add a rule to mark all packets and try to ping Google, it fails as expected:

        # iptables -t mangle -F OUTPUT
        # iptables -t mangle -A OUTPUT -j MARK --set-mark 0x01
        # ping www.google.com
        ping: unknown host www.google.com

    If I restrict this rule to the VPN user, it seems to have no effect:

        # iptables -t mangle -F OUTPUT
        # iptables -t mangle -A OUTPUT -j MARK --set-mark 0x01 -m owner --uid-owner vpn
        # sudo -u vpn ping www.google.com
        PING www.google.com (173.194.78.103) 56(84) bytes of data.
        64 bytes from wg-in-f103.1e100.net (173.194.78.103): icmp_req=1 ttl=50 time=36.6 ms

    But it appears that the mark is being set, because if I add a rule to drop these packets in the firewall, it works:

        # iptables -t mangle -A OUTPUT -j DROP -m mark --mark 0x01
        # sudo -u vpn ping www.google.com
        ping: unknown host www.google.com

    What am I missing? Thanks!
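
    For reference, the usual recipe for per-user policy routing pairs exactly this owner-match mark with a dedicated routing table. A minimal sketch - the table number, gateway address and user name are placeholders, not taken from the question:

        # Mark packets generated by the vpn user
        iptables -t mangle -A OUTPUT -m owner --uid-owner vpn -j MARK --set-mark 0x1

        # Route marked packets via a dedicated table
        ip rule add fwmark 0x1 table 100
        ip route add default via 10.8.0.1 table 100    # hypothetical VPN gateway

        # Sanity check: the packet counters on the mangle rule should climb
        # as the vpn user generates traffic
        iptables -t mangle -L OUTPUT -v -n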

    Read the article

  • Problems with Firefox and roaming profiles

    - by unknown (google)
    On our network, when a user is logged on to a PC and then tries to log in on his laptop, he gets this error: "Windows cannot copy ..\%username%\Application Data\firefox\profiles....\parent.lock file to the local C:..." I have tried deleting the profiles and have also uninstalled Firefox, but nothing seems to resolve the problem. Any ideas?

    Read the article

  • POS receipt printer

    - by unknown (google)
    Is it possible to connect a receipt printer to a telephone line for printing, similar to how a credit card terminal works when connected to a telephone line? Our clients do not have internet connections that would let us connect the printer over Ethernet, so I was wondering whether the same is possible via a telephone line.

    Read the article

  • Why do I have untrusted certificates for Google, Yahoo, Mozilla and others?

    - by jackweirdy
    In the HTTPS/SSL section of chrome://chrome/settings, I see untrusted certificates listed for Google, Yahoo, Mozilla and other sites. What does this mean, and is there something wrong?

    I have a basic understanding of SSL/TLS - I'm not claiming to be completely familiar, but I'm fairly confident I know my way around it - but I don't understand why I have certificates installed on my machine specifically for these sites. From my understanding, I should only have certificates for Certificate Authorities, and any site I visit over SSL/TLS should present a certificate signed by one of these trusted CAs for me to trust the site.

    My worry is that if someone has maliciously installed a certificate for these sites on my machine, they could perform a DNS spoofing attack (or a number of other attacks) to hijack my connection to my email account without me knowing, and, as they've got the private counterpart to the certificate on my machine, decrypt the communication.

    NB: I'm also aware that CA certificates aren't just within Chromium but are used system-wide as part of libssl - they're stored in /etc/ssl/certs.

    What I'd like to know is:

    1. Is this correct? (The big red boxes make me think no.)
    2. Is this malicious or benign?
    3. What can I do to resolve this problem (if indeed it is a problem)?

    Thanks :)
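
    One way to sanity-check what a site actually presents, independently of the browser's certificate store, is to pull the certificate with standard openssl tooling. A quick sketch (www.google.com is just an example target):

        # Show who issued the certificate the server is serving right now
        openssl s_client -connect www.google.com:443 -servername www.google.com </dev/null 2>/dev/null \
            | openssl x509 -noout -subject -issuer -fingerprint

    If the issuer shown here differs from what Chrome reports for the same site, that points at a local, machine-side certificate rather than anything the server sent.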

    Read the article

  • Get an attribute from XML using PHP

    - by Adnan
    Hello, I am using PHP's SimpleXML to get some values out of the following XML:

        <entry>
          <id>http://www.google.com/m8/feeds/contacts/email_address%40gmail.com/base/0</id>
          <updated>2010-01-14T22:06:26.565Z</updated>
          <category scheme="http://schemas.google.com/g/2005#kind" term="http://schemas.google.com/contact/2008#contact" />
          <title type="text">Customer Name</title>
          <link rel="http://schemas.google.com/contacts/2008/rel#edit-photo" type="image/*" href="http://www.google.com/m8/feeds/photos/media/email_address%40gmail.com/0/34h5jh34j5kj3444" />
          <link rel="self" type="application/atom+xml" href="http://www.google.com/m8/feeds/contacts/email_address%40gmail.com/full/0" />
          <link rel="edit" type="application/atom+xml" href="http://www.google.com/m8/feeds/contacts/email_address%40gmail.com/full/0/5555" />
          <gd:email rel="http://schemas.google.com/g/2005#other" address="[email protected]" primary="true" />
        </entry>

    I can get the title with:

        $xml = new SimpleXMLElement($response_h1);
        foreach ($xml->entry as $entry) {
            echo $entry->title, '<br />';
        }

    But how do I get the address="[email protected]" attribute?
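
    Since gd:email lives in the gd namespace, plain property access won't reach it. A likely approach - an untested sketch, assuming the gd prefix is declared in the feed as usual:

        foreach ($xml->entry as $entry) {
            // Step into the gd: namespace by prefix, then read the attribute
            $email = $entry->children('gd', true)->email;
            echo (string) $email->attributes()->address, '<br />';
        }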

    Read the article

  • 0x0000007B Vista hard drive swap

    - by unknown (google)
    I swapped my hard drive into a new computer running Vista 32-bit, and I get a 0x0000007B stop error on boot-up. I tried using the CD to run a repair, but it fails. I think it is because the hard drive driver installed for Vista is wrong, or missing. How can I fix this from the command line?

    Read the article

  • Cron jobs run, but scripts stopped working

    - by Robi
    Our cron scripts stopped working on various dates in August. What could the possible reasons be? We did not change anything. Our hosting provider showed us a log confirming that cron is executing our scripts, but nothing is happening in them. If we execute the scripts manually, we get correct results, just like before. I showed the commands to the hosting provider and they demonstrated that the commands work. What should I tell my hosting provider? What should I do? These are PHP scripts, executed by cron, which just post to Facebook and Twitter; they don't do anything heavy or huge. I even asked my hosting provider whether we had broken any rules.
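
    A standard first step is to capture what the scripts print when cron itself runs them, since cron's environment differs from an interactive shell. A sketch (path and schedule are placeholders):

        # Append the job's stdout and stderr to a log file, then inspect it
        */15 * * * * /usr/bin/php /home/user/scripts/post.php >> /tmp/cron-post.log 2>&1

    An empty log while the commands work by hand usually points at environment differences (PATH, working directory) or a different PHP binary being invoked under cron.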

    Read the article

  • Server 2012 GPO: PowerShell Script on Computer Startup not running

    - by Alex
    I've got a couple of Server 2012 instances on Amazon EC2 and I'm in the process of setting up the GPOs. All of the settings in the GPOs are being applied fine, except that none of the PowerShell scripts specified to run at computer startup are actually being executed. The scripts sit on a UNC share to which Authenticated Users has full permissions. I'm assuming it probably has something to do with the execution policy, but I'm not sure how to bypass it automatically. I could just go into each instance and bypass the execution policy, but that's obviously not a good idea, plus I'm eventually going to connect Windows 7 computers that will run the same scripts. How can I get the scripts to actually run? Google searches haven't yielded a whole lot...
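
    One commonly used approach is to run the script with the execution policy bypassed for that one process, rather than changing it per machine - wired up as a plain startup command instead of a PowerShell script entry. A sketch (the share path is a placeholder):

        powershell.exe -ExecutionPolicy Bypass -NoProfile -File \\server\share\startup.ps1

    Alternatively, the "Turn on Script Execution" setting under Computer Configuration > Administrative Templates > Windows Components > Windows PowerShell can set the policy centrally through the same GPO.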

    Read the article

  • Mail Server using Postfix

    - by unknown (google)
    I have currently set up my web application on an Amazon EC2 server. As is well known, sending email from EC2 is a problem. As a cheap and long-lasting alternative to using AuthSMTP, is it possible to rent a server and use it as a mail server? I am currently looking for cheap hosting that will give me root access, so that the box can be configured and used as a relayhost. I am currently using Postfix as the MTA. Has anyone implemented this before? I am curious about the feasibility of this solution. I guess the common requirements are:

    - A dedicated IP which is not blacklisted
    - An open relay (open to my server only)
    - Any tips on header configuration to keep the mails out of the spam folder

    This is essentially cloning AuthSMTP for personal use. Any suggestions for other mail server software instead of Postfix?
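
    On the EC2 side, pointing Postfix at such a relay is a one-line setting. A minimal sketch (the hostname is a placeholder, and the relay itself should of course permit relaying only from your server's IP rather than being a true open relay):

        # /etc/postfix/main.cf on the EC2 instance
        relayhost = [relay.example.com]:587

    The square brackets tell Postfix to skip the MX lookup and connect to that host directly, and mail queues locally whenever the relay is briefly unreachable.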

    Read the article

  • How to escape the ' in ssh?

    - by Dean Hiller
    I need to escape the ' in this command for ssh exec:

        grep IPADDR /etc/sysconfig/network-scripts/ifcfg-eth0 | awk -F= '{print $2}'

    How do I escape that? I currently have this, which does not work:

        ssh host 'grep IPADDR /etc/sysconfig/network-scripts/ifcfg-eth0 | awk -F= '{print $2}''

    nor does this:

        ssh host 'grep IPADDR /etc/sysconfig/network-scripts/ifcfg-eth0 | awk -F= \'{print $2}\''

    Thanks, Dean
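
    For reference, two forms that should work - a sketch, with host as a placeholder. Either wrap the remote command in double quotes and escape the $ so the local shell doesn't expand it, or close the single-quoted string, insert an escaped quote, and reopen it:

        # Double quotes: \$2 survives the local shell and reaches awk intact
        ssh host "grep IPADDR /etc/sysconfig/network-scripts/ifcfg-eth0 | awk -F= '{print \$2}'"

        # Single quotes: the '\'' idiom stands in for a literal quote
        ssh host 'grep IPADDR /etc/sysconfig/network-scripts/ifcfg-eth0 | awk -F= '\''{print $2}'\'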

    Read the article

  • Both servers running keepalived become master

    - by pcent
    After a network failure, both servers running keepalived become MASTER, and when the network is reestablished, both keep the MASTER state. What could be causing this? Edit: another piece of information that might be relevant is that each server has two NICs. Here is the virtual instance configuration:

        vrrp_instance VGAPP {
            interface eth0
            virtual_router_id 61
            state BACKUP
            nopreempt
            priority 50
            advert_int 3
            virtual_ipaddress {
                10.26.57.61/24
            }
            track_interface {
                eth0
            }
            track_script {
                jboss_check
                #tomcat_check
                #interface_check
                #interface_check02
            }
            notify_master "/opt/keepalived/scripts/set_state.sh MASTER"
            notify_backup "/opt/keepalived/scripts/set_state.sh BACKUP"
            notify_fault "/opt/keepalived/scripts/set_state.sh FAULT"
            notify_stop "/opt/keepalived/scripts/set_state.sh STOPPED"
        }
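
    Split-brain like this usually means the VRRP advertisements themselves are not getting through (for example, multicast being dropped by a switch or firewall), so each node stops hearing its peer and promotes itself. A quick check to run on each server while both claim MASTER - a sketch using standard tooling:

        # VRRP advertisements are IP protocol 112, sent to 224.0.0.18;
        # each node should see the other's packets here
        tcpdump -i eth0 -n ip proto 112

    If neither side sees the peer's advertisements after the network recovers, the multicast path is the thing to fix.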

    Read the article

  • Dreamhost DNS CNAMEs work at my house, not for anyone else

    - by unknown (google)
    I have a domain, BQQKSHELF.COM, that I bought through Dreamhost. I set up a CNAME so that zach.bqqkshelf.com points to my app at zach.heroku.com. The app at Heroku is working fine; everyone can agree on that. When I go to zach.bqqkshelf.com, everything seems to work okay too. When I ask my roommate to go to it, it works. When I go to it on my iPod touch, it works. But when I IM my friends and ask them to go to zach.bqqkshelf.com, they get a timeout error. How is this possible?
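
    A "works here, times out elsewhere" pattern most often comes down to DNS propagation or resolver-specific answers - everything at your house shares one resolver and its cache. One way to compare, as a sketch (8.8.8.8 is just one example of an outside resolver):

        # What your own resolver returns
        dig zach.bqqkshelf.com CNAME +short

        # What an outside resolver returns; a difference suggests the record
        # simply hasn't propagated everywhere yet
        dig @8.8.8.8 zach.bqqkshelf.com CNAME +short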

    Read the article

  • How can I set up sendmail to forward all mail to an external MTA?

    - by unknown (google)
    We have multiple applications that currently talk SMTP to an external MTA. The emails have arbitrary destination domains (they're emails to be sent to our users), but all come from the same internal domain ([email protected]). I want to set up an internal MTA (I guess with sendmail) that queues all mail and forwards it to the external MTA, because the external MTA occasionally goes down and this causes various problems in our applications. I figure I can set up sendmail as queuing middleware. If the above assumptions are correct, what would the sendmail configuration look like? The 'mailertable' feature looks promising, and so does 'SMART_HOST'. Any thoughts before I explore these possibilities? Jae
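
    Since every message goes to the same upstream MTA regardless of destination domain, SMART_HOST is the simpler of the two features ('mailertable' earns its keep only when different destination domains need different relays). A minimal sendmail.mc sketch, with a placeholder hostname:

        dnl Forward all non-local mail to the external MTA; sendmail queues
        dnl locally whenever the relay is down
        define(`SMART_HOST', `[mta.example.com]')dnl

    The square brackets suppress the MX lookup so sendmail connects to that host directly; rebuild sendmail.cf from the .mc file and restart sendmail for the change to take effect.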

    Read the article

  • eAccelerator settings for PHP/Centos/Apache

    - by bobbyh
    I have eAccelerator installed on a server running WordPress with PHP/Apache on CentOS. I occasionally get persistent "white pages", which presumably are PHP fatal errors (although these errors don't appear in my error_log). These white pages are sprinkled here and there throughout the site. They persist until I go to my eAccelerator control.php page and clear/clean/purge my caches, which suggests to me that I've configured eAccelerator improperly. Here are my current /etc/php.ini settings (the quoted descriptions are from http://eaccelerator.net/wiki/Settings):

    - memory_limit = 128M
    - eaccelerator.shm_size = "64" - "the amount of shared memory eAccelerator should allocate to cache PHP scripts"
    - eaccelerator.shm_max = "0" - "the maximum size a user can put in shared memory with functions like eaccelerator_put ... The default value is '0' which disables the limit"
    - eaccelerator.shm_ttl = "0" - "When eAccelerator doesn't have enough free shared memory to cache a new script it will remove all scripts from shared memory cache that haven't been accessed in at least shm_ttl seconds. By default this value is set to '0' which means that eAccelerator won't try to remove any old scripts from shared memory."
    - eaccelerator.shm_prune_period = "0" - "When eAccelerator doesn't have enough free shared memory to cache a script it tries to remove old scripts if the previous try was made more then 'shm_prune_period' seconds ago. Default value is '0' which means that eAccelerator won't try to remove any old script from shared memory."
    - eaccelerator.keys = "shm_only" - "These settings control the places eAccelerator may cache user content. ... 'shm_only' cache[s] data in shared memory"

    My phpinfo page reports: memory_limit 128M, Version 0.9.5.3, Caching Enabled true. My eAccelerator control.php page reports:

    - 64 MB of total RAM available
    - Memory usage 77.70% (49.73 MB / 64.00 MB)
    - 27.6 MB used by cached scripts in the PHP opcode cache (I added up the file sizes myself)
    - 22.1 MB used by the cache keys, which are populated by the WordPress object cache

    My questions are:

    1. Is it true that there is only 36.4 MB of room in the eAccelerator cache for "cache keys" in total (64 MB of total RAM minus whatever is taken by cached scripts, which is 27.6 MB at the moment)?
    2. What happens if my app tries to write more than 22.1 MB of cache keys to the eAccelerator memory cache? Does this cause eAccelerator to go crazy, like I've seen?
    3. If I change eaccelerator.shm_max to (say) 32 MB, would that avoid this problem?
    4. Do I also need to change shm_ttl and shm_prune_period to make eAccelerator respect the MB limit set by shm_max?

    Thanks! :-)
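
    Going by the documentation quoted above, with shm_ttl and shm_prune_period both set to "0", eAccelerator never evicts anything once shared memory fills up - which fits the symptom of the site only recovering after a manual purge. A hedged starting point rather than a definitive fix, in the same php.ini style as above:

        ; Let eAccelerator evict stale entries instead of failing when full
        eaccelerator.shm_ttl = "3600"
        eaccelerator.shm_prune_period = "3600"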

    Read the article

  • What can cause PowerShell execution policy not to be taken into account?

    - by Stephane
    We have in our infrastructure a number of PowerShell scripts used for various tasks, ranging from user logon to a support technician simulating a user context. These scripts are centralized on our file server (through DFS) for easier management. Some of them run at logon; some run through published Citrix applications. We have applied a policy for the whole domain and all users that sets the PowerShell execution policy to "Unrestricted" so that the scripts can run from the file server. This works perfectly fine for logon scripts (at least, so far), but for scripts that run later (usually through a published application, though the same applies when using Terminal Services and a full desktop), the results are inconsistent: some users can run the scripts fine, while some are always prompted in the PowerShell console to let the scripts run. I cannot find anything that could cause this behavior, and it's really inconsistent: if I start PowerShell manually and run Get-ExecutionPolicy, I am told that the current policy is Unrestricted. Yet, from the same session, if I try to run a script through a program that calls powershell <script file name> <parameters>, I get prompted before the script can run. What could cause such behavior?
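
    Execution policy is resolved per scope, and the effective value can differ between contexts - for instance, the 32-bit and 64-bit PowerShell engines read separate policy settings, so a 32-bit launcher can see a different policy than a manually started 64-bit console. A way to see every scope at once, as a sketch:

        # Lists MachinePolicy, UserPolicy, Process, CurrentUser and LocalMachine;
        # the first non-Undefined scope from the top wins
        Get-ExecutionPolicy -List

    Running that inside the exact context that prompts (e.g. from the published application's launcher) should show which scope is overriding the domain policy.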

    Read the article

  • My PC always hangs whenever I watch an online video or even start a video chat

    - by unknown (google)
    My PC has recently started hanging completely whenever I start an online video or a video chat. Nothing works; I even tried Ctrl+Alt+Del, and the mouse does not respond, leaving me only one option: a hard reboot. I changed my monitor recently; I don't know whether that is the cause of the problem. I have Windows XP SP3, 1 GB of RAM, an NVIDIA Quadro PCI-E series video card, a Dell 1907FP monitor, and a 3.0 GHz CPU. Please advise.

    Read the article

  • Translating cURL to Flex HTTP requests

    - by Joshua
    I am trying to convert some cURL code to Flex/ActionScript. Since I am 100% ignorant about cURL, 50% ignorant about Flex, and 90% ignorant about HTTP in general... I'm having some significant difficulty. The following cURL code is from http://code.google.com/p/ga-api-http-samples/source/browse/trunk/src/v2/accountFeed.sh and I have every reason to believe that it's working correctly:

        USER_EMAIL="[email protected]"   #Insert your Google Account email here
        USER_PASS="secretpass"          #Insert your password here
        googleAuth="$(curl https://www.google.com/accounts/ClientLogin -s \
          -d Email=$USER_EMAIL \
          -d Passwd=$USER_PASS \
          -d accountType=GOOGLE \
          -d source=curl-accountFeed-v2 \
          -d service=analytics \
          | awk /Auth=.*/)"
        feedUri="https://www.google.com/analytics/feeds/accounts/default\
        ?prettyprint=true"
        curl $feedUri --silent \
          --header "Authorization: GoogleLogin $googleAuth" \
          --header "GData-Version: 2"

    The following is my abortive attempt to translate the above cURL to AS3:

        var request:URLRequest=new URLRequest("https://www.google.com/analytics/feeds/accounts/default");
        request.method=URLRequestMethod.POST;
        var GoogleAuth:String="$(curl https://www.google.com/accounts/ClientLogin -s " +
            "-d [email protected] " +
            "-d Passwd=secretpass " +
            "-d accountType=GOOGLE " +
            "-d source=curl-accountFeed-v2" +
            "-d service=analytics " +
            "| awk /Auth=.*/)";
        request.requestHeaders.push(new URLRequestHeader("Authorization", "GoogleLogin " + GoogleAuth));
        request.requestHeaders.push(new URLRequestHeader("GData-Version", "2"));
        var loader:URLLoader=new URLLoader();
        loader.dataFormat=URLLoaderDataFormat.BINARY;
        loader.addEventListener(Event.COMPLETE, GACompleteHandler);
        loader.addEventListener(IOErrorEvent.IO_ERROR, GAErrorHandler);
        loader.addEventListener(SecurityErrorEvent.SECURITY_ERROR, GAErrorHandler);
        loader.load(request);

    This probably provides you all with a good laugh, and that's okay, but if you can find any pity on me, please let me know what I'm missing. I readily admit functional ineptitude, so letting me know how stupid I am is optional.
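
    For what it's worth, the shell substitution $(...) has no meaning in ActionScript: the ClientLogin call has to be its own HTTP POST, and the Auth token parsed out of its response before the feed request is sent. A rough, untested AS3 sketch of that two-step flow (handler names are invented; note also that Flash Player restricts custom headers - an Authorization header needs crossdomain policy permission, and some player/browser combinations only send custom headers on POST, a known pain point with the GData APIs):

        var loginVars:URLVariables = new URLVariables();
        loginVars.Email = "[email protected]";
        loginVars.Passwd = "secretpass";
        loginVars.accountType = "GOOGLE";
        loginVars.source = "flex-accountFeed-v2";
        loginVars.service = "analytics";

        var loginRequest:URLRequest = new URLRequest("https://www.google.com/accounts/ClientLogin");
        loginRequest.method = URLRequestMethod.POST;
        loginRequest.data = loginVars;

        var loginLoader:URLLoader = new URLLoader();
        loginLoader.addEventListener(Event.COMPLETE, onLoginComplete);
        loginLoader.load(loginRequest);

        function onLoginComplete(event:Event):void {
            // The response contains SID=..., LSID=..., Auth=... lines;
            // keep only the Auth token (this is what the awk line did)
            var match:Array = String(URLLoader(event.target).data).match(/Auth=(.*)/);
            if (match == null) return; // login failed

            var feedRequest:URLRequest = new URLRequest(
                "https://www.google.com/analytics/feeds/accounts/default?prettyprint=true");
            feedRequest.requestHeaders.push(
                new URLRequestHeader("Authorization", "GoogleLogin auth=" + match[1]));
            feedRequest.requestHeaders.push(new URLRequestHeader("GData-Version", "2"));

            var feedLoader:URLLoader = new URLLoader();
            feedLoader.addEventListener(Event.COMPLETE, onFeedComplete);
            feedLoader.load(feedRequest);
        }

        function onFeedComplete(event:Event):void {
            trace(URLLoader(event.target).data); // the account feed XML
        }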

    Read the article

  • USB to IDE/SATA adapter

    - by unknown (google)
    I have an old IDE HDD that I am trying to pull files off of, using a USB to IDE/SATA adapter. I plug the power and adapter plugs into the drive and it fires up. I plug the USB plug into my XP laptop and it installs the drivers, and I can see a USB Mass Storage Device under Device Manager. My problem is that I can't see the drive in Windows Explorer, or under Disk Management within Computer Management. I'm not sure what I am doing wrong. Do I have to jumper the old HDD as a slave? Do I have to make BIOS changes?

    Read the article

  • dd clone hard drive: Input/Output Error though "chkdsk" says OK

    - by unknown (google)
    Hi, I've used dd to clone hard drives before with a live CD, but have run into a problem.

    The issue: dd fails with an "Input/Output Error" on /dev/sda3, even though Windows Check Disk (chkdsk) says the partition is OK.

    Context:

    - Trying to replace my laptop hard drive with a faster one of the same size
    - The laptop has NTFS on a 320 GB hard drive
    - Booting into Knoppix
    - Knoppix recognizes the original drive as /dev/sda
    - I am using a USB connection for the new drive (irrelevant, but just an FYI)
    - Knoppix recognizes the USB drive as /dev/sdb

    I am using dd as follows:

        dd if=/dev/sda of=/dev/sdb

    dd gives the I/O error above at 82 GB (out of 320 GB). I then tried checking each partition as follows and found that it fails on /dev/sda3:

        dd if=/dev/sda1 of=/dev/null
        dd if=/dev/sda2 of=/dev/null
        dd if=/dev/sda3 of=/dev/null

    I have run Windows XP chkdsk on the offending partition in both "find only" and "find and fix" modes, and it reports no errors.

    Question: how can I find and fix the error on my original hard drive partition (i.e. /dev/sda3) so that dd reads it successfully?
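
    If the underlying sectors are genuinely unreadable, chkdsk may still report success, since its default modes verify filesystem structures rather than every sector of the surface. The usual approach is to make the copy skip bad blocks instead of aborting; a sketch of two common options:

        # dd: keep going on read errors and pad unreadable blocks with
        # zeros so the copy stays byte-aligned
        dd if=/dev/sda of=/dev/sdb bs=4096 conv=noerror,sync

        # GNU ddrescue (a separate tool): copies the easy parts first,
        # retries the bad areas, and records progress in a map file
        ddrescue /dev/sda /dev/sdb /root/rescue.log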

    Read the article

  • C#: how do I search for a string with quotes?

    - by every_answer_gets_a_point
    I am searching for this string:

        <!--m--><li class="g w0"><h3 class=r>

    within the HTML source of this link: "http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=Santarus Inc?"

    This is how I am searching for it:

        d=html.IndexOf(@"<!--m--><li class=""g w0""><h3 class=r><a href=""",1);

    For some reason it is finding an occurrence that is incorrect (it says that it is at position 45, in other words d=45, but this is incorrect). Here are the first couple hundred characters of the string html:

        <!doctype html><head><title>Santarus Inc&#8206; - Google Search</title><script>window.google={kEI:\"b6jES5nPD4rysQOokrGDDQ\",
        kEXPI:\"23729,24229,24249,24260,24414,24457\",kCSI:{e:\"23729,24229,24249,24260,24414,24457\",ei:\"b6jES5nPD4rysQOokrGDDQ\",
        expi:\"23729,24229,24249,24260,24414,24457\"},ml:function(){},kHL:\"en\",time:function(){return(new Date).getTime()},
        log:function(b,d,c){var a=new Image,e=google,g=e.lc,f=e.li;a.onerror=(a.onload=(a.onabort=function(){delete g[f]}));
        g[f]=a;c=c||\"/gen_204?atyp=i&ct=\"+b+\"&cad=\"+d+\"&zx=\"+google.time();a.src=c;e.li=f+1},lc:[],li:0,Toolbelt:{}};
        \nwindow.google.sn=\"web\";window.google.timers={load:{t:{start:(new Date).getTime()}}};try{}catch(u){}window.google.jsrt_kill=1;
        \n</script><style>body{background:#fff;color:#000;margin:3px 8px}#gbar,#guser{font-size:13px;padding-top:1px !important}
        #gbar{float:left;height:22px}#guser{padding-bottom:7px !important;text-align:right}.gbh,.gbd{border-top:1px solid #c9d7f1;
        font-size:1px}.gbh
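
    Two things are worth checking here. First, the needle passed to IndexOf ends with <a href=", which is longer than the string quoted at the top, so the two searches are not equivalent. Second, printing what actually sits at the reported position makes any mismatch visible. A small debugging sketch (names follow the question):

        // Search from the start; IndexOf returns -1 when there is no match
        int d = html.IndexOf(@"<!--m--><li class=""g w0""><h3 class=r>", StringComparison.Ordinal);
        if (d >= 0)
        {
            // Show the text at the reported position to verify the hit
            Console.WriteLine(html.Substring(d, Math.Min(80, html.Length - d)));
        }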

    Read the article

  • Cannot clear away BitDefender Internet Security system tray message

    - by unknown (google)
    I have BitDefender Internet Security 2008 installed on my PC. In the last few days I have noticed the taskbar icon grumbling:

        1 issue requires your attention
        The Real-Time File Scanning is disabled

    and no amount of clicking "Fix" solves the issue. I have tried umpteen reboots and what have you. I really don't want to uninstall and re-install BitDefender (unless there's no other way out). How do I clear away the notification message?

    Read the article

  • Sharing folders with VirtualBox, Win7 Host and Ubuntu 9.10 Guest

    - by unknown (google)
    I have a development setup from this tutorial: http://www.sitepoint.com/blogs/2009/10/27/build-your-own-dev-server-with-virtualbox/ What I can't figure out is how to share a folder on my virtualized Ubuntu machine with the Windows 7 host. I want to use a Windows text editor to edit code that lives on my Ubuntu server. I've tried using the Shared Folders setting, adding "/var/www", but it says that the path is not absolute. When I click on "Other", it only allows me to browse folders on my Windows 7 host. Both the host and guest are 64-bit OSes. Thanks in advance!
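
    That dialog behaves this way because VirtualBox Shared Folders expose host folders to the guest, not the other way around. To reach the guest's /var/www from Windows, a common approach is a Samba share inside the Ubuntu guest; a minimal sketch (the share name is a placeholder, and the guest needs an IP reachable from the host, e.g. via a bridged or host-only adapter):

        # Inside the Ubuntu guest
        sudo apt-get install samba

        # Append to /etc/samba/smb.conf, then restart Samba
        # (sudo /etc/init.d/samba restart):
        #
        #   [www]
        #      path = /var/www
        #      read only = no
        #      guest ok = yes

    After that, the code should be browsable from Windows at \\<guest-ip>\www and editable in any Windows editor.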

    Read the article
