Search Results

Search found 6137 results on 246 pages for 'forward mails'.

Page 186/246 | < Previous Page | 182 183 184 185 186 187 188 189 190 191 192 193  | Next Page >

  • Syncing Google Desktop Scratch Pad

    - by Anders Frey
    I'm a long-time user of Google Desktop Scratch Pad and I would like to put the note in the cloud and make it accessible from all my devices. I'm working towards changing the file path Scratch Pad uses to retrieve the .txt so that it points to a Dropbox folder. As Desktop Scratch Pad is discontinued I've had no luck in retrieving the API, but what I've got so far is this:
        The scratch pad data is located at: C:\Users\[user]\AppData\Local\Google\Google Desktop\a3d83d5fa2e9\scratchpad.txt
        The registry keys related to Google Desktop are located at: HKEY_CURRENT_USER\Software\Google\Google Desktop
        I'm guessing the Scratch Pad app itself is located at: HKEY_CURRENT_USER\Software\Google\Google Desktop\Components
    I have limited experience with the registry, so I'm not able to translate the binary and hexadecimal values, but I'm hoping that the path location is in there somewhere. I've tried a bunch of other note apps (including the 'new' scratch pad in Chrome) but haven't been able to find one that suits my needs as well as Desktop Scratch Pad does, hence the effort in this matter. I may be way off and I'm not sure if this is possible to do, but I'm looking forward to hearing your thoughts.

    Read the article

  • [tcpdump] Proxy delegate refusing connection?

    - by simtris
    Hi guys, I'm a little disappointed! My aim was to build a VERY simple SMTP proxy under Debian to handle mail arriving on one port (51234) and forward it to the standard port 25. I compiled and installed "delegate", which can handle that easily. It works very well like this:
        delegated SERVER="smtp://anotherSmtpServer:25" -P51234
    The strange thing is, it works on my virtual test machine and locally on the dedicated server, but I can't manage to use it over the internet. I test it like this:
        telnet [mySrv] 51234
    Of course, no firewall, no DenyHosts, no inetd/xinetd, and the delegated service is listening on the right port... Two clues:
        1. The port answers over the internet; nmap reports it as "51234/tcp open tcpwrapped".
        2. Have a look at the following tcpdump:
        22:50:54.864398 IP [myIp].1699 > [mySrv].51234: S 2486749330:2486749330(0) win 65535
        22:50:54.864449 IP [mySrv].51234 > [myIp].1699: S 2486963525:2486963525(0) ack 2486749331 win 5840
        22:50:54.948169 IP [myIp].1699 > [mySrv].51234: . ack 1 win 64240
        22:50:54.965134 IP [mySrv].43554 > [myIp].auth: S 2485396968:2485396968(0) win 5840
        22:50:55.243128 IP [myIp] > [mySrv]: ICMP [myIp] tcp port auth unreachable, length 68
        22:50:55.249646 IP [mySrv].51234 > [myIp].1699: F 1:1(0) ack 1 win 46
        22:50:55.309853 IP [myIp].1699 > [mySrv].51234: . ack 2 win 64240
        22:50:55.310126 IP [myIp].1699 > [mySrv].51234: F 1:1(0) ack 2 win 64240
        22:50:55.310137 IP [mySrv].51234 > [myIp].1699: . ack 2 win 46
    The "auth" part seems suspect to me but didn't ring a bell. I could certainly do with some help. Thanks a lot!
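
    The connection back to [myIp].auth in the trace is an ident (RFC 1413) lookup on TCP port 113, which tcpwrappers/libwrap-style access checks often trigger, and nmap's "tcpwrapped" result usually means something accepted the TCP handshake and then immediately closed it. A hedged place to look is the wrapper configuration on the server; the snippet below is only a sketch, and "delegated" as the daemon name plus tcpdmatch being installed (Debian package tcpd) are assumptions:
        # see whether tcpwrappers rules could be closing the session
        cat /etc/hosts.allow /etc/hosts.deny
        # if the tcpd utilities are installed, simulate the access decision for a remote client
        tcpdmatch delegated [clientIp]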

    Read the article

  • Trouble with port 80 NATing (XenServer to WebServer VM)

    - by Lain92
    I have a rented server running XenServer 6.2. I only have one public IP, so I did some NAT to redirect ports 22 and 80 to my WebServer VM. I have a problem with the port 80 redirection: when I use it, I can reach the WebServer's Apache, but the server itself loses web access. I get this kind of error:
        W: Failed to fetch http://http.debian.net/debian/dists/wheezy/main/source/Sources 404 Not Found [IP: 46.4.205.44 80]
    but I can ping anywhere. XenServerIP:80 is redirected to 10.0.0.2:80 (WebServer). This is the port 80 redirection part of my XenServer iptables:
        -A PREROUTING -i xenbr1 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.0.0.2:80
        -A INPUT -i xenbr1 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
        COMMIT
    What is wrong in my configuration? Is there a problem with XenServer? Thanks for your help!
    Edit: Here is my full iptables content:
        *nat
        :PREROUTING ACCEPT [51:4060]
        :POSTROUTING ACCEPT [9:588]
        :OUTPUT ACCEPT [9:588]
        -A PREROUTING -p tcp -m tcp --dport 1234 -j DNAT --to-destination 10.0.0.2:22
        -A PREROUTING -i xenbr1 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.0.0.2:80
        -A POSTROUTING -s 10.0.0.0/255.255.255.0 -j MASQUERADE
        COMMIT
        *filter
        :INPUT ACCEPT [5434:4284996]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [5014:6004729]
        -A INPUT -i xenbr1 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
        COMMIT
    Update: I have a second server with 10.0.0.3 as its IP and it has the same problem that 10.0.0.2 has.
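
    One hypothesis worth testing: if xenbr1 is also the bridge the VMs' own outbound traffic crosses, the PREROUTING rule matches the VMs' outgoing requests to any web server on port 80 and DNATs them back to 10.0.0.2, which would produce exactly these 404s from mirrors. A hedged fix is to only DNAT traffic actually addressed to the host's public IP (XENSERVER_PUBLIC_IP below is a placeholder):
        # replace the broad rule with one that matches only the dom0's public address
        iptables -t nat -D PREROUTING -i xenbr1 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.0.0.2:80
        iptables -t nat -A PREROUTING -i xenbr1 -p tcp -d XENSERVER_PUBLIC_IP -m tcp --dport 80 -j DNAT --to-destination 10.0.0.2:80
    The same restriction would apply to the port 1234 -> 22 rule if the VMs ever need to SSH out on that port.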

    Read the article

  • How to host an SSH server?

    - by balki
    Hi, I have a broadband internet connection. I have a wireless modem (Airtel India). I don't have a static IP address. I want to host an SSH/web/FTP server visible to the outside world, just for testing and learning purposes, so I can ask my friend to connect to my current IP address and test. My modem has an admin interface which allows me to port forward and open ports. I set up the SSH server as shown and checked whether port 22 is open using this website, Port Scan, and port 22 is open. I have an OpenSSH server running and it works if I do ssh [email protected], which is my local IP address, but it doesn't work if I do ssh [email protected], where 122.xx.xx.xx is the external IP address of my modem, which I checked on whatismyipaddress.com. Since it looks like the port is open, I wonder if there is some setting I need to change in my server config to expose my server. How should I go about solving this?

    Read the article

  • Varnish does not start properly (crashes after startup) with no error messages

    - by Matthew Savage
    I am running Varnish (2.0.4 from the Ubuntu unstable apt repository, though I have also used the standard repository) in a test environment (virtual machines) on Ubuntu 9.10, soon to be 10.04. When I have a working configuration and the server starts successfully everything seems fine, however if, for whatever reason, I stop and then restart the varnish daemon it doesn't always start up properly, and there are no errors going into syslog or messages to indicate what might be wrong. If I run varnish in debug mode (-d) and issue start when prompted then 7 times out of 10 it will run, but occasionally it will just shut down 'silently'. My startup command is (the $1 allows me to pass -d to the script this lives in):
        varnishd -a :80 $1 \
          -T 127.0.0.1:6082 \
          -s malloc,1GB \
          -f /home/deploy/mysite.vcl \
          -u deploy \
          -g deploy \
          -p obj_workspace=4096 \
          -p sess_workspace=262144 \
          -p listen_depth=2048 \
          -p overflow_max=2000 \
          -p ping_interval=2 \
          -p log_hashstring=off \
          -h classic,5000009 \
          -p thread_pool_max=1000 \
          -p lru_interval=60 \
          -p esi_syntax=0x00000003 \
          -p sess_timeout=10 \
          -p thread_pools=1 \
          -p thread_pool_min=100 \
          -p shm_workspace=32768 \
          -p thread_pool_add_delay=1
    and the VCL looks like this:
        # nginx/passenger server, HTTP:81
        backend default {
          .host = "127.0.0.1";
          .port = "81";
        }
        sub vcl_recv {
          # Don't cache the /useradmin or /admin path
          if (req.url ~ "^/(useradmin|admin|session|sessions|login|members|logout|forgot_password)") {
            pipe;
          }
          # If cache is 'regenerating' then allow for old cache to be served
          set req.grace = 2m;
          # Forward to cache lookup
          lookup;
        }
        # This should be obvious
        sub vcl_hit {
          deliver;
        }
        sub vcl_fetch {
          # See link #16, allow for old cache serving
          set obj.grace = 2m;
          if (req.url ~ "\.(png|gif|jpg|swf|css|js)$") {
            deliver;
          }
          remove obj.http.Set-Cookie;
          remove obj.http.Etag;
          set obj.http.Cache-Control = "no-cache";
          set obj.ttl = 7d;
          deliver;
        }
    Any suggestions would be greatly appreciated; this is driving me absolutely crazy, especially because it's such inconsistent behaviour.
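
    Two hedged things to check when it dies silently on restart: whether the VCL still compiles cleanly, and whether the previous instance has fully released the listen port and shared-memory log before the new one starts. A sketch (paths are assumptions; the SHM directory location depends on the packaging):
        # compile-check the VCL without starting the daemon, if this build supports -C
        varnishd -C -f /home/deploy/mysite.vcl > /dev/null && echo "VCL compiles"
        # make sure nothing from the old instance still holds port 80
        netstat -lnp | grep ':80 '
        # inspect the SHM/log working directory (commonly under /var/lib/varnish on Debian/Ubuntu packages)
        ls -l /var/lib/varnish/
    If the old child is still shutting down when the init script starts the new one, adding a short sleep between stop and start in the restart path is a crude but common workaround.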

    Read the article

  • After adding skip-innodb mysql doesn't start

    - by Pentium10
    I am trying to set up these values:
        #skip-bdb
        #skip-locking
        #skip-innodb
    When I add them to /etc/mysql/my.cnf, even if I turn on only one of them, after I do the service restart mysql fails to start, and no error message is printed:
        sudo service mysql restart
        [ ok ] Stopping MySQL database server: mysqld.
        [FAIL] Starting MySQL database server: mysqld . . . . . . . . . . . . . . failed!
    Previously I made sure that I have no InnoDB tables, and all files of that type were removed. I tried looking for error files but I couldn't locate any:
        /var/log/mysql.err is a 0 byte file
        /var/log/mysql folder has no files
    rsyslog was replaced in the past with inetutils-syslogd, and this might have changed the log files; it could be the reason why I don't see any error logs, and I am stuck on how to look or go forward.
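
    A hedged way to get an actual error message out of mysqld is to give it an explicit error log and/or run it briefly in the foreground so the failure reason prints to the terminal (the log path below is just an example; stop the foreground run with Ctrl-C). Note also, as a possibility rather than a certainty, that on MySQL 5.5+ skip-bdb and skip-locking were removed/renamed, and an unknown option by itself aborts startup:
        # in /etc/mysql/my.cnf under [mysqld]:
        # log_error = /var/log/mysql/error.log
        # or run the daemon in the foreground as the mysql user and read what it prints:
        sudo -u mysql /usr/sbin/mysqld --skip-bdb --skip-locking --skip-innodb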

    Read the article

  • Removing trailing slashes in WordPress blog hosted on IIS

    - by Zishan
    I have a WordPress blog hosted in my IIS virtual directory that has all URLs ending with a forward slash. For example: http://www.example.com/blog/
    I have the following rules defined in my web.config:
        <rule name="wordpress" patternSyntax="Wildcard">
          <match url="*" />
          <conditions>
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
          </conditions>
          <action type="Rewrite" url="index.php" />
        </rule>
        <rule name="Redirect-domain-to-www" patternSyntax="Wildcard" stopProcessing="true">
          <match url="*" />
          <conditions>
            <add input="{HTTP_HOST}" pattern="example.com" />
          </conditions>
          <action type="Redirect" url="http://www.example.com/blog/{R:0}" />
        </rule>
    In addition, I tried adding the following rule for removing trailing slashes:
        <rule name="Remove trailing slash" stopProcessing="true">
          <match url="(.*)/$" />
          <conditions>
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
          </conditions>
          <action type="Redirect" redirectType="Permanent" url="{R:1}" />
        </rule>
    It seems that the last rule doesn't work at all. Has anyone here attempted to remove trailing slashes from WordPress blogs hosted on IIS?

    Read the article

  • Technology mash: is this possible?

    - by Jon Story
    I'm in the process of setting up my own DNS+hosting on a couple of VPSes and my home machines, mostly for academic/learning purposes, but also for convenient access to my files, hosting my personal websites, private git repositories etc. I've got a main web server with DNS, and a slave DNS server. I've also got a couple of machines at home doing file hosting, video streaming and all that fun stuff. I'm intending to use my VPSes to provide myself with a dynamic DNS system so that I can point mydomain.com at my DNS servers, with home.mydomain.com going into my home network via a Raspberry Pi.
    HOWEVER... I've not got access to the network infrastructure at home (rented accommodation with managed internet), so I can't forward the ports on the router to my own machines. As such, I'm wondering if it's possible to route all the traffic via an SSH/HTTP tunnel through one of the VPSes? My plan is to have the Raspberry Pi provide a VPN into my home network. The Raspberry Pi uses SSH to connect to the VPS, and the VPS forwards any traffic for home.mydomain.com via the tunnel to the Raspberry Pi.
    Is this even possible, and how do I go about it? I don't mind getting my hands dirty with coding and low level tools, I'm just not sure where to start or what the best way to go about it is.
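
    This is essentially a reverse SSH tunnel: the Pi dials out to the VPS (so no inbound ports are needed at home) and the VPS exposes the forwarded port. A minimal sketch, with hostnames, ports and the home address as placeholders:
        # on the Raspberry Pi: keep a reverse tunnel open to the VPS
        # (autossh re-establishes it if the connection drops; 192.168.1.10:80 stands in for a home web service)
        autossh -f -N -R 8080:192.168.1.10:80 tunnel@vps.mydomain.com
        # by default the forwarded port binds only to 127.0.0.1 on the VPS; either set
        # "GatewayPorts yes" in the VPS's sshd_config or put a local reverse proxy in front of 127.0.0.1:8080
    With that in place the VPS can answer for home.mydomain.com by proxying to the tunnel endpoint; for full VPN-style access, something like OpenVPN from the Pi out to the VPS works on the same dial-out-from-home principle.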

    Read the article

  • SNMP closed state in CentOS

    - by anksoWX
    I'm having a problem here. I've added this to my iptables rules:
        -A INPUT -p tcp -m state --state NEW -m tcp --dport 161 -j ACCEPT
        -A INPUT -p udp -m state --state NEW -m udp --dport 161 -j ACCEPT
    but when I scan with nmap or any other tool it says this:
        Not shown: 998 filtered ports
        PORT    STATE  SERVICE
        22/tcp  open   ssh
        161/tcp closed snmp
    Also, when I do netstat -apn | grep snmpd:
        tcp   0  0 127.0.0.1:199  0.0.0.0:*  LISTEN  3669/snmpd
        udp   0  0 0.0.0.0:161    0.0.0.0:*          3669/snmpd
        unix  2  [ ]  DGRAM  226186  3669/snmpd
    Also, service iptables status:
        Table: filter
        Chain INPUT (policy ACCEPT)
        num  target  prot opt source     destination
        1    ACCEPT  all  --  0.0.0.0/0  0.0.0.0/0  state RELATED,ESTABLISHED
        2    ACCEPT  icmp --  0.0.0.0/0  0.0.0.0/0
        3    ACCEPT  all  --  0.0.0.0/0  0.0.0.0/0
        4    ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0  state NEW tcp dpt:161
        5    ACCEPT  udp  --  0.0.0.0/0  0.0.0.0/0  state NEW udp dpt:161
        6    ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0  state NEW tcp dpt:22
        7    REJECT  all  --  0.0.0.0/0  0.0.0.0/0  reject-with icmp-host-prohibited
        Chain FORWARD (policy ACCEPT)
        num  target  prot opt source     destination
        1    REJECT  all  --  0.0.0.0/0  0.0.0.0/0  reject-with icmp-host-prohibited
        Chain OUTPUT (policy ACCEPT)
        num  target  prot opt source     destination
    Any idea what's going on? There is no UDP port shown in a closed/open state. What do I have to do?
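
    The netstat output shows snmpd listening on UDP 161 (the 127.0.0.1:199 TCP socket is its local smux/sub-agent port), and a default nmap run only scans TCP, so "161/tcp closed" is the expected answer rather than a firewall problem. A hedged way to confirm the agent is reachable (hostname and community string below are placeholders):
        # scan the UDP port instead of TCP
        nmap -sU -p 161 your.server.ip
        # or query the agent directly
        snmpwalk -v2c -c public your.server.ip system
    If the snmpwalk times out even though UDP 161 is open, the community/VACM settings in snmpd.conf would be the next place to look.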

    Read the article

  • Delayed internet access

    - by Joel Coel
    When I (and presumably my users) first start up or log in to my computer, I can't get internet access until several minutes after logging in. Internet pages like serverfault.com will time out. During this time I can access internal web servers. Sometimes pinging the gateway seems to fix the problem. I'm using Windows 7 on this machine with wifi, and the problem seems limited to the wifi network, which is on a separate VLAN. The wired network does not share the problem, but I know it's not the wifi connection itself because the internal sites work.
    The wifi access point is attached to a 3Com 4200 switch, with the port set to VLAN 2 untagged, VLAN 1 tagged. The 4200 has a fiber connection to a 3Com 4900SX fiber switch that acts almost as a router here. The fiber connection is VLAN 1 untagged, VLAN 2 tagged at both ends. The gateway is then attached to a different 4200 (VLAN 1 untagged, VLAN 2 tagged) that has a similar fiber connection to the 4900SX. VLAN 2 has 192.168.8.0/22 IPs, VLAN 1 has 10.1.0.0/16 IPs. The 4900SX has an interface on both VLANs (10.1.1.1/192.168.8.1), as does the gateway (10.1.1.5/192.168.8.5). There is one DHCP server for both VLANs on the same switch as the gateway. It chooses a DHCP scope based on the interface the 4900SX used to forward the DHCP request. There is also a network access list on the 4900SX set to deny all VLAN 2 traffic to any 10.1.x.x host, with exceptions made for a few servers, including the DHCP server, the 4900SX, and the gateway. I think that about covers it. Any ideas on why internet access would be delayed like this?

    Read the article

  • Connect two networks

    - by Meek Barrios
    Connecting two different offices with a wireless link and Linux boxes. Hardware: 2 Cisco RV42, 2 dual-homed Linux boxes running Debian, 2 2Wire modems and 2 AirMax 5 units. The configuration is:
        Office A: LAN A (10.1.1.0/24) -> RV42 A (WAN1 - 10.1.1.254) -> 2Wire A (Internet)
        LINUX A: ETH0 (LAN) 10.1.1.253, ETH1 (LINK) 10.1.3.3
        Wireless link: AirMax A <-> AirMax B connected as a wireless bridge
        Office B: LAN B (10.1.2.0/24) -> RV42 B (WAN1 - 10.1.2.254) -> 2Wire B (Internet)
        LINUX B: ETH0 (LAN) 10.1.2.253, ETH1 (LINK) 10.1.3.4
    The network configuration is:
        LAN A - default gateway 10.1.1.254
        RV42 A - static route 10.1.3.0/24 via 10.1.1.253; static route 10.1.2.0/24 via 10.1.1.253; default on 192.168.1.1 (WAN1 internet access)
        Linux A - ETH0 10.1.1.253 netmask 255.255.255.0 gw 10.1.1.254; ETH1 10.1.3.3 netmask 255.255.255.0 gw 10.1.3.1
        AirMax A - 10.1.3.1 netmask 255.255.255.0 gw 10.1.3.1
        LAN B - default gateway 10.1.2.254
        RV42 B - static route 10.1.3.0/24 via 10.1.2.253; static route 10.1.1.0/24 via 10.1.2.253; default on 192.168.1.1 (WAN1 internet access)
        Linux B - ETH0 10.1.2.253 netmask 255.255.255.0 gw 10.1.2.254; ETH1 10.1.3.4 netmask 255.255.255.0 gw 10.1.3.2
        AirMax B - 10.1.3.2 netmask 255.255.255.0 gw 10.1.3.2
    Both Linux boxes have ip_forward set to 1 and the following in iptables:
        iptables -F
        iptables -X
        iptables -P FORWARD ACCEPT
        iptables -P INPUT ACCEPT
        iptables -P OUTPUT ACCEPT
    From Linux B I can ping any IP on the 10.1.1.0/24 segment, and from Linux A any IP on the 10.1.2.0/24 segment, however I cannot connect to HTTP or FTP on those machines. From LAN A I cannot see any other network. I'm looking for some advice on this configuration or a better solution. Regards
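
    Ping working while TCP services (HTTP/FTP) fail across a wireless bridge is a classic symptom of an MTU/path-MTU problem: small ICMP packets fit through the link, full-size TCP segments don't. A hedged thing to try on both Linux routers, plus a way to measure the real path MTU:
        # clamp the TCP MSS of forwarded connections to whatever the outgoing link allows
        iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
        # probe the path MTU with non-fragmenting pings, e.g. from Linux A towards LAN B
        ping -M do -s 1472 10.1.2.253
    If the 1472-byte probe fails but smaller sizes pass, the AirMax link is eating large frames and MSS clamping (or lowering the MTU on the ETH1 interfaces) should get TCP flowing.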

    Read the article

  • Nginx deny doesn't work for folder files

    - by user195191
    I'm trying to restrict access to my site to allow only specific IPs, and I've got the following problem: when I access www.example.com, deny works perfectly, but when I try to access www.example.com/index.php it returns the "Access denied" page AND the php file is downloaded directly in the browser without processing. I want to deny access to all the files on the website for all IPs but mine. How should I do that? Here's the config I have:
        server {
          listen 80;
          server_name example.com;
          root /var/www/example;
          location / {
            index index.html index.php;    ## Allow a static html file to be shown first
            try_files $uri $uri/ @handler; ## If missing pass the URI to front handler
            expires 30d;                   ## Assume all files are cachable
            allow my.public.ip;
            deny all;
          }
          location @handler {              ## Common front handler
            rewrite / /index.php;
          }
          location ~ .php/ {               ## Forward paths like /js/index.php/x.js to relevant handler
            rewrite ^(.*.php)/ $1 last;
          }
          location ~ .php$ {               ## Execute PHP scripts
            if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss
            expires off;                   ## Do not cache dynamic content
            fastcgi_pass 127.0.0.1:9001;
            fastcgi_param HTTPS $fastcgi_https;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;        ## See /etc/nginx/fastcgi_params
          }
        }

    Read the article

  • Did my registrar screw up or is this how name server propagation works?

    - by Brad
    So my company has a number of domains with a large registrar that shall go unnamed. We are making some changes to our DNS infrastructure, and the first of those is moving our secondary DNS from one server on site to four servers offsite. So we updated the name servers for each domain at the registrar by removing the entry for the old secondary name server and adding the four new ones. I monitored the old secondary server for requests, and when I saw no new requests had been made for 24 hours I shut it down. That was this morning. I assumed at this point everything was good. Unfortunately this was my mistake; I should have gone and made sure name servers at large were returning the correct NS records.
    So this afternoon we were performing maintenance on our primary DNS server and we shut it down. This is when I started getting alerts from our external monitoring. I checked and sure enough, the DNS server used there reported that the only NS record for our primary domain was the primary name server. The new secondary servers were not listed and neither was the old secondary.
    Is it unreasonable of me to have assumed that because the update was from
        ns1.mydomain.com
        ns2.mydomain.com
    to
        ns1.mydomain.com
        ns1.backupdns.com
        ns2.backupdns.com
        ns3.backupdns.com
        ns4.backupdns.com
    in one step at the registrar, there should be no intermediate state where the only NS record was for ns1.mydomain.com? Going forward, to be safe, I will obviously always leave the old name servers alone until I'm 100% sure the new ones have propagated, and only then remove them at the registrar. However, I'd still like to know if my registrar screwed up or if my expectation was unreasonable.
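
    For future changes, it helps to compare what the parent zone is delegating (which reflects what the registrar actually pushed) against what the zone itself answers, since resolvers can end up using either set. A hedged check using the poster's placeholder names:
        # NS set according to a public resolver's cache
        dig NS mydomain.com @8.8.8.8 +short
        # delegation as published at the .com TLD servers
        dig NS mydomain.com @a.gtld-servers.net +norecurse
        # NS set according to your own primary
        dig NS mydomain.com @ns1.mydomain.com +short
    If the TLD answer already lists all five servers but cached resolvers only show one, the gap is TTL/caching rather than the registrar dropping entries mid-update.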

    Read the article

  • Cheap Solution for Routing a Toll Free Number to a Standard POTS Number

    - by VxJasonxV
    I do some technical work for an internet radio show/podcast, and need to fix something that has been broken for a while. The hosts have a Skype-In number to take listener calls, and for convenience's sake, I bought and paid for a toll free number for a period of time. I used to use Asterlink for routing calls, but they folded and sent my number to OneBox, and they're ridiculously expensive by comparison. I'm looking for a cheap solution for this one simple task: forward toll free calls to a Skype-In number. The definition of cheap is as cheap as or cheaper than Asterlink was. I paid something like $2 a month, plus the termination/call rate, which was a fraction of a cent for termination and only reached whole cents after some serious time on the call. A $20 preload lasted me months at a time. I don't want to be upsold to, I want a simple web based management screen (CDRs/stats are fun!), and obviously, it needs to be reliable. What vendors out there are you a fan of that solve this need?

    Read the article

  • Caching all files in varnish

    - by csgwro
    I want my varnish servers to cache all files. The backend is lighttpd hosting only static files, and there is an md5 in the URL in case of file change (e.g. /gfx/Bird.b6e0bc2d6cbb7dfe1a52bc45dd2b05c4.swf). However my hit ratio is very poor (about 0.18). My config:
        sub vcl_recv {
          set req.backend=default;
          ### passing health to backend
          if (req.url ~ "^/health.html$") {
            return (pass);
          }
          remove req.http.If-None-Match;
          remove req.http.cookie;
          remove req.http.authenticate;
          if (req.request == "GET") {
            return (lookup);
          }
        }
        sub vcl_fetch {
          ### do not cache wrong codes
          if (beresp.status == 404 || beresp.status >= 500) {
            set beresp.ttl = 0s;
          }
          remove beresp.http.Etag;
          remove beresp.http.Last-Modified;
        }
        sub vcl_deliver {
          set resp.http.expires = "Thu, 31 Dec 2037 23:55:55 GMT";
        }
    I have done some performance tuning:
        DAEMON_OPTS="${DAEMON_OPTS} -p thread_pool_min=200 -p thread_pool_max=4000 -p thread_pool_add_delay=2 -p session_linger=100"
    The main URL which is missed is... /health.html. Is that pass to the backend configured correctly? With health checking disabled the hit ratio increases to 0.45; now mostly "/crossdomain.xml" is missed (from many domains, as it is a wildcard). How can I avoid that? Should I take other headers like User-Agent or Accept-Encoding into account? I think the default hashing mechanism uses url + host/IP. Compression is used at the backend. What else can improve performance?
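
    Since /health.html is deliberately return (pass)'d, every health poll counts against the hit ratio even though it is intentional, and with host included in the hash each domain gets its own cached copy of /crossdomain.xml. A hedged way to see exactly what is going to the backend while tuning (tag and counter names are from the Varnish 2.x tooling):
        # URLs currently being fetched from the backend, i.e. misses and passes
        varnishtop -i TxURL
        # overall hit/miss/pass counters
        varnishstat -1 | egrep 'cache_hit|cache_miss|s_pass'
    Normalising Accept-Encoding in vcl_recv and giving host-independent objects like /crossdomain.xml a custom hash are the usual VCL-side follow-ups, but verifying the traffic mix first avoids tuning blind.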

    Read the article

  • Distributed storage and computing

    - by Tim van Elteren
    Dear Serverfault community,
    After researching a number of distributed file systems for deployment in a production environment, with the main purpose of performing both batch and real-time distributed computing, I've identified the following list of potential candidates, mainly on maturity, license and support:
        Ceph
        Lustre
        GlusterFS
        HDFS
        FhGFS
        MooseFS
        XtreemFS
    The key properties that our system should exhibit:
        open source, liberally licensed, yet production ready, i.e. a mature, reliable, community and commercially supported solution;
        ability to run on commodity hardware, preferably designed for it;
        high availability of the data, with the most focus on reads;
        high scalability, so operation over multiple data centres, possibly on a global scale;
        removal of single points of failure with the use of replication and distribution of (meta-)data, i.e. fault tolerance.
    The sensitivity points that were identified, and resulted in the following questions, are:
        transparency to the processing layer / application with respect to data locality, i.e. knowing where data is physically located at a server level, mainly for resource allocation and fast, high-performance processing: how can this be accomplished? Do you know from experience which solutions provide this transparency, and to what extent?
        POSIX compliance, or conformance, is mentioned on the wiki pages of most of the solutions listed above. The question here mainly is: how relevant is support for the POSIX standard? Hadoop, for example, isn't POSIX compliant by design; what are the pros and cons?
        what about the difference between synchronous and asynchronous operation of a distributed file system? Though a synchronous distributed file system is preferable for reliability, it also imposes certain limitations with respect to scalability. What would be, from your expertise, the way to go on this?
    I'm looking forward to your replies. Thanks in advance! :)
    With kind regards, Tim van Elteren

    Read the article

  • Iptables ignoring a rule in the config file

    - by Overdeath
    I see a lot of established connections to my apache server from the IP 188.241.114.22, which eventually causes apache to hang. After I restart the service everything works fine. I tried adding a rule in iptables:
        -A INPUT -s 188.241.114.22 -j DROP
    but despite that I keep seeing connections from that IP. I'm using CentOS and I'm adding the rule like this:
        iptables -A INPUT -s 188.241.114.22 -j DROP
    Right after that I save it using:
        service iptables save
    Here is the output of iptables -L -v:
        Chain INPUT (policy ACCEPT 120K packets, 16M bytes)
        pkts bytes target prot opt in  out source                               destination
           0     0 DROP   all  --  any any lg01.mia02.pccwbtn.net               anywhere
           0     0 DROP   all  --  any any c-98-210-5-174.hsd1.ca.comcast.net   anywhere
           0     0 DROP   all  --  any any c-98-201-5-174.hsd1.tx.comcast.net   anywhere
           0     0 DROP   all  --  any any lg01.mia02.pccwbtn.net               anywhere
           0     0 DROP   all  --  any any www.dabacus2.com                     anywhere
           0     0 DROP   all  --  any any 116.255.163.100                      anywhere
           0     0 DROP   all  --  any any 94.23.119.11                         anywhere
           0     0 DROP   all  --  any any 164.bajanet.mx                       anywhere
           0     0 DROP   all  --  any any 173-203-71-136.static.cloud-ips.com  anywhere
           0     0 DROP   all  --  any any v1.oxygen.ro                         anywhere
           0     0 DROP   all  --  any any 74.122.177.12                        anywhere
           0     0 DROP   all  --  any any 58.83.227.150                        anywhere
           0     0 DROP   all  --  any any v1.oxygen.ro                         anywhere
           0     0 DROP   all  --  any any v1.oxygen.ro                         anywhere
        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
        pkts bytes target prot opt in  out source                               destination
        Chain OUTPUT (policy ACCEPT 186K packets, 224M bytes)
        pkts bytes target prot opt in  out source                               destination
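
    One thing the listing suggests (as a hypothesis): the DROP for 188.241.114.22 does not appear in the running INPUT chain at all, which usually means the rule was added to a different/flushed ruleset, or a later service iptables restart reloaded an older /etc/sysconfig/iptables over it. A hedged way to verify, make the block effective immediately, and also kill the host's already-established connections (conntrack is from the conntrack-tools package):
        # confirm whether the rule is actually loaded
        iptables -L INPUT -n -v --line-numbers | grep 188.241.114.22
        # insert the drop at the very top of the chain instead of appending it
        iptables -I INPUT 1 -s 188.241.114.22 -j DROP
        # flush that host's existing connection-tracking entries so current sessions die too
        conntrack -D -s 188.241.114.22
        service iptables save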

    Read the article

  • Office 2010 OCT Outlook Filepaths

    - by vlannoob
    I'm playing around with customizing Office 2010 installs on my network. Normally I just do a full manual install, but as the environment grows and the lazier I get, it's becoming a pain to do it manually every time. I've read up on and downloaded the Office 2010 OCT tool and it looks relatively straightforward - with one exception - the Outlook profile. I can 'get around it' by just leaving it all as default (or not enabling offline use), but I'd like to customise it slightly so that it's all set up no matter who logs onto the PC. The only issue I have, and my question is: in the OCT's Outlook section, what do you enter as the path and filename for the OST file and the Offline Address Book settings under the Enable Offline Use section? I'm sweet with everything else - just that one section, and I think if I bugger that one up it will kill the whole Outlook profile?? It would need to go into each user's unique profile path, correct? I have a fair idea of what should be there but I'm struggling with the correct syntax. I know this is a stupid question... but it's late in the day and my brain is fried ;) As usual - any and all help/assistance is appreciated ;)

    Read the article

  • Easiest way to allow direct HTTPS connection in Intercept mode?

    - by Nicolo
    I know the SSL issue has been beaten to death. I'm using DNS redirection to force my clients through my intercepting proxy. As we all know, intercepting an HTTPS connection is not possible unless I provide a fake certificate. What I want to achieve here is to allow all HTTPS requests to connect directly to the origin server, thus bypassing Squid:
        HTTP connection: proxied by Squid
        HTTPS connection: bypass Squid and connect directly
    I spent the past few days googling and trying different methods but none have worked so far. I read about SSL tunneling using the CONNECT method but couldn't find any more information on it. I tried a similar method using RINETD to forward all traffic arriving on port 443 of my Squid box back to the original IP of www.pandora.com. Unfortunately, I did not realize that all other HTTPS requests would also be forwarded to the IP of www.pandora.com; for example, https://www.gmail.com also takes me to https://www.pandora.com. Since I'm running intercept mode, the forwarding needs to be dynamic and match each HTTPS domain name with its proper original IP. Can this be done in Squid or iptables?
    Lastly, I'm directing traffic to my Squid server using a DNS zone redirect. For example, a client requests www.google.com, my DNS server directs that request to my Squid IP, and then my transparent Squid proxies the request. Will this setup affect what I'm trying to achieve? I tried many methods but couldn't get it to work. Any takes on how to do this?

    Read the article

  • Preserve embedded album art when converting from .flac to .ogg

    - by Profpatsch
    I want to convert my archived .flac library to .ogg for daily use. Using
        find ./ -iname '*.flac' -print0 | xargs -0 -n1 oggenc -q6
    on the root music folder and then deleting every .flac (having copies of them in the archive) seems straightforward. After trying it with one file it worked, and all of the tags were transferred too, except for one: embedded album art! I always prefer embedded covers over folder images, since I have some albums with varying covers. One possible solution is discussed here, but the script only works if the image is already extracted: Embed album art in OGG through command line in linux
    One possible solution I thought about was extracting the album art from every song (not every song has one, though, and some even have 2 or 3!), temporarily saving it, and then using the script to include it in the finished .ogg. But then I want to increase the number of processes xargs runs simultaneously to save time, so the temp images need distinct names. Is there a (Linux) program that knows how to handle this? Or is there a finished script floating around somewhere? It would be nice if oggenc supported adding embedded cover art; it really is a shame, since these two formats should (in theory) share the same tag format.
    Edit: 15 days and no one has even tried to answer. It's funny, most of my questions don't get answered. Too hard? Wrong SE site?
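
    One hedged way to avoid temp-file name clashes under parallel xargs is to derive each temporary image name from the track itself; metaflac can export embedded pictures, and the embedding step could then call whatever script the linked answer provides (here assumed to exist as a hypothetical embed-cover.sh taking the image and the .ogg):
        #!/bin/sh
        # sketch: convert one .flac to .ogg and carry over the first embedded picture, if any
        f="$1"
        cover="${f%.flac}.cover.jpg"                   # temp name derived from the track, so parallel runs don't collide
        oggenc -q6 "$f"
        if metaflac --export-picture-to="$cover" "$f" 2>/dev/null; then
            embed-cover.sh "$cover" "${f%.flac}.ogg"   # hypothetical embedding helper from the linked script
            rm -f "$cover"
        fi
    Called per file via find ./ -iname '*.flac' -print0 | xargs -0 -n1 -P4 ./convert-one.sh, the -P4 gives the parallelism mentioned above.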

    Read the article

  • 7ZIP - Command Line Compression | Can Never Keep it Simple

    - by OneTwoYou
    I've been googling for a few hours on how to compress just a file inside a directory and I can't find anything. I found how to compress a folder in general; now I wish to know how to compress a folder inside a folder, along with a file, without including the outer folders. Current code:
        7zG.exe a -tzip "test.zip" dontcompressme/compressme/new.txt
        pause
    As you can see above, I don't want to compress the first folder, but only the second and whatever is within that folder. I have the 7zG.exe sitting in the main folder and I have some files that are three folders in, but I don't know how to only compress those. Here is my directory list:
        Folder One (don't compress)
          Folder Two (don't compress)
            Folder Three (okay to compress)
            Document One.txt (okay to compress)
            Document Two.txt (okay to compress)
            Index.html (okay to compress)
    Does anyone know how I can do this in the simplest way ever invented by man? Because whenever I go to a website via Google it walks through all these methods on how to compress a folder, but not the way I wish to do it. It makes me kind of upset that I can't get a simple and straightforward answer. Thank you if you answer my question.
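
    7-Zip stores archive paths relative to the current directory, so one hedged approach (in the same batch-file style as the snippet above, with the example folder names from the question) is to change into the deepest folder you don't want included before calling it:
        cd dontcompressme\compressme
        ..\..\7zG.exe a -tzip "..\..\test.zip" *
        cd ..\..
        pause
    This should store new.txt (and anything else inside compressme) at the top level of test.zip instead of under dontcompressme\compressme\.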

    Read the article

  • How to limit reverse SSH tunnelling ports?

    - by funktku
    We have a public server which accepts SSH connections from multiple clients behind firewalls. Each of these clients creates a reverse SSH tunnel using the ssh -R command from their web servers at port 80 to our public server. The destination port (at the client side) of the reverse SSH tunnel is 80 and the source port (at the public server side) depends on the user. We are planning on maintaining a map of port addresses for each user. For example, client A would tunnel their web server at port 80 to our port 8000; client B from 80 to 8001; client C from 80 to 8002.
        Client A: ssh -R 8000:internal.webserver:80 clienta@publicserver
        Client B: ssh -R 8001:internal.webserver:80 clientb@publicserver
        Client C: ssh -R 8002:internal.webserver:80 clientc@publicserver
    Basically, what we are trying to do is bind each user to a port and not allow them to tunnel to any other ports. If we were using the forward tunneling feature of SSH with ssh -L, we could control which port may be tunneled by using the permitopen=host:port option. However, there is no equivalent for reverse SSH tunnels. Is there a way of restricting reverse tunneling ports per user?
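
    For what it's worth, newer OpenSSH releases (7.8 and later) did grow a reverse-tunnel counterpart to permitopen: the permitlisten authorized_keys option, which restricts which ports a key may request with -R. A sketch of how the per-client mapping above could look, with key material elided:
        # ~clienta/.ssh/authorized_keys on publicserver
        permitlisten="8000",no-pty,no-X11-forwarding ssh-ed25519 AAAA... clienta
        # ~clientb/.ssh/authorized_keys on publicserver
        permitlisten="8001",no-pty,no-X11-forwarding ssh-ed25519 AAAA... clientb
    Older OpenSSH versions (including those current when this was asked) have no equivalent knob in authorized_keys, which is presumably why the question came up.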

    Read the article

  • Steps to deploy a custom routing protocol

    - by user134589
    I'm a Ph.D student and I'm researching a service-centric networking architecture with resource allocation on a large scale. What I'm looking to do is extend an existing routing protocol like OSPF with extra fields and some new message types that I need for communication between nodes. I want to manipulate the cost of a network link, and I want paths to be calculated as in OSPFv2/v3, but using the cost that my algorithms have calculated.
    What I have: the source code of OSPF from Quagga. I am assuming I can edit this code however I want, including packet structures and creating new types. Yes, I am aware it won't be easy, but this is a 6-year research project and I am eager to develop something new, to move forward.
    What I need: I would like to know how I can deploy the edited OSPF source files I have (written in C) on any type of server. I have a large testbed environment available with hundreds of virtual nodes and pretty much any OS out there. So if I want to test my extended protocol, how do I make all the nodes in a network use it to communicate? I do not understand what parts of the kernel I need to edit here. I have tried searching for days now and I am unable to find out how to deploy a non-existing routing protocol without the use of an application-level framework. If somebody could push me in the right direction that'd be awesome.
    Note: I need this to be a routing protocol and not an application, since I want this to work on top of the network layer for performance reasons. Thanks!

    Read the article

  • Firefox v15.0 about:newtab inaccessible via the back button

    - by Willem
    As of this morning I upgraded my Firefox to version 15.0. Or rather, it did it for me. That's when my issue began. I've configured Firefox so that when I open a new tab via Ctrl-T it loads the about:newtab page which was introduced in version 14.x(?). I use this functionality extensively: I visit a page from the about:newtab page, read it, press the back button and pick another frequent website from the display to visit. However, as of v15.0 the latter part no longer works: after visiting a website from the about:newtab page, the back button will not let me go back to the about:newtab page. Firefox acts as if that history entry was never there. I've tried searching the release notes to see if this change was intentional or a reported bug, but I have not found anything related so far. Is there any way to (re)configure Firefox v15.0 to let me access the about:newtab page via a tab's history?
    Edit: It appears that this only affects the initial 'visit' to the about:newtab page when opening a new tab; manually visiting about:newtab does register a history entry and allows navigating to it via the back and forward buttons. So I guess this changes my question to: 'How can I make Firefox 15.0 treat the initial page of a new tab as a history entry?'

    Read the article

  • Trying to communicate between virtual servers on the same host through ipv6

    - by Daniele Testa
    I am running KVM on a host with 2 virtual servers. Each virtual server has its own bridge interface on the host:
        VPS1 has br1
        VPS2 has br2
    Each virtual server has its own IPv4 and IPv6 address. The virtual servers have no problem communicating with the internet or with each other over IPv4. However, over IPv6, they can only communicate with the internet and NOT with each other. The host can ping the 2 virtual servers without any problems, but they cannot ping each other. iptables has been set to ACCEPT on all chains, so that is not the problem.
        VPS1 has ipv6 = 2a01:4f8:xxx:xxx::10
        VPS2 has ipv6 = 2a01:4f8:xxx:xxx::5
    The host has the following routes set:
        ip route add 2a01:4f8:xxx:xxx::10 dev br1
        ip route add 2a01:4f8:xxx:xxx::5 dev br2
    When I do a ping from VPS2 to VPS1, I see the following on the host:
        tcpdump -i br1
        15:32:27.704404 IP6 2a01:4f8:xxx:xxx::10 > ff02::1:ff00:5: ICMP6, neighbor solicitation, who has 2a01:4f8:xxx:xxx::5, length 32
    So it seems like the host is seeing the request coming from VPS1 on br1, but for some reason it does not forward it to br2. Instead, it is asking where the destination IP is via an IPv6 neighbor solicitation (multicast). Anyone have a clue what is going on? I find this very strange, as it works fine with IPv4 using the exact same settings and routes.
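
    Because the two guests believe they share an on-link prefix but actually sit behind separate bridges with /128 host routes, their neighbor solicitations never reach each other. A hedged thing to try on the host is enabling IPv6 forwarding plus proxy NDP, so the host answers the solicitations and relays the traffic between br1 and br2:
        # on the KVM host
        sysctl -w net.ipv6.conf.all.forwarding=1
        sysctl -w net.ipv6.conf.all.proxy_ndp=1
        # answer neighbor solicitations for each guest on the *other* guest's bridge
        ip -6 neigh add proxy 2a01:4f8:xxx:xxx::5  dev br1
        ip -6 neigh add proxy 2a01:4f8:xxx:xxx::10 dev br2
    This mirrors what MASQUERADE/ARP handling does implicitly in the IPv4 case, which would explain why IPv4 works with the same routes while IPv6 does not.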

    Read the article
