Something is making every browser, on every operating system and computer on my home network, randomly open web pages that ask to install some kind of Microsoft protection software. What could cause that?
How can I configure Apache to ProxyBlock content based on something dynamic, such as time of day or maximum use? Basically, I'm curious about the scriptability of Apache. My web-stumbling leads me to believe I can combine mod_proxy and mod_perl in interesting ways to do dynamic filtering, but I'm pretty lost. What are some general instructions, tutorials, books, or technologies for getting started with scripting Apache (or any suitable proxy)?
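For the specific time-of-day case, plain mod_rewrite can get partway before any mod_perl scripting is needed. A minimal sketch, assuming the proxy and the rules live in the same server config and example.com is a hypothetical site to block during office hours:

```apache
RewriteEngine On
# Refuse requests for example.com between 09:00 and 17:59 (server local time)
RewriteCond %{HTTP_HOST} ^(www\.)?example\.com$ [NC]
RewriteCond %{TIME_HOUR} >08
RewriteCond %{TIME_HOUR} <18
RewriteRule . - [F]
```

The %{TIME_HOUR} server variable is standard mod_rewrite; anything stateful like a per-user max-use counter would indeed need something programmable such as a mod_perl handler or an external RewriteMap program.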
At work, on certain projects I have to manage a lot of images. Most of the time PNG files work the best for what I'm doing.
With such a huge number of images, I've tried compressing them with PNG Gauntlet, but sometimes the file barely changes, and sometimes PNG Gauntlet reports that it would have made the file size bigger!
Am I just maxing out the compression or is there something more I can do?
It's possible to rename files all at once, but it's not possible to change the extension of selected files all at once.
If Windows doesn't support this, is there a batch script or something that can do it?
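In case it helps to answer: the classic approach from a Command Prompt opened in the folder (the .txt and .bak extensions here are placeholders; substitute your own):

```bat
rem Change the extension of every matching file in the current folder
ren *.txt *.bak
rem Or, recursively through subfolders (use %%x instead of %x inside a .bat file)
for /R %x in (*.txt) do ren "%x" *.bak
```

This only covers whole-extension swaps per wildcard pattern, not an arbitrary selection of files, which seems to be the gap in Explorer you're describing.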
When I load a page that can't be connected to for some reason, I have to wait about three minutes before Firefox shows the error "The connection has timed out". I want to change the Firefox connection timeout value to 20 seconds. I found something here: http://stackoverflow.com/questions/1342310/where-can-i-find-the-default-timeout-settings-for-all-browsers, but it doesn't mention how to set the connection timeout value. Please help!
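For reference, the preference that appears to govern this lives in about:config (assuming a reasonably current Firefox); the value is in seconds:

```
network.http.connection-timeout = 20
```

Note this covers the TCP connection attempt itself; a server that accepts the connection but responds slowly is a different timeout.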
Is there still a use for this key in modern operating systems? I know that back in the days of a rapid-fire dir /s on ten thousand files in DOS 5.5 this key was indispensable, but is it needed anymore? If not, can I remap it to do something else? If so, what?
I use Tunnelblick and OpenVPN on the Mac, but when I choose to disconnect from the VPN I lose all network connectivity and have to reboot to get back online.
Is there a way to get a new IP or something, without having to reboot?
A slightly open question regarding best practices: I can find lots of functional guides for Git, but not much info about the standard ordering of operations, etc.
What's the standard/nice way of working with remote repositories, specifically for making a change and taking it all the way back to the remote master? Can someone provide a step-by-step list of the procedures they normally follow when doing this? I.e., something like:
1) clone repo
2) create new local branch of head
3) make changes locally and commit to local branch
4) ...
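Filled out as concrete commands, this is the sort of sequence I mean (the remote URL and branch name are placeholders, and I'm assuming a rebase-then-push style of integration):

```
git clone git@example.com:project.git   # 1) clone repo
cd project
git checkout -b my-feature              # 2) create a local branch off HEAD
# ...edit files...
git add -A
git commit -m "Describe the change"     # 3) commit to the local branch
git fetch origin                        # 4) pick up upstream changes
git rebase origin/master                #    replay my branch on the latest master
git push origin my-feature              # 5) publish, then merge into master
```

Whether step 5 ends with a pull request or a direct merge to master is exactly the kind of convention I'm asking about.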
I would like to allow one IP to use up to, say, 1 GB of traffic per day; if that limit is exceeded, all requests from that IP are then dropped until the next day. However, a simpler solution where the connection is dropped after a certain number of requests would suffice.
Is there already some sort of module that can do this? Or perhaps I can achieve this through something like iptables?
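On the iptables side, stock netfilter has no built-in per-IP daily byte counter, but the simpler "drop after too many requests" variant can be approximated with the hashlimit match. A sketch, assuming HTTP on port 80 and purely illustrative limits (requires a reasonably recent iptables with --hashlimit-above):

```
# Track rates per source IP; drop new connections from any IP
# exceeding 100 new connections per minute
iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
  -m hashlimit --hashlimit-name perip --hashlimit-mode srcip \
  --hashlimit-above 100/minute --hashlimit-burst 20 -j DROP
```

A true per-IP byte quota per day would need external accounting (log parsing, or a web-server module) rather than plain iptables.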
Thanks
I'm trying to set up an OpenVPN "chain", similar to what is described here.
I have two separate networks, A and B. Each network has an OpenVPN server using a standard "road warrior" or "client/server" approach. A client can connect to either one for access to the hosts/services on that respective network.
But server A and B are also connected to each other. The servers on each network have a "site-to-site" connection between the two.
What I'm trying to accomplish is the ability to connect to network A as a client, and then make connections to hosts on network B. I'm using tun/routing for all of the VPN connections. The "chain" looks something like this:
[Client] --- [Server A] --- [Server A] --- [Server B] --- [Server B] --- [Host B]
 (tun0)       (tun0)         (tun1)         (tun0)         (eth0)        (eth0)
The whole idea is that server A should route traffic destined to network B through the "site-to-site" VPN set up on tun1 when a client from tun0 tries to connect.
I did this simply by setting up two connection profiles on server A. One profile is a standard server config running on tun0, defining a virtual client network, IP address pool, pushing routes, etc. The other is a client connection to Server B running on tun1. With ip_forwarding enabled, I then simply added a "push route" to the clients advertising a route to network B.
On server A, this seems to work when I look at tcpdump output. If I connect as a client, and then ping a host on network B, I can see the traffic getting passed from tun0 to tun1 on Server A:
tcpdump -nSi tun1 icmp
The weird thing is that I don't see Server B receiving that traffic through the tunnel. It's as if Server A is sending it through the site-to-site connection like it should, but server B is completely ignoring it. When I look for the traffic on Server B, it simply isn't there.
A ping from Server A -- Host B works fine. But a ping from a client connected to Server A to host B does not.
I'm wondering if Server B is ignoring the traffic because the source IP does not match the client IP pool that it hands out to clients? Does anyone know if I need to do something on Server B in order for it to see the traffic?
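For what it's worth, the usual suspect in this topology is exactly what's described above: Server B drops packets whose source addresses it has no route back for. A sketch of what Server B would typically need, assuming (purely as an example) that Server A's client pool is 10.8.0.0/24; substitute your actual pool:

```
# In Server B's site-to-site server config: route the remote client network
# into the tunnel
route 10.8.0.0 255.255.255.0

# In Server B's client-config-dir file for Server A's connection:
# tell OpenVPN which client owns that network
iroute 10.8.0.0 255.255.255.0
```

Without the iroute line, OpenVPN on Server B silently discards packets from source addresses outside the pool it handed out, which would match the tcpdump symptoms described.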
This is a complicated problem to explain, so thanks if you stuck with me this far.
I switch between three language input methods frequently, sometimes in the same typing session. The default on a Macintosh seems to be keyboard shortcuts for the previous/next input language (I hit Opt-Cmd-Space to go to the previous language), and if you're more than bilingual you have to cycle until you find the one you want.
The ideal would be something like hitting fn-e for English, fn-j for Japanese and fn-g for German, but anything better than the current state would be a great improvement.
I am searching for a list of faults that may occur in a traditional IP network.
To give you a better understanding of what I am looking for:
For an MPLS-IP network, the set of faults may be something like what is given on this Cisco site.
I want pointers to such kind of faults for a traditional IP network.
Individual suggestions are welcome, but in making them, please also provide a link to the official site where you found those failure scenarios.
I have an Amazon EC2 instance with an SSL certificate from GoDaddy just installed.
I have successfully managed to install it and get the green https on the main domain, https://www.example.com.
However, https://www.example.com/something doesn't work, even though the same route works under http://www.example.com.
I am using an .htaccess file for some rewriting:
Options -MultiViews
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^ index.php [L]
The EC2 instance runs Ubuntu, if that helps in any way.
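In case it's relevant: on Ubuntu the HTTP and HTTPS sites are separate virtual hosts, and a common cause of this exact symptom is the SSL vhost not permitting .htaccess overrides. A sketch of the check (paths are the Ubuntu defaults; yours may differ):

```apache
# /etc/apache2/sites-available/default-ssl (or whichever file defines the :443 vhost)
<VirtualHost *:443>
    DocumentRoot /var/www
    <Directory /var/www>
        AllowOverride All   # without this, the .htaccess RewriteRule is ignored
    </Directory>
</VirtualHost>
```

If AllowOverride is None in the SSL vhost, the rewrite works over http but every https route except / returns 404.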
I'm connecting to another computer via RDP. I would like to click on links inside my RDP session and have them open in a browser on my client computer. It feels like I could install some application on both ends, have them communicate over TCP, and proxy the URL opening.
Does something like this exist?
I come from a web developer background and have been humming along building my PHP app, using the CakePHP framework. The problem arose when I began the ab (Apache Bench) testing on the Amazon EC2 instance in which the app resides. I'm getting pretty horrendous average page load times, even though I'm running a c1.medium instance (2 cores, 2GB RAM), and I think I'm doing everything right.
I would run:
ab -n 200 -c 20 http://localhost/heavy-but-view-cached-page.php
Here are the results:
Concurrency Level: 20
Time taken for tests: 48.197 seconds
Complete requests: 200
Failed requests: 0
Write errors: 0
Total transferred: 392111200 bytes
HTML transferred: 392047600 bytes
Requests per second: 4.15 [#/sec] (mean)
Time per request: 4819.723 [ms] (mean)
Time per request: 240.986 [ms] (mean, across all concurrent requests)
Transfer rate: 7944.88 [Kbytes/sec] received
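One number above is worth calling out: the total transferred divided by the request count shows how heavy each response is, which by itself bounds requests per second. A quick check, with the figures copied from the ab summary:

```shell
# 392,111,200 bytes over 200 requests (from the ab output above)
echo $((392111200 / 200))   # bytes per response: 1960556, i.e. ~1.9 MB
```

That is consistent with the other numbers: 7944.88 KB/s of transfer at roughly 1915 KB per page works out to about 4 requests/second, so page weight, not only CPU, is capping this benchmark.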
While the ab test is running, I run vmstat, which shows that swap stays at 0, the CPU is constantly at 80-100% (although I'm not sure I can trust this on a VM), and RAM utilization ramps up to about 1.6 GB (leaving 400 MB free). Load goes up to about 8, and the site slows to a crawl.
Here's what I think I'm doing right on the code side:
In the Chrome browser, uncached pages typically load in 800-1000 ms, and
cached pages load in 300-500 ms. Not stunning, but not terrible either.
Thanks to view caching, there might be at most one DB query per page-load to write session data. So we can rule out a DB bottleneck.
I have APC on.
I am using Memcached to serve the view cache and other site caches.
xhprof code profiler shows that cached pages take up 10MB-40MB in
memory and 100ms - 1000ms in wall time.
Pages that would be the worst offenders would look something like this in xhprof:
Total Incl. Wall Time (microsec): 330,143 microsecs
Total Incl. CPU (microsecs): 320,019 microsecs
Total Incl. MemUse (bytes): 36,786,192 bytes
Total Incl. PeakMemUse (bytes): 46,667,008 bytes
Number of Function Calls: 5,195
My Apache config:
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 3
<IfModule mpm_prefork_module>
StartServers 5
MinSpareServers 5
MaxSpareServers 10
MaxClients 120
MaxRequestsPerChild 1000
</IfModule>
Is there something wrong with the server? Some gotcha with EC2? Or is it my code? Some obvious setting I should look into? Too many DNS lookups? What am I missing? I really want to get to 1,000-concurrency capacity, but at this rate, it ain't gonna happen.
I have a LaCie 2big Network that currently has 2 500GB drives in it (mirror).
I'd like to upgrade the drives to 1TB each using something like this
I know that LaCie sells a 1TB drive designed for the 2big Network, but it would seem to me that these are standard drives with the LaCie drive tray included.
Do I need to use their drives or can I get my own? (Their customer support pushes me towards their drives) I'm assuming the device can format the drives for me when I add them in.
I clicked on something, and now all my windows have a black boundary around them whenever they have focus. This happens on menu bar items as well when they are in focus. How do I remove it?
In XP Professional it's REALLY easy to change folder icons: there's a button in the Customize menu. However, I can't seem to set a folder icon in Home Edition: the button isn't there. Additionally, I can't seem to get a simple desktop.ini to do the trick either:
[.ShellClassInfo]
IconFile=icon.ico
IconIndex=0
Is there something I've missed?
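For completeness: desktop.ini is only honored when the folder itself carries the read-only or system attribute, which the Customize button normally sets for you behind the scenes. A sketch from a Command Prompt (the path is a placeholder):

```bat
rem Mark the folder as "system" so Explorer reads its desktop.ini
attrib +s "C:\Path\To\Folder"
rem Optionally hide the ini file itself
attrib +h "C:\Path\To\Folder\desktop.ini"
```

The icon.ico referenced by IconFile also needs to resolve relative to the folder (or be given as a full path).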
My organisation has an Exchange 2007 e-mail server, and now we want to host e-mail service for another organisation (neworg.com).
I added neworg.com as a new authoritative Accepted Domain, but when adding a new mailbox there is no option to choose the new SMTP domain neworg.com, and I can't add a new user with an SMTP address in the neworg.com domain.
I've probably misunderstood something while reading posts on the Internet, but can someone help, please?
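From what I've read, an accepted domain alone doesn't appear in the mailbox-creation wizard; the domain also has to be referenced by an e-mail address policy (or the address set manually on the recipient). A rough sketch in the Exchange Management Shell, with placeholder names, and scoped here to a custom attribute purely as an example; please verify the parameters against the 2007 cmdlet documentation:

```powershell
New-AcceptedDomain -Name "NewOrg" -DomainName "neworg.com" -DomainType Authoritative
New-EmailAddressPolicy -Name "NewOrg users" `
  -IncludedRecipients AllRecipients -ConditionalCustomAttribute1 "neworg" `
  -EnabledEmailAddressTemplates "SMTP:%m@neworg.com"
```

After creating the policy, Update-EmailAddressPolicy applies it to existing recipients.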
How do I limit internet traffic on a router based on a LAN IP address,
so that, for example, on a 10 Mb/s internet connection I can give an IP camera a dedicated 1 Mb/s, two computers 3 Mb/s, and two other computers 6 Mb/s?
As far as I know, this is called something like traffic shaping...
I'm really not sure what all of this is called, so please show me, or point me at, some guide for dummies. :)
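If the "router" is (or can be) a Linux box, this is usually done with tc and HTB queueing. A rough sketch, assuming eth0 faces the LAN and 192.168.1.10 is the camera; all interface names, IPs, and rates here are illustrative:

```
tc qdisc add dev eth0 root handle 1: htb default 30
tc class add dev eth0 parent 1:  classid 1:1  htb rate 10mbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 1mbit   # camera
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 3mbit   # first pair of PCs
tc class add dev eth0 parent 1:1 classid 1:30 htb rate 6mbit   # everything else
tc filter add dev eth0 parent 1: protocol ip u32 \
  match ip dst 192.168.1.10 flowid 1:10
```

On consumer routers the same idea usually appears in the web UI as "QoS" or "bandwidth control" per IP/MAC.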
I have CentOS with cPanel/WHM.
phpMyAdmin is installed through WHM.
I want to know which folder phpMyAdmin is installed in,
something like:
/var/www/whm/phpmyadmin
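One way to just search for it (assuming a standard layout, where /usr/local/cpanel is the prefix cPanel uses for its bundled applications):

```
find /usr/local/cpanel -maxdepth 4 -type d -iname '*phpmyadmin*'
```

The exact depth and directory name vary between cPanel versions, hence the wildcard.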
I'm stuck between the 'eVGA GTX 550 Ti 1GB Super OC' and the 'eVGA GTX 460 SE 1GB SuperClocked OC'. I know that the 460 is better in performance, but does the 550 Ti have any newer features that make it better?
I'll mostly use it for HD movies and HD gaming (Skyrim, L.A. Noire). I'd appreciate opinions about which card is better, and if you have one of these, please share some useful information.
Thanks.
I ordered a dedicated root server at Hetzner.
I have WHM and cPanel, and now I want to build a front site
for the hosting company where users can
control their accounts, manage the nameservers of their registered domains,
and register new domains and hosting accounts.
My question is:
Do I need to buy WHMCS for that, or something else?
I need to put a client zone on the front site, and I want to find the best solution for that.
Sorry if this is off-topic for this site,
but I just don't know where else I can ask this question :(
I am going to be running a small site (maybe 100 hits a week), and I am looking for a quick and easy way to share login information between the main website, a control panel (Webmin, cPanel, or something similar), and the forum.
One login should give access to any of the three. The website won't have much use for the login, per se, but it will display "logged in" when you are on the website.
Any custom solutions, thoughts, logic, or examples?