Search Results

Search found 33029 results on 1322 pages for 'access vba'.

Page 519 of 1322

  • How do you change a Cisco ASA 5510 management interface?

    - by Sam Sanders
    I want to add a redundant interface to my Cisco ASA 5510. The management interface currently uses Ethernet0/1 (10.1.25.254/24), which is one of the interfaces I want to put into the redundant pair, so I wanted to set up Management0/0 as the new management interface. The other interface I want in the redundant pair is Ethernet0/2 (10.1.0.254/24). The Ethernet0/3 (10.1.251.5/24) interface is not going to be part of the redundant interface. I gave Management0/0 an IP address of 10.1.254.5 and was able to connect a Win7 box to Management0/0, use 10.1.254.5 as its gateway, and ping another address on the 10.1.251.0/24 network, but I can't ping the interface (10.1.254.5) itself. I also can't use ASDM/SSH to log onto the ASA at 10.1.254.5. I set up rules in Configuration > Device Management > Management Access > ASDM/HTTPS/Telnet/SSH that look like the original rules for the Ethernet0/1 interface. The last thing I can think to try would be to change Configuration > Device Management > Management Access > Management Interface. I'm a bit nervous about changing it, and the description of it is a bit vague. What is it going to do if I change it? What is the correct way to change a management interface?
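
    For what it's worth, the CLI configuration involved is small; here is a minimal sketch of a dedicated management interface with ASDM/SSH access. The nameif and the assumption that the whole 10.1.254.0/24 network should get management access are mine, not from the question:

        interface Management0/0
         nameif management
         security-level 100
         ip address 10.1.254.5 255.255.255.0
         management-only
        http server enable
        http 10.1.254.0 255.255.255.0 management
        ssh 10.1.254.0 255.255.255.0 management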

    Read the article

  • Custom Extensions on Managed Chromebooks

    - by user417669
    I am a developer looking for the best way to set up different schools with their own custom, private extensions (i.e. School A should be the only one with access to Extension A). Theoretically, I am aware that there are a few ways to get a custom, private extension pushed out on a domain:

    1. Host the .crx on a server and click "Specify a Custom App" in the management console.
    2. Create a Domain App by uploading a zip to the Chrome Web Store.
    3. Upload the extension from my developer account to the Chrome Web Store and publish it to a single "trusted tester", or make it unlisted.

    Option (1), hosting the .crx, has not been working. I am not sure why, but the extension is simply not pushing out. I link directly to the .crx file, which has the right ID and MIME type; still, no dice. If anyone has any tips or suggestions for getting this to work, I would love to hear them! Option (2), having the school create a domain app, seems a bit inefficient because it requires each school to upload its own zip. So essentially I would have to email a zip file to the school and have them publish it. All updates to the extension would require a similar process, so this doesn't seem ideal. I doubt that option (3) would work. If I published to the admin as a "trusted tester", I don't think the other people in the domain would be able to access it, and if it is unlisted, I do not know how an admin could find it in the Chrome Web Store dialog. Also, I would rather avoid security through obscurity. Has anyone had success with hosting the extension and using the "Specify a Custom App" feature? Any other suggestions for getting a custom extension pushed out by the management console? Thanks so much!
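
    For option (1), one thing worth double-checking is the MIME type the server actually sends for .crx files; a minimal sketch, assuming Apache is the host serving the extension:

        # serve packaged Chrome extensions with the MIME type Chrome expects
        AddType application/x-chrome-extension .crx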

    Read the article

  • How can I report a website that uses the webmail APIs to send spam?

    - by Igoru
    I've signed up for a cool job website that, unfortunately, asks if you want to "invite your friends", and if you agree, you can give it access to your Gmail contacts to send the invite. However, contrary to what everyone would expect, it doesn't give you a list of whom you want to invite; instead, it simply sends spam directly to your entire contact list, like an old-fashioned Outlook virus. When you complain about this, they simply say "we will check the application and see if there is anything that might be confusing for the users". For me and some other friends (who fell for the same prank), this is a clear breach of web best practices and a big betrayal of the users' trust. So I would like to know what we can do to stop the website from using the Gmail/Yahoo/Outlook APIs to send spam this way. P.S.: I wonder what would have happened if I'd given this website access to post on my Facebook timeline as well. I've gotten a couple of calls from relatives asking about the email, and I wonder how many unrelated people got this spam, like HR addresses from my past and whatnot.

    Read the article

  • Creating a pseudoterminal to make sudo happy

    - by larsks
    I need to automate the provisioning of a cloud instance (running Fedora 17) for which the following initial facts are true: I have ssh-key based access to a remote user (cloud), and that user has password-free root access via sudo. Manual configuration is as simple as logging in, running sudo su -, and having at it, but I would like to fully automate this process. The trick is that the system defaults to having the requiretty option enabled for sudo, which means that an attempt to do something like this:

        ssh remotehost sudo yum -y install puppet

    will fail:

        sudo: sorry, you must have a tty to run sudo

    I am working around this right now by first pushing over a small Python script that will run a command on a pseudoterminal:

        import os
        import sys
        import errno
        import subprocess

        pid, master_fd = os.forkpty()
        if pid == 0:
            # child process: now that we're attached to a
            # pty, run the given command.
            os.execvp(sys.argv[1], sys.argv[1:])
        else:
            while True:
                try:
                    data = os.read(master_fd, 1024)
                except OSError, detail:
                    if detail.errno == errno.EIO:
                        break
                if not data:
                    break
                sys.stdout.write(data)
            os.wait()

    Assuming that this is named pty, I can then run:

        ssh remotehost ./pty sudo yum -y install puppet

    This works fine, but I'm wondering if there are solutions already available that I haven't considered. I would normally think about expect, but it's not installed by default on this system. screen can do this in a pinch, but the best I came up with was:

        screen -dmS sudo somecommand

    ...which does work but eats the output. Are there any other tools available that will allocate a pseudoterminal for me and that are generally available?
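
    For what it's worth, ssh itself can allocate a remote pseudoterminal with the -t flag, which is usually enough to satisfy requiretty without any helper script:

        ssh -t remotehost sudo yum -y install puppet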

    Read the article

  • IIS 6 ASP.NET default handler-mappings and virtual directories

    - by Mark Lauter
    I'm having a problem with setting a default mapping in IIS 6. I want to secure *.html files with ASP.NET forms authentication. The problem seems to have something to do with using virtual directories to hold the html files. Here's how it's set up. Sample directory tree:

        c:/inetpub/ (nothing in here)
        d:/web_files/my_web_apps
        d:/web_files/my_web_apps/app1/
        d:/web_files/my_web_apps/app2/
        d:/web_files/my_web_apps/html_files/

    app1 and app2 both access the same html_files directory, so html_files is set as a virtual directory in the web apps in IIS. Sample web directory tree:

        //app1/html_files/ (points to physical directory: d:/web_files/my_web_apps/html_files/)
        //app2/html_files/ (points to physical directory: d:/web_files/my_web_apps/html_files/)

    If I put a file called test.html in the root of //app1/, add the default mapping to the ASP.NET DLL, and set up my security on the root folder with deny="?", then accessing test.html works exactly as expected: if I'm not authenticated, it takes me to the login.aspx page, and if I am authenticated it displays test.html. If I put the test.html file in the html_files directory I get totally different behavior. Now the login.aspx page loads, and I stuck some code in to check whether I was still authenticated:

        <p>authenticated: <%=User.Identity.IsAuthenticated%></p>

    I figured it would say false, because why else would it bother to load the login page? Nope, it says true - so it knows I'm authenticated, but it won't give me access to the test.html file. I've spent several hours on this and haven't been able to solve it. I'm going to spend some more time on Google to see if I've missed something. Fingers crossed.
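
    For reference, a sketch of how the deny="?" lockdown described above typically looks in web.config; the explicit <location> scoping for the virtual directory is an assumption, since the question doesn't show the actual config:

        <configuration>
          <system.web>
            <!-- deny anonymous users for the whole application -->
            <authorization>
              <deny users="?" />
            </authorization>
          </system.web>
          <!-- assumed: repeat the rule scoped to the virtual directory -->
          <location path="html_files">
            <system.web>
              <authorization>
                <deny users="?" />
              </authorization>
            </system.web>
          </location>
        </configuration>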

    Read the article

  • Apache directory authorization bug (clicking cancel gives access to partial content)

    - by s4uadmin
    I have a minor problem (the site is not high priority) but still a very interesting one. I have an Apache root domain wherein other sites live ("/var/www/"), and I have foo.example.com forwarding to "/var/www/foo-example" (a WordPress site). When you go to foo.example.com you are prompted to enter credentials, and if you hit cancel it gives you the access denied page. But when you go to the server's direct IP (which gives you the default index page) and hit cancel when prompted for credentials, it just keeps giving you the login screen, and after pressing cancel a few more times it serves (a perhaps cached) bare HTML part of the page. How do I prevent this from happening? Perhaps this is a bug... Even if I blocked access to the root directory, going to ip/foo-example would still do this. And I want to keep all the directories within the www directory, or at least all in the same place. Thanks. PS: here is my configuration:

        <VirtualHost *:80>
            DocumentRoot /var/www/wp-xxxxxxx/
            ServerName beta.xxxxxxxxx.nl
            <Directory "/var/www/wp-xxxxxxxxx/">
                Options +Indexes
                AuthName "xxxxxxxx Beta Site"
                AuthType Basic
                require valid-user
                Satisfy all
                AuthBasicProvider file
                AuthUserFile /var/www/wp-xxxxxxx/.htxxxxxxxxx
                order deny,allow
                allow from all
            </Directory>
            ServerAdmin [email protected]
            ServerAlias beta.xxxxxxx.nl
        </VirtualHost>
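
    One common way to keep IP-based requests away from partially protected content is an explicit catch-all vhost listed first (Apache 2.2 uses the first matching vhost as the default for requests whose Host header matches nothing else). A sketch, assuming 2.2 syntax and a deliberately empty docroot:

        <VirtualHost *:80>
            ServerName catchall.invalid
            DocumentRoot /var/www/default-empty
            <Directory /var/www/default-empty>
                Order allow,deny
                Deny from all
            </Directory>
        </VirtualHost>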

    Read the article

  • Apache on CentOS opening default page on HTTPS

    - by Asghar
    I am new to Apache, SSL, and configuration. I got a VeriSign certificate to secure my site, and I have public, private, and ca_intermediate cert files. I have configured ssl.conf as below:

        <VirtualHost _default_:443>
            DocumentRoot /var/www/mydomain.com/web/
            ServerName mydomain.com:443
            ServerAlias www.mydomain.com

            # Use separate log files for the SSL virtual host; note that LogLevel
            # is not inherited from httpd.conf.
            ErrorLog logs/ssl_error_log
            TransferLog logs/ssl_access_log
            LogLevel warn

            # SSL Engine Switch:
            # Enable/Disable SSL for this virtual host.
            SSLEngine on

    The problem is that when I access www.mydomain.com over HTTP it works fine, but when I access it over HTTPS it just opens the Apache default page - though the green HTTPS indicator means my certificates are installed correctly. How can I get rid of this situation? Thanks.

    EDIT: Output of apachectl -S:

        -bash-3.2# apachectl -S
        [Mon Aug 27 10:20:19 2012] [warn] NameVirtualHost 82.56.29.189:80 has no VirtualHosts
        [Mon Aug 27 10:20:19 2012] [warn] NameVirtualHost 82.56.29.189:443 has no VirtualHosts
        VirtualHost configuration:
        wildcard NameVirtualHosts and _default_ servers:
        _default_:8081  localhost.localdomain (/etc/httpd/conf/sites-enabled/000-apps.vhost:10)
        *:8080          is a NameVirtualHost
                default server localhost.localdomain (/etc/httpd/conf/sites-enabled/000-ispconfig.vhost:10)
                port 8080 namevhost localhost.localdomain (/etc/httpd/conf/sites-enabled/000-ispconfig.vhost:10)
        *:443           is a NameVirtualHost
                default server mydomain.com (/etc/httpd/conf.d/ssl.conf:81)
                port 443 namevhost mydomain.com (/etc/httpd/conf.d/ssl.conf:81)
        *:80            is a NameVirtualHost
                default server app.mydomain.com (/etc/httpd/conf/sites-enabled/100-app.mydomain.com.vhost:7)
                port 80 namevhost app.mydomain.com (/etc/httpd/conf/sites-enabled/100-app.mydomain.com.vhost:7)
                port 80 namevhost mydomain.com (/etc/httpd/conf/sites-enabled/100-mydomain.com.vhost:7)
        Syntax OK
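
    The two "NameVirtualHost ... has no VirtualHosts" warnings suggest the vhost definitions don't match the NameVirtualHost address being declared. A hedged sketch of a *:443 vhost with the certificate directives wired in - the file paths are assumptions:

        NameVirtualHost *:443
        <VirtualHost *:443>
            ServerName mydomain.com
            DocumentRoot /var/www/mydomain.com/web/
            SSLEngine on
            SSLCertificateFile      /etc/pki/tls/certs/mydomain.com.crt
            SSLCertificateKeyFile   /etc/pki/tls/private/mydomain.com.key
            SSLCertificateChainFile /etc/pki/tls/certs/ca_intermediate.crt
        </VirtualHost>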

    Read the article

  • Trouble with nginx and serving from multiple directories under the same domain

    - by Phase
    I have nginx set up to serve from /usr/share/nginx/html, and it does this fine. I also want it to serve from /home/user/public_html/map on the same domain. So:

        my.domain.com     would get you the files in /usr/share/nginx/html
        my.domain.com/map would get you the files in /home/user/public_html/map

    With the below configuration (/etc/nginx/nginx.conf) it appears to be going to my.domain.com/map/map, as shown by this:

        2011/03/12 09:50:26 [error] 2626#0: *254 "/home/user/public_html/map/map/index.html" is forbidden (13: Permission denied), client: <edited ip address>, server: _, request: "GET /map/ HTTP/1.1", host: "<edited>"

    I've tried a few things but I'm still not able to get it to cooperate, so any help would be greatly appreciated.

        #######################################################################
        #
        # This is the main Nginx configuration file.
        #
        #######################################################################

        #----------------------------------------------------------------------
        # Main Module - directives that cover basic functionality
        #----------------------------------------------------------------------

        user              nginx;
        worker_processes  1;
        error_log         /var/log/nginx/error.log;
        pid               /var/run/nginx.pid;

        #----------------------------------------------------------------------
        # Events Module
        #----------------------------------------------------------------------

        events {
            worker_connections  1024;
        }

        #----------------------------------------------------------------------
        # HTTP Core Module
        #----------------------------------------------------------------------

        http {
            include       /etc/nginx/mime.types;
            default_type  application/octet-stream;

            log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                              '$status $body_bytes_sent "$http_referer" '
                              '"$http_user_agent" "$http_x_forwarded_for"';

            access_log  /var/log/nginx/access.log  main;

            sendfile           on;
            keepalive_timeout  65;

            server {
                listen       80;
                server_name  _;

                #access_log  logs/host.access.log  main;

                location / {
                    root   /usr/share/nginx/html;
                    index  index.html index.htm;
                }

                location /map {
                    root   /home/user/public_html/map;
                    index  index.html index.htm;
                }

                error_page  404  /404.html;
                location = /404.html {
                    root  /usr/share/nginx/html;
                }

                error_page  500 502 503 504  /50x.html;
                location = /50x.html {
                    root  /usr/share/nginx/html;
                }
            }

            include /etc/nginx/conf.d/*.conf;
        }
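
    The doubled /map/map in the error is characteristic of the root directive: nginx appends the full request URI to root, so location /map plus root /home/user/public_html/map yields .../map/map/index.html. Two ways out, sketched below: either point root one level up, or use alias, which substitutes the location prefix instead of appending to it.

        # either: root is the parent directory, nginx appends /map itself
        location /map {
            root   /home/user/public_html;
            index  index.html index.htm;
        }

        # or: alias replaces the /map/ prefix with the given path
        location /map/ {
            alias  /home/user/public_html/map/;
            index  index.html index.htm;
        }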

    Read the article

  • ProCurve ACL to prevent a subnet from leaving the switch

    - by kce
    I have a single HP ProCurve 2610 in a remote location that is connected to the rest of the network via SHDSL. There are two layer-3 networks on this segment. ACLs are set up to deny one subnet (192.0.2.0/24) from ever being able to leave the switch, by virtue of being applied to the port attached to the upstream connection. The other subnet should be permitted to leave the switch freely. Both subnets are on the same VLAN. Unfortunately sFlow very clearly shows broadcast traffic from 192.0.2.0/24 on the upstream connection. ProCurve ACLs are not my strong suit, but I feel like I'm missing something very simple here.

        ip access-list extended "Filter for Camera Network"
           deny ip 192.0.2.0 0.0.0.255 0.0.0.0 255.255.255.255 log
           permit ip 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255
           exit
        interface 24
           name "DSL - UPLINK"
           access-group "Filter for Camera Network" in
           exit

    Unless I am mistaken, traffic from 192.0.2.0/24 should be dropped as it crosses the uplink port (int 24), whereas all other traffic will be permitted by the following default allow rule. What exactly am I missing here?

    EDIT: Firstly, why do you have two subnets contained in the same VLAN? Because that's how it was configured by a previous administrator, and while it makes conceptual sense that a single subnet is "mapped" to a single VLAN, there's no technical constraint that I am aware of that makes this have to be the case. Instead of filtering inbound traffic on your uplink, you should be filtering outbound traffic. The HP 2600 series can only filter inbound traffic on interfaces. Should I change my filter to deny any to 192.0.2.0/24?
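
    Given the answer quoted in the edit (inbound-only filtering), one hedged reading is that the source-based deny has to be applied where the camera traffic enters the switch, i.e. inbound on the camera-facing ports, rather than on the uplink where it exits. A sketch reusing the question's own ACL and syntax; the port number is an assumption:

        interface 1
           access-group "Filter for Camera Network" in
           exit

    Note that this would not stop broadcasts flooding within the shared VLAN itself; separating the two subnets into separate VLANs would.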

    Read the article

  • Set up multiple websites on a local web server

    - by mickburkejnr
    I have spent the last few days setting up a CentOS 6 server on my local network so that I can host multiple projects that I'm currently working on. Everything has been set up so that I access the server by typing 192.168.1.10 and the Apache test page comes up. What I'm aiming to do is access different projects by typing 192.168.1.10/project, and then view the project as if it were on its own standalone server. I have thought about just sticking these sites inside folders on the server and accessing them that way, but a lot of my projects use CakePHP, so this isn't feasible. So what I need to do is create VirtualHosts in Apache to allow me to do this, but without using a domain name. I want to stick to using the IP address of the machine (which is static). Any ideas?

    EDIT: I've followed Peter's suggestion, but now I have a new problem. In the httpd.conf file I have entered the following information:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /www/html/project1
            ServerName local.project1.com
            ErrorLog logs/local.project1.com-error_log
            CustomLog logs/local.project1.com-access_log common
        </VirtualHost>

    And now Apache is saying:

        Starting httpd: Warning: DocumentRoot [/www/html/project1] does not exist

    When it clearly does exist. I've disabled SELinux and I can confirm it isn't turned on. I've also checked the ownership of the folder, and it's owned by root. I can also save files to these folders using a guest FTP account (which isn't associated with root), so the folders are being listed and can be written to. But when I try the folder in a web browser it doesn't seem to work either. I've also done a reboot of the server and the problem persists. What should I change in order to resolve this?
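
    Since the goal is 192.168.1.10/project rather than name-based hosts, an Alias per project may be a better fit than VirtualHosts; a sketch with assumed paths. (Also worth checking: the error names /www/html/project1, while the CentOS default tree is /var/www/html - a missing /var would explain "does not exist".)

        Alias /project1 /var/www/html/project1
        <Directory /var/www/html/project1>
            Options FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    AllowOverride All is there so CakePHP's .htaccess rewrites keep working.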

    Read the article

  • Has anyone had luck running 802.1x over ethernet using the stock Windows or other free supplicant?

    - by maxxpower
    I just wanted to see if anyone else has had luck implementing 802.1x over Ethernet. Here's my basic setup: the switch sends out 3 EAPOL messages spaced 5 seconds apart; if there's no response, the machine gets put on a guest VLAN with restricted access; if the machine is properly configured, it will authenticate and be placed into a secure VLAN. About 10% of my Windows XP users are getting self-assigned 169.x addresses. I've used the Odyssey Access Client and it worked without a hitch. I'm using the setting to automatically use the user's Windows login to authenticate, but it's working on 90% of the machines, so I don't think that's the issue. Checking the logs on the DC, it seems that the machines are trying to authenticate with computer credentials even though they are configured not to. I'm running Juniper switches with IAS for RADIUS, and I have RADIUS configured for PEAP and MSCHAPv2. Macs and Linux boxes seem to have no issues authenticating. One last thing to add: unplugging the Ethernet cable and plugging it back in usually resolves the issue, but I'd hardly call that acceptable for production. Kinda long-winded and specific for a discussion, but I just want to see if anyone else has had similar issues or experiences, or if anyone knows of a free XP supplicant that actually works with 802.1x over Ethernet.
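
    One XP-specific gotcha that may be worth ruling out: the stock wired supplicant lives in the Wired AutoConfig service (dot3svc), which XP SP3 does not start by default, and the supplicant silently does nothing without it. A hedged check on an affected machine:

        sc config dot3svc start= auto
        net start dot3svc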

    Read the article

  • How to bypass the VPN when talking to a VMware guest?

    - by marc esher
    Greetings. Network/VPN n00b question here. I'm running VMware Workstation with a guest Windows 2003 Server that has SQL Server 2000 installed. The sole purpose of this guest is to house SQL Server; it needn't have internet access or access to any resources on the network other than the host. When I launch the Check Point VPN software, the host routes through the company network before it connects to the guest, i.e. it's no longer a direct connection. I assume this is just how things are supposed to work. However, what's happening is that the connection between my host and the SQL Server instance on the guest intermittently drops. It's not consistent, and some databases on the server will be responsive while others aren't. It appears that the databases with the most traffic on the guest (the ones I'm hitting with load tests) are the ones that become intermittently unresponsive. This problem only manifests when the VPN is on; when it's off, I can pound away on the database with no trouble. Thanks for any advice!

    Read the article

  • Does MySQL have some kind of DoS protection or per-user query limit?

    - by Ghostrider
    I'm a bit at a loss. I'm running a MySQL database that's roughly 1 GB, data and indices combined, on a dedicated Linux server. The DB version is '5.0.89-community'. Configuration is controlled via cPanel; PHP actually runs elsewhere on shared hosting. IP addresses are static and don't change, and access from the remote IP address is properly configured. The website gets around 10K hits per day, with each hit generating a database query. Some of these queries are expensive (~1 sec execution time). All is fine and well until, at some point, the DB server starts refusing connections from the client, claiming that the specific user can't access the server from that IP. Restarting the server will always fix the problem for a day or two, and then the same thing happens. There are some other DBs on that server, some of which are hit pretty hard on occasion, but constantly. One of the apps maintains several persistent connections, since it does a couple of updates per minute, though I don't think that's related. What's driving me mad is that I can't figure out why the server would start refusing connections. There is nothing in the logs. This is a hosted dedicated server, so the hosting company created the OS image, and I didn't write or go over every line of the configuration. I'd do it, but I'm at a loss as to where to start looking. Any advice is appreciated.
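
    MySQL does have one built-in per-host blocking mechanism that roughly matches this description: after max_connect_errors aborted connection attempts from a host (the default is only 10 on 5.0), the server blocks that host until the counter is reset. The reported error text doesn't quite match the classic "Host is blocked" message, so treat this as a guess worth checking:

        -- see the current per-host limit
        SHOW VARIABLES LIKE 'max_connect_errors';
        -- raise it, and unblock any hosts already blocked
        SET GLOBAL max_connect_errors = 10000;
        FLUSH HOSTS;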

    Read the article

  • Keyboard doesn't function after Fedora 17 update

    - by mickburkejnr
    I updated my Fedora 16 installation to Fedora 17 on Saturday, and the update worked without reporting any errors. I carried on working on the machine in question and then switched the computer off. Last night I went back on the computer, switched it on, and got to the login screen. At this point I tried to type in my password, but the keyboard wouldn't work. I unplugged it (it's a PS/2 keyboard) and plugged it back in; the lights flashed for a split second, but the keyboard still wouldn't work. I then plugged the keyboard into a USB-to-PS/2 adapter, and it still wouldn't work. I restarted the computer, tried to access the BIOS, and was able to do so, so the keyboard doesn't seem to be faulty; it just doesn't work when Fedora boots into the GUI interface. I did try to boot into the "recovery mode" of Fedora, and the keyboard works there with no problem. As I still have access to Fedora via a terminal interface, is there anything I can do to fix the keyboard problem via the terminal without having to reinstall Fedora?
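
    Since the keyboard works in recovery mode but not in the GUI, one guess is a broken X input driver left over from the upgrade; from the working terminal, reinstalling it is cheap to try (the package name assumes the stock Fedora 17 evdev input driver):

        yum reinstall xorg-x11-drv-evdev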

    Read the article

  • Troubleshoot IIS 6 and Intermittent SQL Server Connectivity Loss

    - by jpsnow
    I am not a system administrator, but I am temporarily trying to fill that role. I have two Windows 2003 servers: one has my Microsoft SQL database (SQL Server 2005) and the other is the web server with IIS 6. Intermittently (once every month or two) the web server loses connectivity to the database server for a period of roughly 10 minutes; it normally regains connectivity without intervention. The website is set up to send me an email when there is a problem connecting to the database. Within the email, I get the error code:

        [Microsoft][ODBC SQL Server Driver][TCP/IP Sockets]SQL Server does not exist or access denied.

    I am able to access both servers during the outage. I can go to my SQL server and open up SQL Server without any issues. I have looked in the Event Viewer and do not see anything. The last time I had this issue, I stopped IIS 6 and started it again, and the issue was resolved. This leads me to believe it is an issue with IIS and not SQL Server. How can I begin troubleshooting?
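
    A hedged first step for the next occurrence: from the web server, during the outage, check name resolution, reachability, and the state of connections to SQL Server's port before restarting IIS (the hostname is a placeholder):

        ping sqlserver-host
        telnet sqlserver-host 1433
        netstat -ano | find "1433"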

    Read the article

  • Expiring Windows Server users a set time after first login instead of defining a fixed expiration date

    - by smhnaji
    We want to give access to some Windows Server users so they can remotely access our server and download from a special folder on it. The licenses we give to users are time-based: there should be 1-month, 2-month, ..., 1-year, ... licenses.

    CURRENT SITUATION (WHAT I DON'T WANT): when users are created and added to the OS, a fixed expiration date is given.

    WHAT I WANT: users' expiration dates should be calculated automatically from their first login, since the user might not need his account right when he purchases the license. In other words: today, when a user's license is purchased on Jan 1st, he can use it until Feb 1st whether or not he ever logs in; he cannot come on Feb 5th and begin using his license, because it has already expired by then. What I want is that when he comes on Feb 5th and begins using it, the license runs until March 5th.

    CLARIFICATION (update after MDMarra's comment): the working environment is Windows Server 2012. By the word 'user' I mean native Windows Server users. Whenever a new person purchases a license from me, I create them manually using the net user command, like this:

        net user ali pass /add /expires:2013-12-25
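
    There is no built-in "expire N days after first logon" switch that I know of; a hedged sketch of one workaround is a small admin-run PowerShell step (e.g. fired by a scheduled task on logon events) that stamps the expiry once the user has actually logged on. The user name and 30-day window are hypothetical:

        # set the expiry 30 days out, mirroring the net user syntax above
        $user    = 'ali'
        $expires = (Get-Date).AddDays(30).ToString('yyyy-MM-dd')
        net user $user /expires:$expires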

    Read the article

  • Transferring files to a server IP and port

    - by Mason
    I need to transfer files from my local computer on windows 7 to a server running linux. I access the server with putty through ssh at a specific IPv4 address and port number. I've attempted using the pscp command from my local computer but was denied access by the server. "Fatal: Network error: Connection refused" c:>pscp test.csv userid@**IPv4_Addres***:Port# /path/destination_file_name. Either the server blocks all pscp attempts from unauthorized users (most likely my laptop included) or I used the command incorrectly. If you have experience using this command, where exactly will the file get transfered to, I'm assuming that the path destination starts at my home directory in the server. Also if you have any other alternative methods of transfering the files let me know. Update 1 I have also tried using WinSCP however I got permission denied for that as well, it looks like the server will not let me upload or save files. Solved I had a complete lapse of memory and forgot about sudo (spent too much time with scripts the last 2 months), so I was able to change the permissions to allow external editing. Thanks for all the help guys!

    Read the article

  • 500 error when logging into a subdomain using CodeIgniter

    - by itsdanprice
    I have a website that has been set up and working fine for ages. It's built using CodeIgniter and run using .htaccess files to restrict access and hide URLs. All fine, until a couple of days ago: when we try to access http://admin.dealersupport.co.uk we get a 500 error (this is the back end of the site, held in a separate subdomain). Nothing else has changed on the server. I have tried restoring from a backup from when I know it was working; the problem persists. The only thing I can think of is that we recently upgraded to Plesk 11.0.9, and since then we have been seeing some Apache instabilities. The only thing thrown up by the error logs is this:

        [Wed Nov 21 08:40:17 2012] [error] [client 94.31.24.129] Options FollowSymLinks or SymLinksIfOwnerMatch is off which implies that RewriteRule directive is forbidden: /var/www/vhosts/dealersupport.co.uk/admin/index.pl, referer: http://admin.dealersupport.co.uk/login

    I have now added this to my .htaccess files:

        Options +FollowSymLinks +SymLinksIfOwnerMatch
        RewriteEngine On

    And that seems to have eliminated that error from the error logs, but we are still getting a 500 error after we have logged into the back end. Can anyone help?

    Read the article

  • Fedora 15: em1 recently disappeared and hostapd no longer serves internet to wirelessly connected devices

    - by Daniel K
    I have a laptop running hostapd, PHP, and MySQL. This laptop uses an Ethernet connection to connect to the internet and acts as a wireless access point for my workplace's wifi devices. After installing some software and reconnecting my Ethernet elsewhere, my "em1" device is no longer present, and wirelessly connected devices can no longer reach the internet. The software I recently installed was pptp and pptpd, plus some updated Fedora libraries. I have also recently moved my desk and laptop to another location and thus had to reconnect the Ethernet elsewhere. Wirelessly connected devices are able to successfully log into the laptop, showing full strength, the correct SSID, and using the proper password; however, when I try to connect to a site like google, the request times out. The device "em1" also no longer appears on my machine. Running:

        # ifup em1

    gives me the following output:

        ERROR : [/etc/sysconfig/network-scripts/ifup-eth] Device em1 does not seem to be present, delaying initialization.

    And running:

        # dhclient em1

    has the following output:

        Cannot find device "em1"

    When I run # dmesg | grep renamed, I get the following:

        renamed network interface eth0 to p4p1

    I've tried to connect to the internet through p4p1 directly from the laptop and was successful. However, wireless devices connected to my laptop are still not able to connect to the internet. I have uninstalled pptp and pptpd using # yum erase ..., but the problem persists. To install pptp I used:

        # yum install pptp

    To install pptpd I did the following:

        # rpm -Uvh http://poptop.sourceforge.net/yum/stable/fc15/pptp-release-current.noarch.rpm
        # yum install pptpd

    To update my Fedora libraries I used:

        # yum check-update
        # yum update

    EDIT: Running # route produces the following results:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        default         10.11.200.1     0.0.0.0         UG    0      0        0 p4p1
        10.11.200.0     *               255.255.252.0   U     0      0        0 p4p1
        172.16.100.0    *               255.255.255.0   U     0      0        0 wlan0
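
    Since the wired interface was renamed from em1 to p4p1, anything that NATs or forwards the wifi clients out the wired side will still reference the old name. A sketch of re-pointing a typical masquerade setup at p4p1 - this assumes the laptop was NATing wlan0 clients through em1, which the question doesn't state outright:

        # enable forwarding and NAT out the renamed wired interface
        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A POSTROUTING -o p4p1 -j MASQUERADE
        iptables -A FORWARD -i wlan0 -o p4p1 -j ACCEPT
        iptables -A FORWARD -i p4p1 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT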

    Read the article

  • Apache caching with mod_headers and mod_expires

    - by Aaron Moodie
    Hi, I'm working on homework for uni and was hoping someone could clarify something for me. I need to set up the following:

    1. Configure the response header "Cache-Control" to have a "max-age" value of 7 days since access for all image files.
    2. Configure the response header "Cache-Control" to have a "max-age" value of 5 days since modification for all static HTML files.
    3. Configure the response header "Cache-Control" to have a value of "public" for all static HTML and image files.
    4. Configure the response header "Cache-Control" to have a value of "private" for all PHP files.

    My question is whether it is better to use FilesMatch or mod_expires' ExpiresByType to achieve this. I've so far used the following:

        <FilesMatch "\.(gif|jpe?g|png)$">
            ExpiresDefault "access plus 7 days"
            Header set Cache-Control "public"
        </FilesMatch>
        <FilesMatch "\.(html)$">
            ExpiresDefault "modification plus 5 days"
            Header set Cache-Control "public"
        </FilesMatch>
        <FilesMatch "\.(php)$">
            Header set Cache-Control "private"
        </FilesMatch>

    Thanks.
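
    One interaction to be aware of: mod_expires is what writes the max-age part of Cache-Control (and it needs ExpiresActive On to do anything), while Header set replaces the whole header and can wipe that max-age out depending on processing order. Appending is the safer combination; a sketch for the image case:

        ExpiresActive On
        <FilesMatch "\.(gif|jpe?g|png)$">
            ExpiresDefault "access plus 7 days"
            Header append Cache-Control "public"
        </FilesMatch>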

    Read the article

  • setting up tracd behind mod_proxy?

    - by FilmJ
    I'm having trouble setting up mod_proxy and tracd. Seems almost all the search results for this problem take me to the built-in trac documentation page that mentions it as an option. I have several VirtualServers already running on the box in question, so running tracd on port 80 or 443 is not an option, but I do want to make my trac server accessible on this machine without exposing an additional port via the firewall. Making things even more complicated is that I have multiple trac repositories being served by the same instance of tracd, and so I want to set it up so: http://trac.abc.com is proxy'd to localhost:8000/projects/abcproject, and http://trac.def.com is proxy'd to localhost:8000/projects/defproject. Currently, the setup I have below results in 100% 403 errors. The server is running as www-data and the directory where all trac files are stored is owned by www-data, AND tracd (as show below) is running as www-data, so not sure where it's getting hung up. The relevant configuration on /var/apache2/sites-enabled/trac.abc.com: ProxyPass / http://localhost:8000/abcproject ProxyPassReverse / http://localhost:8000/abcproject The relevant configuration on /var/apache2/sites-enabled/trac.def.com: ProxyPass / http://localhost:8000/defproject ProxyPassReverse / http://localhost:8000/defproject The command used to instantiate tracd: tracd -a defproject,/var/www/vhosts/trac-common/users.htdigest,DEFProject -a abcproject,/var/www/vhosts/trac-common/users.htdigest,ABCProject -p 8000 -b localhost -e /var/www/vhosts/trac-common/projects If I access the site at http://localhost:8000/ everything works fine, but if I try to access via any of the proxy'd hosts I end up with 403 at every turn. I've used mod_proxy successfully as described above for other servers, such as couchdb, so maybe this has to do with the headers sent by tracd??

    Read the article

  • Internet Explorer not working after establishing a SSTP VPN connection

    - by Massimo
    I have a problem which constantly appears on every Windows 7 computer I use, whenever I establish an SSTP VPN connection to a Forefront TMG 2010 firewall; it only happens with SSTP connections, not PPTP/L2TP ones. The problem appears only when using a proxy server for Internet access; it doesn't happen when directly accessing the Internet (with or without NAT), and it doesn't seem to depend on the specific proxy software being used (I've seen it happen with various ones). The problem is: as soon as I start the VPN connection, Internet Explorer can't access anything anymore. I'm not using the VPN connection as a default gateway, and I can successfully ping the proxy server after the VPN connection is established (and even telnet to its 8080 TCP port), so this is definitely not a routing problem. Also, the problem is specifically related to Internet Explorer: while it seems unable to connect to any site, other programs (such as Firefox) have no problem accessing the Internet through the same proxy. This behaviour can easily be reproduced on any Windows 7 computer (the service pack and patch level don't seem to matter at all). Have IE connect through a proxy, establish an SSTP VPN connection... and IE will just not work anymore until the VPN connection is dropped.

    Read the article

  • Synchronising a remote folder with a local one.

    - by Workshop Alex
    I am using a network disk (connected to my router by USB) to store several data files. A simple .NET application that I've created is supposed to read and modify these data files; however, some security issues prevent this application from accessing the files directly. (Actually, these have been built into my application on purpose, since it's not going to support NAS disks.) Since this disk is shared with several computers, I just want a simple synchronisation method which will copy the files to a local folder where my application can access them and, once they're modified, send the modified files back to the NAS disk. I have two options: 1) build a second application to do my own synchronisation, or 2) find some built-in function of Windows 7 Ultimate which can do this for me. Option 2 is preferred. Option 1 is something I can do easily, if need be. I don't need third-party tools. (Still, feel free to add some references to good tools, although I won't accept them as answers.) Basically, is this possible with Windows 7, and if so, how?
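
    One built-in candidate (besides the Offline Files feature) is robocopy, which ships with Windows 7; a sketch with made-up paths, mirroring the share down before the application runs and back up afterwards. Note that /MIR deletes extra files on the destination, so use it deliberately:

        robocopy \\nas\data C:\LocalCache /MIR
        rem ... let the application work on C:\LocalCache ...
        robocopy C:\LocalCache \\nas\data /MIR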

    Read the article

  • Host multiple domains with Apache on a VPS

    - by Kunal
    Hi, I have recently bought a Windows VPS and installed Apache and MySQL using XAMPP. All services are running fine, and I can access the hosted site using the IP of the VPS, but what I need to do is host multiple domains on that server. Actually, my requirement was to use .htaccess on a PHP-based site, but the site had to access data from an MS SQL server as well, so I needed to enable php_mssql.dll in php.ini, which no shared hosting was supporting; I had to go for this VPS. Plesk was installed by default, but as .htaccess won't work with IIS, I had to stop the IIS service and install Apache there. Now all is well, but I need to find a way to host multiple domains there. When I bought the VPS, the hosting people sent me the dedicated IP of the server and also the two name servers required to host domains. What is the next step? Exactly which file do I need to modify to get things done? Please help! Any kind of help is much appreciated. Thanks in advance for all of your kind help and time.
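
    The usual next step with XAMPP is name-based virtual hosts, plus DNS A records pointing each domain at the VPS IP. A sketch for apache\conf\extra\httpd-vhosts.conf - the domains and folder names are placeholders:

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName  www.example1.com
            ServerAlias example1.com
            DocumentRoot "C:/xampp/htdocs/example1"
        </VirtualHost>

        <VirtualHost *:80>
            ServerName  www.example2.com
            ServerAlias example2.com
            DocumentRoot "C:/xampp/htdocs/example2"
        </VirtualHost>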

    Read the article

  • Windows desktop virtualization instead of replacing workstations

    - by Chris Marisic
    I'm head of the IT department at the small business I work for; however, I am primarily a software architect, and all of my system administration experience and knowledge is ancillary to software development. At some point this year or next we will be looking at upgrading our workstation environment to a uniform Windows 7 / Office 2010 environment, as opposed to the hodgepodge collection of various OEM-licensed editions of software that is on each different machine. It occurred to me that it is probably possible to forgo upgrading each workstation and instead have each one be a dumb terminal accessing a virtualization server, with its entire virtual workstation hosted on the server. Now, I know basically anything is possible, but is this a feasible solution for a small business (25-50 workstations)? Assuming it is feasible, what rough guidelines exist for calculating the required server resources? How exactly do solutions handle a user accessing their VM: do they log on normally to their physical workstation and then use Remote Desktop to access the VM, or is this usually done with a client piece of software that negotiates it? What types of software are available for administering and monitoring these VMs, and can this functionality be achieved out of the box with Microsoft Server 2008? I'm mostly interested in these questions as they relate to Server 2008 with Hyper-V, but feel free to offer insight on VMware's product lineup, especially if there are any compelling reasons to choose them over Hyper-V in a Microsoft shop. Edit: Just to add some more information: the implementation goal would be to upgrade our platform from a Win2k3 / XP environment to a full Windows 2008 / Win7 platform without having to perform all of the associated work on each differently configured workstation. Also, could anyone offer any realistic guidelines for how much hardware is needed to support 25-50 workstations virtually? The majority of the workstations do nothing except Office, Outlook, and web. The only high-demand workstations are the development workstations, which would keep everything local.

    Read the article
