Search Results

Search found 11674 results on 467 pages for 'adding'.


  • How do I configure namecheap for "arbitrarily-nested" wildcard subdomains?

    - by rabidsnail
    I'm trying to set up something like nyud.net, where any arbitrary chain of subdomains resolves to the same CNAME record (which in my case points to an Amazon Elastic Load Balancer). Ex: www.google.com.nyud.net:8080 points to one of their cache servers, which looks at the Host header and returns www.google.com. I'm using Namecheap as my DNS host. Adding a CNAME record for *.mydomain.com doesn't seem to do anything (nslookup gives NXDOMAIN for all subdomains). What do I have to do to set this up? Do I have to use something fancier than Namecheap (like Route 53)?

    Read the article
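
    One way to narrow this down, sketched with dig (mydomain.com stands in for the real domain): ask the domain's authoritative nameservers directly, so resolver caching and propagation delays are taken out of the picture. A single *.mydomain.com wildcard should answer for multi-label names such as a.b.mydomain.com as well.

        # Find an authoritative nameserver, then query it directly
        NS=$(dig +short NS mydomain.com | head -n1)
        dig CNAME foo.mydomain.com @"$NS"
        dig CNAME a.b.c.mydomain.com @"$NS"

    If the authoritative servers return NXDOMAIN here, the wildcard record never made it into the zone; if they answer but public resolvers do not, it is a caching/propagation issue rather than a Namecheap limitation.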

  • Best way to use mod_rewrite to replace WordPress pages with static files

    - by David Moles
    Here's the situation: I've got an old WordPress installation that I'd like to archive as static files, but I'd also like to preserve old URLs. I've already created the static archive with wget and sorted out the filenames and links. Now I'd like to configure Apache to intercept requests for the old dynamic URLs and replace them with the new static ones, e.g.:
        http://www.example.org/log/?p=1234
        http://www.example.org/log/index.php?p=1234
    should redirect to
        http://www.example.org/log/archives/1234.html
    I've tried adding the following to the VirtualHost config for example.org, but to no effect -- I just get the PHP page.
        RewriteCond %{REQUEST_URI} /log/
        RewriteCond %{QUERY_STRING} p=([^&;]*)
        RewriteRule ^/$ http://%{SERVER_NAME}/log/archives/%1.html [R,L]
    I've enabled logging and I can see what look like other rules being applied, but not this one. None of my other guesses at match patterns for %{REQUEST_URI} seem to have any effect either (log, log/, log.*, even .*). I'm new to mod_rewrite and this is mostly cargo cult, so I'm pretty sure I've gotten it wrong. Anyone know what I should be doing here?

    Read the article
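
    A quick way to see whether that rule ever fires, assuming the example.org URLs from the post (one common gotcha, offered tentatively: in virtual-host context the RewriteRule pattern is matched against the full URL path, so ^/$ only matches the site root, not /log/):

        # Look for a 301/302 plus a Location header instead of the PHP page
        curl -sI 'http://www.example.org/log/?p=1234' | grep -iE '^(HTTP|Location)'
        # Watch rewrite decisions live (Apache 2.2-style RewriteLog; the path is a guess)
        tail -f /var/log/apache2/rewrite.log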

  • Problems bringing up a second virtual network interface

    - by tubaguy50035
    I'm having issues adding a second IP address to one interface. Below is my /etc/network/interfaces:
        # The loopback network interface
        auto lo
        iface lo inet loopback

        #eth0 is our main IP address
        auto eth0
        iface eth0 inet static
            address 198.58.103.*
            netmask 255.255.255.0
            gateway 198.58.103.1

        #eth0:0 is our private address
        auto eth0:0
        iface eth0:0 inet static
            address 192.168.129.134
            netmask 255.255.128.0

        #eth0:1 is for www.site.com
        auto eth0:1
        iface eth0:1 inet static
            address 198.58.104.*
            netmask 255.255.255.0
            gateway 198.58.104.1
    When I run /etc/init.d/networking restart, I get a failure bringing up eth0:1:
        RTNETLINK answers: File exists
        Failed to bring up eth0:1.
    Any reason this would be? I didn't have any problems when I first set up eth0 and eth0:0.

    Read the article
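
    For reference, the same second address can be added by hand with iproute2 before touching the config again; the .20 host part below is a made-up placeholder for the masked address in the post. The usual culprit for "RTNETLINK answers: File exists" in this setup is the second gateway line: ifupdown tries to install a second default route and the kernel refuses.

        # Add the extra public address as a labelled alias on eth0 (placeholder address)
        ip addr add 198.58.104.20/24 dev eth0 label eth0:1
        ip addr show dev eth0
        # There should normally be only one default route
        ip route show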

  • How to block access to files in the current directory with .htaccess

    - by kfir
    I have a few private files in a public folder and I want to block access to them. For example, let's say I have the following file tree:
        DictA
            FileA
        FileA
        FileB
        FileC
    I want to block access to FileB and FileA in the current directory and allow access to the FileA in the DictA directory. The first thing that came to mind was to use the FilesMatch directive as follows:
        <FilesMatch "^(?:FileA)|(?:FileB)$">
            Deny from all
        </FilesMatch>
    The problem here is that FileA inside DictA will also be blocked, which is not what I wanted. I could override that by adding another .htaccess file to DictA, but I would like to know if there is a solution which won't involve that. P.S.: I can't move the private files to a separate folder.

    Read the article
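
    Once rules like these are in place, a small check against the live site confirms what is actually blocked; example.com below is a placeholder hostname.

        # Expect 403 for the top-level private files and 200 for DictA/FileA
        for path in FileA FileB DictA/FileA; do
            printf '%s: ' "$path"
            curl -s -o /dev/null -w '%{http_code}\n' "http://example.com/$path"
        done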

  • Excel SUM From Different Sheets IF Date Found

    - by user329005
    I have a workbook with separate sheets for each product (about 20 sheets, adding more on a regular basis). Each product is only available for a certain time frame, and has daily sales data recorded on that product's sheet. I want an overall snapshot across all products from any given date to be consolidated on a new sheet. This would sum from a particular column on each of the other sheets if a corresponding date exists. I have a moderately passable function right now that has a separate VLOOKUP for each product sheet like SUM(IF(ISERROR(VLOOKUP(DATECELL,SHEETNAME!ARRAY,COLUMN... next VLOOKUP, next VLOOKUP etc., but it's incredibly cumbersome to update each function when a new product is added. I'm thinking there's a much easier way utilizing a named group (sheet names), SUMIF, VLOOKUP etc. Then when a new product sheet is added, I can simply add the sheet name to the named group rather than editing all the functions. Any help would be much appreciated!

    Read the article

  • Why are graphics cards upside-down?

    - by gbjbaanb
    This is something that has always bugged me - when I install a card into a desktop (i.e. mini tower) case, the fan is always facing down. Surely, making the card so the components and fan are on top would help a lot with cooling, allowing those whiny fans to spin a little slower. I know some card manufacturers tried to mitigate this by adding heat pipes and big heatsinks on the back of the card... but they still put the bits on the same way as everyone else! So, does anyone know why they're all upside-down?

    Read the article

  • I get a 403 when requesting a JS file from CloudFront

    - by Roland
    This is new to me so please excuse me if I have no idea what I'm talking about (: I'm trying to set up my own CDN with CloudFront and S3 through a subdomain, by adding a CNAME on that subdomain that points to the CloudFront distribution. I get a 403 when trying to load the file. This is the original S3 link: https://s3.amazonaws.com/chaoscod3r_aws_cdn/libs/polyfills/json3_polyfill.js - it works after setting the permission so that everyone can open/download. But when trying to use the subdomain to request the file, http://cdn.chaoscod3r.com/libs/polyfills/json3_polyfill.js, I get that 403. Could anyone help me out with this one?

    Read the article
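
    A couple of checks, using the URLs from the post; one frequent cause of a 403 like this is the CNAME pointing at S3 itself instead of at the distribution's *.cloudfront.net hostname, or the alternate domain name (CNAME) not being registered on the CloudFront distribution.

        # Where does the CNAME actually point?
        dig +short CNAME cdn.chaoscod3r.com
        # Compare the status lines returned directly from S3 and via the CDN hostname
        curl -sI 'https://s3.amazonaws.com/chaoscod3r_aws_cdn/libs/polyfills/json3_polyfill.js' | head -n1
        curl -sI 'http://cdn.chaoscod3r.com/libs/polyfills/json3_polyfill.js' | head -n1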

  • Google Chrome giving error 138

    - by gsingh2011
    Google Chrome randomly stopped working one day and is giving me this error:
        Google Chrome is having trouble accessing the network. This may be because your firewall or antivirus software wrongly thinks that Google Chrome is an intruder on your computer and is blocking it from connecting to the Internet. Here are some suggestions: Add Google Chrome as a permitted programme in your firewall or antivirus software's settings. If it is already a permitted programme, try deleting it from the list of permitted programmes and adding it again.
        Error 138 (net::ERR_NETWORK_ACCESS_DENIED): Unable to access the network.
    I didn't make any changes to my firewall settings between the time it was working and when it wasn't working. I'm using the default Windows Firewall. I added Chrome to the allowed programs and restarted, but that didn't fix the error. I even reinstalled Chrome completely and that didn't work either. Any help would be appreciated. EDIT: I forgot to mention that Firefox and IE9 work fine.

    Read the article

  • IIS7 folder permissions

    - by Eanna
    I built a basic WCF service that I now want to host in IIS7 under Windows Server 2008 R2. I added the service as an application under the default web site, but whenever I try to run the application I get the following error:
        HTTP Error 500.19 - Internal Server Error
        The requested page cannot be accessed because the related configuration data for the page is invalid.
        Config Error - Cannot read configuration file due to insufficient permissions
    The only way I can get this service working is if I choose to "connect as" the server Administrator when adding the service. The "application user (pass-through authentication)" option does not seem to work. Could anyone help me out? I've just started using IIS7 and have no idea what to do... Thanks

    Read the article

  • Ping6 fail on linux

    - by michelemarcon
    I have 2 Linux boxes configured with IPv4, and I have tried adding IPv6 to them. I issued this command on box1:
        ip -6 addr add fd32:2d7f:f3c1::1/48 dev eth0
    and I get this:
        inet6 addr: fd32:2d7f:f3c1::1/48 Scope:Global
    Then I issued this command on box2:
        ip -6 addr add fd32:2d7f:f3c2::1/48 dev eth0
    Back on box1 (command/response):
        ping6 fd32:2d7f:f3c1::1
        is alive!
        ping6 fd32:2d7f:f3c2::1
        ping6: sendto: Network is unreachable
    Why can't box1 ping box2 (and of course box2 can't ping box1 either)?

    Read the article
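
    For what it's worth, the two addresses sit in different /48 prefixes (fd32:2d7f:f3c1::/48 vs fd32:2d7f:f3c2::/48), so neither box has a route to the other. A minimal fix, assuming both boxes share the same link, is to put them in one prefix; the ::2 address below is just an example.

        # On box2: move its address into box1's prefix
        ip -6 addr del fd32:2d7f:f3c2::1/48 dev eth0
        ip -6 addr add fd32:2d7f:f3c1::2/48 dev eth0
        # Verify the on-link route now covers both hosts, then retest from box1
        ip -6 route show dev eth0
        ping6 fd32:2d7f:f3c1::2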

  • How to enable systemd instantiated service with puppet?

    - by Richard Pena
    I've got the following Puppet service:
        service { "getty@tty1.service":
          provider => systemd,
          ensure   => running,
          enable   => true,
        }
    When I try to apply this configuration on my client, it throws the following error:
        err: /Stage[main]//Node[puppetclient]/Service[getty@tty1.service]/enable: change from false to true failed: Could not enable getty@tty1.service:
    The service is running fine, and I can make sure it's started on system boot by adding a symlink to getty.target.wants:
        ln -s /lib/systemd/system/getty@.service /etc/systemd/system/getty.target.wants/getty@tty1.service
    Of course, I could go ahead and remove "enable => true" from the service definition and include the symlink manually in the Puppet configuration, but shouldn't Puppet take care of this? Am I doing something terribly wrong?

    Read the article
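
    Outside of Puppet, this is roughly what "enable" amounts to for an instantiated unit (using the getty@tty1.service instance shown above); if the manual commands succeed, the limitation is more likely in the Puppet systemd provider than in systemd itself.

        # Enable the instance by hand and inspect what systemd records
        systemctl enable getty@tty1.service
        systemctl is-enabled getty@tty1.service
        ls -l /etc/systemd/system/getty.target.wants/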

  • nginx: override global ssl directives for specific servers

    - by alkar
    In my configuration I have placed the ssl_* directives inside the http block and have been using a wildcard certificate signed by a custom CA without any problems. However, I now want to use a new certificate for a new subdomain (a separate server block) that has been certified by a recognized CA. Let's say the domain is blah.org. I want my custom certificate with CN *.blah.org to be used on all subdomains except for new.blah.org, which will use its own certificate/key pair of files with CN new.blah.org. How would one do that? Adding new ssl_* directives inside the server block doesn't seem to override the global settings.

    Read the article
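
    In general, ssl_certificate/ssl_certificate_key set inside a server block do override the http-level ones, provided that server block is the one actually selected for new.blah.org (its own listen ... ssl and a matching server_name) and the nginx build supports SNI; without SNI, every name on the same IP:port gets the default server's certificate. A way to check which certificate each hostname is really served, using the names from the post:

        # Show the subject of the certificate presented for each hostname (needs SNI on both ends)
        for host in new.blah.org www.blah.org; do
            printf '%s: ' "$host"
            openssl s_client -connect "$host:443" -servername "$host" </dev/null 2>/dev/null \
                | openssl x509 -noout -subject
        done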

  • Set 802.1Q tagged port on VLAN1 on Dell PowerConnect switch

    - by Javier
    I'm having big trouble adding this Dell switch to my network. Here we use several VLANs to segment traffic. All switches (3Com and D-Link mostly) have the same VLANs configured; most ports are 'untagged' and belong to a single VLAN, except for the ports used to join the switches together (in a star topology) - these ports belong to all VLANs and use 802.1Q tags. So far, it works really well. But on this new switch (a Dell PowerConnect 5448) the settings are very different (and confusing). I have configured the same VLANs, and the uplink ports are set in 'general' mode (supposed to be fully 802.1Q compliant). I can set the VLAN membership as 'T' on these ports for all VLANs except VLAN 1 - it always stays as 'U' on VLAN 1. Any ideas?

    Read the article

  • Cannot SSH anymore, what went wrong?

    - by lbwtz2
    I used to ssh into a remote server (no RSA key, just a password). Now the server does not accept the connection any more and throws this error:
        ssh_exchange_identification: Connection closed by remote host
    While I can google a little to find a fix, I can't figure out what went wrong, since I haven't touched anything on the machine since my last login. Can you help me find the cause? EDIT: Inspecting the logs, I've found these in /var/log/auth.log:
        /var/log/auth.log:Dec 26 16:40:32 vps sshd[15567]: error: fork: Cannot allocate memory
        /var/log/auth.log:Dec 26 16:41:05 vps sshd[15567]: error: fork: Cannot allocate memory
        /var/log/auth.log:Dec 26 16:43:47 vps sshd[15567]: error: fork: Cannot allocate memory
        /var/log/auth.log:Dec 27 03:20:06 vps sshd[15567]: error: fork: Cannot allocate memory
        /var/log/auth.log:Dec 27 16:15:02 vps sshd[15567]: error: fork: Cannot allocate memory
    And in the same time span I've also found a lot of these:
        /var/log/auth.log:Dec 26 13:00:01 vps CRON[1716]: PAM unable to dlopen(/lib/security/pam_unix.so): libcrypt.so.1: cannot map zero-fill pages: Cannot allocate memory
        /var/log/auth.log:Dec 26 13:00:01 vps CRON[1716]: PAM adding faulty module: /lib/security/pam_unix.so
    What are these?

    Read the article
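
    Those fork/dlopen failures point at the host running out of memory rather than at sshd itself; a few things worth checking once you can get in (for example via the provider's console) are sketched below. The beancounters file only exists on OpenVZ-style VPSes.

        free -m                                   # overall memory and swap usage
        dmesg | grep -iE 'out of memory|oom' | tail
        # On OpenVZ containers a non-zero failcnt means a resource limit was hit
        grep -E 'privvmpages|failcnt' /proc/user_beancounters 2>/dev/null
        ps aux --sort=-rss | head                 # biggest memory consumers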

  • Delegation Permissions to admins in Active Directory/Taskpads

    - by user1569537
    I am trying to provide taskpads to a few admins so they can carry out a few tasks delegated to them at the OU level. I ran into the following problem: let's say I delegated access to an admin on OU X - specifically, the ability to modify groups such as a sample group X1 - so he can add any users from OU X to the group X1. The issue is that while testing I found the admin can do the above, but can also add a user Y1 from OU Y (for which he does not have delegated permissions) to the group X1. What am I missing? How do I restrict the admin from adding users outside the OU to the groups he has modify access to? Please ask me if any more details/clarification are required.

    Read the article

  • Apache user owns git project root, with git-http-backend setup, but still having permissions problems

    - by Luke
    I've set up git-http-backend on my VPS (CentOS), under one of its users. The apache user owns the git project root directory, /home/theuser/git/, as below:
        drwxrwxr-x 3 apache apache
    The apache user also owns everything inside that directory. But I'm still getting the following error in git when trying to push:
        error: unpack failed: unpack-objects abnormal exit
    The Apache error log shows the following error:
        error: insufficient permission for adding an object to repository database ./objects
    I've tried every combination of user permissions and enabled read/write access, but I'm not getting anywhere. Should the git user own this folder? Can someone explain exactly what user should own this folder, or what steps I might take to fix this problem?

    Read the article
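
    A couple of checks, assuming /home/theuser/git is the repository whose ./objects the error refers to: the apache user needs execute (traverse) permission on every parent directory as well as write permission inside objects/, and on CentOS SELinux can deny the write even when ownership looks right.

        # Can the apache user actually reach and write the object store?
        ls -ld /home /home/theuser /home/theuser/git /home/theuser/git/objects
        sudo -u apache test -w /home/theuser/git/objects && echo writable || echo 'NOT writable'
        # SELinux status and context of the objects directory
        getenforce
        ls -Zd /home/theuser/git/objects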

  • Upgrading HP DL185 G5 8LFF, is using a Dell J1520 4-Drop SATA Adapter possible?

    - by jpreed00
    The HP DL185 G5 8LFF model supports 8 3.5" drives and 1 optical drive. However, instead of the optical drive, I'd like to have 2x 2.5" drives instead. The problem is that the PSU has no more SATA power cables (even though the motherboard has 4 additional SATA data ports). The PSU does have a free 10-pin connector and it looks like the J1520 cable from Dell would fit the bill. Link to cable description Does anyone have any experience using these cables? Are they safe? Any other ideas for adding the disks to the server if I don't use the cable? Thanks!

    Read the article

  • How to use 2 subnets on one network

    - by BGuy2010
    I have some servers at a colocation. They've given us an IP range, subnet, and gateway. Now we have run out of IPs, and they've given us a new range of IPs but with a different subnet and gateway. We have a Juniper NetScreen firewall and a load balancer, and I am not sure how to proceed in order to be able to use these new IPs that are on a different subnet. Do I need to set up a new VLAN on our firewall? I tried adding one of the new IPs on one of our servers, with the new subnet and gateway. I could ping the alternate gateway, but could not ping the assigned IP from outside or from inside.

    Read the article

  • Searching Netapp Network Share in Windows 7

    - by user121270
    Windows 7 famously does not do what its predecessor, Windows XP, did very well: index and search network drives! Sometimes the logic of MS is absolutely baffling. That said, I am trying to find some solution to the issue, which is made more complicated by the fact that we are using a NetApp FAS 2020 as a CIFS file server. I know some of the solutions to the Windows 7 search index issue revolve around having a Search Service installed on a Windows 2008 server and then adding that server share to the library on the Windows 7 workstation. Is it possible to accomplish this in any way with a CIFS share on a NetApp filer?

    Read the article

  • Proxy Server suggestions

    - by Jon Menefee
    Here is the question I have, which hopefully is not too general. I have a network with approximately 25 PCs, 3 servers and 25 IP cameras. I have a firewall already on the network and it works fine for what I need, but my client is asking me if there is a way to put a proxy server on the network to monitor where his employees are going when they surf the Internet. He does not want to block them (at least not through the proxy server), but he wants to make sure that they aren't going to sites that would compromise the networked PCs. I have looked at TMG and it is a little more than what I want. I hesitate to add another firewall to the system because of the security cameras that are presently on the network (IP cameras). I just want to put a policy in AD that would make certain users (or computers) use a proxy server. Any suggestions on a good proxy server are welcome. Thank you.

    Read the article

  • Configure Postini and emailreg.org

    - by crn
    One of our companies uses Postini as our spam filtering service. Unfortunately, the company has been tagged as a spammer and we're trying to use emailreg.org to whitelist us. Emailreg.org wants us to add a CNAME which points to their domain (emailreg.org), while Postini has us add MX records (such as domainname.s7a1.psmtp.com). Here are my questions:
        1. Can adding emailreg.org's CNAME cause either Postini to stop working or our emails to be lost?
        2. What is the order of execution (does mail go to Postini and from there to emailreg.org, or the other way around)?
        3. Is there anything I should be aware of when using such a setup?
    Thanks, in advance, for all your help!

    Read the article

  • How to reset Chrome's search engines to default?

    - by AndreKR
    I accidentally deleted Google as the default search engine from Chrome. This also caused the "Search Google for this image" item in the context menu of images to disappear. I tried to add it back by adding a search engine with these settings, which I copied from another machine:
        Name: Google
        Keyword: google.com
        URL: {google:baseURL}search?q=%s&{google:RLZ}{google:originalQueryForSuggestion}{google:assistedQueryStats}{google:searchFieldtrialParameter}{google:searchClient}{google:sourceId}{google:instantExtendedEnabledParameter}{google:omniboxStartMarginParameter}ie={inputEncoding}
    Unfortunately this does not bring back the "Search Google for this image" menu item, so there must be more to this entry than just Name, Keyword and URL. I don't mind deleting all search engines and resetting the list to its default state, but how can I do this?

    Read the article

  • phpmyadmin error #2002 cannot connect to mysql server

    - by Joe
    So I am getting this error when trying to connect to my MySQL server. I have reinstalled MySQL and PHP several times and tried a slew of command-line work from info around the web. MySQL is running, and I know that my mysql.sock exists and is located in ~/private/tmp/ and also in ~/tmp/. I also have plenty of hard drive space. I have installed and set up phpMyAdmin correctly, only adding a password to 'Password for config auth'. AND I have connected to the server via Sequel Pro. So my question is: what the heck is going on that I can't connect to the server via phpMyAdmin? Any guesses? Also, I'm on a 64-bit Intel Mac running Snow Leopard.

    Read the article
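
    Since Sequel Pro connects but phpMyAdmin does not, one likely mismatch is the socket path PHP uses versus where mysqld actually creates mysql.sock. A few checks, with the usual macOS default paths assumed rather than known:

        # Where is the server really listening?
        mysqladmin -u root -p variables | grep -w socket
        ls -l /tmp/mysql.sock /private/tmp/mysql.sock 2>/dev/null
        # Where does PHP expect the socket?
        php -i | grep -iE 'mysql.default_socket|mysqli.default_socket|pdo_mysql.default_socket'

    Another common workaround is pointing phpMyAdmin at 127.0.0.1 (TCP) instead of localhost (socket) via $cfg['Servers'][$i]['host'] in config.inc.php.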

  • Installing DB2 in Windows 2003 Cluster

    - by radoslawc
    How can I install DB2 with an already created domain user? Or maybe there is no need - just install with the local user db2admin and then create the instance using the existing domain user? This is how it looks: I've got a Windows 2003 single-quorum cluster, and I have to move a DB2 database to another cluster in the same domain. I've added resources to the new cluster as described in the IBM Redbook, and I have a problem adding the DB2 resource. I've got a service account DOMAIN\serviceaccount and DOMAIN\db2_instance_owner; to install DB2, do I have to log in as db2_instance_owner? Thanks in advance.

    Read the article

  • Interested in scp recipe for sftp [closed]

    - by GJZ
    You wrote the following in a reply:
        The problem is that sftp runs as the user's id -- first, the sftp client ssh's into the target host as the given user, then runs sftp-server. Since sftp-server is running as a regular user, it has no way to "give away" a file (change owner of a file). However, if you are able to use scp, and assign a key pair to each user, you can get around this. This involves adding a user's key to root's ~/.ssh/authorized_keys file, with a "command=" parameter to force it to run a script that sanitizes and alters the arguments of the server-side scp program. I've used this technique before to set up an anonymous scp dropbox that allowed anyone to submit a file, and ensure that no one could retrieve submitted files and also prevent overwrites. If you are open to this technique, let me know and I'll update this post with a quick recipe.
    We are interested in this quick scp recipe for our community services' file sharing. Best Regards, Gert Jan Zeilstra

    Read the article
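
    The recipe being described boils down to a forced command in root's ~/.ssh/authorized_keys plus a small wrapper that inspects SSH_ORIGINAL_COMMAND. The sketch below is hypothetical: the wrapper path, the /incoming directory and the exact argument strings scp sends are assumptions, so match them against what your clients actually issue.

        # One line in ~/.ssh/authorized_keys, forcing every login with this key through the wrapper:
        #   command="/usr/local/bin/scp-dropbox",no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-rsa AAAA... user@host

        #!/bin/sh
        # /usr/local/bin/scp-dropbox (sketch): allow uploads into /incoming only, never downloads
        case "$SSH_ORIGINAL_COMMAND" in
            "scp -t /incoming"|"scp -t /incoming/"*)
                exec $SSH_ORIGINAL_COMMAND    # hand off to the real server-side scp (args split on purpose)
                ;;
            *)
                echo "only uploads to /incoming are allowed" >&2
                exit 1
                ;;
        esac

    A production version would also reject paths containing .. and extra options, and could refuse to overwrite existing files.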
