
  • How can I use mod_rewrite to redirect multiple specific URLs containing query strings?

    - by Derek
    Hi there folks, we recently migrated a site from a custom CMS to Drupal. In an effort to preserve links that our users bookmarked (we have about 120 redirects), we would like to forward each original URL to its new URL. I have been searching for a couple of days, but can't seem to find anything similar to what I need. We have existing URLs that contain one or more query-string parameters, for example:

        /article.php?issue_id=12&article_id=275

    and we would like to forward to the new location:

        http://foobar.edu/content/super-happy-fun-article

    I started with:

        RewriteEngine On
        RewriteRule ^/article\.php?issue_id=12&article_id=275$ http://foobar.edu/content/super-happy-fun-article [R=301,L]

    This, however, does not work. A simple RewriteRule works:

        RewriteRule ^test\.php$ index.php

    It is unclear to me how I need to use %{QUERY_STRING} with multiple parameters. Basically it's 120 simple redirects that go from one existing URL to a new one. I don't need ranges like [0-9], because there is no sequential order to the existing URLs. Perhaps I can do what I need with RewriteMap and a simple text file that contains lines like this:

        index.php?issue_id=12&articleType_section=0&articleType_id=65 http://foobar.edu/category/fall-2008

    If anyone has any idea on using mod_rewrite to accomplish this, or if there is a better or simpler mod, I am open to that as well. Thanks!
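
    For what it's worth, the parameterized RewriteRule above can never match: mod_rewrite patterns see only the URL path, and the query string is not part of it, which is why the simple rule works and the other one doesn't. Below is a minimal sketch of the RewriteMap idea described above; the map name and file path are placeholders, and RewriteMap must be declared in the server or virtual-host config, not in .htaccess:

        # httpd.conf / vhost scope
        RewriteEngine On
        RewriteMap legacy txt:/etc/apache2/legacy-redirects.txt

        # Look up "<path>?<query string>" in the map; an empty result means no mapping
        RewriteCond ${legacy:%{REQUEST_URI}?%{QUERY_STRING}} ^(https?://.+)$
        RewriteRule ^ %1? [R=301,L]

    The map file holds one "old new" pair per line, and the trailing ? in the substitution drops the old query string (keys are matched verbatim, so the parameter order must match what browsers actually send):

        /article.php?issue_id=12&article_id=275 http://foobar.edu/content/super-happy-fun-article
        /index.php?issue_id=12&articleType_section=0&articleType_id=65 http://foobar.edu/category/fall-2008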

  • linux shutdown hang with wifi cifs mounts

    - by Sirex
    Since Fedora 15 (and now with 16) it seems that wireless clients take a long while to shut down when they have network filesystems mounted at shutdown time. I've pushed out a cifs mount via puppet, and all clients have it, including those on wireless. If, say, a laptop is on a wired connection it shuts down just fine, but if it's on the wifi at the time (and has no wired connection) it'll hang at the Fedora "f" logo. I'm not sure if it hangs indefinitely or just for a really long while, but I'll give it a test when I shut this machine down in a second. Needless to say it's pretty annoying, so is there a way of causing the machine to shut down even if network connectivity has been lost at unmount time -- or an official way to reorder events so the wireless card is kept up until after the unmount happens during the shutdown process (short of writing a custom script for shutdowns, which is a bit of a kludge)? It does this on multiple machines, and they all started doing it when we went from Fedora 14 to 15. It was such an obvious issue I'd kind of assumed someone must have reported it or there was an easy fix, but I've not discovered anything yet.

    Additional info: I can confirm that manually unmounting the mounts and then shutting down (sudo shutdown or the Xfce shutdown button) works just fine; it only hangs if the mounts are still mounted. The puppet config that sets up the mount looks like this (now with the _netdev option, which is indeed pushed to clients successfully but makes no difference):

        file { "/mnt/share":
          ensure => directory,
        }

        mount { "/mnt/share":
          atboot   => true,
          ensure   => mounted,
          remounts => false,
          fstype   => cifs,
          device   => "//srv/share",
          options  => "user,gid=shareusers,uid=${user},file_mode=0700,dir_mode=0700,credentials=/root/.smbcreds,_netdev",
          require  => [ File["/mnt/share"], Group["shareusers"] ],
        }
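
    One workaround, sketched below under the assumption that NetworkManager manages the wifi link: a dispatcher script that lazily detaches all cifs mounts whenever an interface goes down, so the unmount at shutdown never blocks on an unreachable server. The script name is illustrative, and older NetworkManager builds may only report "up"/"down" events:

        #!/bin/sh
        # /etc/NetworkManager/dispatcher.d/30-cifs-umount (hypothetical name)
        # $1 = interface, $2 = event
        if [ "$2" = "down" ]; then
            # -l = lazy: detach now, clean up references once the server is unreachable
            umount -a -t cifs -l
        fi

    The script must be root-owned and executable (chmod +x) or NetworkManager will ignore it.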

  • Size of modules within initrd

    - by LiKao
    I am currently trying to manually replace the kernel within ubuntu-core on an embedded device with a custom kernel. However, when I update the initrd, my initrd becomes much bigger. Here is what I did:

        1. Extract the initrd that was shipped with Ubuntu.
        2. Make a list of all modules within the old initrd.
        3. Get the same modules from the new module directory at /lib/modules/new_kernel_version.
        4. Add these modules to the initrd and remove the old ones.

    If I do this, my initrd becomes quite a bit bigger than the original one, so I checked the individual modules. I picked the btrfs.ko filesystem driver as an example. Comparing these two modules, I noticed the one I had just put into the initrd was much bigger, which caused the difference in size.

        -rw-r--r-- 1 root root 999K Nov 14 15:06 btrfs.ko

    For the btrfs.ko within the shipped initrd.

        -rw-r--r-- 1 root root 7.2M Nov 14 15:08 btrfs.ko

    For the new btrfs.ko. What is different between these two modules? Could this be caused by some faulty setting for the new kernel? When producing the kernel I copied /proc/config.gz and used make oldconfig to update it, so all optimisations should be the same for both kernels. Or is there something else being done to the modules before they are put into the initrd? Maybe there is even some better way to build a new initrd for the new kernel in Ubuntu altogether.

    Update: I also just tested with an initrd which I created from scratch using the mkinitramfs command within Ubuntu, and it has the same size difference that I found for the initrd I manually updated.
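
    A size jump like that (999K vs 7.2M) is usually debug symbols: distribution kernels strip their modules on install, while a plain "make modules_install" does not. A quick check and fix, sketched with an assumed module path:

        # "not stripped" in the output suggests debug info is still embedded
        file /lib/modules/new_kernel_version/kernel/fs/btrfs/btrfs.ko

        # strip in place, or rebuild with stripping done at install time
        strip --strip-debug /lib/modules/new_kernel_version/kernel/fs/btrfs/btrfs.ko
        make INSTALL_MOD_STRIP=1 modules_install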

  • Is it possible to create a live Linux ISO containing a Windows XP virtual machine?

    - by mark
    I would like to have a Linux live system that contains a Windows XP virtual machine. This would be run from a bootable USB flash drive. My attempts so far have been unsuccessful. I created a Lubuntu 12.04 virtual machine with VMware, updated and configured it to my needs, and installed VirtualBox. I then created a Windows XP VM with VirtualBox inside the Lubuntu VM. I tested everything and everything worked, including USB devices. I installed Remastersys in the Lubuntu VM, copied the XP VM folder to the /etc/skel folder, then created the custom ISO with Remastersys. I burned the ISO and tested it on a laptop. It worked flawlessly: all programs and wireless networking worked. My problem was the XP VM. VirtualBox started fine but would not run the VM. I get the following error:

        Result Code: NS_ERROR_FAILURE (0x80004005)
        Component: VirtualBox
        Interface: IVirtualBox {c28be65f-1a8f-43b4-81f1-eb60cb516e66}

    I ran Remastersys again, changing the permissions on the skel folder to read/write for everyone. I also logged into Lubuntu as root and ran Remastersys again. Each ISO I created worked fine but would not start the XP VM inside. On the last attempt VirtualBox gave me an access error stating it cannot access the virtual disk.

    Is what I want to do possible? In theory I don't see why it would not work. Is it a permissions issue? Should I create the ISO and then add the XP VM afterwards by editing the ISO by hand? Is using a VM instead of real hardware as a build machine a problem? Any ideas? Keep any responses in layman's terms; I am still a Linux novice.
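
    One thing worth checking, offered as a guess rather than a known fix: the live system's session user may end up not owning the VM files copied through /etc/skel, so VirtualBox can see the machine but not open its disk. A quick test from a terminal on the live system (folder names assume VirtualBox 4.x defaults):

        ls -l ~/"VirtualBox VMs" ~/.VirtualBox      # who owns the .vdi and the XML registry?
        sudo chown -R "$USER" ~/"VirtualBox VMs" ~/.VirtualBox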

  • Separate Certificate by Subdomain (With multiple IPs)

    - by Brian
    Note: Yes, I realize this problem is easier to solve by just using 1 multi-domain or wildcard certificate.

    I wish to have an ASP.NET site running on IIS with 2 SSL domains sharing 1 web application but using separate certificates. Assuming I have 2 certificates, this can be solved on IIS7 as follows:

        Web Application1:
          Binding 1: http, 80, IP Address *, Host Name *
          Binding 2: https, 443, IPADDRESS1, using CERTDOMAIN1 (DOMAIN1 resolves to IPADDRESS1)
          Binding 3: https, 443, IPADDRESS2, using CERTDOMAIN2 (DOMAIN2 resolves to IPADDRESS2)

    That is to say, 2 certificates and 2 IP addresses, both mapped to the same web application. In IIS6, the closest I have been able to come to this configuration is:

        Web Application1:
          Binding 1: http, 80, IPADDRESS1
          Binding 2: https, 443, IPADDRESS1, using CERTDOMAIN1 (DOMAIN1 resolves to IPADDRESS1)

        Web Application2:
          Binding 1: http, 80, IPADDRESS2
          Binding 2: https, 443, IPADDRESS2, using CERTDOMAIN2 (DOMAIN2 resolves to IPADDRESS2)

    That is to say, 2 certificates and 2 IP addresses, 2 web applications, both mapped to the same file location. The IIS6 solution is not optimal. Even if they share an application pool, there are still costs associated with running the same site as two applications.

    Is upgrading from IIS6 to IIS7 a legitimate way to resolve this problem? Is there an IIS6 way to map 2 IP addresses within the same web application to different certificates?
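
    For reference, a sketch of how the two-site IIS6 workaround is usually bound from the command line; site IDs 1 and 2 and the IP addresses are placeholders, and each certificate must already be installed on its site:

        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs set /w3svc/1/SecureBindings "10.0.0.1:443:"
        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs set /w3svc/2/SecureBindings "10.0.0.2:443:"

    IIS6 ties one server certificate to one site, which is why the single-application layout possible in IIS7 has no direct equivalent there.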

  • Server Intermittently Inaccessible Externally (but Accessible Internally Continuously)

    - by nicorellius
    I have a CRM on a server on a network. We have a static IP and another, outward-facing server. We use port forwarding to map to the CRM, so that when you go to the IP or the FQDN, you get to the CRM:

        xxx.xxx.xxx.xxx
        crm.example.com

    Internally, we can access the CRM by going to crm or crm.example.com. Lately, I've been noticing that accessing the server from outside the network times out or gives a 503 / bad gateway. During that time, I can still SSH (different port, so this works) into the outward-facing computer and access the server just fine. I have a robot monitoring the site, and indeed via HTTP monitoring the site is going down periodically. I looked through the Apache access and error logs and nothing stuck out at me, so I'm a bit confused as to what could be going on. I also searched the access logs for 503 and found nothing. When I run tracert from outside the network, the packets basically make it through the wide-area servers (Comcast city and county servers) and end up dropping at the CRM server's front step. I'm tempted to replace the server because it is older and underpowered, but it would be nice to know what is going on. Any ideas what to do next?
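
    One observation that may narrow it down: a plain NAT port-forward cannot synthesize a 503 on its own, so that response is coming from something HTTP-aware, either the outward-facing box or the CRM's own Apache, and that machine's logs are the ones to check during an outage. A sketch of what to capture while the monitor is reporting the site down (hostname is a placeholder):

        # status code plus where the time goes (connect vs. total)
        curl -sv -o /dev/null \
             -w 'code=%{http_code} connect=%{time_connect}s total=%{time_total}s\n' \
             http://crm.example.com/

        # on the CRM host: are sockets piling up / is the listen queue overflowing?
        ss -s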

  • What kind of hosting do I need? [closed]

    - by Robert Smith
    I have been trying to answer this question but I haven't found a specific answer to my situation. As I want to pay for only what I need, I thought I could get a good answer here. I have a custom-made forum (rather than a built-in forum like the ones you can find as plugins, e.g. WP-Forum, or phpBB-type software) written in Django. I don't want to use Apache and mod_wsgi because it's usually very memory-hungry and I can't afford a big server. I prefer a combination of nginx and gunicorn, which I think is very efficient (maybe you can also tell me what you think about that). I'm expecting to receive 10,000 to 20,000 visits each month with 15,000 to 30,000 page impressions. I have reviewed some cloud services like Amazon EC2 or Rackspace and other more traditional services (Linode). This site won't use videos or big images and I certainly don't need a huge amount of bandwidth (200GB would definitely be too much). I need shell access, so shared hosting is out of the question.

    What do I need to run a website like that without problems? What about RAM? Would 256MB be enough (that's the amount of RAM offered by small instances on Amazon and Rackspace)? Do you know of any alternatives to those I mentioned? If you need more information to provide a useful answer, please don't hesitate to ask. Thanks a lot.
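
    At that traffic level the stack described above fits comfortably on a small VPS. For scale, a minimal sketch of the nginx-to-gunicorn wiring (all names, paths, ports, and worker counts are assumptions to adapt):

        server {
            listen 80;
            server_name forum.example.com;

            # let nginx serve static files so the Python workers never see them
            location /static/ { alias /srv/forum/static/; }

            location / {
                proxy_pass http://127.0.0.1:8000;   # gunicorn bound locally
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    with gunicorn started by something like: gunicorn --workers 2 --bind 127.0.0.1:8000 forum.wsgi:application. Two or three workers on 256MB is workable for this load; the database usually becomes the limit first.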

  • PXELinux and compressed kernels/images

    - by Yvan JANSSENS
    Is it possible to boot compressed kernels with a compressed initrd with PXELinux?

    First, a little background: we created a custom Linux distro for diskless OpenCL computing nodes, and we want those nodes to fetch their OS from the network. Our distro is composed of a kernel (duh) and a large initrd which is loaded into RAM, and everything is executed from there. We chose to run everything off the initrd for two reasons:

        - NFS was not an option for serving the filesystem's extra contents
        - Fast file access from RAM. No persistent storage is needed; data and config are pulled dynamically through a SOAP service.

    Now, our initrd is about 450M in size. At our network speeds, it takes about two to three minutes to load a single client. Will compression speed up the downloading, and if yes, which one should be used? Is LZMA supported by PXELinux, or do we need to stick to bzip2 or gzip? Because of the 2-3 minute loading time, booting 15 nodes over the same network link takes quite a lot of time. We decided not to use hard drives or CD/DVD drives for financial reasons (cheapest HDD @ €30, times 15, is a lot of money saved ;-) ).

    So, our question is: what compression options are available for this setup? And how do we do this? Thank you for your time!

    Yvan Janssens
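
    Worth noting: PXELINUX itself never unpacks the initrd; it only copies it into memory, and the kernel does the decompressing. So the usable formats are whichever CONFIG_RD_* decompressors are built into the kernel (gzip, bzip2, LZMA, and on recent kernels XZ and LZO). A sketch of repacking an existing gzip initrd as XZ, usually the best ratio:

        # the kernel's XZ decompressor requires the crc32 check flag
        zcat initrd.img | xz -9 --check=crc32 > initrd.img.xz

    then point the APPEND initrd= line at the new file. Compression shrinks the TFTP/network transfer but adds decompression time on each node, so it pays off exactly when the network is the bottleneck, as it sounds like here.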

  • How do I speed up and cache mmap file access over NFS on Linux?

    - by Zan Lynx
    The server and client are both 64-bit Ubuntu 10.04 LTS. The application in question is a custom app that uses mmap() for fast random file access. Its ideal state is when the entire file is cached in RAM. The network connections are really fast 10Gb Ethernet. It is a virtual server blade setup. It isn't the network connections slowing things down, because everything performs superbly when using a virtual disk (iSCSI to the SAN). But when we run the application on an NFS home directory mount, performance goes to the dogs. It appears that the Linux kernel isn't caching anything, so it is reading every single disk block needed by mmap() accesses over and over and over again. The NFS mount is done through autofs, which has only default settings. /proc/mounts shows it is mounted with the following options:

        rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.11.52,mountvers=3,mountproto=tcp,addr=192.168.11.52

    How can I make Ubuntu 10.04 cache the file instead of reloading it all the time?
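
    Two things that may help confirm and work around the cache misses, sketched with placeholder paths (vmtouch is a third-party tool, not in the base install):

        # how much of the file is currently resident in the page cache?
        vmtouch /home/user/data.bin

        # force it resident, and optionally lock it there (locking needs privileges)
        vmtouch -t /home/user/data.bin
        vmtouch -l /home/user/data.bin

        # crude alternative preload with no extra tooling
        dd if=/home/user/data.bin of=/dev/null bs=1M

    If the pages are resident but the client still goes to the wire, the usual suspect on NFS is attribute revalidation invalidating cached pages; experimenting with the actimeo= mount option is a reasonable next step.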

  • Why is my computer randomly restarting? How can I fix it?

    - by kinglime
    I have a custom-built desktop computer that I've been using for about a year. The main specs are:

        ASUS P8Z68-V LE motherboard
        Intel Core i7 2600K
        Corsair Vengeance 16GB RAM
        ASUS ENGTX570 GPU
        Corsair TX650M PSU

    I was running an overclock of 4.4GHz on my CPU (I have a Hyper 212 EVO) and 1600MHz on my RAM, but currently have it turned off due to my issues. I am also currently running Windows 8, but this problem occurred in Windows 7 too. Basically my issue is that, seemingly randomly and with no pattern, my PC will reset itself and ASUS Anti-Surge will alert me that something went wrong. This issue is not related to system stress: I can run it fine for an hour maxed out on Prime95, then later I can be watching a mere YouTube video when it will randomly reset. This has been occurring for about the last two weeks and it seems to be getting worse. I believe it might be related to the power supply, but when I monitor it in the BIOS and in Windows it appears to be putting out the proper voltages. Also, possibly related or not, my Nvidia drivers frequently fail temporarily and then warn me of some kind of kernel error. If I have to buy a new power supply that is what I'll do, but I want to make damn sure that's the only issue at hand. Thank you everyone in advance; please help me diagnose the issue and tell me what I can do to fix it. If you need any additional info about my setup please ask me.

  • How is the Linux console displayed to the user, and how does the user go about changing the console settings?

    - by Chris
    I've been searching for the last two days trying to understand how the console displays itself to the user and how to change the console settings. I've had some luck along the way, but nothing I've found has given me a really clear explanation of how the console is displayed or how to change or control its display settings. Some examples of what I'm looking for are as follows:

    How is the console displayed on the screen? I know that with X11 it uses your graphics card driver to display graphics to the screen, but how is the console's text mode handled? Could someone either explain this to me or point me to an in-depth overview of it all?

    Is it possible to have multi-head support in console mode, with separate ttys on each screen? If so, how would I go about setting this up?

    How would you go about changing the size of the console display from the default 80x25 to a custom size?

    I'm testing anything I find on a Debian testing build, which is just the minimal base install in a VirtualBox VM. In time I will be using this information to set up my main system, which is multi-head with 3 monitors. I would like to be able to support all three displays in console mode if possible.
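
    On the size question, a few concrete knobs, offered as starting points rather than a complete answer: the text console is drawn either in the card's VGA text mode or, more commonly now, on a kernel framebuffer, and the mode is chosen at boot. Examples (kernel command-line parameters plus userspace tools; the exact modes depend on the driver):

        # legacy VESA framebuffer: 791 = 1024x768, 16-bit
        vga=791

        # KMS-era drivers take a mode directly
        video=1024x768

        # inspect / change the active framebuffer geometry from userspace
        fbset -i
        fbset -xres 1280 -yres 800

    Multi-head text consoles map VTs onto framebuffers (con2fbmap, where fbcon supports it), but support is very driver-dependent, which is likely why clear documentation is hard to find.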

  • (updated) Subfolder needs whitelist and standard redirect for all others

    - by Superstrong
    How can I allow access to the foo.html files in the .com/song/private/ subfolder for:

        - a logged-in Wordpress user; or
        - any referral domains (including subfolders) I add; or
        - any URL on our own domain from the com/song/private folder?

    For all others, the user should be redirected to the corresponding public version of the post, which has the same HTML filename and is structured .com/song/foo.html. (The private version uses a different template with different custom fields for each post.)

    Update: here's what I have so far:

        <Limit GET POST>
          order deny,allow
          deny from all
          allow from domain.com/song/private
          allow from otherdomain.com
        </Limit>
        RewriteRule ^(.*)$ ../$ [NC,L]

    More: will that last rewrite rule take people back to the public version, from com/song/private/foo.html to com/song/foo.html? I found the following rule for detecting Wordpress logged-in status, but what do I put afterward with a RewriteRule, and will it work anyway? (If not, is there an alternative?)

        RewriteCond %{HTTP_COOKIE} !^.*wordpress_logged_in_.*$

    N.B. I have added code to my root .htaccess allowing me to insert additional .htaccess files in other subfolders as needed. Copied from Stack Overflow, where they suggested I ask here.
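
    Two notes on the attempt above, plus a sketch. First, order/deny/allow directives match the client's IP address or hostname, never the referring page, so "allow from domain.com/song/private" cannot do what's intended here. Second, referrer and cookie checks can both be faked, so treat this as soft protection only. A minimal sketch for a .htaccess inside /song/private/ (domain names are placeholders):

        RewriteEngine On
        # let logged-in Wordpress users through
        RewriteCond %{HTTP_COOKIE} !wordpress_logged_in_ [NC]
        # let whitelisted referrers through (one line per allowed domain)
        RewriteCond %{HTTP_REFERER} !^https?://(www\.)?otherdomain\.com [NC]
        RewriteCond %{HTTP_REFERER} !^https?://(www\.)?domain\.com/song/private [NC]
        # everyone else goes to the public copy of the same file
        RewriteRule ^(.+\.html)$ /song/$1 [R=302,L]

    In per-directory context the pattern sees the path relative to that folder, so foo.html redirects to /song/foo.html, which answers the "back to the public version" question as well.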

  • Window too big to fit the screen!

    - by syockit
    I'm using Windows 7 on an 8.9" monitor with a 1280x768 screen resolution. Using the might of arithmetic, I'm able to determine that my dpi (actually ppi) should be 167. Win7 is really helpful in that it doesn't have to restart to apply new dpi settings, unlike its predecessors (though I'd rather it applied straight away). The problem with small monitors in Windows is that when you come across a window too big to fit the screen, you can't move the title bar far above it. In the X window managers I used in the past, you could alt-drag a window anywhere you wanted, but in Windows, even if you alt-space and select Move, it will automatically push the window back until the title bar is visible. I'm looking for a solution that either:

        - allows me to move windows freely without regard to title-bar visibility, or
        - attaches a scrollbar to an existing window, or
        - EDIT: creates virtual desktops that allow me to span windows over 2 desktops, or
        - EDIT 2: allows me to set a larger virtual resolution, then pan & scan.

    EDIT 3: I found some programs that might do some of the above:

        1) AltDrag allows me to drag and resize using Alt and the left/right mouse buttons. Neat! Best solution so far.
        2) GiMeSpace Desktop Extender is supposed to allow me to scroll the desktop. Didn't work. The other new version, GiMeSpace Ultimate Taskbar, worked, but it destroys my Superbar, replacing it with its map.

  • PC in POST loop

    - by Antony Scott
    Hi, I have a custom-built PC using a Gigabyte GA-EP35-DS3P motherboard with a Q6600 CPU. For the last 2 days it has been getting stuck in a POST loop. That said, I don't think it actually got into the BIOS. It repeatedly lit up the LEDs and then did not do much more; sometimes I could see the CPU fan twitch. Today I re-seated the DIMMs and it powered up straight away. Could this be a sign of an impending hardware failure? The PC is hooked up to a UPS, so I don't think it's a power spike or anything like that, as I have 2 other PCs on the same UPS and they're both fine. Yesterday, the first time this happened, I was getting a message which I think said "Scanning BIOS image on hard drive". I've been building and using PCs for well over 25 years and that's a new one on me! I don't think it's an overheating problem, as when the PC does finally boot up the CPU is running at 35-40C. Any help or suggestions would be greatly appreciated.

  • Follow through - How to setup equivalent USVIDEO.ORG DNS-Proxy on Linux

    - by DNSDC
    I'm quite keen to set up a similar service (but FREE), and it seems you know how to do this:

        "you need to run your own private dns with artificial records for example pandora.com you also need a real dns to fall back on. now that all requests for these sites are going to your US located box you can open up port 80 on squid and listen for the traffic. your cache_peer settings should allow you to map each domain to their real ip. The trafic now flows initially from your US located box to the service but then the server responds it responds directly to the host. no magic here. I won't share the fine details as it probably best serves all to not over exploit this."

    Did you mean we need to:

        1. Set up a forward-only DNS on a US-based server/IP?
        2. Set up cache_peer and cache_peer_domain in Squid? (I got this.)
        3. Are any iptables rules, prerouting, or postrouting rules needed to accomplish this?

    Appreciate your expert advice. Cheers, Don
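
    A sketch of the two halves as I read them; every address below is a placeholder (203.0.113.10 standing in for the US box's public IP, 198.51.100.25 for a real service IP you must look up yourself):

        # dnsmasq on the US box: answer only for the whitelisted names,
        # forward everything else to a real resolver
        address=/pandora.com/203.0.113.10
        server=8.8.8.8

        # squid on the same box, acting as an accelerator on port 80
        http_port 80 accel vhost
        cache_peer 198.51.100.25 parent 80 0 no-query originserver name=pandora
        cache_peer_domain pandora .pandora.com

    No iptables prerouting tricks are strictly required for plain HTTP in this shape: clients connect to port 80 on the US box because DNS pointed them there, and Squid forwards to the origin. HTTPS is another matter, since Squid cannot transparently proxy TLS for a name it holds no certificate for.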

  • NTbackup doesn't complete on system state

    - by Joe Majsterski
    I have a Windows 2003 server that runs a semi-custom backup task. The scheduled task calls NTbackup with a few switches depending on whether it is a full or incremental backup. Most of the time, NTbackup completes fine, and the wrapper then appends the NTbackup log to its own log before adding a few final comments and completing. The problem I am having is that sometimes NTbackup seems to just... blank out. It always completes the backup of the C: and E: drives, but then it will start the system state and not add any more messages to the event log saying it completed that. And the NTbackup log is left empty, since it doesn't write anything to the log until all the backup tasks are complete. This causes the wrapper to append no text to its own log, which causes problems for us because we read the information out of that log to determine whether backups are failing. The wrapper task also reports in the event log that it is completing normally. Has anyone ever seen a case where the system state doesn't complete consistently? To be clear, the server is not logging any error messages anywhere; it just doesn't seem to complete or log anything.
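
    To separate the system-state step from the rest of the job, it can be run on its own from a command prompt; a sketch with placeholder job and file names:

        ntbackup backup systemstate /J "SystemStateOnly" /F "D:\Backups\systemstate.bkf"

    If that hangs the same way in isolation, the usual suspect on 2003 is a wedged VSS writer (vssadmin list writers will show any in a failed state) rather than the wrapper or its switches.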

  • Does anyone know where I could find a 2 input USB voltage meter?

    - by John O
    What we really need is a tiny UPS, of sorts. We'll be hooking up a solar cell and a battery to a single-board computer. Currently, that SBC is a custom PIC32 device, and it does its own UPS and voltage-monitoring duties. I've been tasked with trying to replicate all of its features with off-the-shelf products, and for the most part I've succeeded. But I don't currently have any way to switch between two sources of juice, or to monitor when they're getting low. These guys have something:

        http://www.mini-box.com/picoUPS-100-12V-DC-micro-UPS-system-battery-backup-system

    I really like it, and the price is well within the budget. We might even work it in, though it does 12V and I'll probably be using 5V; there are enough engineers on hand to figure something out. But I'd still have no idea what the voltage was for the PV or the battery. I was hoping there was some simple little USB multimeter thing that I could use to monitor this with, but I can't seem to come up with anything. I've found all sorts of cool hardware, but nothing that will help us. Does anyone know of anything?

  • How to use LVM on Rackspace Cloud

    - by batrick
    Dear all, I am trying to set up a simple but effective solution to back up my Rackspace cloud servers. These servers each run Subversion, Trac, and some database-backed custom PHP applications. My idea is to set up LVM and mount a volume under, say, /srv. In this volume, I keep the data from all applications. Instead of caring about how to back up each app in a different way (svn hotcopy, trac-admin hotcopy, huge mess for MySQL), I simply take an LVM snapshot and back this one up to Cloud Files using the excellent cloudcity script (http://github.com/jspringman/cloudcity/blob/master/cloudcity). The advantage of this solution is that it is quick and easy, and LVM allows making decent backups. As more apps are added, the backup script should not need to change much.

    The downside, and the main point of my question here, is that I am not sure how to get LVM working on the Rackspace cloud, because there is only one root volume and no service like Amazon's EBS. I was thinking it may be possible to create a large empty file and use this as a "physical volume". Has anybody done anything like this before? Or do you know why it can never work? It would be great to hear from you. Thanks, batrick
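
    The file-backed physical volume does work, via a loop device; a sketch (sizes and names are placeholders, and the loop device must be re-attached on every boot, e.g. from rc.local, before the volume group can activate):

        dd if=/dev/zero of=/srv.img bs=1M count=20480   # 20 GB backing file
        losetup /dev/loop0 /srv.img
        pvcreate /dev/loop0
        vgcreate vg_srv /dev/loop0
        lvcreate -n srv -L 16G vg_srv                   # leave free space for snapshots
        mkfs.ext4 /dev/vg_srv/srv
        mount /dev/vg_srv/srv /srv

    The unallocated space left in the VG is what snapshots draw from, so size it to cover the churn expected during a backup window. The cost is an extra layer of I/O indirection on an already-virtualized disk.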

  • Make a drive from one machine appear as a physical disk in another machine.

    - by Roberto Sebestyen
    I want to take a physical disk (or part of a disk) in one machine (call it machine-A) and make it available in another machine (machine-B). But I don't want to map a network drive: I want it to appear in machine-B as a physical drive, even though it is not one. The reason I want to do this is that I want the ability to create shares on that drive in machine-B. Since I cannot do that on mapped drives, I need some utility that fools machine-B into thinking it is a physical drive and treating it as such. Both of these machines are Windows Server 2003. I heard about NFS; it sounds like it could be the solution to my problem, but isn't that a Linux/Unix protocol? What tools can I use to make this happen? Are there any open-source solutions? I don't care what the solution is, as long as it achieves the end result, preferably an open-source solution though. Thanks for reading, guys and gals!

  • SMF restarting service whenever there's output?

    - by Phillip Oldham
    I'm trying to add a custom service to SMF's configuration, which seems successful in that the service starts and there is a log file, but therein lies the problem: the service, on startup, prints some logging messages to stderr. It seems that SMF is seeing those messages and, believing them to be errors, restarts the service, giving up after a number of tries and leaving the service off. Here's part of the log output:

        [ Mar 30 14:59:54 Enabled. ]
        [ Mar 30 14:59:54 Executing start method ("java server.CustomServer"). ]
        Starting server...
        [ Mar 30 15:00:04 Method or service exit timed out. Killing contract 107. ]

    Running the server directly on the command line is fine, and as far as I can see there are no errors being encountered during startup, other than the output. What would be the best way to manage this service with SMF? The logging is needed for diagnosing problems and would be problematic to disable. Is it possible to configure this service to only restart if the service exits?
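
    A reading of that log, for what it's worth: SMF isn't reacting to the stderr output at all; "Method or service exit timed out" means the start method never returned. Under the default contract model, "java server.CustomServer" is expected to daemonize and exit, and since it stays in the foreground, SMF kills it when the timeout expires. A hedged fix, assuming the service is named customserver: tell SMF the start method is the service itself (the "child" model):

        svccfg -s customserver setprop startd/duration = astring: child
        svcadm refresh customserver
        svcadm enable customserver

    With duration = child, stdout/stderr keep flowing to the service log, and a restart happens only when the process actually exits.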

  • Vim move cursor one character in insert mode without arrow keys

    - by bolov
    This might seem a little over the top, but I switched to vim and I'm so happy about the workflow now. I try to discipline myself not to use the arrow keys, as keeping the hands on the alpha keys all the time is such a big thing when writing. So when I need to navigate, I get out of insert mode, move in normal mode, and get back into insert mode. There is an exception where this is actually more disruptive: I use clang_complete with snippets and supertab, which is great, except that every time I get a function auto-completed, after I fill in the parameters I am left with the cursor before the closing ), so to continue I have to move the cursor one character to the right. As you can imagine, this happens very often. The only options I have (as far as I know) are Esc l a, or the right-arrow key, and I am not happy about either of them. The first makes me hit 3 keys for just a simple one-character cursor move; the second makes me move my hand to the arrow keys. A third option would be to map CTRL-L or something to a one-character move right. So what is the best way of doing this?

        // snippets (clang_complete + supertab):
        foo($`param1`, $`param2`)

        // after completion (| marks the cursor):
        foo(var1, var2|)
        //             ^-- I am here; I need to be one character right, past the )
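
    The mapping hinted at above is one line of vimscript, a minimal sketch for the vimrc:

        " move right one character without leaving insert mode
        inoremap <C-l> <Right>

    Insert mode also has the built-in CTRL-O, which runs a single normal-mode command and drops straight back into insert mode, so pressing CTRL-O then l does the same move with no mapping at all; it is still three keystrokes, though, which is why the dedicated mapping tends to win for this case.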

  • How to make a redundant desktop system with daily snapshots? (Is btrfs ready for use?)

    - by TestUser16418
    I want to configure a desktop system in which the home filesystem would be redundant (e.g. RAID-1) and would have weekly snapshots taken. I've already done this with ZFS; the snapshot system is wonderful, and with send/recv you can easily create backups on external media. Unfortunately, at this point I want GNU+Linux and not FreeBSD or Solaris, so I'm looking for suggestions for good alternatives. I reckon that my alternatives are:

        - btrfs: it seems to be exactly what I need; it has snapshots and commands that allow you to easily replicate zfs send. Yet all documentation mentions that it's still experimental. I can't seem to find any actual reports on its reliability or usability issues. Can you point me to any information that could clarify whether it would be a possible choice? I have a strong preference for this option, mostly because I don't want to reformat the drives when btrfs becomes ready, but there's no information on whether it's usable at all, whether it's a silly idea to use it, etc. The question I cannot get an answer to is what "experimental" means.

        - LVM snapshots and ext4: preferably not, since it can consume an awful amount of space when new files are created. Creating 200 GB of files requires 200 GB of free space plus 200 GB additionally for snapshots. I have also found it unreliable: a failed metadata rewrite results in an unreadable PV. I'm wondering how btrfs would compare here.

        - A single filesystem (ext4) on a RAID-1 array with custom COW snapshots using hardlinks (like cp -al). That's my current preference if I can't use btrfs.

    So how experimental is btrfs, which should I choose, and do I have any other options? What if I don't keep external incremental backups, would that affect my choice?
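
    For concreteness, the btrfs workflow being considered looks like this (a sketch assuming /home is its own subvolume; send/receive requires a read-only snapshot and a reasonably recent kernel):

        # weekly snapshot, cheap and instant
        btrfs subvolume snapshot -r /home /home/.snapshots/home-$(date +%F)

        # zfs-send-style replication to external media
        btrfs send /home/.snapshots/home-2012-01-01 | btrfs receive /mnt/backup

    The LVM space math above is also why people avoid it for this use: a btrfs or ZFS snapshot grows only with subsequent changes, while an LVM snapshot needs its copy-on-write area allocated up front.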

  • VirtualName-based local development host behind corporate proxy (MAMP)

    - by geerlingguy
    I am behind a corporate proxy server/firewall, and this firewall seems not to be too happy with my idea of local development. On my home computer (Mac/Leopard), I have MAMP running, with a rule in /etc/hosts that directs dev.example.com to 127.0.0.1, and I have a virtualhost set up in the httpd.conf file, which works great for me. However, at work I set up the exact same configuration but am not able to access dev.example.com, likely due to some address/DNS translation going on via the proxy server. Here are the relevant details from Terminal:

        $ ping dev.example.com
        PING dev.example.com (127.0.0.1): 56 data bytes
        64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.025 ms

        $ host dev.example.com
        Host dev.example.com not found: 3(NXDOMAIN)

    I've tried adding dev.example.com to the list of bypass addresses in System Preferences (the 'Bypass proxy settings for these Hosts & Domains' list), but that had no effect. Is there any way I can develop locally using name-based hosts at work? I can access localhost, but can't get to dev.example.com (or any other custom virtualhosts) here at work, which complicates other matters related to the sites on which I'm working...
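
    One detail in that Terminal output is actually expected: host (like dig) queries the DNS servers directly and never consults /etc/hosts, so the NXDOMAIN there proves nothing, and the ping shows the hosts entry is working. The likelier culprit is the browser handing the request to the corporate proxy, which then tries (and fails) to resolve dev.example.com publicly. A quick way to test that theory from Terminal:

        # curl only honors a proxy via the http_proxy variable, so clearing it
        # inline exercises the local vhost directly
        http_proxy= curl -v http://dev.example.com/

    If that returns the site, the problem is purely the browser's proxy path, and the fix is getting the bypass list to really apply to that name (some setups need the browser restarted to pick the list up).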

  • Fix elements within objects at their position? (Prevent movement when resizing)

    - by Skadier
    I would like to know if it is possible to fix elements at their absolute position within custom elements in InDesign CS5. I created a kind of speech bubble, and I would like to place a stripline within this bubble to separate two content areas. Just a little scheme to show the desired layout, as pseudo-markup :D

        <speech-bubble>
          <textbox>HEADER SECTION</textbox>
          <stripline>
          <textbox>Some other text</textbox>
        </speech-bubble>

    I created something like this, but with two separate elements which aren't connected, so I have to select both of them in order to move the whole bubble. Then I tried to connect them using Object->Paths->Create linked path, but then the stripline moves and the HEADER SECTION moves too. All in all, I would like to have a speech bubble which can be resized to hold more text, but resizing shouldn't make the HEADER SECTION larger or move the stripline. Hope you understand what I mean :D Thanks in advance!

  • Wildcard subdomain setup... changing the host IP throws off client A records... what to do?

    - by Joe
    Here is the current setup (in a nutshell). The site is set up with a wildcard subdomain, so *.website.com is accessible. Clients can then domain-map their own domains with an A record to the server IP address, and it will translate to the appropriate *.website.com with redirections and env variables in .htaccess. Everything is working perfectly... but now comes the problem. The site has grown larger than a single DQC Xeon server can handle at peak times. Looking at cloud options seems tempting, but clients are pointing their domains to a single IP address with the A record (our server). Now, this was probably bad planning from the start, but the question is: if this were to be done today, how would we set it up so that clients use a CNAME, perhaps, to point their domains to our server rather than an A record? And, if that is not possible for the root domain, how can we then use multiple IP addresses on our side to handle the incoming HTTP requests? Complex enough? Hope I've explained it well!
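
    For reference, the DNS constraint at play, with placeholder names and documentation-range IPs: a CNAME is fine on a subdomain but not at a zone apex, so client root domains can never CNAME to the service. The usual compromise is a CNAME for www and one or more A records at the apex:

        ; in the client's zone
        www.clientdomain.com.  IN  CNAME  edge.website.com.
        clientdomain.com.      IN  A      203.0.113.10
        clientdomain.com.      IN  A      203.0.113.11   ; round-robin across frontends

    Publishing a stable "edge" name lets the service move or add IPs behind it without touching client zones; the apex A records remain the weak point, which is why some DNS hosts offer ANAME/ALIAS-style records that fake a CNAME at the root.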
