Search Results

Search found 44742 results on 1790 pages for 'create'.

Page 542/1790 | < Previous Page | 538 539 540 541 542 543 544 545 546 547 548 549  | Next Page >

  • Improving performance by using an additional static file server

    - by Max
    Hello there, I'm planning a large website that includes many static assets (JS, CSS, images and thumbnails) in the generated pages. The website will use TYPO3 as its CMS (a customer requirement). I guess I could seriously improve performance / page load times by using a two-server setup: one server where the main application (PHP) runs, and another one where the static files sit, served by a trimmed-down version of Apache or something like lighttpd. Including e.g. JS or CSS files from the file server is of course no big deal - just use an absolute URL like http://static.example.com/js/main.js and be done with it.

    But: this website will have pages with MANY thumbnails of e.g. product images on them, so I see two problems when the main application tries to create a thumbnail of some image: the original image, like products/some.jpg, is uploaded to the static file server and is therefore not on the same server as the PHP application that tries to create the thumbnail; and TYPO3 writes created thumbnails to a temp directory which is expected to be on the same server. Therefore, hundreds of thumbnails will be written to and served from that temp directory, which is on the same server as the main application - the static file server is in that case basically useless, since all thumbnails will be requested from the main application's server.

    So, my question is: how do I overcome these shortcomings? Is it possible to "symlink" some directories to another server? For example, if PHP tries to open the original product image for thumbnail creation with imagecreate("products/some.jpg"), could the products folder actually "point" to the products folder on the static image server? I know something like this can be done with .htaccess, but is it possible at the file system level?
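
    For the "symlink to another server" part, the usual file-system-level answer is a network mount rather than a symlink. A rough sketch with NFS (the export path, mount point and network below are assumptions, not taken from the question):

        # on static.example.com, in /etc/exports: export the originals read-only
        /var/www/static/products  192.0.2.0/24(ro,no_subtree_check)

        # on the application server: mount them where TYPO3 expects to find them
        mount -t nfs static.example.com:/var/www/static/products /var/www/typo3/products

    The generated thumbnails would still be written to the local temp directory unless that directory is shared or synced out the same way, so a sketch like this only covers reading the originals.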

  • Adding 2nd DC to the domain from a different subnet over VPN.

    - by EagerToLearn
    I'm in the process of adding a second DC to our domain and just want to make sure I have all the steps right before proceeding.

    Info: DC1 is 2008 R2 Standard. DC2 is 2008 R2 Standard. Network1 is 192.168.39.x/24, Network2 is 10.0.0.x/24. The two DCs will be at two different sites, but the networks are connected by a hardware VPN (SonicWall). The main DC will be on the 192.168.39.0/24 network; the second DC will be on 10.0.0.0/24.

    Here are the steps I plan to take; please let me know if I'm missing anything.

    Part 1: In AD Sites and Services on DC1, create a new site and subnet for DC2. (Or should I create a new one for both?) (Can I use the default IPSiteLink and not change anything in there other than the refresh timer?)

    Part 2: Point the DNS of DC2 to DC1. Run /forestprep and /domainprep (on both, or just DC1?). Run dcpromo and select "Additional Domain Controller for Existing Domain", then continue with the normal steps and default locations for the databases.

    EDIT: When running dcpromo on DC2, do I need to have "Append primary and connection specific DNS" and "Append parent suffixes of the primary DNS suffix" checked?
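
    For reference, a rough sketch of the preparation and promotion commands on 2008 R2 (the media path is an assumption; if the forest was originally created on 2008 R2, adprep is typically not required at all, and it only needs to be run once, on an existing DC, not on both):

        rem on an existing DC, from the Server 2008 R2 media (only needed if the
        rem forest/domain was built with an older Windows Server version)
        D:\support\adprep\adprep.exe /forestprep
        D:\support\adprep\adprep.exe /domainprep /gpprep

        rem on DC2, after pointing its DNS at DC1
        dcpromo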

  • Making Pentaho Reports for OpenERP work on Ubuntu 11.04

    - by Hendri
    Currently I'm trying to install Pentaho Reports for OpenERP, which is referenced from https://github.com/WillowIT/Pentaho-...rver/build.xml. I have installed it before on some Windows-based laptops and it worked, but now I'm trying on Ubuntu 11.04 and it gives me an error like "error build.xml:18: failed to create task or type..". These are the steps I did:

    1. Install java-6-openjdk ("apt-get install java-6-openjdk"), then set the installed JDK as JAVA_HOME: "nano /etc/environment" and add the new line JAVA_HOME="/usr/lib/jvm/java-6-openjdk".
    2. Install Apache Ant ("apt-get install ant"), followed by setting the environment: "nano /etc/environment" and add the new line ANT_HOME="/usr/share/ant". Checking the installation with the command "ant" gives: Buildfile: build.xml does not exist! Build failed
    3. Download the java_server code from https://github.com/WillowIT/Pentaho-...rver/build.xml, copy it to the Ubuntu share folder, go to the extracted path and run "ant war". I then get this error:

        BUILD FAILED
        /share/java_server/build.xml:18: problem: failed to create task or type antlib:org.apache.ivy.ant:retrieve
        cause: The name is undefined.
        Action: Check the spelling.
        Action: Check that any custom tasks/types have been declared.
        Action: Check that any <presetdef>/<macrodef> declarations have taken place.
        No types or tasks have been defined in this namespace yet
        This appears to be an antlib declaration.
        Action: Check that the implementing library exists in one of:
            -/usr/share/ant/lib
            -/root/.ant/lib
            -a directory added on the command line with the -lib argument

        Total time: 0 seconds

    Is there a compatibility issue, or did I miss some steps? I'm on a project with a reporting deadline, so please help me solve this issue. Thanks a lot in advance. Best regards,
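
    The quoted error is Ant failing to resolve the Apache Ivy antlib, so one thing worth checking is whether the Ivy jar is on Ant's library path (the error message itself lists the directories it searches). A rough sketch, using the usual Ubuntu package and paths (not verified against 11.04 specifically):

        # install Ivy from the Ubuntu repositories
        sudo apt-get install ivy

        # make the jar visible to Ant (any location from the error message works)
        sudo ln -s /usr/share/java/ivy.jar /usr/share/ant/lib/ivy.jar

        # then retry the build from the directory containing build.xml
        ant war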

  • Is there a better way to do bonded VLAN tagged interfaces with Xen?

    - by AJ01
    We have a number of Xen servers, all running CentOS or RHEL. The VMs that they run are all required to be on their own VLAN, for no other reason than that the customer expects them to be. Long story short, I can't change this right now. We are also required to have bonding enabled on the interfaces.

    So to accommodate this we enslave eth1 and eth2 to bond0. We then create a separate interface called bond0.VLANID, where VLANID corresponds to the correct VLAN, e.g. ifcfg-bond0.204:

        DEVICE=bond0.204
        BOOTPROTO=static
        ONBOOT=yes
        VLAN=yes
        BRIDGE=xenvlan204

    Bridge to Xen: as you will see, we eventually have to bridge this out to Xen, and we do this by adding another interface called xenvlan204 (in this instance), ifcfg-xenvlan204:

        DEVICE=xenvlan204
        BOOTPROTO=none
        ONBOOT=yes
        TYPE=bridge

    Xen VM config: finally, in our Xen config for each VM, we add

        vif = [ "bridge=xenvlan204" ]

    This then allows the VM to access that particular VLAN.

    The problem: we've noticed a few problems with this setup. One is that we currently create the interfaces manually, which means that if we add more VLAN-enabled interfaces and bridges we usually have to restart xend, which is something I'm not so hot about. Also, lower-level staff have their heads melted by the number of interfaces, and the risk of a mistake occurring is high. Secondly, it can take some time for a host to come up if it has a number of VLAN-tagged interfaces. Thirdly, it's just not scaling well on the management side.

    The question: is there a better, more flexible way to do this (in particular with the Xen that ships with CentOS 5.3, 5.4 and 5.5, as we have to support all three) that leverages either scripting or other solutions to allow an arbitrary number of interfaces to be created when a VM is instanced? Your advice and expertise is more than welcome.
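
    As a rough illustration of the scripting route (the helper below is hypothetical and untested; the file names follow the convention already used above), a small script can stamp out both ifcfg files for a given VLAN ID and bring them up, so nobody edits them by hand:

        #!/bin/bash
        # usage: ./mkvlan.sh 204  -> writes ifcfg-bond0.204 and ifcfg-xenvlan204, then brings them up
        VLANID="$1"
        CFG=/etc/sysconfig/network-scripts

        printf 'DEVICE=xenvlan%s\nBOOTPROTO=none\nONBOOT=yes\nTYPE=bridge\n' \
            "$VLANID" > "$CFG/ifcfg-xenvlan$VLANID"

        printf 'DEVICE=bond0.%s\nBOOTPROTO=static\nONBOOT=yes\nVLAN=yes\nBRIDGE=xenvlan%s\n' \
            "$VLANID" "$VLANID" > "$CFG/ifcfg-bond0.$VLANID"

        # bring the bridge up first, then the tagged interface that joins it
        ifup "xenvlan$VLANID" && ifup "bond0.$VLANID"

    Whether xend picks up a freshly created bridge without a restart still needs testing on the CentOS 5 Xen builds, so treat this as a starting point rather than a drop-in fix.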

  • How to set up hosts file for local environment?

    - by n00b0101
    I'm trying to create subdomains on my localhost and am way out of my territory... I'm running MAMP on my Mac OS X and I thought/think I had/have to do the following (assuming I want to create me.localhost.com and you.localhost.com):

    (1) Edit /private/etc/hosts. Right now, it looks like this:

        127.0.0.1        localhost
        255.255.255.255  broadcasthost
        ::1              localhost
        fe80::1%lo0      localhost

    So, do I just make it:

        127.0.0.1        localhost
        127.0.0.1        me.localhost.com
        127.0.0.1        you.localhost.com
        255.255.255.255  broadcasthost
        ::1              localhost
        fe80::1%lo0      localhost

    (2) I'm assuming I don't need to mess with DNS at all because it's local? So, the hosts file should suffice?

    (3) And then, I need to edit my httpd.conf file to include virtual hosts? I tried this, but it's not picking it up...

        NameVirtualHost *
        <VirtualHost *>
            DocumentRoot "/Applications/MAMP/htdocs"
            ServerName localhost
        </VirtualHost>
        <VirtualHost *>
            DocumentRoot "/Applications/MAMP/htdocs/me.localhost.com"
            ServerName me.localhost.com
        </VirtualHost>
        <VirtualHost *>
            DocumentRoot "/Applications/MAMP/htdocs/you.localhost.com"
            ServerName you.localhost.com
        </VirtualHost>

    Not sure if I'm way off-base here... Help is greatly appreciated!
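
    One detail that often trips this up: MAMP listens on its own port (8888 by default) and reads its own copy of httpd.conf, so the NameVirtualHost and vhost lines need to name that port, and MAMP has to be restarted afterwards. A hedged sketch of what that might look like (paths and port are MAMP defaults, adjust to your install):

        # /Applications/MAMP/conf/apache/httpd.conf  (MAMP's copy, not /etc/apache2/httpd.conf)
        NameVirtualHost *:8888

        <VirtualHost *:8888>
            DocumentRoot "/Applications/MAMP/htdocs/me.localhost.com"
            ServerName me.localhost.com
        </VirtualHost>

    After editing, flushing the DNS cache (dscacheutil -flushcache) picks up the new hosts entries, and restarting the MAMP servers reloads the vhosts.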

  • Converting Audio To Video Output and Attaching Text?

    - by ZeeMan
    I am currently working on a project, and before I get started I thought it'd be nice to check with the Stack Overflow community and see if maybe they can help me with this.

    The idea: I have about a thousand MP3 files that I need to convert into video files to be uploaded to YouTube for my work. Here is where it gets tricky: I also need to attach the text associated with each audio file to the video as an image. I was thinking .ppt.

    The problem: I can do this one audio file at a time, but it would take me a zillion years. lol!!

    The question: can I create some kind of program, using let's say XML or JavaScript or XHTML or some other programming language, to do a mass content creation where all I have to do is feed it the information? Possibly a script? Or is it possible to create an example .ppt file and then hack it so that I can have it reproduce itself with different information?

    The note: thanks in advance for helping out!!! Regards, ZeeMan!!!
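
    As a sketch of the batch-scripting idea (this assumes ffmpeg is available and that each MP3 sits next to a same-named PNG slide exported from the .ppt; that naming convention is an assumption, not part of the question):

        #!/bin/bash
        # For every track.mp3 with a matching track.png, render a still-image video.
        for audio in *.mp3; do
            slide="${audio%.mp3}.png"
            ffmpeg -loop 1 -i "$slide" -i "$audio" \
                   -c:v libx264 -pix_fmt yuv420p -c:a aac -shortest \
                   "${audio%.mp3}.mp4"
        done

    The same loop idea works in any scripting language; the point is that the per-file conversion is one command, so a thousand files is just a thousand iterations.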

  • DNSSEC - First Signature

    - by Arancha
    I'm testing DNSSEC with BIND 9.7.2-P2 and I have a question regarding the first signature created over a zone that already exists. I'm using dynamic DNS.

    I create the first two keys: one KSK and one ZSK. According to https://datatracker.ietf.org/doc/draft-ietf-dnsop-dnssec-key-timing/, the first ZSK needs to be published for an interval equal to Ipub before it can be active, so I create the ZSK with a publication date prior to its activation date. I restart the service and I can see that the key is published at the publication date, but it is not active later, when the activation date arrives.

    This is the configuration of the zone dnssec.es in the named.conf file:

        zone "dnssec.es" {
            auto-dnssec maintain;
            update-policy local;
            sig-validity-interval 1;
            key-directory "dnssec/keys_dnssec";
            type master;
            file "dnssec/db.dnssec.es";
        };

    Any clue?? Regards
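
    For reference, a sketch of how publication and activation dates are usually attached to the keys with BIND 9.7's dnssec-keygen (the algorithms, sizes and the 30-day Ipub below are placeholders; the zone and key directory are the ones from the config above):

        # KSK
        dnssec-keygen -f KSK -a RSASHA256 -b 2048 -K dnssec/keys_dnssec dnssec.es

        # ZSK: publish now, activate after the publication interval
        dnssec-keygen -a RSASHA256 -b 1024 -K dnssec/keys_dnssec \
                      -P now -A now+30d dnssec.es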

  • Configuring VMware virtual machines to run under different IPs and PC specs

    - by Alex
    Right now I'm using a simple VMware virtual machine with preinstalled Windows 7. The IP is assigned automatically (it's the same as the main OS IP). Is it possible to create several virtual machines that have different hardware specifications and different IP addresses? Here is what I mean regarding these issues:

    Specs: certainly, you can easily change some specifications in the Settings menu (RAM size, HDD size), but what about advanced settings? For example: advanced settings for the processor - is it AMD (2500+, 4000+, etc.) or Intel (Core 2, Pentium, etc.)? RAM - is it Corsair 4 GB 1333 MHz or Kingston 2 x 2 GB 866 MHz or something else? HDD - is it a Seagate Barracuda 80 GB 5400 RPM, a Samsung 500 GB 7200 RPM, or some random SSD? Programs that run inside a virtual machine shouldn't have a clue that it's VMware and not a real PC.

    IPs: every program that's launched under the main OS uses the real IP: 93.56.xx.xx. All programs that are launched under virtual machine A use IP 1: 74.78.xx.xx. All programs that are launched under virtual machine B use IP 2: 84.159.xx.xx. I believe that you have to use either a VPN or a proxy to solve this problem.

    The sum up: the idea is to create 2-3 independent virtual machines with different hardware specifications and IP addresses. Programs that run under a certain virtual machine shouldn't have a clue that it's VMware and not a real PC. Any ideas/tips or experience regarding configuration will be appreciated!

  • Snapshotting single disk of running Hyper-V VM

    - by modelnine
    I'm currently somewhat at a loss as to how to create a snapshot of a single virtual hard disk of a running Hyper-V VM. Generally, creating a differential disk while a server is shut down is no problem (i.e., call the New-VHD cmdlet and pass a ParentPath, then update the VHD binding of the respective VM device), but while the host is running, all I can find is checkpointing the VM as a whole (which creates snapshots of all attached disks), and that leaves the VM state in a form which isn't easily processable by external tools (i.e., it requires reading additional metadata from the VM).

    Generally, what I'd like to happen for a single-disk snapshot (in my understanding) is:

    1. Pause the VM
    2. Rename the current disk to some other name which marks it as a base snapshot
    3. Create a new VHD which has the renamed VHD as its parent path and is marked as "current"
    4. Swap the VHD of the snapshotted hard disk for the newly created differential VHD
    5. Resume the VM

    Is there any means to do this programmatically?

    Update: I've seen that this is actually possible with SCSI disks, i.e. pause the VM, remove the SCSI disk, make the snapshot, reattach the SCSI disk at the same position, resume the VM. And the VM resumes properly. But: is something similar also possible with generation 1 (G1) machines for the boot disk, which is always IDE?
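
    A rough PowerShell sketch of the pause/reparent/swap sequence described above, for the SCSI-disk case that is said to work (this assumes the Hyper-V PowerShell module; the VM name, controller numbers and paths are placeholders, and the whole thing is untested):

        $vm = "MyVM"
        Suspend-VM -Name $vm

        # detach the disk and turn the existing VHDX into the base/parent
        Remove-VMHardDiskDrive -VMName $vm -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0
        Rename-Item "D:\VMs\data.vhdx" "D:\VMs\data-base.vhdx"

        # create a differencing disk on top of it and reattach that instead
        New-VHD -Path "D:\VMs\data.vhdx" -ParentPath "D:\VMs\data-base.vhdx" -Differencing
        Add-VMHardDiskDrive -VMName $vm -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 -Path "D:\VMs\data.vhdx"

        Resume-VM -Name $vm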

  • What is the quickest and safest way to test new software and revert all changes, if needed?

    - by calbar
    I'm looking for Windows software that will allow me to quickly create a "checkpoint", do whatever I might need to do to my computer - install programs/drivers/updates, create/delete personal files, reboot the system multiple times, open questionable attachments - and then revert the entire system back to when the checkpoint was created. Essentially I want Windows Restore Points that save my personal files and partitions, too. It sounds like disk imaging might be the ticket, but creating them is much too slow and the restore process too involved... I'm hoping to sacrifice full disaster recovery for speed. Creating a checkpoint should be as close to one-click as possible, and rolling back should be a matter of selecting a restore point and rebooting. Ding! I'm familiar with Sandboxie, True Image Home "Try and Decide", Returnil, and a number of other "virtual system" apps that actively "catch" changes and allow you to commit or reject them. I'm not interested in these for a number of reasons - I prefer the "cut and dry" restore point approach. Finally, I'll note that I've just recently become aware of Comodo Time Machine. It sounds absolutely perfect, however, a quick skim through the user forums show more than a few horror stories of corrupted, unbootable systems. Any positive personal experience with the software to suppress my superstitions, or suggestions for more established alternatives would be greatly appreciated - Comodo Time Machine seems relatively new. Thanks for your help!

  • On Linux, what does it mean when a directory has size 0 instead of 4096?

    - by kdt
    Here's a strange thing I haven't seen before - a directory whose size is reported by ls as 0 instead of 4096, and I can't create any files within it.

        # ls -ld lib home
        drwxr-xr-x.  2 root root    0 Feb  7 03:10 home     <-- it has zero size
        dr-xr-xr-x. 11 root root 4096 Feb  4 09:28 lib

        # touch home/foo
        touch: cannot touch `home/foo': No such file or directory     <-- and I can't create files in it

        # rm home
        rm: cannot remove `home': Is a directory     <-- look, it really is a dir

    So what does it mean for a directory to have size 0 instead of 4096? The filesystem is ext4 on Fedora Core 14. The output of mount is:

        /dev/mapper/vg_dev-lv_root on / type ext4 (rw)
        proc on /proc type proc (rw)
        sysfs on /sys type sysfs (rw)
        devpts on /dev/pts type devpts (rw,gid=5,mode=620)
        tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
        /dev/vda1 on /boot type ext4 (rw)
        none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
        sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

    Output of du -s /home:

        0       /home

    Output of stat /home:

          File: `/home'
          Size: 0               Blocks: 0          IO Block: 1024   directory
        Device: 15h/21d         Inode: 34913       Links: 2
        Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
        Access: 2011-02-07 03:45:46.188995765 -0800
        Modify: 2011-02-07 03:11:59.980995019 -0800
        Change: 2011-02-06 07:58:45.874995002 -0800
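
    One hint in the stat output above is that /home does not look like it lives on the ext4 root at all: device 15h/21d and a 1 KB IO block are typical of a kernel pseudo-filesystem (an automounter or an empty mountpoint) rather than ext4. A few checks along those lines (just a diagnostic sketch, not a fix):

        # what filesystem type does the kernel report for /home?
        stat -f /home

        # is something (autofs, bind mount, ...) claiming the path?
        grep -w /home /proc/mounts

        # if the automounter is installed, is it running?
        service autofs status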

  • Load balancing with Puppet

    - by Gonçalo Queirós
    Hi there. I'm trying to set up a load-balancing system. My load balancer (nginx) has a conf file where I should list all the IPs of the upstream servers. I could put the IPs in the conf manually, but that way I would need to change the conf file every time I add or remove an upstream server. For now I have come up with two different ideas, but I don't much like either of them:

    1 - Have every upstream machine use exported resources to create a file with its IP. Then the load balancer server will have an "include conf_directory/*" and load all the files created by the upstream servers. Since the load balancer is using nginx this can be done, but if I later want to configure something that doesn't support "include" in its conf files, this solution will not work.

    2 - If the config doesn't support the "include" command, then we could again have every upstream server use exported resources to create a file with its IP, and later the load balancer would execute a command that picks up every file and generates the config.

    Both versions adopt the same technique; the difference is that version 2 is used when the server (that needs to have a conf generated) doesn't recognize a command like "include" inside its own conf.

    Now, my question is: is there any way to do this in a different form? I suspect there is, since Puppet is made to manage multiple servers; it seems a bit strange not to have an easy way to configure load balancers.
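
    For the record, a minimal sketch of the exported-resources pattern described in option 1, using plain file resources (the paths, port and tag are made up for illustration):

        # on every upstream node: export a fragment carrying this node's IP
        @@file { "/etc/nginx/upstream.d/${::hostname}.conf":
          content => "server ${::ipaddress}:8080;\n",
          tag     => 'nginx_upstream',
        }

        # on the load balancer: collect all exported fragments and reload nginx
        File <<| tag == 'nginx_upstream' |>> {
          notify => Service['nginx'],
        }

    The nginx conf then only needs a static include of /etc/nginx/upstream.d/*.conf inside its upstream block; for software without an include directive, the same exported fragments can be assembled into a single file with a concat-style module or an exec that rebuilds the config, which is essentially option 2 done for you by Puppet.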

  • How can I automatically synchronize a directory tree on multiple machines?

    - by Blacklight Shining
    I have two Mac laptops and a Debian server, each with a directory that I would like to keep in sync between the three. The solution should meet the following criteria (in rough order of importance):

    - It must not use any third-party service (e.g. Dropbox, SugarSync, Google whatever). This does not include installing additional software (as long as it's free).
    - It must not require me to use specific directories or change my way of storing things. (Dropbox does this IIRC)
    - It must work in all directions (changes made on /any/ machine should be pushed to the others).
    - All data sent must be encrypted (I have ssh keypairs set up already).
    - It must work even when not all machines are available (changes should be pushed to a machine when it comes back online).
    - It must work even when the /directories/ on some machines are not available (they may be stored on disk images which will not always be mounted). This can be solved for Macs by using launchd to automatically launch and kill (or in some way change the behavior of) whatever daemon is used for syncing when the images are mounted and unmounted.
    - It must be immediate (using an event-based system, not a periodic one like cron).
    - It must be flexible (if more machines are added, I should be able to incorporate them easily).

    I also have some preferences that I would like to be fulfilled, but do not have to be:

    - It should notify me somehow if there are conflicts or other errors.
    - It should recognize symbolic and hard links and create corresponding ones.
    - It should allow me to create a list of exceptions (subdirectories which will not be synced at all).
    - It should not require me to set up port forwarding or otherwise reconfigure a network. This can be solved by using an ssh tunnel with reverse port forwarding.

    If you have a solution that meets some, but not all, of the criteria, please contribute it in the comments as it might be useful in some way, and it might be possible to meet some of the criteria separately.

    What I tried, and why it didn't work:

    - rsync and lsyncd do not support bidirectional synchronization
    - csync2 is designed for server clusters and does not appear to work with machines with dynamic IPs
    - DRBD (suggested by amotzg) involves installing a kernel module and does not appear to work on systems running OS X
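
    Not a full answer to every criterion above, but as a building block: Unison is the usual free tool for encrypted, bidirectional pair-wise syncs over ssh, and a launchd or inotify watcher can invoke it on changes. A sketch of a profile (the paths and host are placeholders):

        # ~/.unison/shared.prf
        root = /Users/me/Shared
        root = ssh://me@debian-server//home/me/Shared
        batch = true
        prefer = newer
        ignore = Path tmp

    Run as "unison shared" from each laptop against the server; conflict notification, hard-link fidelity and event-driven triggering (e.g. launchd WatchPaths on the Macs) would still need wrapping around it.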

  • Macvlan based interface pings from host but not from namespace

    - by jtlebi
    My setup:

    - Private network vboxnet1, 10.0.7.0/24
    - 1 host, Ubuntu desktop
    - 1 VM, Ubuntu server (VirtualBox)

    Addressing layout:

    - HOST: 10.0.7.1
    - VM: 10.0.7.101
    - VM MAC NAMESPACE: 10.0.7.102

    On the VM, I ran the following commands:

        ip netns add mac                          # create a new namespace
        ip link add link eth0 mac0 type macvlan   # create a new macvlan interface
        ip link set mac0 netns mac

    In the mac namespace, inside the VM:

        ip link set lo up
        ip link set mac up
        ip addr add 10.0.7.102/24 dev mac0

    So that we basically end up with (like Inception?):

        +------------------------+
        | Host: 10.0.7.1         |
        |                        |
        | +--------------------+ |
        | | VM: 10.0.7.101     | |
        | |                    | |
        | | +----------------+ | |
        | | | NS: 10.0.7.102 | | |
        | | |                | | |
        | | +----------------+ | |
        | +--------------------+ |
        +------------------------+

    What works:

    - ping between Host and VM
    - ping between NS and NS
    - dhclient from the NS

    What does not work:

    - ping between NS and VM
    - ping between NS and Host

    Where I started to go nuts: tcpdump on the host (the real machine) actually shows ARP requests AND replies. tcpdump in the NS shows ARP requests sent to the host. tcpdump on the VM makes the whole mess work (!) - ping starts to get answers when tcpdump is started on the VM?!?

    So, I bet you were eager for it, my question is: how do I make it work? I suspect something's wrong with ARP on the macvlan inside the NS but I can't figure out what exactly... Btw, I did the same experiments with the mac0 interface directly on the VM (no namespace) and it worked flawlessly.
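
    One thing that may be worth ruling out is the macvlan mode: the default (vepa) mode relies on the external switch to hair-pin traffic back, and "it works once tcpdump is running" is the classic symptom of frames only being seen in promiscuous mode. A hedged sketch of recreating the interface in bridge mode:

        ip netns del mac                  # start over
        ip netns add mac
        ip link add link eth0 mac0 type macvlan mode bridge
        ip link set mac0 netns mac
        ip netns exec mac ip link set lo up
        ip netns exec mac ip link set mac0 up
        ip netns exec mac ip addr add 10.0.7.102/24 dev mac0

    Note that even in bridge mode a macvlan child cannot talk directly to its parent interface's own address (the VM's eth0 here) without an extra macvlan on the VM side, which may account for part of what you are seeing.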

  • Change default profile directory per group

    - by Joel Coel
    Is it possible to force Windows to create profiles for members of one Active Directory group in a different folder from members of another Active Directory group?

    The school here uses DeepFreeze to protect public computers. In a nutshell, DeepFreeze prevents all changes to a hard drive, such that every time you restart the machine the disk is identical to what it was at the time you froze it. This is a bit different from restoring to an image, in that it never really wrote changes to disk in a permanent way in the first place. This has a few advantages over images: faster recovery times, and it's easy to thaw the machine for a few minutes to perform maintenance such as Windows updates (which can even be automated). DeepFreeze also allows you to configure a "thawspace" partition, where changes are persistent across reboots.

    One of the weaknesses of DeepFreeze is that you end up needing to create a new profile every time you log in, unless your profile existed at the time the machine was frozen. And even then, any changes you make to your profile while working on a frozen machine are lost. As students have frequent legitimate needs to log in to our classroom machines, there is currently a lot of cleanup involved from time to time in removing their old profiles and changes, so I want to extend DeepFreeze to protect our classroom computers as well as public computers. The problem is that faculty have a real need to keep a stateful profile locally on these classroom computers.

    The solution I would like to use is to configure Windows via Group Policy (or even manually, if that's the way I'll have to do it) to place profile folders on the thawspace partition, but only for members of the faculty security group. Is this possible?

  • Connection timed out on Node.js app running under CentOS

    - by ss1271
    I followed this tutorial to create a simple node.js app on my CentOS box. The node.js version is:

        $ node -v
        v0.10.28

    Here's my app.js:

        // Include http module,
        var http = require("http"),
        // And url module, which is very helpful in parsing request parameters.
            url = require("url");

        // show message at console
        console.log('Node.js app is running.');

        // Create the server.
        http.createServer(function (request, response) {
            request.resume();
            // Attach listener on end event.
            request.on("end", function () {
                // Parse the request for arguments and store them in _get variable.
                // This function parses the url from request and returns object representation.
                var _get = url.parse(request.url, true).query;
                // Write headers to the response.
                response.writeHead(200, {
                    'Content-Type': 'text/plain'
                });
                // Send data and end response.
                response.end('Here is your data: ' + _get['data']);
            });
        // Listen on the 8080 port.
        }).listen(8080);

    However, when I uploaded this app onto my remote server (assume the address is 123.456.78.9), I couldn't get access to it in my browser at http://123.456.78.9:8080/?data=123. The browser returned "Error code: ERR_CONNECTION_TIMED_OUT". The same app.js code runs fine on my local machine, so is there anything I am missing? I tried to ping the server and its address was reachable. Thanks.
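
    A timeout (rather than a connection refused) with the app listening correctly usually points at a host firewall; stock CentOS ships an iptables INPUT chain that only allows ssh. A hedged sketch of checking and opening the port (CentOS 6-style commands; adjust if the box runs firewalld):

        # is node actually listening, and is the port filtered?
        netstat -tlnp | grep 8080
        iptables -L -n

        # allow inbound TCP 8080 and persist the rule across reboots
        iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
        service iptables save

    If the server sits behind a cloud provider's security group or an external firewall, port 8080 has to be opened there as well.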

  • Users and Groups management on Windows 7 Home Premium

    - by AviD
    I recently upgraded the home PC from XP Pro to Windows 7 Home Premium. I'm looking for a solution for a few things that seem to be missing from this edition...

    Since Local Users and Groups is blocked on Home Premium, I can't figure out how to manage groups, or even do anything even slightly advanced to users (basically, create/group/picture is it). net localgroup, net users, net etc. don't seem to work - I'm getting "system error 5". While I'm on the topic, I can't activate (what was once) "Local Security Policy"...

    I'm looking for any help, advice, or even a new direction, cuz things is differ'nt on Winnows7...

    To clarify, I'm looking to do some of the following, which were simple back in XP-land:

    - remote user only (i.e. no local logon)
    - grant special privileges for a specific user
    - grant access to e.g. the C$ share for a specific remote user
    - create custom groups for users, to be able to separate the privileges of, say, my wife's account from my kids'
    - define quite specifically what each user can do (beyond just standard users)
    - harden the OS (hmm, I guess maybe what I'm looking for is a security hardening guide for 7...?)
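
    One quick thing worth checking: "system error 5" is access denied, which on Windows 7 usually just means the commands were run from a non-elevated prompt. From an elevated cmd (right-click Command Prompt, Run as administrator) the net commands generally still work on Home Premium, e.g. (group and user names are only examples):

        net localgroup Kids /add
        net localgroup Kids SomeUser /add
        net user RemoteOnly SomePassword /add

    The Local Security Policy and Local Users and Groups snap-ins themselves are genuinely absent from Home Premium, though, so those won't appear no matter how the console is launched.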

  • Setting umask for all users

    - by Yarin
    I'm trying to set the default umask to 002 for all users, including root, on my CentOS box. According to this and other answers, this can be achieved by editing /etc/profile. However, the comments at the top of that file say:

        It's NOT a good idea to change this file unless you know what you are doing.
        It's much better to create a custom.sh shell script in /etc/profile.d/ to make
        custom changes to your environment, as this will prevent the need for merging
        in future updates.

    So I went ahead and created the file /etc/profile.d/myapp.sh with the single line:

        umask 002

    Now, when I create a file logged in as root, the file is born with 664 permissions, the way I had hoped. But files created by my Apache wsgi application, or files created with sudo, still default to 644 permissions...

        $ touch newfile (as root):                       Result = 664 (Works)
        $ sudo touch newfile:                            Result = 644 (Doesn't work)
        Files created by Apache wsgi app:                Result = 644 (Doesn't work)
        Files created by Python's RotatingFileHandler:   Result = 644 (Doesn't work)

    Why is this happening, and how can I ensure 664 file permissions system-wide, no matter what creates the file?

    UPDATE: I ended up finding a cleaner solution to this on a per-directory basis using ACLs, which I describe here.
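
    The reason the profile.d approach falls short is that /etc/profile.d is only sourced by login shells, so daemons like Apache and commands run through sudo never see the umask. For reference, a sketch of the per-directory default-ACL route the update alludes to (the directory and group names are placeholders):

        # give the group rwX on everything that already exists...
        setfacl -R -m g:appgroup:rwX /srv/myapp/data

        # ...and make that the default for anything created there in the future
        setfacl -R -d -m g:appgroup:rwX /srv/myapp/data

        getfacl /srv/myapp/data    # verify the default ACL entries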

  • Handling emails on a web server - Making sure the FQDN is set correctly based on the website sending the email

    - by webnoob
    I have a Windows 2008 Web Edition server hosting multiple websites using IIS 7.5. At the moment, all the emails are sent via the IIS 6 SMTP service. The FQDN of the SMTP service is currently set to the computer name, which isn't correct as it doesn't resolve to a valid DNS entry and is not RFC compliant.

    Some questions:

    - Is there any way I can change the FQDN of the SMTP service based on the site sending the email?
    - Would it be OK to just set up mailserver.mydomain.com and use that as the FQDN for all the sites on multiple domains?
    - Should I be using some other mail server software to handle this better?

    The reason I am asking is that lots of emails are hitting spam folders because the settings are incorrect. I have access to the code that is running the websites, so if something needs to be done there then that shouldn't be a problem. The sites are written using ASP.NET 2.0.

    EDIT: I have just found an option to create an SMTP virtual server. Would this be the way forward? Create a virtual server for each site? Thanks.
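
    On the code side, the per-site sending identity and SMTP host can at least be set in each site's web.config; a hedged ASP.NET 2.0 sketch (the host name and address below are placeholders):

        <system.net>
          <mailSettings>
            <smtp from="noreply@site-one.com" deliveryMethod="Network">
              <network host="mailserver.mydomain.com" port="25" />
            </smtp>
          </mailSettings>
        </system.net>

    This does not change the FQDN the SMTP service itself announces on EHLO; for deliverability, a single well-resolving name such as mailserver.mydomain.com with matching forward and reverse DNS is the usual approach, whether it is one shared virtual server or one per site.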

  • How to access original files from before a symlink gets updated, which have since been moved to another dir

    - by Luke Cousins
    We have a website and our deployment process goes somewhat like the following (with lots of irrelevant steps excluded):

        echo "Remove previous, if it exists, we don't need that anymore"
        rm -rf /home/[XXX]/php_code/previous

        echo "Create the current dir if it doesn't exist (just in case this is the first deploy to this server)"
        mkdir -p /home/[XXX]/php_code/current

        echo "Create the var_www dir if it doesn't exist (just in case this is the first deploy to this server)"
        mkdir -p /home/[XXX]/var_www

        echo "Copy current to previous so we can use temporarily"
        cp -R /home/[XXX]/php_code/current/* /home/[XXX]/php_code/previous/

        echo "Atomically swap the symbolic link to use previous instead of current"
        ln -s /home/[XXX]/php_code/previous /home/[XXX]/var_www/live_tmp && mv -Tf /home/[XXX]/var_www/live_tmp /home/[XXX]/var_www/live

        # Rsync latest code into the current dir, code not shown here

        echo "Atomically swap the symbolic link to use current instead of previous"
        ln -s /home/[XXX]/php_code/current /home/[XXX]/var_www/live_tmp && mv -Tf /home/[XXX]/var_www/live_tmp /home/[XXX]/var_www/live

    The problem we are having, and would like help with, is that the first thing any website page load does is work out the base dir of the application and define it as a constant (we use PHP). If a deployment occurs during that page load, the system tries to include() a file using the original full path and will get the new version of that file. We need it to get the old one from the old dir, which has since moved, as in:

    1. The system starts a page load and determines the SYSTEM_ROOT_PATH constant to be /home/[XXX]/var_www/live, or, using PHP's realpath(), it could be /home/[XXX]/php_code/current.
    2. The symlink for /home/[XXX]/var_www/live gets updated to point to /home/[XXX]/php_code/previous instead of /home/[XXX]/php_code/current, where it pointed originally.
    3. The system tries to load /home/[XXX]/var_www/live/something.php and gets /home/[XXX]/php_code/current/something.php instead of /home/[XXX]/php_code/previous/something.php.

    I'm sorry if that is not explained very well. I'd really appreciate some ideas on how to get around this problem if someone can. Thank you.
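
    One common way around this race (a sketch of the general pattern, not a drop-in replacement for the script above) is to never reuse a release directory: each deploy goes into its own timestamped directory, the symlink is flipped once, and old releases are only pruned later. Requests that resolved the old realpath keep reading their old, untouched tree for as long as it exists.

        RELEASE=/home/[XXX]/php_code/releases/$(date +%Y%m%d%H%M%S)
        mkdir -p "$RELEASE"
        # rsync the latest code into "$RELEASE" (not shown)

        ln -s "$RELEASE" /home/[XXX]/var_www/live_tmp
        mv -Tf /home/[XXX]/var_www/live_tmp /home/[XXX]/var_www/live

        # later (e.g. from cron): delete releases older than the last few deploys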

  • High-performance Academic Server [closed]

    - by PHPsmith
    Suppose I want to build a server for the university's academic interests. The server is dedicated to a single site, where users (students and lecturers) just view and fill in academic data. But at peak times (e.g. once a semester), about 12,000 students will access the site simultaneously.

    Due to limited resources, I have to build the server using free software (except for the operating system, Windows 7, which the university has already provided). The hardware is also limited to an ordinary 4-core computer (e.g. an Ivy Bridge Intel Core i7-3770) with approximately 16 GB of memory (DDR3 1600 MHz), equipped with an RJ-45 port (Intel 82579 Gigabit Ethernet). With all these limitations, I have to choose software (web server, database, etc.) appropriate to achieving this goal. I have decided to build the site in PHP.

    Please help me by answering the following questions based on your expertise (my prime candidate after googling is given in parentheses):

    1. Which web server is fastest, most stable and most secure when implemented and optimized for PHP, and why? (nginx)
    2. Which PHP accelerator is fastest, most stable and most compatible with the selected web server, and why? (APC with Zend Optimizer+)
    3. Which database is fastest, most stable and most secure when implemented and optimized for the selected web server and PHP accelerator? (MySQL)
    4. Are there any errors in my assumptions so far? If there are, please enlighten me.
    5. Is there anything else I need to know in order to achieve this goal? If there is, please enlighten me.

    I understand that performance also depends on how the source code is implemented, so I assume the site will be built as efficiently as possible (e.g. using AJAX).

  • I need to preserve a tape using Symantec Backup Exec, and I'm having trouble doing so

    - by MrVimes
    Please forgive me if this is the wrong Stack Exchange site; please suggest which one I should post this to if it is.

    There's an automatic tape machine running in a remote location, with software (Symantec Backup Exec 11d). Recently one of the servers being backed up had problems with its RAID controller, so one of the drives has become invisible. I need to preserve the last good backup of that drive, so I am trying to replace the tape holding the most recent backup of that drive with one of the scratch tapes (blank tapes) present in the machine. I've tried the following:

    1. Associate the blank media with the media set in question (Wednesday).
    2. For the existing media (the tape with the data I want to keep), click 'move to vault' and move it to the offline vault.
    3. Associate it with something other than 'Wednesday' (a media set called 'keep data infinitely...').
    4. Run an inventory on that slot.

    The above steps, I'm led to believe, are supposed to put the fresh tape in the slot that held the tape I want to keep. But after the inventory (and refreshing the device tree), the slot just keeps showing up as containing the tape I want to keep.

    I am a complete newbie with this software. Can you tell me what I'm doing wrong, and/or tell me how to achieve my desired goal?

    Edit: Just want to point out that I did try to get help directly from Symantec with this, but having jumped through countless hoops to create an account and a support ticket, my progress was halted at the final step by a required 'technical contact ID', with no explanation of what it is or how to get one.

  • Unable to resize ec2 ebs root volume

    - by nathanjosiah
    I have followed many of the tutorials, which pretty much all say the same thing, which is basically:

    1. Stop the instance
    2. Detach the volume
    3. Create a snapshot of the volume
    4. Create a bigger volume from the snapshot
    5. Attach the new volume to the instance
    6. Start the instance back up
    7. Run resize2fs /dev/xxx

    However, step 7 is where the problems start happening. In every case, running resize2fs just tells me that the filesystem is already xxxxx blocks big and does nothing, even with -f passed. So I continue with the tutorials, which all basically say the same thing, and that is:

    1. Delete all partitions
    2. Recreate them back to what they were, except with the bigger sizes
    3. Reboot the instance and run resize2fs

    (I have tried these steps both from the live instance and by attaching the volume to another instance and running the commands there.)

    The main problem is that the instance won't start back up again, and the system error log provided in the AWS console doesn't provide any errors. (It does, however, stop at the grub bootloader, which to me indicates that it doesn't like the partitions; and yes, the boot flag was toggled on the partition, with no effect.) The other thing that happens, regardless of what changes I make to the partitions, is that the instance that the volume is attached to says that the partition has an invalid magic number and the super-block is corrupt. However, if I make no changes and reattach the volume, the instance runs without a problem.

    Can anybody shed some light on what I could be doing wrong?

    Edit: On my new volume of 20GB with the 6GB image, df -h says:

        Filesystem            Size  Used Avail Use% Mounted on
        /dev/xvde1            5.8G  877M  4.7G  16% /
        tmpfs                 836M     0  836M   0% /dev/shm

    And fdisk -l /dev/xvde says:

        Disk /dev/xvde: 21.5 GB, 21474836480 bytes
        255 heads, 63 sectors/track, 2610 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x7d833f39

            Device Boot      Start         End      Blocks   Id  System
        /dev/xvde1               1         766     6144000   83  Linux
        Partition 1 does not end on cylinder boundary.
        /dev/xvde2             766         784      146432   82  Linux swap / Solaris
        Partition 2 does not end on cylinder boundary.

    Also, sudo resize2fs /dev/xvde1 says:

        resize2fs 1.41.12 (17-May-2010)
        The filesystem is already 1536000 blocks long.  Nothing to do!
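
    The fdisk output explains the resize2fs message: the filesystem already fills /dev/xvde1 (6144000 1K blocks is the same 6 GB as 1536000 4K blocks); it is the partition that is still 6 GB on a 20 GB disk, so the partition has to be grown first. A hedged sketch using growpart from the cloud-utils package (the same can be done by deleting and recreating the partition, as long as it starts at exactly the same sector):

        # with the volume attached as /dev/xvde (on another instance, or live)
        growpart /dev/xvde 1      # grow partition 1 to fill the disk
        partprobe /dev/xvde       # re-read the partition table (or reboot)
        resize2fs /dev/xvde1      # now the filesystem has room to grow into

    When recreating the partition by hand instead, a new start sector that differs from the old one moves the superblock away from where grub and the kernel expect it, which matches the "invalid magic number" symptom described above.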

  • EC2 kernel decision and issues with creating a new machine with my AMI

    - by roacha
    I could really use some advice. I started a new instance on EC2 using Amazon's AMI, and during the deployment process I selected a Kernel ID of "Use Default". I then configured my server the way that I wanted, took a snapshot of it, and created my own AMI to build new servers with. When I try to create a new server with this AMI, the server fails to start and I get the error:

        EXT3-fs: sda1: couldn't mount because of unsupported optional features (240).

    This appears to happen because I am selecting a Kernel ID of "Use default" again when building my second server. I have read that in order for this to work I need to choose the same kernel ID that was used in my original server. I have deleted my original server and don't know what it was using.

    What is the best process to follow in order not to have these issues? Should I choose "Use Default" for my original server? How do you know which kernel it selected? Should I then just document this and always specify it during the deployment of my next servers using my custom AMI? Or should I choose a specific kernel ID during the initial build and always use that one moving ahead, hoping Amazon never retires it? Thanks for any advice!
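
    For what it's worth, the kernel (AKI) an AMI or instance uses can be read back from the EC2 API, so it doesn't have to be guessed or documented by hand. A sketch with the modern AWS CLI (the older ec2-describe-images tool reports the same field; the IDs below are placeholders):

        # which AKI is baked into my custom AMI?
        aws ec2 describe-images --image-ids ami-12345678 \
            --query 'Images[].KernelId'

        # which AKI is an existing instance actually booted with?
        aws ec2 describe-instances --instance-ids i-12345678 \
            --query 'Reservations[].Instances[].KernelId'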

  • Unable to Align Layers in Photoshop Properly with CS2

    - by Jonathan Sampson
    Cannot align semi-transparent items? Windows Vista, Photoshop CS2.

    Steps to repeat:

    1. Create a new document
    2. Fill a circle on a new layer
    3. Drop the opacity of the filled circle to 10%
    4. Create a new empty layer below the circle layer
    5. Merge the empty layer with the filled circle layer
    6. Select the entire canvas
    7. Attempt to align the layer to the selection (Layer > Align Layers To Selection > Vertical Centers)

    I get the following error:

        Could not complete the Vertical Centers command because there are no layers to be moved.

    Clearly this is not true, as I'm selecting the layer with the semi-translucent ball on it. Now, if you had tried this same command prior to step 5 (when the layer was at 10% opacity) it would have worked. Is there some way around this problem? I need to move layers around that begin as transparent items, with a layer opacity at 100%, where 100% of the layer's opacity results in showing objects that are themselves not very opaque.

    I've confirmed on another machine that this problem doesn't exist in CS3. It may exist in earlier copies of Photoshop, but I only have access to CS2 (has the problem) and CS3 (does not have the problem).
