Search Results

Search found 44742 results on 1790 pages for 'create'.


  • Virtualize SBS 2003 - P2V vs migrating to new VM

    - by jlehtinen
    I need to virtualize an SBS 2003 server in my work environment and could use some tips on the best way to proceed.

    Background: The SBS 2003 server is the primary DC for the domain and also hosts FTP, RRAS (VPN), DNS, and file shares. Exchange is NOT used, and neither is SQL Server. DHCP is done via a firewall appliance. I have added a Server 2003 VM to the domain and promoted it to the DC role; AD/DNS is replicating to it correctly. This was mainly done to provide fault tolerance for the domain; I was not intending to make this VM the primary DC. I've already asked about buying upgraded licensing for Server 2008/2012 but was refused due to cost.

    Options: I see (at least) two routes I could take to complete this. From what I've read, option 2 is the "preferred" method, but there are a few steps where I'm not clear on what to expect.

    Option 1.) P2V the primary DC
    - Power off the primary DC
    - Power off the secondary DC (to prevent USN rollback in case the P2V has issues)
    - P2V (cold clone) the primary DC
    - Boot the new PDC VM
    - Allow the new hardware to be detected
    - Remove the old NIC hardware from Device Manager
    - Assign the old IPs to the new virtual NICs
    - Reboot the PDC VM, confirm connectivity and no major issues
    - Power on the secondary DC, confirm replication

    Option 2.) Create a new VM, transfer roles, remove the original DC from the domain
    - Create a new VM, install SBS 2003 (do I need the original SBS install discs for this? The MS migration doc mentions them.)
    - Add the VM to the domain, promote it to the DC role (does this start the 7-day timer during which two SBS servers can be in the same domain?)
    - Set up RRAS on the new VM
    - Set up IIS/FTP on the new VM
    - Move the file shares to the new VM
    - Transfer the FSMO roles to the new VM DC
    - dcpromo the original primary DC out of the domain

  • Is there a better way to do bonded vlan tagged interfaces with XEN

    - by AJ01
    We have a number of XEN servers, all running CentOS or RHEL. The VMs that they run are all required to be on their own VLAN, for no other reason than that the customer expects them to be. Long story short, I can't change this right now. We are also required to have bonding enabled on the interfaces, so to accommodate this we enslave eth1 and eth2 to bond0. We then create a separate interface called bond0.VLANID, where VLANID corresponds to the correct VLAN, e.g. ifcfg-bond0.204:

        DEVICE=bond0.204
        BOOTPROTO=static
        ONBOOT=yes
        VLAN=yes
        BRIDGE=xenvlan204

    Bridge to XEN: As you will see, we eventually have to bridge this out to XEN, and we do this by adding another interface called xenvlan204 (in this instance), ifcfg-xenvlan204, which contains:

        DEVICE=xenvlan204
        BOOTPROTO=none
        ONBOOT=yes
        TYPE=bridge

    XEN VM config: Finally, in our XEN config for each VM we add

        vif = [ "bridge=xenvlan204" ]

    which allows the VM to access that particular VLAN.

    The problem: We've noticed a few problems with this setup. First, we currently create the interfaces manually, which means that if we add more VLAN-enabled interfaces and bridges we usually have to restart xend, which is something I'm not so hot about. Also, lower-level staff have their heads melted by the number of interfaces, and the risk of a mistake occurring is high. Secondly, it can take some time for a host to come up if it has a number of VLAN-tagged interfaces. Thirdly, it's just not scaling well on the management side.

    The question: Is there a better, more flexible way to do this (in particular with the Xen that ships with CentOS 5.3, 5.4 and 5.5, as we have to support all three) that leverages scripting or other solutions to allow an arbitrary number of interfaces to be created when a VM is instanced? Your advice and expertise are more than welcome.
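    A possible starting point for the scripting route - a minimal sketch (mine, not from the thread) that writes the ifcfg files and brings up the tagged interface plus bridge for each VLAN ID passed on the command line. The bond name, config paths and the xenvlanNNN naming convention are assumptions copied from the example above.

        #!/bin/bash
        # Sketch: create persistent configs and live interfaces for each VLAN ID,
        # e.g. ./mkvlan.sh 204 205 206
        BOND=bond0                              # assumed bond device
        CFGDIR=/etc/sysconfig/network-scripts   # standard RHEL/CentOS location

        modprobe 8021q                          # make sure VLAN tagging is available

        for VLANID in "$@"; do
            VIF="${BOND}.${VLANID}"
            BR="xenvlan${VLANID}"

            # persistent config for the tagged interface (mirrors the example above)
            printf 'DEVICE=%s\nBOOTPROTO=static\nONBOOT=yes\nVLAN=yes\nBRIDGE=%s\n' \
                "$VIF" "$BR" > "$CFGDIR/ifcfg-$VIF"

            # persistent config for the bridge handed to Xen
            printf 'DEVICE=%s\nBOOTPROTO=none\nONBOOT=yes\nTYPE=bridge\n' \
                "$BR" > "$CFGDIR/ifcfg-$BR"

            # bring the pair up immediately, without a full network restart
            vconfig add "$BOND" "$VLANID"
            brctl addbr "$BR"
            brctl addif "$BR" "$VIF"
            ip link set "$VIF" up
            ip link set "$BR" up
        done

    Hooking a loop like this into whatever provisions the VMs keeps the interface count out of human hands, which also addresses the "lower-level staff" concern.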

  • Load balancing with puppet

    - by Gonçalo Queirós
    Hi there. I'm trying to set up a load-balancing system. My load balancer (nginx) has a conf file where I should list all the IPs of the upstream servers. I could put the IPs in the conf manually, but that way I would need to change the conf file every time I add or remove an upstream server. For now I have come up with two different ideas, but I don't much like either:

    1 - Have every upstream machine use Exported Resources to create a file with its IP. The load balancer server will then have an "include conf_directory/*" and load all the files created by the upstream servers. Since the load balancer is using nginx this can be done, but if I later want to configure something that doesn't support "include" in its conf files, this solution will not work.

    2 - If the config doesn't support the "include" command, we could again have every upstream server use Exported Resources to create a file with its IP, and later the load balancer would execute a command that picks up every file and generates the config.

    Both versions adopt the same technique; the difference is that version 2 is used when the server (that needs to have a conf generated) doesn't recognize a command like "include" inside its own conf. Now, my question is: is there any way to do this in a different form? I suspect there is, since puppet is made to manage multiple servers; it seems a bit strange not to have an easy way to configure load balancers.
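    For illustration, a minimal sketch of the "collect the files and generate the config" approach from option 2, assuming each upstream node has exported a one-line file containing its IP into /etc/nginx/upstream.d/ on the balancer (the paths, file layout and reload command are assumptions, not from the question):

        #!/bin/bash
        # Sketch: rebuild the nginx upstream block from per-host IP files and reload.
        set -e
        FRAGMENT_DIR=/etc/nginx/upstream.d                      # one file per upstream node
        UPSTREAM_CONF=/etc/nginx/conf.d/upstream_backend.conf   # generated output

        {
            echo "upstream backend {"
            for f in "$FRAGMENT_DIR"/*; do
                echo "    server $(cat "$f"):80;"
            done
            echo "}"
        } > "$UPSTREAM_CONF"

        # only reload if the generated config parses
        nginx -t && service nginx reload

    In puppet terms this is usually wired up by having the collected exported file resources notify an exec that runs a script like this, so the balancer regenerates its config on any run that changes the fragment directory.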

  • Improving performance by using an additional static file server

    - by Max
    Hello there, I'm planning a large website that includes many static assets (JS, CSS, images and thumbnails) in the generated pages. The website will use TYPO3 as its CMS (it is a customer requirement). I guess I could seriously improve performance and page load times by using a two-server setup: one server where the main application (PHP) runs, and another where the static files sit, served by a trimmed-down version of Apache or something like lighttpd. Including e.g. JS or CSS files from the file server is of course no big deal; just use an absolute URL like http://static.example.com/js/main.js and be done with it.

    But: the website will have pages with MANY thumbnails of e.g. product images on them. So I see two problems when the main application tries to create a thumbnail of an image:

    - The original image, such as products/some.jpg, is uploaded to the static file server and is therefore not on the same server as the PHP application that tries to create the thumbnail.
    - TYPO3 writes created thumbnails to a temp directory which is expected to be on the same server. Therefore, hundreds of thumbnails will be written to and served from that temp directory on the same server as the main application; the static file server is in that case basically useless, since all thumbnails will be requested from the main application's server.

    So, my question is: how do I overcome these shortcomings? Is it possible to "symlink" some directories to another server? For example, if PHP tries to open the original product image for thumbnail creation with imagecreate("products/some.jpg"), could the products folder actually "point" to the products folder on the static image server? I know something like this can be done with .htaccess, but is it possible at the file system level?
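    One common way to get the "points to a folder on another server" behaviour at the file system level is a network mount rather than a symlink. A minimal sketch assuming the static server exports its assets over NFS; all host names and paths here are placeholders, not taken from the question:

        # On the static file server: export the product images (in /etc/exports):
        #   /var/www/static/products   app-server.example.com(rw,sync,no_subtree_check)
        # then reload the export table:
        exportfs -ra

        # On the application server: mount the export where TYPO3 expects "products"
        mkdir -p /var/www/typo3/products
        mount -t nfs static.example.com:/var/www/static/products /var/www/typo3/products

        # make the mount persistent across reboots
        echo "static.example.com:/var/www/static/products /var/www/typo3/products nfs defaults 0 0" >> /etc/fstab

    With that in place, PHP opens products/some.jpg as a local file as far as it can tell, while the bytes still live on the static host; the thumbnail temp directory could be handled the same way in reverse.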

  • Installing 64-bit Ubuntu Server 12.04 LTS, on a VM with VMWare Player, on a 64-bit Windows 7 PC

    - by WannaBeAGeek
    I'm trying to create a VM, using VMware Player, with an ISO image of Ubuntu Server 12.04 (LTS). The machine I'm doing the installation on has an Intel(R) Core(TM) i5 CPU and runs 64-bit Windows 7. I managed to create the VM (gave the username, password, configured the network etc.), but I can't install Ubuntu Server. First I get this alert:

        Binary translation is incompatible with long mode on this platform. Disabling long mode. Without long mode support, the virtual machine will not be able to run 64-bit code. For more details see http://vmware.com/info?id=152.

    When I click OK, I get another alert:

        This virtual machine is configured for 64-bit guest operating systems. However, 64-bit operation is not possible. This host supports Intel VT-x, but Intel VT-x is disabled. Intel VT-x might be disabled if it has been disabled in the BIOS/firmware settings or the host has not been power-cycled since changing this setting. (1) Verify that the BIOS/firmware settings enable Intel VT-x and disable 'trusted execution.' (2) Power-cycle the host if either of these BIOS/firmware settings have been changed. (3) Power-cycle the host if you have not done so since installing VMware Player. (4) Update the host's BIOS/firmware to the latest version. For more detailed information, see http://vmware.com/info?id=152.

    Then, when I click OK, my VM exits, and I get back to the VMware Player home screen. I don't know much about hardware and virtualisation, so there might be some necessary info I'm not giving. Please don't hesitate to let me know what is missing from my post. Thanks :)

  • Converting Audio To Video Output and Attaching Text?

    - by ZeeMan
    I am currently working on a project, and before I get started I thought it'd be nice to check with the Stack Overflow community and see if maybe they can help me with this.

    The idea: I have about a thousand MP3 files that I need to convert into video files to be uploaded to YouTube for my work. Here is where it gets tricky: I also need to attach the text associated with the audio to the video as an image. I was thinking .ppt.

    The problem: I can do this one audio file at a time, but it would take me a zillion years. lol!!

    The question: Can I create some kind of program, using let's say XML or JavaScript or XHTML or some other programming language, to do mass content creation where all I have to do is feed it the information? Possibly a script? Or is it possible to create an example .ppt file and then hack it so that I can have it reproduce itself with different information?

    The note: Thanks in advance for helping out!!! Regards, ZeeMan!!!
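    As an illustration of the batch-script route, a minimal sketch using ffmpeg (the tool choice and file layout are my assumptions, not from the question): it pairs each MP3 with a same-named image - for example a slide exported as PNG - and renders one video per track.

        #!/bin/bash
        # Sketch: for every track.mp3 with a matching track.png, produce track.mp4
        # showing the image for the full length of the audio.
        for audio in *.mp3; do
            image="${audio%.mp3}.png"
            output="${audio%.mp3}.mp4"
            ffmpeg -loop 1 -i "$image" -i "$audio" \
                   -c:v libx264 -pix_fmt yuv420p \
                   -c:a aac -shortest "$output"
            # note: older ffmpeg builds may need "-strict experimental" for the aac encoder
        done

    The remaining work is exporting one image per slide with the same base name as its MP3, which PowerPoint itself can do by saving the presentation as PNG images.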

  • Booting Ubuntu as VM with KVM on Ubuntu 12.04

    - by CrazycodeMonkey
    I am trying to boot my very first VM using KVM. I have Ubuntu 12.04 installed, and I made sure the BIOS had the right virtualization flag enabled for the Intel processor by running kvm-ok. I have researched this on Google and all the instructions I have found so far are outdated. For example, most instructions talk about booting a virtual machine with the following commands:

        qemu-img create -f qcow2 foo.img 100G    # create a virtual disk for your VM
        kvm --name foo -m 1024 -hda foo.img -cdrom whatever.iso -boot d    # run kvm

    This command line is incomplete. First, you need to be root to run it. Second, it is missing an option for the video device; when you run it you get the following error:

        Could not initialize SDL(No available video device) - exiting

    I googled this error and looked it up on Stack Overflow (http://stackoverflow.com/questions/4841908/sdl-init-failure-reason-is-no-available-video-device), but the answer provided there does not work on Ubuntu 12.04. Googling further, I found out that I need to specify a video device, so I finally ran the following command (after I had created the myimage.img image on the drive):

        sudo kvm --name mymachine -m 8096 -hda myimage.img --cdrom ubuntu.iso -boot d -vga cirruss -k en-us -vmc :0

    Now this command does not give me an error, but it just hangs. Does anyone have clear instructions on how to run a VM using KVM on Ubuntu?
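    For reference, a minimal sketch of the libvirt route that Ubuntu 12.04 supports out of the box, using virt-install rather than invoking kvm by hand; the VM name, disk size and ISO path are placeholders, not taken from the question:

        # one-time: install the management tools
        sudo apt-get install qemu-kvm libvirt-bin virtinst

        # create and boot a VM from the installer ISO; --graphics vnc sidesteps the
        # "no available video device" SDL problem on a headless host
        sudo virt-install \
            --name mymachine \
            --ram 1024 \
            --disk path=/var/lib/libvirt/images/mymachine.img,size=20 \
            --cdrom /path/to/ubuntu.iso \
            --graphics vnc,listen=0.0.0.0

        # check the VM and find the VNC display to attach an installer console
        sudo virsh list --all
        sudo virsh vncdisplay mymachine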

  • DNSSEC - First Signature

    - by Arancha
    I'm testing DNSSEC with BIND 9.7.2-P2. I have a question regarding the first signature created over a zone that already exists. I'm using dynamic DNS. I create the first two keys: one KSK and one ZSK. According to https://datatracker.ietf.org/doc/draft-ietf-dnsop-dnssec-key-timing/, the first ZSK needs to be published for an interval equal to Ipub before it can be active. I create the ZSK with a Publication date prior to its Activation date. I restart the service and I can see that the key is published at the Publication date, but it is not activated later, when the Activation date arrives. This is the configuration of the zone dnssec.es in the named.conf file:

        zone "dnssec.es" {
            auto-dnssec maintain;
            update-policy local;
            sig-validity-interval 1;
            key-directory "dnssec/keys_dnssec";
            type master;
            file "dnssec/db.dnssec.es";
        };

    Any clue?? Regards
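    For context, a minimal sketch of generating the key pair with explicit timing metadata using BIND 9.7's dnssec-keygen, as it might look for a setup like the one above; the algorithm, key sizes, offsets and directory are placeholders, not taken from the question:

        # assumed key directory matching the zone statement above
        cd /var/named/dnssec/keys_dnssec

        # key-signing key, published and active immediately
        dnssec-keygen -a RSASHA256 -b 2048 -f KSK -n ZONE dnssec.es

        # zone-signing key: publish now, activate two hours later (the Ipub interval)
        dnssec-keygen -a RSASHA256 -b 1024 -n ZONE -P now -A now+2h dnssec.es

        # inspect the timing metadata named will act on (12345 stands for the generated key id)
        dnssec-settime -p all Kdnssec.es.+008+12345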

  • NAT and P2P router crash

    - by returnFromException
    So... I had this argument with my networks teacher. He said that some people complain about router crashes due to too many entries in the NAT tables on a router. I didn't understand, and I asked: "If the application uses the same port, why does the router crash? It should have only one entry (pc-ip, pc-port; public-ip, public-port)." And he said: "It doesn't matter that it's using the same port." I got the idea that NAT creates an entry for every packet that passes through it. I am assuming NAT with overloading, as you might have guessed. So the questions are:

    1 - How are NAT entries created? On a per-packet basis or a per-connection basis? I mean: suppose I send a UDP packet, does the router create an entry?
    2 - When I start a TCP connection, does the router create a persistent NAT entry until the connection closes?
    3 - Was my teacher right? Can the NAT table overload, assuming an application on the same port sending packets?

    Thanks in advance.
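    A small illustration of what a per-connection (rather than per-packet) translation table looks like in practice, assuming a Linux box doing overload NAT; the interface name is a placeholder, and the commands below only set up masquerading and inspect state:

        # masquerade outbound traffic (typical overload NAT / PAT setup)
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

        # list the current translation entries (conntrack-tools package);
        # each line is one tracked flow, not one line per forwarded packet
        conntrack -L

        # or read the kernel's table directly
        cat /proc/net/nf_conntrack        # /proc/net/ip_conntrack on older kernels

        # current number of entries versus the configured ceiling
        sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max

    That ceiling is the relevant limit for the argument above: a full tracking table typically means new flows get dropped rather than the router literally crashing.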

  • Users and Groups management on 7 Home Premium

    - by AviD
    I recently upgraded the home PC from XP Pro to Windows 7 Home Premium, and I'm looking for a solution for a few things that seem to be missing from this edition... Since Local Users and Groups is blocked on Home Premium, I can't figure out how to manage groups, or even do anything even slightly advanced to users (basically, create/group/picture is it). net localgroup, net users, net etc. don't seem to work - I'm getting "system error 5". While I'm on the topic, I can't activate (what was once) "Local Security Policy"... Looking for any help, advice, or even a new direction, 'cuz things is differ'nt on Winnows7...

    To clarify, I'm looking to do some of the following, which were simple back in XP-land:

    - remote user only (i.e. no local logon)
    - grant special privileges for a specific user
    - grant access to e.g. the C$ share for a specific remote user
    - create custom groups for users, to be able to separate the privileges of, say, my wife from my kids
    - define quite specifically what each user can do (beyond just standard users)
    - harden the OS (hmm, I guess maybe what I'm looking for is a security hardening guide for 7...?)

  • Change default profile directory per group

    - by Joel Coel
    Is it possible to force windows to create profiles for members of one active directory group in a different folder from members in another active directory group? The school here uses DeepFreeze to protect public computers. In a nutshell, DeepFreeze prevents all changes to a hard drive such that every time you restart the machine the disk is identical to it was at the time you froze it. This is a bit different than restoring to an image, in that it never really wrote changes to disk in a permanent way in the first place. This has a few advantages over images: faster recover times, and it's easy to thaw the machine for a few minutes to perform maintenance such as windows updates (which can even be automated). DeepFreeze also allows you to configure a "thawspace" partition, where changes are persistent across reboots. One of the weaknesses of DeepFreeze is that you end up needing to create a new profile every time you log in, unless your profile existed at the time the machine was frozen. And even then, any changes you make to your profile while working on a frozen machine are lost. As students have frequent legitimate needs to log in to our classroom machines, there is currently a lot of cleanup involved from time to time in removing their old profiles and changes, so I want to extend DeepFreeze to protect our classroom computers as well as public computers. The problem is that faculty have a real need to keep a stateful profile locally on these classroom computers. The solution I would like to use is to configure Windows via group policy (or even manually, if that's the way I'll have to do it) to place profile folders on the thawspace partition, but only for members of the faculty security group. Is this possible?

  • How can I automatically synchronize a directory tree on multiple machines?

    - by Blacklight Shining
    I have two Mac laptops and a Debian server, each with a directory that I would like to keep in sync between the three. The solution should meet the following criteria (in rough order of importance):

    - It must not use any third-party service (e.g. Dropbox, SugarSync, Google whatever). This does not include installing additional software (as long as it's free).
    - It must not require me to use specific directories or change my way of storing things. (Dropbox does this IIRC.)
    - It must work in all directions (changes made on /any/ machine should be pushed to the others).
    - All data sent must be encrypted (I have ssh keypairs set up already).
    - It must work even when not all machines are available (changes should be pushed to a machine when it comes back online).
    - It must work even when the /directories/ on some machines are not available (they may be stored on disk images which will not always be mounted). This can be solved for Macs by using launchd to automatically launch and kill (or in some way change the behavior of) whatever daemon is used for syncing when the images are mounted and unmounted.
    - It must be immediate (using an event-based system, not a periodic one like cron).
    - It must be flexible (if more machines are added, I should be able to incorporate them easily).

    I also have some preferences that I would like to be fulfilled, but that do not have to be:

    - It should notify me somehow if there are conflicts or other errors.
    - It should recognize symbolic and hard links and create corresponding ones.
    - It should allow me to create a list of exceptions (subdirectories which will not be synced at all).
    - It should not require me to set up port forwarding or otherwise reconfigure a network. This can be solved by using an ssh tunnel with reverse port forwarding.

    If you have a solution that meets some, but not all, of the criteria, please contribute it in the comments as it might be useful in some way, and it might be possible to meet some of the criteria separately.

    What I tried, and why it didn't work:
    - rsync and lsyncd do not support bidirectional synchronization
    - csync2 is designed for server clusters and does not appear to work with machines with dynamic IPs
    - DRBD (suggested by amotzg) involves installing a kernel module and does not appear to work on systems running OS X
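    One candidate not in the "what I tried" list is Unison, which does bidirectional synchronization over ssh and runs on both OS X and Debian. A minimal sketch; the host name, paths, ignore rules and the launchd/fswatch wrapper that would make it event-driven are all assumptions to be filled in:

        # Sketch: two-way sync of ~/shared with the Debian server over ssh.
        # Unison (the same version) must be installed on both ends.
        unison ~/shared ssh://me@debian-server//home/me/shared \
               -batch -auto \
               -ignore 'Path tmp'

        # Run this from a launchd job (Mac) or an fswatch/inotify hook rather than
        # cron to keep it close to event-driven, and only while the image is mounted.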

  • On linux, what does it mean when a directory has size 0 instead of 4096?

    - by kdt
    Here's a strange thing I haven't seen before -- a directory whose size is reported by ls as 0 instead of 4096, and I can't create any files within it.

        # ls -ld lib home
        drwxr-xr-x.  2 root root    0 Feb  7 03:10 home    <-- it has zero size
        dr-xr-xr-x. 11 root root 4096 Feb  4 09:28 lib
        # touch home/foo
        touch: cannot touch `home/foo': No such file or directory    <-- and I can't create files in it
        # rm home
        rm: cannot remove `home': Is a directory    <-- look, it really is a dir

    So what does it mean for a directory to have size 0 instead of 4096? The filesystem is ext4 on Fedora Core 14. The output of mount is:

        /dev/mapper/vg_dev-lv_root on / type ext4 (rw)
        proc on /proc type proc (rw)
        sysfs on /sys type sysfs (rw)
        devpts on /dev/pts type devpts (rw,gid=5,mode=620)
        tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
        /dev/vda1 on /boot type ext4 (rw)
        none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
        sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

    Output of du -s /home:

        0       /home

    Output of stat /home:

          File: `/home'
          Size: 0               Blocks: 0          IO Block: 1024   directory
        Device: 15h/21d         Inode: 34913       Links: 2
        Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
        Access: 2011-02-07 03:45:46.188995765 -0800
        Modify: 2011-02-07 03:11:59.980995019 -0800
        Change: 2011-02-06 07:58:45.874995002 -0800

  • Snapshotting single disk of running Hyper-V VM

    - by modelnine
    I'm currently somewhat at a loss as to how to create a snapshot of a single virtual hard disk of a running Hyper-V VM. Generally, creating a differential disk while a server is shut down is no problem (i.e., call the New-VHD cmdlet and pass a ParentPath, then update the VHD binding of the respective VM device), but while the host is running, all I can find is checkpointing the VM as a whole, which creates snapshots of all attached disks and leaves the VM state in a form which isn't easily processed by external tools (i.e., it requires reading additional metadata from the VM). What I'd like to happen for a single-disk snapshot (in my understanding) is:

    - Pause the VM
    - Rename the current disk to some other name which marks it as the base snapshot
    - Create a new VHD which has the renamed VHD as its parent path and is marked as "current"
    - Swap the VHD of the snapshotted hard disk on the VM to the newly created differential VHD
    - Resume the VM

    Is there any means to do this programmatically?

    Update: I've seen that this is actually possible with SCSI disks, i.e. pause the VM, remove the SCSI disk, make the snapshot, reattach the SCSI disk at the same position, resume the VM - and the VM resumes properly. But is something similar also possible with G1 machines for the boot disk, which is always IDE?

  • Configuring Vmware virtual machines to run under different IPs and PC specs

    - by Alex
    Right now I'm using a simple VMware virtual machine with Win 7 preinstalled. The IP is assigned automatically (it's the same as the main OS's IP). Is it possible to create several virtual machines that have different hardware specifications and different IP addresses? Here is what I mean regarding these issues:

    Specs: Certainly, you can easily change some specifications in the Settings menu (RAM size, HDD size), but what about advanced settings? For example:

    - Processor: is it AMD (2500+, 4000+, etc.) or Intel (Core 2, Pentium, etc.)?
    - RAM: is it Corsair 4 GB 1333 MHz, Kingston 2 x 2 GB 866 MHz, or something else?
    - HDD: is it a Seagate Barracuda 80 GB 5400 RPM, a Samsung 500 GB 7200 RPM, or some random SSD?

    Programs that run inside a virtual machine shouldn't have a clue whether it's VMware or not.

    IPs:

    - Every program that's launched under the main OS uses the real IP: 93.56.xx.xx
    - All programs that are launched under virtual machine A use IP 1: 74.78.xx.xx
    - All programs that are launched under virtual machine B use IP 2: 84.159.xx.xx

    I believe you have to use either a VPN or a proxy to solve this problem.

    To sum up: The idea is to create 2-3 independent virtual machines with different hardware specifications and IP addresses. Programs that run under a certain virtual machine shouldn't have a clue whether it's VMware or the real PC. Any ideas/tips or experience regarding configuration will be appreciated!

  • Connection timed out on Node.js app running under CentOS

    - by ss1271
    I followed this tutorial to create a simple node.js app on my CentOS box. The node.js version is:

        $ node -v
        v0.10.28

    Here's my app.js:

        // Include http module,
        var http = require("http"),
            // And url module, which is very helpful in parsing request parameters.
            url = require("url");

        // show message at console
        console.log('Node.js app is running.');

        // Create the server.
        http.createServer(function (request, response) {
            request.resume();
            // Attach listener on end event.
            request.on("end", function () {
                // Parse the request for arguments and store them in _get variable.
                // This function parses the url from request and returns object representation.
                var _get = url.parse(request.url, true).query;
                // Write headers to the response.
                response.writeHead(200, {
                    'Content-Type': 'text/plain'
                });
                // Send data and end response.
                response.end('Here is your data: ' + _get['data']);
            });
        // Listen on the 8080 port.
        }).listen(8080);

    However, when I uploaded this app onto my remote server (assume the address is 123.456.78.9), I couldn't access it in my browser at http://123.456.78.9:8080/?data=123 - the browser returned ERR_CONNECTION_TIMED_OUT. The same app.js runs fine on my local machine. Is there anything I am missing? I tried to ping the server and its address was reachable. Thanks.
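    If the app runs fine locally but times out from outside, a common culprit on CentOS is the default iptables firewall rather than the app itself. A sketch of checking and opening the port, assuming the stock iptables service (a CentOS 7 box with firewalld would need the equivalent firewall-cmd calls instead):

        # confirm the app is actually listening on all interfaces
        netstat -tlnp | grep 8080

        # inspect the current firewall rules
        iptables -L -n

        # allow inbound TCP on port 8080 and persist the rule
        iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
        service iptables save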

  • I need to preserve a tape using Symantec Backup Exec. I'm having trouble doing so

    - by MrVimes
    Please forgive me if this is the wrong Stack Exchange site; please suggest which one I should post this to if it is. There's an automatic tape machine running in a remote location, with Symantec Backup Exec 11d as its software. Recently one of the servers being backed up had problems with its RAID controller, so one of its drives has become invisible. I need to preserve the last good backup of that drive, so I am trying to swap the tape holding the most recent backup of that drive with one of the scratch tapes (blank tapes) present in the machine. I've tried the following:

    - Associate the blank media with the media set in question (Wednesday).
    - For the existing media (the tape with the data I want to keep), click 'move to vault' and move it to the offline vault.
    - Associate it with something other than 'Wednesday' (a media set called 'keep data infinitely...').
    - Then do an inventory on that slot.

    The above steps, I'm led to believe, are supposed to put the fresh tape in the slot that held the tape I want to keep. But after the inventory (and after refreshing the device tree) the slot just keeps showing up as containing the tape I want to keep. I am a complete newbie with this software. Can you tell me what I'm doing wrong, and/or how to achieve my desired goal?

    Edit: I just want to point out that I did try to get help directly from Symantec with this, but having jumped through countless hoops to create an account and a support ticket, my progress was halted by a requirement for something called a 'technical contact ID' at the final step, with no explanation of what it is or how to get one.

  • IIS 7.0 rewrite url problem

    - by Jouni Pekkola
    Hello, how can I set a redirect URL for a virtual directory in IIS 7.0? I have installed the latest URL Rewrite Module 2. Let me explain my problem with an example. I have a website on my IIS 7.0 server: www.mysite.com. I decided to create a virtual directory, sales, under my site, which points to the website root directory. Now I need to create a redirect URL for the vdir. The vdir points to the same virtual root directory as my site root does. The big idea is that I can type www.mysite/sales in the browser and be automatically redirected to the URL www.mysite.com?productid=200.

    I tried to set up the redirect with a rewrite URL for the vdir (not the website), but I always get this error message:

        Cannot add duplicate collection entry of type 'rule' with unique key attribute 'name' set to "test"

    This happens when I point at the virtual vdir and try to add a rule. I can add rules at the website level, but those rules don't work; I mean the URL www.mysite/sales gives me the error above. I know that the key is unique - I checked it in web.config. This kind of feature was really easy to use in IIS 6.0: just point at the vdir with your mouse and set its properties - a redirect to a URL. Please can someone explain the right way to do it in IIS 7.0?

  • Macvlan based interface pings from host but not from namespace

    - by jtlebi
    My setup:

    - Private network vboxnet1, 10.0.7.0/24
    - 1 host, Ubuntu desktop
    - 1 VM, Ubuntu server (VirtualBox)

    Addressing layout:

    - HOST: 10.0.7.1
    - VM: 10.0.7.101
    - VM MAC NAMESPACE: 10.0.7.102

    On the VM, I ran the following commands:

        ip netns add mac                           # create a new namespace
        ip link add link eth0 mac0 type macvlan    # create a new macvlan interface
        ip link set mac0 netns mac

    In the mac namespace, inside the VM:

        ip link set lo up
        ip link set mac up
        ip addr add 10.0.7.102/24 dev mac0

    So that we basically end up with (like Inception?):

        +------------------------+
        | Host: 10.0.7.1         |
        |                        |
        | +--------------------+ |
        | | VM: 10.0.7.101     | |
        | |                    | |
        | | +----------------+ | |
        | | | NS: 10.0.7.102 | | |
        | | |                | | |
        | | +----------------+ | |
        | +--------------------+ |
        +------------------------+

    What works:
    - ping between Host and VM
    - ping between NS and NS
    - dhclient from the NS

    What does not work:
    - ping between NS and VM
    - ping between NS and Host

    Where I started to go nuts:
    - tcpdump on the host (the real machine) actually shows ARP requests AND replies
    - tcpdump on the NS shows ARP requests sent to the host
    - tcpdump on the VM makes the whole mess work (!) -- pings start to get answers when tcpdump is started on the VM ?!?

    So, I bet you were eager for it, my question is: how do I make it work? I suspect something's wrong with ARP on the macvlan inside the NS but I can't figure out what exactly... By the way, I did the same experiments with the mac0 interface directly on the VM (no namespace) and it worked flawlessly.
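    Two things may be worth ruling out here (assumptions on my part, not findings from the thread): the macvlan mode, and whether eth0 accepts frames addressed to the extra MAC when it is not in promiscuous mode - the "it works while tcpdump runs" symptom is typical of the latter, since tcpdump switches the NIC to promiscuous. A sketch:

        # recreate the macvlan in bridge mode instead of the default VEPA mode
        ip netns exec mac ip link del mac0
        ip link add link eth0 mac0 type macvlan mode bridge
        ip link set mac0 netns mac
        ip netns exec mac ip addr add 10.0.7.102/24 dev mac0
        ip netns exec mac ip link set mac0 up

        # test the promiscuous-mode theory explicitly; for VirtualBox, also set the
        # vboxnet1 adapter's "Promiscuous Mode" to "Allow All" in the VM's settings
        ip link set eth0 promisc on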

  • pcfg_openfile: unable to check htaccess file, ensure it is readable

    - by rxt
    After moving a website folder on my local development machine to another drive, then moving it back, I got a 403 error. Most of this problem probably had to do with rights that got messed up. After deleting the code and restoring it from SVN, the rights seemed all right; the error stayed, however. The setup is a bit complex, as follows:

    - I have Ubuntu 10.4 as the development machine, trying to mimic the server as much as possible
    - We use Eclipse + SVN and I create all projects in a local folder under my user account
    - In /var/www-vhosts I create folders for each vhost, like this one: test.localhost
    - test.local/index.php includes the index file of the project
    - test.local/.htaccess is a symbolic link to the htaccess file in a project subfolder

    I get the following error in the Apache error log:

        [Thu Jul 08 15:55:56 2010] [crit] [client 127.0.0.1] (13)Permission denied: /var/www-vhosts/test.localhost/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable

    The problem seems to be the .htaccess file, or the link to it. When I empty the htaccess file, nothing changes. When I remove the link, the index include produces some output (in the Apache error log). When I remove the link and replace it with the actual file, I get another error:

        [Thu Jul 08 16:47:54 2010] [error] [client 127.0.0.1] Symbolic link not allowed or link target not accessible: /var/www-vhosts/test.localhost/test

    I'm lost here and don't know what to do next. Do you have any ideas what I can try? This setup has worked before, but I don't know what is different now.
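    Since the link target lives in a project folder under a user account, a quick way to see whether this is just a traversal/permission problem is to check whether the Apache user can read the file through the symlink. A sketch, assuming the Apache user is www-data and using a placeholder project path:

        # can Apache actually read the .htaccess through the link?
        sudo -u www-data cat /var/www-vhosts/test.localhost/.htaccess

        # show owner and permissions of every component along the path
        namei -l /var/www-vhosts/test.localhost/.htaccess

        # if a directory on the way to the link target (e.g. your home directory)
        # blocks traversal, grant execute permission on the directories only
        chmod o+x /home/youruser /home/youruser/workspace /home/youruser/workspace/project

        # Apache must also be allowed to follow symlinks in that vhost:
        #   Options +FollowSymLinks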

  • Handling emails on a web server - Making sure the FQDN is set correctly based on the website sending the email

    - by webnoob
    I have a Windows 2008 Web Edition server hosting multiple websites using IIS 7.5. At the moment, all the emails are sent via the IIS 6 SMTP service. The FQDN of the SMTP service is currently set to the computer name, which isn't correct, as it doesn't resolve to a valid DNS entry and is not RFC compliant. Some questions:

    - Is there any way I can change the FQDN of the SMTP service based on the site sending the email?
    - Would it be OK to just set up mailserver.mydomain.com and use that as the FQDN for all the sites across multiple domains?
    - Should I be using some other mail server software to handle this better?

    The reason I am asking is that lots of emails are hitting spam folders because the settings are incorrect. I have access to the code that is running the websites, so if something needs to be done there then that shouldn't be a problem. The sites are written using ASP.NET 2.0.

    EDIT: I have just found an option to create an SMTP virtual server. Would this be the way forward? Create a virtual server for each site? Thanks.

  • Unable to Align Layers in Photoshop Properly with CS2

    - by Jonathan Sampson
    Cannot align semi-transparent items? Windows Vista, Photoshop CS2. Steps to repeat:

    1. Create a new document
    2. Fill a circle on a new layer
    3. Drop the opacity of the filled circle to 10%
    4. Create a new empty layer below the circle layer
    5. Merge the empty layer with the filled circle layer
    6. Select the entire canvas
    7. Attempt to align layers to the selection (Layer > Align Layers To Selection > Vertical Centers)

    I get the following error:

        Could not complete the Vertical Centers command because there are no layers to be moved.

    Clearly this is not true, as I'm selecting the layer with the semi-translucent ball on it. Now, if you had tried this same command prior to step 5 (when the layer was at 10% opacity), it would have worked. Is there some way around this problem? I need to move around layers that begin as transparent items, with the layer opacity at 100%, where 100% of the layer's opacity results in showing objects that are themselves not very opaque. I've confirmed on another machine that this problem doesn't exist in CS3. It may exist in earlier copies of Photoshop, but I only have access to CS2 (has the problem) and CS3 (does not have the problem).

  • EC2 kernel decision and issues with creating a new machine with my AMI

    - by roacha
    I could really use some advice. I started a new instance on EC2 using Amazon's AMI, and during the deployment process I selected a Kernel ID of "Use Default". I then configured my server the way I wanted, took a snapshot of it, and created my own AMI to build new servers with. When I try to create a new server with this AMI, the server fails to start and I get the error:

        EXT3-fs: sda1: couldn't mount because of unsupported optional features (240).

    This appears to happen because I am selecting a kernel ID of "Use Default" again when building my second server. I have read that in order for this to work I need to choose the same kernel ID that was used for my original server, but I have deleted my original server and don't know what it was using. What is the best process to follow in order to avoid these issues? Should I choose "Use Default" for my original server? How do you know which kernel it selected? Should I just document this and always specify it when deploying new servers from my custom AMI? Or should I choose a specific kernel ID during the initial build and always use that one going forward, hoping Amazon never retires it? Thanks for any advice!
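    For the "how do you know which kernel it selected" part, a sketch using the classic ec2-api-tools (all instance, image and kernel IDs below are placeholders): record the AKI of the running instance before imaging it, and either bake it into the registered AMI or pass it explicitly at launch.

        # show the kernel (aki-xxxxxxxx) the running instance booted with
        ec2-describe-instances i-1234abcd

        # show the kernel registered in an AMI
        ec2-describe-images ami-1234abcd

        # launch from the custom AMI with an explicit kernel instead of "Use Default"
        ec2-run-instances ami-1234abcd --kernel aki-1234abcd --instance-type m1.small --key mykey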

  • Looking for software / something to automate some simple audio processing

    - by Daniel Magliola
    I'm looking for a way to take a 1-hour podcast MP3 file and split it into several several 2-minute MP3s. Along the way, I'd like to also do a few things like Amplify the volume. The problem I'm solving is that I have a crappy MP3 player that won't let me seek forward or backward, nor will it remember where I left it when I turn it off, plus, I listen to these in a seriously high-noise situation. Thus, I need to be able to skip forward in large chunks (2-5 minutes) to the point where I left it. Is there any decent way to do this? Audacity doesn't seem to have command-line capabilities. I'm willing to write some code, for example, to call something over the command line and get how long the MP3 file is, to later know how many pieces i'll have, and then say "create an MP3 with 0:00 to 2:00", "create an MP3 with 2:00 to 4:00", etc. I'm also willing to pay for the right tools if necessary. I also don't care how slow this runs, as long as I can automate it :-) I'm doing this on Windows. Any pointers / ideas? Thanks!
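    Since scripting is on the table, a minimal sketch using ffmpeg (my tool choice, not from the question), shown as a bash loop but equally runnable from any shell that can iterate over files: the segment muxer cuts fixed-length pieces and an audio filter applies the gain in the same pass. The chunk length, gain and naming are placeholders.

        #!/bin/bash
        # Sketch: split each podcast into 2-minute MP3 chunks, boosted by ~6 dB.
        for f in *.mp3; do
            base="${f%.mp3}"
            ffmpeg -i "$f" -af "volume=6dB" \
                   -f segment -segment_time 120 \
                   "${base}_part%03d.mp3"
        done

    If re-encoding is a concern, mp3splt can do the fixed-length splitting losslessly, though it won't change the volume.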

  • Should I use nginx exclusively, or have it as a proxy to Tomcat (performance related)?

    - by Kevin
    I'm planning to create a website that'll be pretty heavy on dynamic content, and want to know what would be the wisest choice for part of my web stack. Right now I'm trying to decide whether I should build on nginx, using PHP to deliver the dynamic content, or use nginx as a proxy to Tomcat and use servlets to deliver the dynamic content. I have a good amount of experience with Java, JSP, and servlets, so that's a plus right off the bat. Also, since Java is a compiled language, it will execute faster than PHP (it is implied here that Java is around 37x faster than PHP), and so will create the web pages faster. I have no experience with PHP; however, I'm under the impression that it is easy to pick up. It's slower than Java, but since the client will only be communicating with nginx, I'm thinking that serving the dynamically created web pages to the client will be faster this way. Considering these things, I'd like to know:

    - Are my assumptions correct?
    - Where does the bottleneck occur: creating pages or serving them back to the client?
    - Will proxying Tomcat with nginx give me any of nginx's performance benefits if I'm going to be using Tomcat to generate the dynamic content (keeping in mind my site is going to be heavy in this aspect)?

    I don't mind learning PHP if, in the end, it's going to give me the best performance. I just want to know what would be the best choice from that standpoint.
