Search Results

Search found 18728 results on 750 pages for 'setup deployment'.


  • correct file permissions for trac and git user to access gitolite server repos

    - by klemens
    Hi. This may sound like a stupid question (to me it does), but I couldn't find any information on it. On my server I host some git repositories via gitolite, with a Trac instance for every repository. I have a user called git for pushing/pulling from the server (git clone git@server:repo), and Trac runs as an Apache vhost with mod_wsgi, under the www-data user. What puzzles me (maybe because I don't have much of a clue about file permissions at all) is: what is the best permissions setup (chown, chmod) for the git repositories (/home/git/repositories/...)? www-data (or Trac) needs at least read permission, I think, and git (or gitolite) obviously needs read/write permission to push changesets. I tried a few things (i.e. adding www-data and/or git to the www-data/git group), but didn't get it right: at least one of the two (git or Trac) stops working. Any suggestions are highly appreciated. Regards, klemens
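
    One common arrangement, sketched below purely as an untested example (the paths and the git group are assumptions taken from the question): make www-data a member of the git group, give the group read-only access, and keep write access with the git user alone.

        # Let Apache/Trac read the repositories as a member of the git group
        usermod -a -G git www-data
        chown -R git:git /home/git/repositories
        # git: read/write; group (www-data): read/traverse only; others: nothing
        chmod -R u+rwX,g+rX,o-rwx /home/git/repositories
        # So newly pushed objects stay group-readable, set gitolite's umask,
        # e.g. $REPO_UMASK = 0027 in ~git/.gitolite.rc (depending on gitolite version)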


  • Using NFS for scalable PHP/MySQL web application

    - by Jeroen Moons
    Here's the situation: I have a PHP/MySQL web application that accepts user uploads (PDF files). From these PDF files' pages a preview image is made on the fly and presented to the web app's users. Some PDFs might be on the large side; most will be under 50 MB, but some extreme cases could be as large as a few hundred MB. A little waiting for the preview image of a large PDF is acceptable, but no more than a minute, let's say. Everything is running on one server for now, but soon the app will hit the server's limit on both storage and processing power.

    My idea to solve the problem: to deal with this situation I had the idea of having one or more PDF-processing servers, as needed, and one or more file-storage servers. These two types of servers are mounted via NFS onto the server on which the actual app runs. The app could then use Gearman to delegate PDF-processing tasks to the processing servers. A processing server can mount the storage server, read the file stored there, process it, and write its output back to that server. The servers I'm talking about will be Amazon EC2 instances. The web app returns a link to the resulting PDF preview image on whichever storage server was used, which the front end can then use to show the image to the user.

    My question: I have zero experience with apps that use multiple servers. Is this idea viable, or is there a better way to do it? Is an NFS setup fast and reliable enough for this situation?
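
    For reference, a minimal sketch of the NFS wiring being proposed, with hostnames and export paths invented for illustration:

        # On the storage server: export the upload directory to the app and workers
        echo '/srv/uploads app-server(rw,sync,no_subtree_check) worker-1(rw,sync,no_subtree_check)' >> /etc/exports
        exportfs -ra

        # On the app server and on each processing worker:
        mkdir -p /mnt/uploads
        mount -t nfs storage-server:/srv/uploads /mnt/uploads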


  • What is the IP range of EC2?

    - by Nicolas Kassis
    I'd like to set up a rule to block SSH requests from EC2, since I've been seeing a large number of SSH-based attacks from there, and I was wondering if anyone knew what their IP ranges are.

    EDIT: Thank you for the answer. I went ahead and implemented the iptables rules as follows. I drop all traffic from those ranges for the moment, logging it just to see if the rules are working and for stats on how much crap EC2 is sending out ;)

        #EC2 Blacklist
        $IPTBLS -A INPUT -s 67.202.0.0/18 -j LOG --log-prefix "<firewall> EC2 traffic "
        $IPTBLS -A INPUT -s 67.202.0.0/18 -j DROP
        $IPTBLS -A INPUT -s 72.44.32.0/19 -j LOG --log-prefix "<firewall> EC2 traffic "
        $IPTBLS -A INPUT -s 72.44.32.0/19 -j DROP
        $IPTBLS -A INPUT -s 75.101.128.0/17 -j LOG --log-prefix "<firewall> EC2 traffic "
        $IPTBLS -A INPUT -s 75.101.128.0/17 -j DROP
        $IPTBLS -A INPUT -s 174.129.0.0/16 -j LOG --log-prefix "<firewall> EC2 traffic "
        $IPTBLS -A INPUT -s 174.129.0.0/16 -j DROP
        $IPTBLS -A INPUT -s 204.236.192.0/18 -j LOG --log-prefix "<firewall> EC2 traffic "
        $IPTBLS -A INPUT -s 204.236.192.0/18 -j DROP
        $IPTBLS -A INPUT -s 204.236.224.0/19 -j LOG --log-prefix "<firewall> EC2 traffic "
        $IPTBLS -A INPUT -s 204.236.224.0/19 -j DROP
        $IPTBLS -A INPUT -s 79.125.0.0/17 -j LOG --log-prefix "<firewall> EC2 traffic "
        $IPTBLS -A INPUT -s 79.125.0.0/17 -j DROP
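
    Since every range gets an identical LOG/DROP pair, the same ruleset can be generated with a loop; a sketch, assuming the same $IPTBLS variable as above:

        for net in 67.202.0.0/18 72.44.32.0/19 75.101.128.0/17 174.129.0.0/16 \
                   204.236.192.0/18 204.236.224.0/19 79.125.0.0/17; do
            $IPTBLS -A INPUT -s "$net" -j LOG --log-prefix "<firewall> EC2 traffic "
            $IPTBLS -A INPUT -s "$net" -j DROP   # log first, then drop
        done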


  • Raid-z inaccessible after putting one disk offline

    - by varesa
    I have installed FreeNAS on a test server with 3x 1 TB drives, set up in raidz. I tried to offline one of the disks (from the FreeNAS web UI), and the array became degraded, as I think it should. The problem is that the array became inaccessible after that; I thought a raid like this should be able to run fine with one of its disks missing. At least, very soon after I offlined and pulled out the disk, the iSCSI share disappeared from an ESXi host's datastores. I also ssh'd into the FreeNAS server and tried simply executing ls /mnt/raid (/mnt/raid/ being the mount point). The whole terminal froze, not accepting ^C or anything.

        # zpool status -v
          pool: raid
         state: DEGRADED
        status: One or more devices are faulted in response to IO failures.
        action: Make sure the affected devices are connected, then run 'zpool clear'.
           see: http://www.sun.com/msg/ZFS-8000-HC
         scrub: none requested
        config:

            NAME                                            STATE     READ WRITE CKSUM
            raid                                            DEGRADED      1    30     0
              raidz1                                        DEGRADED      4    56     0
                gptid/c8c9e44c-08e1-11e2-9ba6-001b212a83ea  ONLINE        3    60     0
                gptid/c96f32d5-08e1-11e2-9ba6-001b212a83ea  ONLINE        3    63     0
                gptid/ca208205-08e1-11e2-9ba6-001b212a83ea  OFFLINE       0     0     0

        errors: Permanent errors have been detected in the following files:

            /mnt/raid/
            raid/iscsivol:<0x0>
            raid/iscsivol:<0x1>

    Have I understood the workings of a raidz wrong, or is there something else going on? It would not be nice to have the same thing happen on a production system...
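
    For reference, a sketch of the recovery path the status output points at (the device name is taken from the output above; note that the read/write error counters on the two remaining ONLINE disks suggest the pool saw IO failures beyond the single offlined member):

        zpool online raid gptid/ca208205-08e1-11e2-9ba6-001b212a83ea   # reattach the offlined disk
        zpool clear raid                                               # clear the logged IO failures
        zpool status -v raid                                           # watch the resilver and error counters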


  • Django apache + mod_wsgi with virtualenv

    - by ArgsKwargs
    I have some questions about running multiple Django sites on a VPS. I have a server that uses OpenPanel to automatically create VirtualHosts within apache2. My ideal situation is multiple virtualenvs with different dependencies installed, so the Python dist-packages directory isn't contaminated across different Django sites. For example:

        /home/user/virtualenv1
        /home/user/virtualenv2

    My Django applications reside at /var/www, for example:

        /var/www/djangosite1
        /var/www/djangosite2

    Now, I've read up on the OpenPanel docs and figured the best thing to do is create a django.conf file inside the mydomain.com.inc folder, i.e. /etc/apache2/openpanel.d/mydomain.com.inc/django.conf, which looks something like:

        DocumentRoot /var/www/djangosite1/project
        WSGIScriptAlias / /var/www/djangosite1/project/wsgi.py
        WSGIDaemonProcess mydomain python-path=/home/user/virtualenv1/lib/python2.6/site-packages
        <Directory /var/www/djangosite1/project>
            Order allow,deny
            Allow from all
        </Directory>
        Alias /static /var/www/djangosite1/project/static-root

    My problem is that this setup seems unable to find the virtualenv site-packages, and thus doesn't recognize any dependencies available in the given virtualenv. Also, commenting out the following line doesn't seem to break or change a thing:

        WSGIDaemonProcess mydomain python-path=/home/user/virtualenv1/lib/python2.6/site-packages

    For example:

        > service apache2 start
        ImportError: No module named South

    When I install South outside the virtualenv, everything works.
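
    One observation, offered as a hedged sketch rather than a confirmed fix: a WSGIDaemonProcess definition is only used once the application is assigned to it with WSGIProcessGroup; without that directive the app runs in mod_wsgi's embedded mode and the python-path is never consulted, which would be consistent with commenting the line out changing nothing. Something like:

        WSGIDaemonProcess mydomain python-path=/home/user/virtualenv1/lib/python2.6/site-packages
        WSGIProcessGroup mydomain
        WSGIScriptAlias / /var/www/djangosite1/project/wsgi.py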


  • Differential backup missing moved folders (flawed archive attribute logic)

    - by Max
    Recently I've discovered that my backup system is flawed: there are situations where various files/folders are missed. I back up from a local disk to a network NAS. I use Cobian Backup, and I have set the backup software to create one full backup every week and one differential backup every day.

    Now, the backup software (to my knowledge, any backup software works this way) decides which files go into the differential backup by looking at the file archive attribute: if the attribute is set, the file goes into the backup. When you move a file to a new location on Windows systems, the archive attribute gets set and the file is included in the backup, and that's fine... but when you move an entire folder, no archive attribute is set, neither on the folder nor on any files inside it, so the moved folder isn't included in the differential backup! So if you have a full backup plus a differential backup and you have moved folders around, it's impossible to reconstruct the original file/folder structure from the full+differential backup, because the backup software didn't include the moved folders in the differential backup. My differential backups are therefore useless...

    Why does Windows set the archive attribute when moving a file, but not when moving a folder? How can I deal with this issue? Is there a way to create a differential backup that works as it's supposed to? Doing a full backup every day is not practical, because the changed data is about 0.1% per day (by using differential backups I can keep 4 weeks of file history without using too much disk space).
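
    As a manual workaround, sketched here with a hypothetical path: the archive attribute can be set recursively on a folder after moving it, so that the next differential run picks everything up:

        rem +A sets the archive bit; /S recurses into subfolders; /D includes the folders themselves
        attrib +A "D:\Data\MovedFolder\*" /S /D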


  • Installing Windows Management Framework 3.0 basically destroyed WMI, how can I fix it without reinstalling the O.S.?

    - by Massimo
    Related, of course, to this question. Before discovering it was somewhat... dangerous, I installed Windows Management Framework 3.0 on a number of Windows Server 2008 R2 SP1 servers, and WMI got completely trashed on all of them. On a normal server, the WMI namespace (as seen from Server Manager - Configuration - WMI Control) is fully populated; after installing WMF 3.0, everything except WMF 3.0's new features is gone. Needless to say, nothing seems to work anymore on those servers. And no, this is not due to some strange installation error: it happened on three servers which were working perfectly before installing WMF 3.0, and on all of them the installation completed successfully. Admittedly, one of them had a somewhat complex setup (various System Center products and SQL Server instances)... but two of them are just plain, standard domain controllers which do nothing else at all. How can I fix this mess without having to reinstall the O.S. on these servers? And why did it happen in the first place?
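
    For what it's worth, a first-aid sketch using the built-in WMI repository tooling (whether it can restore namespaces removed by WMF 3.0, as opposed to ordinary repository corruption, is an open question):

        rem check the repository for consistency
        winmgmt /verifyrepository
        rem rebuild the repository based on a consistency check
        winmgmt /salvagerepository
        rem more drastic: reset the repository to its initial state
        rem winmgmt /resetrepository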


  • Windows 8 Pro Homegroup Not Allowing Access to Shared Files

    - by Jack Herr
    I have two PCs and a laptop. With all three running Windows 7, HomeGroup worked perfectly, with all computers able to access the files on the others exactly as one would expect. I then installed Windows 8 Pro, downloaded recently from the MSDN website, on the laptop. From the info on the MSDN website, I believe the version downloaded is the official release, not a release candidate. The install preserved all settings, apps, and files.

    I cannot connect to the homegroup from the laptop. Well, not exactly: I joined the homegroup during the install, and later left and rejoined the one advertised as being offered by one of the PCs (not sure why that one was picked and not the other, or both). This homegroup appears in my Computer folder in the file explorer, offering files from that PC, but it excludes documents (only the music, pictures, videos, and playlists folders on that one computer appear). I can actually access those files from the laptop. But the Homegroup folder, which shows all the computers by account and computer name and includes documents from both computers, does not open those folders when clicked. The computers listed by name in the Network folder give me a login prompt that doesn't work: it asks for a username and password, no combination of which works. Further, the following error message appears before the login is even attempted: "The system cannot contact a domain controller to service the authentication request. Please try again later."

    The two Windows 7 PCs continue to share all files, including documents, seamlessly. I can also get to the laptop's shared files, which I shared during setup, from either PC. Any ideas?


  • SharePoint Web Analytics not tracking usage for main application

    - by Chris W
    My SharePoint 2010 setup is two separate applications: one for the main portal and one for MySite. While Web Analytics is tracking usage of MySite, it's not showing any stats for the main portal; the only thing it lists is the number of site collections, but no page views etc. The WA service is clearly running, since it picks up data for MySite. In "Configure web analytics and health data collection" everything is ticked, and I can't find any obvious settings that differ between the two applications. Where should I look to get usage tracking working correctly?

    Edit: Having played with the date ranges, I see that I actually have no stats in the last 7 days for any site at all, including MySite, which had been working at some point previously.

    Edit: What does each service (WA Data Processing Service vs WA Web Services) do, and where should they be active? At present they're both running on an app server but not on the WFEs (although they were running on the WFEs previously). From what I can gather they only need to run on an app server, but I find it strange that the only logged activity I see in the staging database relates to Central Admin URLs on the app server, and nothing from the WFEs.


  • Standard PHP configuration command for CentOS 6.5

    - by Krishna
    First of all, let me say that my knowledge of managing servers and configuring scripts is very basic. I am trying to set up a server for my personal use. The server is up and running on CentOS 6.5 with PHP, MySQL and ISPConfig. My feeling is that even though PHP is installed, it needs some further configuration to behave as a standard web server. I searched around and found various 'configure' commands, but none of them worked, and none came with specific step-by-step guidance. Can somebody help me configure PHP in a standard way? Thanks in advance. Some more detail, as requested:

    1. I did not install PHP from source.
    2. Yes, I want some standard modules to be enabled in PHP.
    3. I also want to know whether I need to change any of the default settings. I do have a managed VPS; would you suggest copying php.ini from that VPS to the new dedicated server, so that the settings that work fine for me there are carried over? Thanks
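
    Since PHP here came from packages rather than from source, the source-style ./configure step does not apply; extra modules are installed per package. A sketch for CentOS 6.5 (the module list is an example, not a recommendation):

        yum install php-mysql php-gd php-mbstring php-xml   # common extension packages
        php -m                 # list the modules PHP now loads
        php --ini              # confirm which php.ini is actually in use
        service httpd restart  # reload Apache so the new modules are picked up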


  • Should I link to a blog site or install my own blog engine?

    - by dc
    We're setting up a company blog. Our technology stack is .NET. Should we just use Blogger/WordPress for the blog and redirect to it from our site, or should I install a blog engine directly on our site (e.g. BlogEngine.NET)? Some considerations I'd like feedback on:

    1. SEO: if you host your blog on WordPress/Blogger instead of installing it on your site, will you get better page rankings (if the content were exactly the same)?
    2. Scalability: I've read that BlogEngine.NET doesn't scale well on web farms etc.; our website is set up to be stateless.
    3. Security: presumably a hosted blog site has the advantage of receiving regular security updates. How easy is it to keep an installed blog engine patched?
    4. Examples of installed blog engines: BlogEngine.NET seems to be the best, but it has a couple of limitations. Can anyone suggest another one? (N/A if your advice is to host the blog on Blogger/WordPress.)
    5. Any other comments/issues/concerns we should be aware of?

    Thanks for your feedback!


  • Setting up VPN with Snow Leopard Server and Linksys router

    - by SueP
    I'd like to get VPN going so I can log in to the office securely from home. I'm using Snow Leopard machines everywhere, and currently have AirPort Extremes set up at home and at the office. I have a Mac mini with Snow Leopard Server that I'm going to move to the office to act as my server. I just bought a Linksys 4-port router (model RVS4000) because it says it does VPN. My problem is, I don't have a clue how to set this thing up, and the more reading I do, the more confused I get. Do I need two of these routers, one at each end? My laptop and iPad claim they can do VPN, so I was assuming I only needed one VPN router? At this point, I literally don't know what questions to ask, or where to plug this thing in. Presumably between the modem and the AirPort, but...? If somebody can walk me through some really basic setup, I'd be very grateful. Right now, I feel like going outside and screaming for a while. But that might attract the local cougar, and after the prints I saw on the arena this afternoon, I don't want to draw its attention. :-)


  • CDN Rerouting on 404 (file not yet in synch with original storage)

    - by Alan Ristic
    Here is the problem: I've set up my app (on EC2) to store uploaded images directly on Amazon S3. I'd like to be able to serve static files (CDN-style) from my 'home' server, so I wrote a script that syncs from S3, but there is a window of (at least) one minute before things are in sync. I see two solutions to the problem of pictures not yet being available on the 'home' server:

    1. I write a script on EC2 (where the app resides) to fetch from the DB the pictures that have the status "not-yet-synced", which is the default state when a user uploads a picture. The script then pings each picture and, if it gets an OK response, updates the DB from "not-yet-synced" to "synced".
    2. The preferred solution would be to let Apache (in this case) redirect a request for an image to S3 if it sees a 404 (i.e. it doesn't find the requested image). This way I wouldn't need the script from solution 1.

    Which approach do you suggest I take to solve this redundancy problem? And what is common practice in production environments? To further clarify: I'd like to serve images from the 'home' server first, and if that fails, serve them from S3. Thanks, Alan
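
    Solution 2 can be sketched with mod_rewrite (the bucket and path names are hypothetical): serve the local file when it exists, otherwise redirect the request to S3. A 302 keeps clients coming back to the home server once the sync catches up:

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^/static/(.*)$ http://my-bucket.s3.amazonaws.com/static/$1 [R=302,L]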


  • v2v of RHEL5 box - issues with retaining MAC address

    - by Alex Berry
    For the last week we have been troubleshooting a customer's Red Hat virtual machine running on ESXi. We've been using Veeam to try to create a replica off-site, have had trouble getting it to work on a decent schedule, and recently noticed issues with orphaned snapshots while looking at the datastore. You can see several snapshots in the same folder, and it's causing issues with replication and backup, so we decided the cleanest way out was to v2v the machine to another datastore so that we had a clean single-vmdk setup to work with. This is where our trouble started.

    We first tried a v2v using VMware Converter, connecting to the powered-on machine, as we were having issues doing an offline v2v. This copied fine, but when I tried to set a static MAC using this article, http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=507, the new VM wouldn't take the address: it simply obtained a new MAC, received a DHCP lease, and then would only boot up to a blank red screen, never the login screen. The next step was an offline v2v, once we finally got it working. Same thing: I followed the KB to the letter and still it wouldn't take the MAC. I then tried again and, upon completion, compared the old and new VMX files, copying every identifier and variable possible, then unregistered both VMs, uploaded the new VMX file and booted, only to see the same results. Finally I did the same as above but copied the disk using dd to a second attached vmdk and attached this to the new VM; still no luck. After downloading the modified VMX file after the first boot and comparing it to the original I had created, I found that the BIOS UUID had changed from the one I typed in manually, so I'm assuming this may be the snagging point, but I have no idea. I've never had this issue before on a P2V, and I'm just wondering if someone could shed some light on this. Maybe it's to do with RHEL licensing?
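
    One thing worth checking inside the guest, sketched with the usual RHEL 5 file locations (whether this is the actual snag here is an assumption): the OS itself pins interfaces to MAC addresses, so after a MAC change eth0 may no longer match its configuration:

        # Does the interface config hard-code the old MAC?
        grep HWADDR /etc/sysconfig/network-scripts/ifcfg-eth0
        # If udev persistent-net rules are present, stale entries bind the old MAC to eth0
        ls /etc/udev/rules.d/*persistent-net.rules 2>/dev/null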


  • Is my OCZ SSD aligned correctly? (Linux)

    - by Barney Gumble
    I have an OCZ Agility 2 SSD with 40 GB of space. I use it as a system drive in Debian Linux (Squeeze) and in my opinion it's really fast. But I've read a lot on aligning partitions and file systems... And I'm not sure if I succeeded in aligning the partitions correctly. Maybe the SSD could be even faster?? ;-) I use ext4 and here is the output of fdisk -cul:

        Disk /dev/sda: 40.0 GB, 40018599936 bytes
        255 heads, 63 sectors/track, 4865 cylinders, total 78161328 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: [...]

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048    73242623    36620288   83  Linux
        /dev/sda2        73244670    78159871     2457601    5  Extended
        /dev/sda5        73244672    78159871     2457600   82  Linux swap / Solaris

    My partitions were created just by the Debian Squeeze setup assistant, so I didn't care about the details of partitioning. But now I think maybe the installer didn't align them correctly? Actually, 2048 looks good to me (better than odd values like 63 or something like that) but I've no idea... ;-) Help plz! According to some "SSD Alignment Calculator" I found on the web, the OCZ SSDs have a NAND Erase Block Size of 512 kB and their NAND Page Size is 4 kB. 2048 is divisible by 4 and 512. So are the partitions aligned correctly?
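
    The check reduces to arithmetic: a partition's start is aligned if its byte offset (start sector x 512) is a multiple of the erase-block size, and 512 KiB corresponds to 1024 sectors. A sketch using the start sectors from the output above:

        echo $(( 2048     % 1024 ))   # sda1: 0 -> aligned (2048 * 512 B = 1 MiB, exactly two erase blocks)
        echo $(( 73244672 % 1024 ))   # sda5: 0 -> aligned
        echo $(( 73244670 % 1024 ))   # sda2: 1022 -> unaligned, but it is only the extended container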


  • Apache Server Redirect Subdomain to Port

    - by Matt Clark
    I am trying to set up my server with a Minecraft server on a non-standard port, with a subdomain redirect so that Minecraft clients reach the correct port while web browsers get a web page. That is:

        Minecraft:   minecraft.example.com:25565 -> example.com:25465
        Web browser: minecraft.example.com:80    -> displays an HTML page

    I am attempting to do this with the following VirtualHosts in Apache:

        Listen 25565
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName minecraft.example.com
            DocumentRoot /var/www/example.com/minecraft
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/example.com/minecraft/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>
        <VirtualHost *:25565>
            ServerAdmin [email protected]
            ServerName minecraft.example.com
            ProxyPass / http://localhost:25465 retry=1 acquire=3000 timeout=6$
            ProxyPassReverse / http://localhost:25465
        </VirtualHost>

    Running this configuration, when I browse to minecraft.example.com I am able to see the files in the /var/www/example.com/minecraft/ folder. However, if I try to connect in Minecraft I get an exception, and in the browser I get a page with the following information:

        minecraft.example.com:25565 -> Proxy Error
        The proxy server received an invalid response from an upstream server.
        The proxy server could not handle the request GET /.
        Reason: Error reading from remote server

    Could anybody share some insight on what I may be doing wrong and what the best possible solution would be to fix this? Thanks.
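
    Worth noting as a sketch of an alternative: ProxyPass with an http:// target speaks HTTP, while the Minecraft client speaks its own TCP protocol, which is consistent with the "invalid response from an upstream server" error. A plain port redirect at the firewall avoids Apache entirely; a hypothetical iptables form:

        # redirect inbound TCP 25565 to the Minecraft server's actual port 25465
        iptables -t nat -A PREROUTING -p tcp --dport 25565 -j REDIRECT --to-ports 25465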


  • Computer does not boot after RAM upgrade

    - by Calmarius
    I have a Dell OptiPlex GX520 desktop PC (about 5 years old) with 512 MB of DDR2 RAM. Since my computer is always swapping, I thought I should upgrade the RAM, so I bought a Kingmax 2 GB DDR2 module. But my system does not boot: the status LEDs show 2 and 4, which the user manual says means 'video card failure'. What? I put the original module back and everything works. I have tried many combinations. When I leave the old 512 MB module in and put the 2 GB module next to it in the other socket, the system completes POST and I'm able to enter the BIOS menu, which says the system has 2.5 GB installed, one 0.5 GB and one 2 GB module in dual asymmetric channel mode. That seems right. But after exiting the BIOS setup, GRUB loads successfully and then trying to boot Ubuntu crashes with a kernel panic immediately, while trying to load Windows XP does not get past the loading screen and crashes with a 0x8E stop error. Does this mean the RAM I bought is faulty? Or just that the module is too new to be handled by my computer? In that case I might exchange the RAM with friends, but no other computer in my house will take it (my very old box has DDR1 RAM, and my sister's new box has DDR3, so I can't plug this memory into either one). I'm going to return the RAM to the store tomorrow to replace it with a better one. Is there any hope of getting the new module to work?


  • How do I collect SNMP readings from intermittently-connected sites?

    - by Luke404
    I am collecting SNMP data on-site for a number of systems, currently using Cacti. These systems are spread over a number of sites that aren't always connected to the internet, but I also need to centralize the data on a single system (a datacenter-housed server) and get graphs out of it. If I directly polled the remote systems with a centralized Cacti, I'd lose data whenever a site is not connected to the internet. So I should record data on-site (I have a server at each site and can run whatever I want on it) and then 'sync' everything to the central system. One hack could be Cacti, or rrdtool directly, on-site, and then periodically rsyncing the RRD data to the central Cacti system, but that doesn't sound like a 'clean' solution: every RRD would have to be defined in both places, and rsync scripts set up with the specific file names. Can you suggest a better solution? Cacti is not a requirement, but I'd like to use something like it on the central system. The on-site systems only need to collect data; I don't need to graph it there, or manage user rights to view the data and so on. Users will only access the centralized system.
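
    For scale, the rsync hack under discussion is a one-liner per site; the sketch below assumes Cacti's common RRD location and an invented hostname, run from cron whenever the link happens to be up:

        rsync -az --partial /var/lib/cacti/rra/ central-server:/var/lib/cacti/rra-site1/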


  • Windows 7 hangs after going into sleep a second time

    - by Brian Stephenson
    I've searched all around Google and can't figure out why this is happening, so I've decided to ask here in case anyone has a problem like this. As the title says: after sleeping ONCE I'm able to wake the system, but going back to sleep again AFTER waking up for the first time results in it hanging with no input and no output, the fan spinning as fast as possible and a lot of heat being blown out as well. I've tried various things, like setting all USB root hubs not to be switched off for power saving, disabling USB selective suspend, disabling PCI-E link state power management, and even unplugging ALL USB devices, and it won't wake up after the second attempt. I've waited up to a full hour with the CPU fan spinning loudly, and it's still stuck trying to wake up. The only USB devices I use are a Microsoft USB Comfort Curve Keyboard 2000 (IntelliType Pro) and a generic HID-compliant mouse from Creative, model number OMC90S "CREATIVE MOUSE OPTICAL LITE". My other devices, like external drives and controllers, are unplugged when I'm not using them, since having too many USB devices plugged in at once causes a deadlock on almost all of the ports I have. Here are my system specifications (most of these are from CPU-Z):

        Brand: Gateway DX4300-19
        Mainboard: Gateway RS780
        Chipset: AMD 780G Rev 00
        Southbridge: AMD SB700 Rev 00
        LPCIO: ITE IT8718
        BIOS: American Megatrends Inc. ver P01-A4 09/15/2009
        CPU: AMD Phenom II X4 810 at 2.60 GHz
        RAM: 8.0 GB DDR2 Dual Channel Ganged Mode at 400 MHz
        GPU: ATI Radeon HD3200 Graphics Integrated - RS780
        OS: Windows 7 Home Premium x64 OEM (Acer Group)
        HDD: WDC WD10EADS-22M2B0 1.0 TB (Western Digital Green Caviar)

    My BIOS has absolutely no control over whether sleep mode is S1 or S3, so I can't check or change those settings. Hybrid sleep is also disabled. I can successfully hibernate and wake from hibernation, but this is painfully slow due to a hard drive problem I'm having with this "Green" drive (hibernation takes over ~3 minutes to complete). Any help would be appreciated, thanks.
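
    Since the BIOS won't show which sleep states are in play, Windows' own power tooling can, as a sketch (these are standard powercfg switches):

        rem list the sleep states (S1-S4) this system supports
        powercfg /a
        rem report what woke the machine most recently
        powercfg -lastwake
        rem write an energy/sleep diagnostics report (energy-report.html)
        powercfg -energy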


  • Symantec Protection Suite and System Recovery 2011 Desktop Edition

    - by rihatum
    I am re-posting this because my previous question was treated as if I were "shopping or seeking product recommendations", even though I was not. By the way, my comments, which were not offensive in nature, were deleted too. Anyway, I have re-phrased some parts of my question and I hope the SF admins do not modify or edit this one; I will be most grateful for that. I have a lot of respect for the people who visit this site and help others!

    Just to clarify, and to go by SF rules: I am not asking someone to design this solution for me. I am simply seeking real-world examples, experiences, expert technical opinions and suggestions, any tips or tricks people may have, or any problems they may have faced while doing something similar with these products. I am also not asking for capacity planning for storage; we have done some research, and I am seeking expert assurance and suggestions.

    We (our company) are planning to deploy Symantec Endpoint Protection and Symantec Desktop Recovery 2011 Desktop Edition to our 3000-4000 workstations (Windows 7, 32- and 64-bit), with a few hundred on Windows XP 32/64-bit. I have read the implementation guide for SEP and the tech notes for Desktop Recovery 2011. Our team plans to deploy this as follows:

        1 x dedicated SQL 2008 R2 for Symantec Endpoint Protection (instead of the embedded database)
        1 x dedicated SQL 2008 R2 for Symantec Desktop Recovery 2011 (instead of the embedded database)
        1 x dedicated W2K8 R2 box for the SEPM (Symantec Endpoint Protection Manager, the management app)
        1 x dedicated W2K8 R2 box for the Symantec Desktop Recovery 2011 management application

    Agent deployment: per the Symantec documentation for both products, an agent can be pushed via the management application (provided no firewalls are blocking the required ports etc.; we have the Windows firewall disabled already). Server hardware:

        Per SQL server: 16 GB RAM + SAS disks + dual Xeon, RAID-10 for the SQL DB, or I can always mount a LUN from our existing Hitachi or EMC SAN.
        SEPM server: 16 GB RAM + SAS disks + dual Xeon.
        System Recovery management server: 16 GB RAM + SAS disks + dual Xeon.

    The above is the initial plan we have for 3000-4000 Windows client workstations. Now my questions :-)

    a) If these users were distributed across two sites with an AD DC/GC in each site, how would I restrict the SEPM and Desktop management solutions to only check for users in their respective site?
    b) At present all users are in one building, but we are going to move some departments to a new location (with dedicated connectivity). How would we control which SEPM/management server is responsible for which site?
    c) We have NetBackup in our environment backing up other servers. I am planning to protect these four boxes (2 x SQL, 1 x SEPM, 1 x System Recovery management server) via NetBackup, or I could use System Recovery 2011 Server Edition on all four of them as well (licensing is not an issue, as we have the complete Symantec portfolio included in our license).
    d) Now, saving desktop backups: what strategies have you implemented? Any best-practice recommendations for a large user base? I was thinking either to mount a LUN from our Hitachi SAN on the Symantec Recovery server itself, or to back up to each user's hard drive locally and then copy it over to a network location? Suggestions welcome :-)

    If you have anything to add or correct, that will be really helpful before diving into the actual implementation phase. Will be most grateful for your suggestions, recommendations and corrections to the above. Many thanks!


  • Creating Routes using the second NIC in the box

    - by Aditya Sehgal
    OS: Linux. I need some advice on how to set up the routing table. I have a box with two physical NICs, eth0 and eth1, with two associated IPs, IP1 and IP2 (both in the same subnet). I need to set up a route which will force all messages from IP1 towards IP3 (in the same subnet) to go via IP2. I have a raw-socket capture program listening on IP2 (this is not for malicious use). I have set up the routing table as:

        Destination   Gateway   Genmask          Flags   Metric   Ref   Use   Iface
        IP3           IP2       255.255.255.255  UGH     0        0     0     eth1

    If I try to specify eth0 while adding the above rule, I get the error "SIOCADDRT: Network is unreachable". I understand from the manpage of route that if the gateway specified is a local interface, it will be used as the outgoing interface. After setting up this rule, if I do a traceroute (-i eth0), the packet goes first to the default gateway and then to IP3. How do I force a packet originating from eth0 towards IP3 to first go to IP2? I cannot make changes to the routing table of the gateway. Please suggest.
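
    Source-conditional routing like this is normally done with iproute2 policy routing rather than the legacy route table; a sketch with IP1/IP2/IP3 kept as placeholders (whether a host can usefully hairpin traffic through its own second address depends on the capture setup, so treat this as a starting point only):

        echo '100 sniff' >> /etc/iproute2/rt_tables        # register a custom routing table
        ip rule add to IP3/32 lookup sniff                 # traffic towards IP3 consults it
        ip route add IP3/32 dev eth1 src IP2 table sniff   # leave via eth1, sourced from IP2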


  • Configure php mail() on Windows/IIS

    - by Adam Tuttle
    I have a Windows Server 2003 / IIS web server running various application servers, and ended up begrudgingly adding PHP into the mix. I know Win/IIS isn't the ideal environment for PHP, but it's what I've got and I need to make it work. From phpinfo():

        Configuration File (php.ini) Path: C:\WINDOWS
        Loaded Configuration File: C:\php\php.ini

    From C:\php\php.ini:

        [mail function]
        ; For Win32 only.
        SMTP = localhost
        smtp_port = 25

        ; For Win32 only.
        ;sendmail_from = [email protected]

        ; For Unix only. You may supply arguments as well (default: "sendmail -t -i").
        ;sendmail_path =

        ; Force the addition of the specified parameters to be passed as extra parameters
        ; to the sendmail binary. These parameters will always replace the value of
        ; the 5th parameter to mail(), even in safe mode.
        ;mail.force_extra_parameters =

    Lastly, I have IIS set up to run an SMTP relay that allows connection and relay, but only from localhost. But when I try something that uses mail(), I get this error:

        The e-mail could not be sent. Possible reason: your host may have disabled the mail() function...

    Any ideas?
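
    To separate a PHP-side problem from an SMTP-side one, a minimal test sketch (the addresses are hypothetical; on Win32, mail() returning true means the message was handed off to the SMTP server named in php.ini):

        <?php
        // override the relevant php.ini values at runtime, just for this test
        ini_set('SMTP', 'localhost');
        ini_set('smtp_port', 25);
        ini_set('sendmail_from', '[email protected]');
        var_dump(mail('[email protected]', 'mail() test', 'Test body'));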


  • Help with Pure-FTPd

    - by hai
    I set up Pure-FTPd on FreeBSD behind a firewall. Pure-FTPd is configured for passive-mode FTP (port range 50400-50600), and the firewall opens ports 50400-50600 (both IN and OUT). But when I try to connect with an FTP client, it does not connect. Error notification:

        Status:   Connecting to 210.245.89.95:21...
        Status:   Connection established, waiting for welcome message...
        Response: 220---------- Welcome to Pure-FTPd [privsep] ----------
        Response: 220-You are user number 1 of 50 allowed.
        Response: 220-Local time is now 13:20. Server port: 21.
        Response: 220-IPv6 connections are also welcome on this server.
        Response: 220 You will be disconnected after 15 minutes of inactivity.
        Command:  USER bk
        Response: 331 User bk OK. Password required
        Command:  PASS
        Response: 230 OK. Current directory is /
        Command:  SYST
        Response: 215 UNIX Type: L8
        Command:  FEAT
        Response: 211-Extensions supported:
        Response:  EPRT
        Response:  IDLE
        Response:  MDTM
        Response:  SIZE
        Response:  REST STREAM
        Response:  MLST type;size*;sizd*;modify*;UNIX.mode*;UNIX.uid*;UNIX.gid*;unique*;
        Response:  MLSD
        Response:  ESTA
        Response:  PASV
        Response:  EPSV
        Response:  SPSV
        Response:  ESTP
        Response: 211 End.
        Status:   Connected
        Status:   Retrieving directory listing...
        Command:  PWD
        Response: 257 "/" is your current location
        Command:  TYPE I
        Response: 200 TYPE is now 8-bit binary
        Command:  PASV
        Response: 227 Entering Passive Mode (210,245,88,98,138,1)
        Command:  MLSD
        Error:    Connection timed out
        Error:    Failed to retrieve directory listing
        Status:   Connecting to 210.245.88.98:21...
        Status:   Connection established, waiting for welcome message...

    Help me.
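
    A reading of the trace, with a sketch of the usual fix: the PASV reply above decodes to IP 210.245.88.98, port 138*256+1 = 35329, i.e. a different address than the one the client dialled (210.245.89.95) and a port outside the opened 50400-50600 range. Pure-FTPd can be pinned to the intended values with its passive-mode flags (assuming the command-line form of the FreeBSD setup):

        # -p restricts the passive port range; -P forces the IP advertised in PASV replies
        pure-ftpd -p 50400:50600 -P 210.245.89.95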


  • How to re-add a RAID-10 failed drive on Ubuntu?

    - by thiesdiggity
    I have a problem that I can't seem to solve. We have a Ubuntu server setup with RAID-10 and two of the drives dropped out of the array. When I try to re-add them using the following command:

        mdadm --manage --re-add /dev/md2 /dev/sdc1

    I get the following error message:

        mdadm: Cannot open /dev/sdc1: Device or resource busy

    When I do a "cat /proc/mdstat" I get the following:

        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [r$
        md2 : active raid10 sdb1[0] sdd1[3]
              1953519872 blocks 64K chunks 2 near-copies [4/2] [U__U]

        md1 : active raid1 sda2[0] sdc2[1]
              468853696 blocks [2/2] [UU]

        md0 : active raid1 sda1[0] sdc1[1]
              19530688 blocks [2/2] [UU]

        unused devices: <none>

    When I run "/sbin/mdadm --detail /dev/md2" I get the following:

        /dev/md2:
                Version : 00.90
          Creation Time : Mon Sep  5 23:41:13 2011
             Raid Level : raid10
             Array Size : 1953519872 (1863.02 GiB 2000.40 GB)
          Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
           Raid Devices : 4
          Total Devices : 2
        Preferred Minor : 2
            Persistence : Superblock is persistent

            Update Time : Thu Oct 25 09:25:08 2012
                  State : active, degraded
         Active Devices : 2
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 0

                 Layout : near=2, far=1
             Chunk Size : 64K

                   UUID : c6d87d27:aeefcb2e:d4453e2e:0b7266cb
                 Events : 0.6688691

            Number   Major   Minor   RaidDevice State
               0       8       17        0      active sync   /dev/sdb1
               1       0        0        1      removed
               2       0        0        2      removed
               3       8       49        3      active sync   /dev/sdd1

    Output of df -h is:

        Filesystem                     Size  Used Avail Use% Mounted on
        /dev/md1                       441G  2.0G  416G   1% /
        none                            32G  236K   32G   1% /dev
        tmpfs                           32G     0   32G   0% /dev/shm
        none                            32G  112K   32G   1% /var/run
        none                            32G     0   32G   0% /var/lock
        none                            32G     0   32G   0% /lib/init/rw
        tmpfs                           64G  215M   63G   1% /mnt/vmware
        none                           441G  2.0G  416G   1% /var/lib/ureadahead/debugfs
        /dev/mapper/RAID10VG-RAID10LV  1.8T  139G  1.6T   8% /mnt/RAID10

    When I do a "fdisk -l" I can see all the drives needed for the RAID-10. The RAID-10 is part of the /dev/mapper, could that be the reason why the device is coming back as busy? Anyone have any suggestions on what I can try to get the drives back into the array? Any help would be greatly appreciated. Thanks!
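
    A diagnostic sketch (note that in the mdstat output above, /dev/sdc1 is listed as an active member of md0, which by itself would explain "Device or resource busy"; the partitions that actually belong in md2 may be different ones):

        cat /proc/mdstat            # see which array currently claims each partition
        mdadm --examine /dev/sdc1   # read the superblock: which array UUID does it carry?
        # Only if a partition is genuinely the missing md2 member, and free:
        # mdadm --manage /dev/md2 --add /dev/sdXN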


  • Automatically Organizing Windows Files and Folders

    - by Kiquenet
    For Windows only: organizing the eleventy-billion files you've got stuffed into folders on your hard drive is very hard. For example, I have one folder on my computer to which I save all web downloads, regardless of file type, size or purpose. Many of the files are only temporary downloads, for instance setup files of applications that I test, demonstration videos that I watch once, or documents that I want to read. Other files, on the other hand, are there to stay, and I used to move them out of the download folder manually. Other folders on my computer hold source code, tests, programs, tools... I need a way to organize billions of files. Which are the best tools for organizing, sorting, etc. your files and folders automatically?

        Digital Janitor - http://davidevitelaru.com/software/digital-janitor/
        Belvedere - http://lifehacker.com/5510961/how-to-automatically-clean-and-organize-your-desktop-downloads-and-other-folders
        Download Mover - http://www.neoteo.com/download-mover-reorganiza-tus-descargas-14188
        File/Folder Date Organizer - http://seedling.dcmembers.com/other/ffdorg.zip
        DropIt - http://www.lupopensuite.com/db/dropit.htm

    Other questions about organizing files, desktops, etc.:

        How to automate the process of organizing audio files on Windows
        Organizing My Windows Desktop
        What's a good way for organizing PDF documents on Windows?
        Folksonomy tagging for files
        What is your method of "folksonomy" tagging for files on your local machine?

    Read the article
