Search Results

Search found 11195 results on 448 pages for 'disconnected environment'.

  • Default Gateway solution on NAT'd network (best options)

    - by kwiksand
    I've recently changed a network from a bunch of machines exposed directly to the net to a more security-conscious, firewall-fronted network with a DMZ for public services. Everything is mostly working now, but I've hit the old NAT loopback problem: a machine inside the LAN wants to reach a public service via the public/external IP. I've solved this before in small/SOHO environments simply by using the NAT loopback feature of the router, or a simple iptables rule doing the same, but this time I want to make the most resilient choice with the least ongoing concern. It seems I can:

    - Use iptables, as I've said, to DNAT and MASQUERADE (rewriting source/destination) so the connection works correctly, e.g.
      iptables -A PREROUTING -t nat -d ip.of.eth0.here -p tcp --dport 8080 -j DNAT --to 192.168.0.201:8080
      iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -p tcp --dport 8080 -d 192.168.0.201 -j MASQUERADE
    - Use split DNS, with internal mappings for the public IPs.
    - Do some routing nastiness by setting the default gateway to a different externally exposed IP so traffic comes back in via the public route (messy).

    Someone also mentioned putting the default gateway within the DMZ (on Server Fault), but I can't find the post again. I'm sure this is a common issue for many NAT'd networks, but I've never seen a perfect cure-all for it. What is your opinion?
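
    For reference, the full hairpin-NAT variant I have in mind looks roughly like this; it is a sketch only, with placeholder addresses and an assumed external setup, not something I'm presenting as the final rule set:

        #!/bin/sh
        # Hairpin/loopback NAT sketch -- all addresses below are assumptions
        EXT_IP=203.0.113.10        # public IP that LAN clients will hit
        WEB=192.168.0.201          # internal server behind the firewall
        LAN=192.168.0.0/24

        # send LAN traffic aimed at the public IP back to the internal server
        iptables -t nat -A PREROUTING -s "$LAN" -d "$EXT_IP" -p tcp --dport 8080 \
            -j DNAT --to-destination "$WEB:8080"

        # rewrite the source so replies return via the firewall, not directly
        iptables -t nat -A POSTROUTING -s "$LAN" -d "$WEB" -p tcp --dport 8080 \
            -j MASQUERADE

        # and make sure hairpinned traffic is allowed through the FORWARD chain
        iptables -A FORWARD -s "$LAN" -d "$WEB" -p tcp --dport 8080 -j ACCEPT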

  • Windows 2003 under Hyper-V - can't send/receive ping

    - by glaucon
    I've installed Windows 2003 x64 R2 SP2 under Hyper-V (the Windows 8 Pro edition). I have a NIC configured, but I can't move any traffic over it. In particular, I can't send or receive pings.

    Scoreboard: There is a second VM running Ubuntu under the Windows 8 host which is able to send and receive pings from the host OS. When I ping from the Windows 2003 guest to the Windows 8 host I get 'Request timed out'. When I ping from the Windows 8 host to the Windows 2003 guest I get 'Reply from 192.168.10.107: Destination host unreachable'. There's no problem pinging from the Ubuntu guest to the Windows 8 host, and no problem pinging from the Windows 8 host to the Ubuntu guest.

    Environment: Integration Services are installed on Windows 2003. The Windows 2003 guest needs a static IP address of 192.168.10.15. The Windows 2003 ipconfig output and the host OS ipconfig output were posted as screenshots (not reproduced here).

    Event Logs: The only entry I can see in the event logs that (a) looks significant and (b) is not related to the lack of networking was also posted as a screenshot; I'm not sure whether it matters or not.

    Hyper-V and NICs: When the Windows 2003 guest was first booted it had no NIC. I subsequently added a 'Legacy Network Connector', which I couldn't get Windows 2003 to recognise; I then removed that and added a 'Standard Network Connector', and at least on the surface this works... only it doesn't. 'Virtual Network Type' is external. Although I've only mentioned ping, there's no other evidence of network activity either. 'Allow incoming echo request' is enabled on the Windows 2003 guest.

    Help! What else should I look at or do to resolve this problem?

    EDIT 1: I should have said that I turned off the firewall on the Windows 2003 server for a while and retested the pings; same result.

  • Need clear steps on how to convert a Windows 2000 Server to a XenServer VM

    - by Jay
    The source system is not local. The target host running XenServer is not local. The source system is running Windows 2000 Server SP4 and has one disk split into six NTFS partitions:

    C: 6 GB (boot)
    D: 15 GB
    E: 6 GB
    F: 6 GB
    G: 5 GB
    H: 26 GB

    Most of the partitions are mostly full (> 60%). What is the most straightforward way to do a P2V migration of the server? I can do minor database and data syncs after the P2V is successful and running as a VM within XenServer; it's just getting to that point which is not clear. The option of installing a Windows 2000 Server from scratch is not available: I need to convert the existing physical server as-is into a VM to be hosted within a XenServer environment. I've looked at XenConvert, but it maxes out at converting only 4 partitions in one shot, and I'm not certain how to account for the 2 extra partitions. I'm not familiar with XenServer, but it's my only option right now to go P2V.

  • Body of email breaks distribution list in Exchange?

    - by widgisoft
    I have a very odd problem that I'm not sure is a programming issue or a server issue. Basically, I'm sending an email to an Exchange distribution list that includes a PHP stack trace; during certain faults the trace includes really high-level information such as the machine's environment variables (during file reads, etc.). I went through a copy of the email line by line until the email sent, and it appears the line

    [SUDO_COMMAND] => /etc/init.d/httpd restart

    is the culprit. Adding a string replacement before the email is sent allows a successful send. What I don't understand is WHY this stream of characters causes the issue ONLY on the distribution email. If I send the email to myself as well, i.e. "[email protected]; [email protected]", then I get the email fine. Re-ordering the list doesn't make a difference; the group never gets the email. Because the individual gets the email and not the group, I'm assuming the fault is with Exchange and some rogue filtering. I've gone through it with the sysadmins and there's no filtering of any sort on that group... so maybe it's a bug? I can't find anyone else having recorded this specific fault, so I figured I'd open it here. For now I'm just not using the distribution list, but it'd be nice to eventually find the solution. Many thanks, Chris

  • Security updates in CentOS - which way is it?

    - by user119720
    Recently something has been bothering me about my Linux CentOS box. My client has asked me to set up a CentOS machine in their environment to work as a server. One of their requirements is that the setup be as secure as possible. Most of this has been covered, except for security updates inside CentOS. So my questions are as follows:

    1. How do I apply the latest security patches and bug fixes in CentOS? While doing some research, I was told that we can update the security of CentOS by running yum install yum-security, but after installing this plugin there seems to be no output from this method. It's as if the command is not working any more.

    2. Can I apply the security patches through RPM packages? I couldn't find any site from which I can download the security patches, enhancements or bug fixes for CentOS, but I know that CentOS has been releasing these updates through their CentOS announcements list. It's just that there is a lack of documentation on how to apply these updates to my CentOS installation. For now the only way that I know of is to run yum update.

    I am hoping that someone can help me clarify these matters. Thanks.
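
    For what it's worth, the workflow I've been attempting looks roughly like this; treat it as a sketch, since the plugin package name varies by release and I haven't confirmed these commands behave as expected on CentOS:

        # install the security plugin (yum-security on CentOS 5,
        # yum-plugin-security on CentOS 6)
        yum install yum-plugin-security

        # list security advisories and apply only security-flagged updates
        yum list-security
        yum --security update

        # note: the stock CentOS repositories do not normally publish the
        # updateinfo metadata these commands rely on, so they may report
        # nothing to do; a plain update still pulls in all rebuilt errata
        yum update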

  • Problems with image/file upload in MediaWiki on Windows 2008 Server R2, using wrong temp directory

    - by Lasse V. Karlsen
    I have installed MediaWiki 1.15.2 under IIS as per the MediaWiki installation instructions for Windows 2008 Server. I have configured PHP to use a specific temp directory: upload_tmp_dir="C:\php\uploadtemp", and I have specified that MediaWiki is allowed to upload: $wgEnableUploads = true;

    But when I try to upload an image, I get this error message in my browser: Internal error - Could not find file "C:\Windows\Temp\php1AEA.tmp". Retrying simply gives me a new filename, but in the same location. The directory does not contain any php* files, but since they're "temporary" they might be gone in a flash before Windows Explorer is able to show them, so that might be a red herring.

    I've googled for this, and the most promising lead I found was the page "Image upload problem - Is this bug fixed?", but since the text says "a bugfix was posted on the bug-report page" without linking to the bug page this relates to (PHP or MediaWiki) or to the actual bug report, I haven't conclusively found the bug report in question, so that didn't help me much.

    Lots of pages indicate that this is a permission issue, so I tried setting permissions on C:\Windows\Temp to Modify for Everyone; still no dice. I tried changing the two system environment variables TEMP and TMP to point to C:\Temp instead, but MediaWiki still complains about not finding the file in C:\Windows\Temp. Note that I don't care much about where the files are actually stored temporarily, so C:\Windows\Temp is fine by me. I do, however, care about them actually being uploaded correctly.

    Does anyone know of a fix, or have any leads I can follow? The server is running Windows 2008 Server R2 with all patches installed, and the PHP installed is 5.3.2, using IIS FastCGI.

  • Windows clients unable to access Samba share on AD joined Linux box every 7 days

    - by Hassle2
    The problem: Every 7 days, two Windows servers are unable to access an SMB/CIFS share. It starts working again after a handful of hours.

    The environment:

    - OpenFiler Linux box joined to a 2003 AD domain
    - A foreground app on a Win2003 server accesses the SMB/CIFS share with Windows credentials
    - Another process on Win2008 accesses the share via SQL Server with Windows credentials
    - The Samba version on the Linux box is 3.4.5; security is set to ADS
    - wbinfo and getent return the expected users and groups
    - It does not look like a double-hop issue, as it is always these two accounts, regardless of the calling user
    - There is a DNS entry for the Linux box in both the forward and reverse lookup zones
    - The Linux box's computer object in Active Directory shows that it was modified around/at the same time that the two clients started failing to access the share
    - Accessing the share via IP works when access by name does not
    - Rebooting the Windows server takes care of it (it's production, so I've only restarted it once)
    - Restarting smbd, winbind and nmbd had no effect

    Error in the Samba log for the client in question:

    smbd/sesssetup.c:342(reply_spnego_kerberos) Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE!

    The question: Does this look like the machine account password is changing (hence the AD object showing the updated modified date), or are the two Windows clients unable to request a new ticket that works against this Linux box?
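
    If it is the machine account password rolling over, I assume the next diagnostic steps would be along these lines; these are standard Samba/winbind tools, and the smb.conf change is included only as a hypothesis about the 7-day cycle, not a confirmed fix:

        # verify the machine trust secret and the AD join are currently valid
        wbinfo -t
        net ads testjoin

        # force a machine-account password change and see whether access breaks
        net ads changetrustpw

        # the default interval for automatic machine password changes is
        # 604800 seconds (7 days); it can be disabled while investigating:
        #   [global]
        #   machine password timeout = 0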

  • How can I have puppet deploy ssh keys for virtual users?

    - by Pheezy
    I am trying to get Puppet to assign authorized SSH keys for virtual users, but I keep getting the following error:

    err: Could not retrieve catalog: Could not parse for environment production: Syntax error at 'user'; expected '}' at /etc/puppet/modules/users/manifests/ssh_authorized_keys.pp:9

    I believe my configuration is correct (listed below), but is there a syntax error or scoping issue I am missing? I would simply like to assign users to nodes and have those users automagically get their SSH keys installed. Is there maybe a better way to do this and I'm just overthinking it?

    # /etc/puppet/modules/users/virtual.pp
    class user::virtual {
      @user { "user":
        home       => "/home/user",
        ensure     => "present",
        groups     => ["root","wheel"],
        uid        => "8001",
        password   => "SCRAMBLED",
        comment    => "User",
        shell      => "/bin/bash",
        managehome => "true",
      }

    # /etc/puppet/modules/users/manifests/ssh_authorized_keys.pp
    ssh_authorized_key { "user":
      ensure => "present",
      type   => "ssh-dss",
      key    => "AAAAB....",
      user   => "user",
    }

    # /etc/puppet/modules/users/init.pp
    import "users.pp"
    import "ssh_authorized_keys.pp"
    class user::ops inherits user::virtual {
      realize(
        User["user"],
      )
    }

    # /etc/puppet/manifests/modules.pp
    import "sudo"
    import "users"

    # /etc/puppet/manifests/nodes.pp
    node basenode {
      include sudo
    }
    node 'testbox' inherits basenode {
      include user::ops
    }

    # /etc/puppet/manifests/site.pp
    import "modules"
    import "nodes"
    # The filebucket option allows for file backups to the server
    filebucket { main: server => 'puppet' }
    # Set global defaults - including backing up all files to the main filebucket and adds a global path
    File { backup => main }
    Exec { path => "/usr/bin:/usr/sbin/:/bin:/sbin" }

  • Pitfalls to using Gluster as a home/profile directory server?

    - by Bart Silverstrim
    I was asking recently about options for divvying up access to file servers, as we have a NAS solution that gets fairly bogged down when our users (with giant profiles, especially) all log in nearly simultaneously. I ran across Gluster, and it looks like it can cluster different physical storage media into a single virtual volume and share it out like a virtual NAS from the client perspective, and it supports CIFS.

    My question is whether something like this would be feasible to use for home and profile directories in an Active Directory environment. I was worried about ACLs, primarily, as I didn't think CIFS was fine-grained enough to support NTFS permissions, and it didn't look like Gluster exports those permission levels, just the base permissions for basic file sharing.

    I got the impression that using Gluster would allow data to be redundant across multiple servers and would speed up access to the files under heavy load, while allowing us to dynamically boost storage capacity by just adding another server and telling Gluster's master node to add that server. Maybe I'm wrong in my understanding of it, though. Does anyone else use it, or care to share how feasible this is?
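
    My mental model of the "just add another server" part is roughly the following; the host and brick names are made up, so treat it as a sketch of my understanding rather than tested commands:

        # create a two-way replicated volume across two storage servers
        gluster volume create homes replica 2 gfs1:/export/homes gfs2:/export/homes
        gluster volume start homes

        # later, grow capacity by adding another replicated brick pair
        gluster volume add-brick homes gfs3:/export/homes gfs4:/export/homes
        gluster volume rebalance homes start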

  • LSI 9260-8i w/ 6 256gb SSDs - RAID 5, 6, 10, or bad idea overall?

    - by Michael Pearson
    We're provisioning a new production server for our reasonably busy website. Our choice of host has available a six-drive configuration with an LSI 9260-8i card. My initial thought was to fill all six bays with SSDs (Intel 520, 256 GB) and set them up in RAID. Good, bad, or terrible idea? Can the card handle it? Should we be using RAID 5, 6 or 10?

    This would be the first time the provider has filled all six slots in this rackmount with SSDs, so they're a bit hesitant. I'm wondering if somebody else with this card has done something similar in a production environment. We do about 43 GB of writes per day and currently use about 300 GB of storage. The server acts as webserver, database, and image store for approximately 1 million files. The plan is to underprovision the SSDs by approximately 10% to 20% to increase their overall lifespan and performance.

    The fallback option is 2x 480 GB SSDs in RAID 1 and another 2x 1 TB HDDs in RAID 1. The motivation behind this is that the server rental cost difference between 2 SSDs and 6 SSDs is minimal compared to the overall cost of the rental. We do not have any special high-IOPS requirements. However, if the configuration is known to work, I don't see a good reason not to use it and not have to worry about having separate 'fast and small' and 'slow and large' disks.

  • SSL in IIS 7 on a subdomain in a web farm

    - by justjoshingyou
    I have been having one of the most frustrating days in my entire IT career. I am trying to install an SSL certificate on a subdomain in a web farm. http://shop.mydomain.com needs to ALWAYS be forced to https://shop.mydomain.com.

    I have a temporary cert issued by VeriSign for shop.mydomain.com, and I have installed the cert on the server. The website for shop.mydomain.com is set up as a host header in IIS, with the DNS entry pointed to the same IP as mydomain.com, which is our load balancer. I actually have two load balancers (as needed by our ISP): one redirects all traffic on port 80 out to the different servers on port 80, the other pushes out port 443 to the servers on port 443. shop.mydomain.com is to be the only site protected by SSL at this time.

    When I add the binding and navigate to https://shop.mydomain.com, it pops up with a warning about the cert being invalid (assumed to be because this is a test cert) and then sends the user to http. So I checked the box "Require SSL", and it redirects to http://shop.mydomain.com/default.aspx and displays an ASP.NET 404 error message (not the IIS 404 error). I tried removing the binding on the site to port 80 as well, with no luck.

    I am nearly ready to crawl under my desk into the fetal position. How on earth do I make this work? I can't even get it to work on one machine, let alone in the load balanced environment.

  • Windows 2008 R2 AWS CloudFormation Elastic beanstalk configuration

    - by Webmonger
    I'm looking for some configuration advice. I need a load-balanced Windows environment with media shared across all instances that host the app. The best explanation I can give is that there will be multiple Windows 2008 servers with IIS hosting the app, going through an ELB to load balance.

    Users must be able to upload content (images, video, etc.) to the site being hosted. When a user uploads media, it needs to be kept in a shared location so all Windows IIS instances can access the files. I can't host the files on S3 because of the app architecture, so they need to be in a place where all IIS servers have access. In addition, I need to run an update on each IIS server instance that refreshes a local memory cache when SQL data is updated.

    I was thinking of a configuration like this:

    [ELB] - [Win 2008 IIS (multiple servers)] - [Win 2008 File & SQL Server (possibly RDS?)]

    Does this configuration make sense? If not, could you give me an idea of how I should configure it? Thanks in advance.

  • Current wisdom on SQL Server and Hyperthreading?

    - by BradC
    Lots of articles out there (see Slava Oks's original SQL 2000 article and Kevin Kline's SQL 2005 update) recommend disabling hyperthreading on SQL servers, or at least testing your specific workload before enabling it on your servers. This issue is gradually becoming less relevant as true multi-core processors replace hyperthreaded ones, but what's the current wisdom on this issue? Does this advice change any with SQL 2005 64-bit, or SQL 2008, or Windows Server 2008? Ideally, this should be tested in advance in a staging environment, but what about for servers that have already made it into production with HT enabled? How can I tell if performance issues we're experiencing might be related to HT? Is there some specific combination of perfmon counters that might point me in that direction, as opposed to all the other things I normally pursue when working on improving SQL performance? Edit: This is especially attractive because of the potential for an across the board improvement for some of my high-cpu servers, but the client is going to want to see something concrete that helps me identify which servers really could benefit from disabling hyperthreading. Of course, conventional performance troubleshooting is ongoing, but sometimes any little bit helps.

  • Are applications optimized for XenApp?

    - by damitamit
    Our IT dept. are about to deploy virtualized apps using Citrix XenApp. One of these apps will be Dynamics AX 4.0 SP2, an ERP client (which I develop on). They have supposedly reached a roadblock because an external 'Dynamics AX consultant' has told our IT dept. that Dynamics AX 4 will not work optimally on Citrix and will run very slowly, because it is not optimized for Citrix. We have it running in a test environment now and it seems OK. They have been told that the only 'solution' is to upgrade to Dynamics AX 2009, where this problem has supposedly been fixed (not a small task for my team!).

    When I heard about this, I was quite surprised. From my brief knowledge of Citrix, I thought it would be application independent. How does the Citrix app virtualization work, such that a particular app would work better than others on Citrix? Would the speed of the virtualized app not just depend on the resources/network connection the Citrix server has?

    FYI, Dynamics AX is a 3-tier client/server system, so the client accesses an AOS application server, which then accesses the database. Please enlighten me :)

  • Windows 7 network performance tuning for LAN

    - by Hubert Kario
    I want to tune the Windows 7 TCP stack for speed in a LAN environment. A bit of background info: I've got a Citrix XenServer set up with Windows 2008 R2, Windows 7 and Debian Lenny with the Citrix kernel, and the Windows machines have Tools installed. The iperf server process is running on a different host, also Debian Lenny. The servers are otherwise idle, and tests were repeated a few times to confirm the results.

    While testing with iperf, 2008 R2 can achieve around 600-700 Mbps with no tuning whatsoever, but I can't find any guide or set of parameters that will make Windows 7 achieve anything over 150 Mbps with no change in TCP window size (using the -w parameter to iperf). I tried setting netsh autotuning to disabled, experimental, normal and highlyrestricted - no change. Changing the congestionprovider doesn't do anything, and neither do rss and chimney. Setting all the available settings to the same values as on the Windows 2008 R2 host doesn't help.

    To summarize:

    - Windows 2008 R2, default settings: 600-700 Mbps
    - Debian, default settings: 600 Mbps
    - Windows 7, default settings: 120 Mbps
    - Windows 7, default settings with iperf -w 65536: 400-500 Mbps

    While I blame the missing 400 Mbps of performance on the crappy Realtek NIC in the XenServer host (I can do ~980 Mbps from my laptop to the iperf server), that doesn't explain why Windows 7 can't achieve good performance without manually tuning the window size at the application level. So, how do I tune Windows 7?

  • Fedora: Apache/nginx with Pylons

    - by microchasm
    I'm trying to wrap my head around Pylons and how it works. So far it's been confusing. I'm using EC2 with Fedora 8. Everything is working so far; i.e., I have Pylons/Python et al. installed, and after creating a test app and running paster serve I can access the default page via my domain name.

    As the Pylons docs explain, and as I understand it, the built-in paster serve server is not suited for a production environment. What I am not clear on is what to do next. It seems like nginx is a good option, but I am more familiar with Apache (like .0002%). I plan on having virtual hosts (which nginx says it can accommodate). However, I am totally unclear on how the big picture is supposed to work:

    - In order to serve an app, does paster serve need to be running? Does nginx/apache then basically just act as a proxy to shuttle connections to the paster server? How do I start it so it doesn't terminate after I close the SSH connection?
    - If running multiple apps, what do I set as the host/port in development.ini to differentiate the apps? Or, if this is not the right way, how do I differentiate between apps?
    - I am more familiar with MySQL, but willing to negotiate PostgreSQL if it's a better fit. Is it?
    - Is virtualenv a prerequisite to running multiple apps on the same machine?

    Thanks in advance for any tips.
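
    My current guess at how the pieces fit together is something like the following; the paster options come from paster serve's help as I understand it, and the ini file name and port are assumptions rather than anything I've verified in production:

        # run the Pylons app as a long-lived daemon instead of tying it to the
        # ssh session (one ini/port per app)
        paster serve --daemon --pid-file=myapp.pid --log-file=myapp.log production.ini

        # stop it again later
        paster serve --stop-daemon --pid-file=myapp.pid production.ini

        # nginx/apache would then just proxy to the host/port from the ini's
        # [server:main] section, e.g. proxy_pass http://127.0.0.1:5000;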

  • Backup strategy for developer-focused Apple environments?

    - by ewwhite
    It's interesting to see the technological split between structured corporate environments and more developer-driven/startup environments. Some of the Microsoft technologies I take for granted (VSS, Folder Redirection, etc.) simply are not available when managing the increasing number of Apple laptops I see in DevOps shops. I'm interested in centralized and automated backup strategies for a group of 30-40 Apple laptops... How is this typically done safely and securely, assuming these are company-owned machines (versus BYOD)? While Apple has Time Machine, it's geared toward individual computer backups and doesn't seem to work reliably in a group setting. Another issue with these workstations is the presence of Vagrant/Virtual Box VMs on the developers' systems. Time Machine and virtual machines typically don't work well unless the VMs are excluded from the backup set. I'd like a push-based backup process with some flexible scheduling options. I know how to handle the backend storage, but I'm not sure on what needs to be presented to the client systems. Due to the nature of the data here, cloud-based backup may not be a viable option. Any suggestions about how you handle this in your environment would be appreciated. Edit: The virtual machine backups are no longer important. They can be excluded from the process and planning.
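
    One shape I've been sketching for the push side is a per-laptop rsync job over SSH with the VM directories excluded; the hostnames, paths and exclusions below are placeholders, not a tested design:

        #!/bin/sh
        # push this machine's home directory to a central backup host,
        # skipping the VirtualBox/Vagrant images
        rsync -az --delete \
            --exclude 'VirtualBox VMs/' \
            --exclude '.vagrant.d/boxes/' \
            "$HOME/" backup@backuphost.example.com:"backups/$(hostname -s)/"
        # schedule via launchd or cron on each laptop for flexible timing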

  • How to manually start and re-start Apache with mod_wsgi powering a password protected Python WSGI app?

    - by Mahmoud Abdelkader
    I'm working on a project where I have to meet regulatory requirements that require at least 3 out of 5 authorized users, using pre-assigned passwords, to start a backend web service that handles very sensitive information. Right now, the prototype has been approved and is running on Python's wsgiref.simple_server(), which I have programmed to prompt for the passwords manually. Now that the prototype has been approved, I have to migrate the web application to a production environment, where it will need to run behind Apache and mod_wsgi. I have two questions:

    1. Right now, I use a thin Python wrapper around expect to allow for remote password entry programmatically. How do I get Apache to prompt me for a password before starting? Will this have to be in the app.wsgi script that's executed by mod_wsgi? How would that work, since Apache daemonizes and thus has no stdin?
    2. Will I have to worry about some type of code reload? Apache probably has some maximum number of requests before it kills and restarts another worker process, but would this require a password prompt as well?

  • High latency issue for web service call from amazon aws ec2 to local server

    - by SibzTer
    We have a legacy web application running on premises in our data center in Houston. We have developed a new .NET 4-based web application to provide new features to customers. The new web application is hosted in the Amazon AWS EC2 environment (N. Virginia region, us-east-1b zone). In order to integrate seamlessly with the legacy application, the new web application makes web service calls to retrieve data.

    We are seeing unusually high latency, on the order of 5+ seconds, for these web service calls. The exact same web service call returns in less than a second on our local PCs (which makes sense given the physical proximity to the actual server). The weird part is that we have developers in California who also see the same millisecond response times. We are testing the web service response using third-party tools such as SoapUI and Google Chrome extensions such as Advanced REST Client, Postman REST Client, etc.

    As if this wasn't weird enough, we have noticed the same low latency from certain other EC2 instances in the same region and availability zone while testing. If we experienced the high latency consistently from all the EC2 instances I could understand it, but there is something else going on. Comparing the various stats and results between the low-latency and high-latency EC2 servers does not show any significant differences: ping (constant 40 ms), tracert, winmtr, etc. We have instances in the VPC as well, so I tried both the public and private IP address of the web service host server, and that didn't make a difference to the above results either.

    We need to resolve this latency issue, as it is causing the resulting web pages to load very slowly (almost 15+ seconds, which is simply unacceptable). The EC2 instances are running Windows Server Datacenter 64-bit. Let me know if there is any other info I can provide to help diagnose this.
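
    To quantify where the time goes, I've been thinking of comparing a slow instance against a fast one with something along these lines (the URL is a placeholder for the actual web service endpoint):

        # break the request time into DNS, connect, first byte and total
        curl -o /dev/null -s -w 'dns:%{time_namelookup} connect:%{time_connect} ttfb:%{time_starttransfer} total:%{time_total}\n' \
            http://legacy.example.com/service.asmx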

  • Completely remove Postgres on Mac OSX Lion

    - by Nai
    I'm trying to get PostGIS running on my machine. Running brew install postgis seems to have installed Postgres 9.2.1 onto my machine. I would like to remove my previous version, 9.1.2, to keep my environment clean, but running brew uninstall postgres removes 9.2.1. What's the best way to do this?

    UPDATE

    nai@nyc ~ $ brew versions postgresql
    9.2.1    git checkout ed92469 /usr/local/Library/Formula/postgresql.rb
    9.2.0    git checkout 2f6cbc6 /usr/local/Library/Formula/postgresql.rb
    9.1.5    git checkout 6b8d25f /usr/local/Library/Formula/postgresql.rb
    9.1.4    git checkout c40c7bf /usr/local/Library/Formula/postgresql.rb
    9.1.3    git checkout 05c7954 /usr/local/Library/Formula/postgresql.rb
    9.1.2    git checkout dfcc838 /usr/local/Library/Formula/postgresql.rb
    9.1.1    git checkout 4ef8fb0 /usr/local/Library/Formula/postgresql.rb
    9.0.4    git checkout 2accac4 /usr/local/Library/Formula/postgresql.rb
    9.0.3    git checkout b782d9d /usr/local/Library/Formula/postgresql.rb
    9.0.2    git checkout 2c3b88a /usr/local/Library/Formula/postgresql.rb
    9.0.1    git checkout b7fab6c /usr/local/Library/Formula/postgresql.rb
    9.0.0    git checkout 1168d8f /usr/local/Library/Formula/postgresql.rb
    8.4.4    git checkout c32bea0 /usr/local/Library/Formula/postgresql.rb
    8.4.3    git checkout 237d1c5 /usr/local/Library/Formula/postgresql.rb
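
    My current guess is something along these lines, though I'd appreciate confirmation before running it; the data-directory path at the end is an assumption about the default Homebrew layout:

        # show which versions Homebrew has installed side by side
        brew list --versions postgresql

        # remove everything except the currently linked version (9.2.1 here)
        brew cleanup postgresql

        # brew does not touch the old cluster's data directory; if the 9.1
        # data is genuinely no longer needed it has to be removed by hand
        # rm -rf /usr/local/var/postgres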

  • SSL setup: UCC or wildcard certificates?

    - by quanza
    I've scoured the web for a clear and concise answer to my SSL question, but to no avail. So here goes: I have a web service requiring SSL support for its authentication pages. The root-level domain does not have the "www" - i.e., secure://domain.com - but localized pages use "language-code.domain.com", i.e. secure://ja.domain.com. So I need at least a wildcard SSL certificate that supports secure://*.domain.com.

    However, we also have a public sandbox environment at sandbox.domain.com, which we also need to support under localized domains - so secure://ja.sandbox.domain.com needs to work as well. The previous admin managed to purchase a wildcard SSL certificate for *.domain.com, but with a Subject Alternative Name for "domain.com". So I'm thinking of trying to get a wildcard certificate with SANs defined as "domain.com" and "*.*.domain.com". But now I'm getting confused, because there seem to be separate SAN certificates, also called UCC certificates.

    Can someone clarify whether it's possible to get a wildcard certificate with additional SAN fields, and ultimately what the best way is to support:

    - secure://domain.com
    - secure://*.domain.com
    - secure://*.*.domain.com

    with the fewest (and cheapest!) number of SSL certificates? Thanks!

  • Deploying workstations - best practices?

    - by V. Romanov
    I've been researching the subject of workstation deployment for a while, and have found a ton of info and dozens of different methods and tools, but no "best practice" method that doesn't lack at least one feature I consider required for the solution to be perfect. I'm currently interested in Windows workstation deployment, but if the tools can be extended to Linux, that's added value. I want the deployment tools I use to be able to do the following:

    - Hardware independent - I want my image or installation to have a minimum of hardware and driver dependencies, so that I can easily use a single image/package for all workstations.
    - Easily updatable - I want to be able to update my image as easily as possible, without redeploying/rebuilding/reimaging all configurations.
    - PXE-bootable deployment - I want the tools to be bootable off the network, so that I don't need a boot CD/DOK.
    - Scriptable for minimum human input - Ideally, the tool should run automatically after being booted and perform a "default" deployment (including partitioning) unless prompted otherwise, i.e. take a PC, hook it up, power it on, PXE boot and forget about it until the OS is deployed.

    I have found no single product or environment that does all this. The closest I came is Windows Deployment Services and the WIM image format. I also checked out numerous imaging and deployment tools including Clonezilla, Ghost, g4u, WPKG and others, but most of them lack the hardware independence and updatability features. We currently have a Symantec Ghost server setup that does imaging over the network, but I'm not satisfied with it, as it has all the drawbacks listed above.

    Do you have suggestions on how to optimize the process of workstation deployment? How do you deploy workstations in your organization? Thanks! Vadim.

  • Object Not found - Apache Rewrite issue

    - by Chris J. Lee
    I'm pretty new to setting up Apache locally with XAMPP. I'm trying to develop locally with XAMPP for Linux 1.7.4 on Ubuntu 11.04 for a Drupal site. I've git-pulled an exact copy of this Drupal site from another testing server hosted at MediaTemple.

    Issue: I visit my local development environment virtual host (http://bbk.loc) and the front page renders correctly, with no errors from Drupal or Apache. The issue is that subsequent pages return an "Object not found" error from Apache. What is more bizarre is that when I add various query strings, the pages are found (like http://bbk.loc?p=user).

    VHost file:

    NameVirtualHost bbk.loc:*
    <Directory "/home/chris/workspace/bbk/html">
      Options Indexes Includes execCGI
      AllowOverride None
      Order Allow,Deny
      Allow From All
    </Directory>
    <VirtualHost bbk.loc>
      DocumentRoot /home/chris/workspace/bbk/html
      ServerName bbk.loc
      ErrorLog logs/bbk.error
    </VirtualHost>

    bbk.error log file:

    [Mon Jun 27 10:08:58 2011] [error] [client 127.0.0.1] File does not exist: /home/chris/workspace/bbk/html/node, referer: http://bbk.loc/
    [Mon Jun 27 10:21:48 2011] [error] [client 127.0.0.1] File does not exist: /home/chris/workspace/bbk/html/sites/all/themes/bbk/logo.png, referer: http://bbk.$
    [Mon Jun 27 10:21:51 2011] [error] [client 127.0.0.1] File does not exist: /home/chris/workspace/bbk/html/node, referer: http://bbk.loc/

    Actions I've taken:

    - Moved the Rewrite module loading to load before the cache module (http://drupal.org/node/43545)
    - Verified mod_rewrite works with the .htaccess file

    Any ideas why mod_rewrite might not be working?
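
    One thing I'm now wondering about, offered only as a guess on my part: the vhost above has AllowOverride None, which I believe prevents Drupal's .htaccess rewrite rules from being read at all, and that would explain why clean URLs 404 while query-string URLs work. Something like the following is what I plan to try (the file path and the lampp control script location are assumptions based on the default XAMPP for Linux layout):

        # edit whichever file holds the <Directory> block above and change
        #   AllowOverride None   ->   AllowOverride All
        sudo nano /opt/lampp/etc/extra/httpd-vhosts.conf

        # then restart XAMPP's Apache so the change takes effect
        sudo /opt/lampp/lampp restart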

  • Missing Data on VMWare Virtual Disk

    - by Lachlan McDonald
    Evening all. I've got a considerable problem I'm hoping to get some resolution on. I had two VMware 6.5 virtual machines, one running Ubuntu 9.10 and the other Ubuntu 10.04. I used 9.10 as a testing server, so I could install a LAMP environment to prepare some code. Over the months I took a number of snapshots of this VM just in case something went wrong, and did a full copy of the entire VM a month ago. I created the 10.04 VM when Lucid Lynx launched so I could continue development on a fresh install.

    To get the files over, I simply added the 9.10 virtual disk to the 10.04 VM, grabbed some of the files I needed, and dismounted it. Unknown to me at the time, the changes to the 9.10 virtual disk meant that I could no longer boot it with the 9.10 VM; I'd always get the "The parent virtual disk has been modified since the child was created." error.

    I decided this was a good time to back up all the critical files, but now whenever I open the 9.10 disk to get the data, it isn't in the same state as it was earlier. My question is: is it possible that when I'm mounting the virtual disk I'm not seeing the most recent snapshot, or, in my blundering, have I lost the data on the virtual disk? Cheers

  • cfengine3 file_copy only on source side change

    - by megamic
    I am using the 'digest' copy method for all file-copy promises; because of the way we package and deploy software, I can't rely on mtime as the criterion for updating files. For various reasons, I am not employing the client-server approach with a central configuration server: rather, we package and deploy our entire configuration module to each server, so from cfengine's perspective the source and target are both local on the server where it runs.

    The problem I am having with this approach is that the source will always update the target when they differ - which is what I want most of the time, usually because the source has been updated. However, like many other cfengine users, we run an operational environment where emergency fixes occasionally have to be applied immediately, meaning we don't have time to rebuild and redeploy a configuration module, and the fix will often be applied by deploying a tarball with specific changes. Of course this is problematic if cfengine comes along five minutes later and reverts the changes.

    What we would like is to be able to make small, incremental changes to our servers without them being reverted, until the next deployment cycle, at which time the new source files would be copied. We do not consider random file corruption or mistaken changes to involve enough risk to warrant having cfengine constantly revert deployments to their source copy - the ability to deploy emergency fixes and have them stay that way until the next deployment would be of much greater value and utility.

    So, after all that, my question is this: is cfengine capable of detecting whether it was the source or the target that changed when the files differ, and if so, is there a way to use the 'digest' copy method but act only when the source side changed? I am very open to other ideas and approaches as well, as I am still quite new to this whole configuration management thing.
