Search Results

Search found 13411 results on 537 pages for 'proxy servers'.


  • How to write re-usable puppet definitions?

    - by Oliver Probst
    I'd like to write a puppet manifest to install and configure an application on target servers. Parts of this manifest shall be re-usable, so I used define for my re-usable functionality. Doing so, I always have the problem that there are parts of the definition which are not re-usable. A simple example is a bunch of configuration files to be created. These files must be placed in the same directory, and this directory must be created only once. Example:

    nodes.pp

        node 'myNode.in.a.domain' {
          mymodule::addconfig {'configfile1.xml':
            param => 'somevalue',
          }
          mymodule::addconfig {'configfile2.xml':
            param => 'someothervalue',
          }
        }

    mymodule.pp

        define mymodule::addconfig ($param) {
          $config_dir = "/the/directory/"

          #ensure that directory exists:
          file { $config_dir:
            ensure => directory,
          }

          #create the configuration file:
          file { $name:
            path    => "${config_dir}/${name}",
            content => template('a_template.erb'),
            require => File[$config_dir],
          }
        }

    This example will fail, because the resource file { $config_dir: is now defined twice. As far as I understood, it is necessary to extract these parts into a class. Then it looks like this:

    nodes.pp

        node 'myNode.in.a.domain' {
          class { 'mymodule::createConfigurationDirectory': }

          mymodule::addconfig {'configfile1.xml':
            param   => 'somevalue',
            require => Class['mymodule::createConfigurationDirectory'],
          }
          mymodule::addconfig {'configfile2.xml':
            param   => 'someothervalue',
            require => Class['mymodule::createConfigurationDirectory'],
          }
        }

    But this makes my interface hard to use. Every user of my module has to know that there is a class which is additionally required. For this simple use case the additional class might be acceptable, but with growing module complexity (lots of definitions) I'm a bit afraid of confusing the module's users. So I'd like to know: is there a better way to handle these dependencies? Ideally, classes like createConfigurationDirectory are hidden from the user of the module's API. Or are there some other "best practices"/patterns for handling such dependencies?

    Read the article

  • Multiple VM environment for developing/testing

    - by Hippo
    I was asked to create a setup for automated deployment, configuration and installation/updates of websites. A bunch of small websites will be bundled on one server; if more websites come up, a new server will be created... I decided to use Chef for this task. All servers will be running Ubuntu at the same version and configuration. The actual question: everything needs to be tested properly before starting live deployment, so what is the best virtualisation tool to run multiple (5 - 10) virtual machines on an Ubuntu laptop? Requirements: easy setup, fast (clone/snapshot of VMs), all VMs should be easily connected to the internet and should be able to communicate with each other (open source / free would be great). So far I looked into: VirtualBox (more for desktop virtualisation; cloning not possible, every new machine needs to be installed) and VMware Player. Any suggestions? If there are any questions about what I am doing, please comment on this question and I will answer as soon as possible. This question is not about the actual set up, it is about a nice working environment.
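
    For what it's worth, recent VirtualBox releases do expose cloning and snapshots from the command line, which may cover the "fast clone/snapshot" requirement; a minimal sketch (the VM names are placeholders):

        # take a baseline snapshot of a prepared template VM
        VBoxManage snapshot "ubuntu-template" take "baseline"

        # create a clone from that snapshot and register it
        VBoxManage clonevm "ubuntu-template" --snapshot "baseline" \
          --name "web01-test" --register

        VBoxManage startvm "web01-test" --type headless

    Whether this is fast enough for 5-10 VMs on a laptop is a separate question, but it may be worth testing before ruling VirtualBox out.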

    Read the article

  • F5 Networks iRule/Tcl - Escaping UNICODE 6-character escape sequences so they are processed as and r

    - by openid.malcolmgin.com
    We are trying to get an F5 BIG-IP LTM iRule working properly with SharePoint 2007 in an SSL termination role. This architecture offloads all of the SSL processing to the F5, and the F5 forwards interactive requests/responses to the SharePoint front-end servers via HTTP only (over a secure network). For the purposes of this discussion, iRules are parsed by a Tcl interpretation engine on the F5 Networks BIG-IP device. As such, the F5 does two things to traffic passing through it:

    1. Redirects any request to port 80 (HTTP) to port 443 (HTTPS) through HTTP 302 redirects and URL rewriting.
    2. Rewrites any response to the browser to selectively rewrite URLs embedded within the HTML so that they go to port 443 (HTTPS). This prevents the 302 redirects from breaking DHTML generated by SharePoint.

    We've got part 1 working fine. The main problem with part 2 is that in the response rewrite, because of XML namespaces and other similar issues, not ALL matches for "http:" can be changed to "https:"; some have to remain "http:". Additionally, some of the "http:" URLs are difficult in that they live in SharePoint-generated JavaScript and their slashes (i.e. "/") are actually represented in the HTML by the UNICODE 6-character string "\u002f". For example, in the case of these tricky ones, the literal string in the outgoing HTML is:

        http:\u002f\u002fservername.company.com\u002f

    And should be changed to:

        https:\u002f\u002fservername.company.com\u002f

    Currently we can't even figure out how to get a match in a search/replace expression on these UNICODE sequence string literals. It seems that no matter how we slice it, the Tcl interpreter is interpreting the "\u002f" string into the "/" translation before it does anything else. We've tried various combinations of Tcl escaping methods we know about (mainly double quotes and using an extra "\" to escape the "\" in the UNICODE string) but are looking for more methods, preferably ones that work. Does anyone have any ideas or any pointers to where we can effectively self-educate about this? Thanks very much in advance.

    Read the article

  • Setup of high-end web server and DB server cluster on Amazon EC2: Is this how it's done?

    - by user1086584
    Amazon is so technical, I want to confirm that my understanding is correct. We have a large 500 GB database (OrientDB). We will have it mirrored within the same Availability Zone, and we believe the database size will grow rapidly. The plan is:

    1. Get 4 large instances of types that are compatible with Placement Groups (as well as, ideally, Enhanced Networking): 2 for web, 2 for DB.
    2. Use EBS-backed instances to store our operating system. Discussion here: http://alestic.com/2012/01/ec2-ebs-boot-recommended
    3. Set up ephemeral SSD instance storage as swap space. (But it is lost after even a reboot. I hear it's hard to add ephemeral storage if booting from EBS, but possible.)
    4. For offsite backup, take periodic snapshots and store them on S3. Obviously we need to ensure the database is in a safe state when that snapshot happens, to avoid corruption. (Any hints here, aside from shutting down the DB?)
    5. If the database gets too big, create a larger EBS volume. We can use RAID to break the 1 TB limit: http://alestic.com/2009/06/ec2-ebs-raid
    6. Static assets on web servers will be stored on S3.

    Is that correct? Or am I missing something?
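
    On points 4 and 5, a minimal sketch of the usual approach (device names, volume IDs and the mount point are assumptions, and this presumes a filesystem that supports freezing, e.g. XFS):

        # stripe two EBS volumes together to get past the per-volume size limit
        mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/xvdf /dev/xvdg
        mkfs.xfs /dev/md0
        mount /dev/md0 /data

        # quiesce the filesystem around the snapshots so they are crash-consistent
        xfs_freeze -f /data
        aws ec2 create-snapshot --volume-id vol-11111111 --description "db stripe 1"
        aws ec2 create-snapshot --volume-id vol-22222222 --description "db stripe 2"
        xfs_freeze -u /data

    A freeze only gives filesystem-level consistency; for application-level consistency the database still needs to flush or lock around the freeze, so it is worth checking what OrientDB offers for online backups.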

    Read the article

  • Accessing SSH_AUTH_SOCK from another non-root user

    - by Danny F
    The Scenario: I am running ssh-agent on my local PC, and all my servers/clients are set up to forward SSH agent auth. I can hop between all my machines using the ssh-agent on my local PC. That works. I need to be able to SSH to a machine as myself (user1), change to another user named user2 (sudo -i -u user2), and then ssh to another box using the ssh-agent I have running on my local PC. Let's say I want to do something like ssh user3@machine2 (assuming that user3 has my public SSH key in their authorized_keys file). I have sudo configured to keep the SSH_AUTH_SOCK environment variable. All users involved (user[1-3]) are non-privileged users (not root).

    The Problem: When I change to another user, even though the SSH_AUTH_SOCK variable is set correctly (let's say it's set to /tmp/ssh-HbKVFL7799/agent.13799), user2 does not have access to the socket that was created by user1. Which of course makes sense, otherwise user2 could hijack user1's private key and hop around as that user. This scenario works just fine if, instead of getting a shell via sudo for user2, I get a shell via sudo for root, because naturally root has access to all the files on the machine.

    The question: Preferably using sudo, how can I change from user1 to user2, but still have access to user1's SSH_AUTH_SOCK?
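
    One way this is commonly handled is to grant user2 access to the agent socket with POSIX ACLs before switching users. A minimal sketch, assuming the filesystem is mounted with ACL support and sudoers already has env_keep += "SSH_AUTH_SOCK":

        # as user1, before the sudo:
        setfacl -m u:user2:x "$(dirname "$SSH_AUTH_SOCK")"   # allow traversal of the socket directory
        setfacl -m u:user2:rw "$SSH_AUTH_SOCK"                # allow read/write on the socket itself

        sudo -i -u user2
        ssh user3@machine2

    This deliberately shares the agent (not the key) with user2 for the lifetime of that socket, so it carries exactly the hijack risk described above; it only makes sense where that trade-off is acceptable.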

    Read the article

  • copSSH and cygwin - Can't use windows style paths

    - by DrFredEdison
    I set up copSSH on one of my Windows servers, and within the copSSH bash shell I can't seem to use Windows-style paths to remove and copy files. If I do try, I get the following:

        $ /bin/cp -r C:/Domains/_temp/collage_push/* C:/Domains/collage/
        cygwin warning:
          MS-DOS style path detected: C:/Domains/_temp/collage_push/
          Preferred POSIX equivalent is: /cygdrive/c/Domains/_temp/collage_push/
          CYGWIN environment variable option "nodosfilewarning" turns off this warning.
          Consult the user's guide for more details about POSIX paths:
            http://cygwin.com/cygwin-ug-net/using.html#using-pathnames

    I have created a Windows environment variable CYGWIN set to nodosfilewarning. It has no effect. I added export CYGWIN=nodosfilewarning to my .bashrc, and an echo $CYGWIN in my ssh session confirms it is indeed getting set; yet again, it has no effect. Finally, I noted that when not doing my own export, CYGWIN contains "nontsec binmode" (no quotes), so I tried export CYGWIN="nodosfilewarning nontsec binmode" in my .bashrc, and still no dice. Older versions of copSSH didn't have this issue. How can I actually override this error? I have a lot of scripts that already use Windows-style paths, and I'd rather not change them if possible.
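
    One detail that may matter: the CYGWIN variable is read by the Cygwin DLL when a process starts, so setting it in .bashrc is usually too late; it generally has to be present in the environment of whatever service launches the shell. As a sketch of a workaround that avoids the warning entirely, the paths can be converted to POSIX form on the fly with cygpath (part of Cygwin, so it should already be available in the copSSH shell); the example paths are just the ones from the question:

        src="$(cygpath -u 'C:/Domains/_temp/collage_push')"
        dst="$(cygpath -u 'C:/Domains/collage')"
        /bin/cp -r "$src"/* "$dst"/

    This sidesteps the CYGWIN=nodosfilewarning question, at the cost of touching the scripts; whether that is acceptable depends on how many of them there are.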

    Read the article

  • AMIs in Amazon EC2

    - by Jack of Trades
    I really like the Amazon EC2 environment, and thought I'd spend a bit of time playing around with various types of public (Windows!) AMI servers. But testing has been a bit, well, questionable. Some of my findings:

    - It's very difficult to know what exactly a specific public EC2 image is supposed to be doing. Many images come with little to no information.
    - I can't seem to find the passwords to log onto various Windows images. Why are they public if they can't be used!?
    - Lots of images are S3-based rather than EBS-backed. This is very annoying, as S3 takes a lot longer to do pretty much anything (stop, image, etc.). I am only testing images here, so of course I don't question the value of S3 for other purposes.
    - The description of what an image does is almost useless and many times confusing.

    Have others come across these EC2 issues? Again, my interest was just to play around with public images for testing/experimentation/etc., and therefore these issues may not be too relevant for more normal EC2 deployment uses.

    Read the article

  • How to set up ProxMox 1.9 on VPN?

    - by Gnudiff
    Disclaimer: I have only rudimentary knowledge of VPNs. I would love to learn about them properly; however, at the moment I really need to make stuff work on short notice. I am trying to set up a ProxMox virtualization platform in an existing network. The network currently consists of several servers which have VMware free edition. There is some sort of VPN defined in the switch. In order for the VMware management interface to be accessible, a checkbox for VPN has to be ticked in the network settings and the VPN id entered. I didn't notice any such configuration option during the ProxMox installation, so my Proxmox VE on the same physical server, using the same manual IP settings (ip/nm/gw), is not accessible. As I understand it, I should edit Proxmox's underlying Debian config in /etc/network/interfaces, but I have no idea what I should aim for: do I specify the settings for eth0, or do I make a virtual interface? How do I make it accessible for both the ProxMox VE host and the future VMs underneath it? I read the ProxMox installation guide, but unfortunately it presumes a better understanding of VPNs than I have. A config template or similar would be appreciated. Thanks in advance.
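
    A minimal sketch of what /etc/network/interfaces might look like, assuming the "VPN id" on the switch is actually an 802.1Q VLAN tag (42 here is a placeholder) and that the IP settings are the same ones VMware used:

        # /etc/network/interfaces (Proxmox VE / Debian style)
        auto lo
        iface lo inet loopback

        iface eth0 inet manual

        # bridge on the tagged VLAN; both the host and the VMs attach to vmbr0
        auto vmbr0
        iface vmbr0 inet static
            address   192.0.2.10
            netmask   255.255.255.0
            gateway   192.0.2.1
            bridge_ports eth0.42
            bridge_stp off
            bridge_fd 0

    The address, netmask, gateway and VLAN id are placeholders; if the switch port is an untagged/access port, bridge_ports should just be eth0 with no VLAN suffix, and the 8021q module (vlan package) needs to be available on the host for the tagged variant.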

    Read the article

  • Global Address List, Multiple All Address Lists in CN=Address Lists Container

    - by Jonathan
    When my colleagues (way before my time here) upgraded Exchange 2000 to 2003, an English "All Address Lists" appeared in addition to the German variant. The English "All Address Lists" has the German-titled GAL below it. This has just been a cosmetic problem for the last few years. Now, as we are in the process of rolling out Exchange 2010, this causes some issues: Exchange 2010 picked the wrong (i.e. English) Address Lists container to use. In ADSI Edit we see both

        CN=All Address Lists,CN=Address Lists Container,CN=exchange org,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=domain
        CN=Alle Adresslisten,CN=Address Lists Container,CN=exchange org,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=domain

    In the addressBookRoots attribute of CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=domain both address lists were stored as values. We removed the English variant from addressBookRoots and restarted all (old and new) Exchange servers. Users with mailboxes on Exchange 2003 now only see the German variant. Exchange 2010 is still stuck with the English/mixed variant, as are users on Exchange 2010. Our goal would be to have Outlook display the German title of All Address Lists and to get rid of the wrong Address Lists container.

    Read the article

  • SQLVDI error - attempt to release mutex not owned by caller

    - by Chris W
    I've started getting some errors in the Application event log of one of our database servers (Windows 2003 & SQL Server 2005). The nightly full database backups are completing successfully; however, immediately after the job success is written to the event log there is a run of entries that say:

        SQLVDI: Loc=CVDS. Desc=Release(ClientAliveMutex). ErrorCode=(288)Attempt to release mutex not owned by caller.

    There are five of these logged; the server itself has more than 20 databases on it, which are all backed up successfully. The server is backed up by Bacula using a VSS backup. Has anyone got any ideas what would be causing the errors? They seem to have started after a reboot on Friday to install some patches, which included KB960089.

    Edit: After getting the errors for a few days they've now stopped without any action on my part, other than letting the backups continue as they were. It may be a coincidence, but they stopped after Bacula completed its weekly full rather than the daily incremental backup.

    Read the article

  • Variable directory names over SCP

    - by nedm
    We have a backup routine that previously ran from one disk to another on the same server, but we have recently moved the source data to a remote server and are trying to replicate the job via scp. We need to run the script on the target server, and we've set up key-based scp (no username/password required) between the two servers. Using scp to copy specific files and directories works perfectly:

        scp -r -p -B [email protected]:/mnt/disk1/bsource/filename.txt /mnt/disk2/btarget/

    However, our previous routine iterates through directories on the source disk to determine which files to copy, then runs them individually through gpg encryption. Is there any way to do this using only scp? Again, this script needs to run from the target server, and the user the job runs under only has scp (no ssh) access to the target system. The old job looked something like this:

        #Change to source dir
        cd /mnt/disk1

        #Create variable to store
        # directories named by date YYYYMMDD
        j="20000101/"

        #Iterate through directories in the current dir
        # to get the most recent folder name
        for i in $(ls -d */); do
          if [ "$j" \< "$i" ]; then
            j=${i%/*}
          fi
        done

        #Encrypt individual files from $j to target directory
        cd ./${j%%}/bsource/
        for k in $(ls -p | grep -v /$); do
          sudo /usr/bin/gpg -e -r "Backup Key" --batch --no-tty -o "/mnt/disk2/btarget/$k.gpg" "/mnt/disk1/$j/bsource/$k"
        done

    Can anyone suggest how to do this via scp from the target system? Thanks in advance.
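
    One possible approach, sketched under the assumption that a cron job on the source server can be added to publish a listing of its dated directories (scp alone cannot list remote directories), is to pull that listing first and then fetch only the newest directory:

        #!/bin/bash
        # On the source server, a cron job maintains an index file, e.g.:
        #   ls -d /mnt/disk1/2*/ > /mnt/disk1/bsource/dirlist.txt
        # (path and filename are assumptions for this sketch)

        scp -p -B [email protected]:/mnt/disk1/bsource/dirlist.txt /tmp/dirlist.txt

        # the newest directory sorts last because the names are YYYYMMDD
        latest=$(sort /tmp/dirlist.txt | tail -n 1)

        scp -r -p -B "[email protected]:${latest%/}/bsource" /mnt/disk2/staging/

        # encrypt locally on the target, as the old job did on the source
        for k in /mnt/disk2/staging/bsource/*; do
          [ -f "$k" ] || continue
          gpg -e -r "Backup Key" --batch --no-tty -o "/mnt/disk2/btarget/$(basename "$k").gpg" "$k"
        done

    This keeps the job entirely scp-driven from the target; the only change on the source side is the cron entry that writes the index file.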

    Read the article

  • Godaddy domain and Bluehost web hosting

    - by Digital site
    I have a domain from GoDaddy and web hosting at Bluehost. I want to make this work, as some people say there is no need to transfer the domain from GoDaddy to Bluehost. I was trying to work out how to do this by adding Bluehost's name servers (ns1.bluehost.com and ns2.bluehost.com) at GoDaddy. This works fine, but I am not sure if it is 100% OK yet. The reason I say that is that when I type my address into any browser as mydomain.com, it doesn't work; instead I get an error message stating that the server was not found or could not be connected to... However, when I write the domain name and include the www. prefix, it works fine... The other problem is that when I search in Google or Yahoo, the domain shows up as mydomain.com, which is not really good, because my clients think my site is down because of the error message, and most new people don't know that they have to add www. to the domain for it to work. I just want to make at least the bare domain work, like this: mydomain.com
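
    A quick way to see whether this is a DNS problem or a web-server problem is to compare what the bare domain and the www name resolve to (mydomain.com is of course a placeholder):

        dig +short mydomain.com A
        dig +short www.mydomain.com A

    If the www name returns an IP address but the bare domain returns nothing, the zone on the hosting side is missing an A record (or redirect) for the bare domain, which matches the symptoms described; if both return the same IP, the problem is instead in the hosting account's domain/vhost configuration.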

    Read the article

  • Linux/Unix MTA with the smartest queue?

    - by threecheeseopera
    I am looking for an MTA that will allow me (a script, really) to proactively manage its send queue in response to status codes returned by the remote servers I am delivering to. Basically, for each mail sent I would like to be able to react to the SMTP reply code returned by the remote server, e.g. '250 OK', or to any error conditions like connection timeouts. Additionally, I would like to be able to manage the send queue going forward based on this information, e.g. 'example.com has timed out the last 5 connection attempts, so no longer queue mail for recipients @example.com'. I am currently using Postfix and Perl to parse its logs for this information, but I am playing a game of catch-up that is prone to errors (out-of-order log entries etc.) and it's starting to get messy (some really ugly regexes ;). I really don't want to reinvent the wheel and use some language's SMTP library; I would prefer to use a proven/fast/reliable MTA. I am, however, open to suggestions if what I need just isn't possible. Thanks for your help!
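
    For the specific "stop queueing mail for a misbehaving domain" case, one Postfix-native lever is a per-domain transport map that the controlling script rewrites on the fly. A rough sketch, assuming main.cf already has transport_maps = hash:/etc/postfix/transport:

        # tell Postfix to defer everything for example.com
        echo "example.com    error:4.4.1 destination temporarily disabled" >> /etc/postfix/transport
        postmap /etc/postfix/transport
        postfix reload

        # later, to re-enable delivery, remove the entry and rebuild the map
        sed -i '/^example\.com[[:space:]]/d' /etc/postfix/transport
        postmap /etc/postfix/transport
        postfix reload

    This doesn't remove the need to detect the failures in the first place (the log parsing, or some delivery-status hook), but it does give the script a clean, supported way to act on what it finds rather than manipulating the queue directly.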

    Read the article

  • Rails/Mongo across multiple different geo-regions

    - by wmarbut
    I have a system that by necessity requires physical presence in three or more different locations, and I need advice on structuring it in such a way that my database stays replicated in a timely manner without horrible latency. I've seen MySQL access and replication be incredibly slow when the application server was trying to talk to a node that wasn't physically collocated. In this case I am using MongoDB.

    - The stack is Linux/Passenger/Ruby/Rails/MongoDB.
    - The database is write-heavy and read-light.
    - The infrastructure is Amazon EC2.
    - The application layer must be physically located in 3 or more different locations. I can't justify this requirement further than that it is a requirement.
    - The database, however, needn't be located in more than one location if it can be written to quickly from other locations.

    From reading Mongo's documentation, replication seems like more of a candidate than sharding, because my datastore is not huge. However, I don't see anything that addresses the issue of speed for servers communicating across large distances with potentially high latency.

    Read the article

  • New Windows Server 2008 R2 WIMP running slower than Windows Server 2003

    - by starshine531
    We recently upgraded a WIMP server from Windows Server 2003 (32-bit) to Windows Server 2008 R2 (64-bit). The new server has significantly better hardware than the old server, yet many processes take much longer than on the old box. We have a rather complex web application process that normally takes about 7 seconds on the old box, but on the new one it takes 11-12 seconds. That's down from the 15.5 seconds it took before I disabled IPv6. This process involves some queries (some of them involve transactions with maybe 3 queries between the start and commit) and creating and emailing some PDFs. Windows updates are current, with a more or less fresh machine. This happens consistently, even when we have almost no traffic on the site and memory and CPU aren't being hard pressed at all. The only differences between the servers other than the OS and hardware:

    1) When available, we used 64-bit versions of programs
    2) The new server uses MySQL 5.5 rather than MySQL 5.1 (I did run the mysql_upgrade program and we use InnoDB for the engine)
    3) The new server uses PHP 5.3.18 rather than PHP 5.3.1
    4) With the new OS came IIS7 rather than IIS6, of course

    What could be causing better hardware to run so much slower? Let me know if you need more details. Thank you.

    Read the article

  • Remote Desktop Session Black after Minimize

    - by TorgoGuy
    PROBLEM: When I minimize a remote desktop session and restore it, the remote desktop screen shows up black. This only happens when connecting to a particular computer.

    DETAILS: If I start clicking around in the black area, portions of the screen will start redrawing and showing up correctly. For example, if I leave a window open in the remote session and click where that window is located on the remote computer, then that window, and only that window, will redraw, and sometimes a portion of that window won't redraw (usually the toolbar). And to clarify, the window only has to be minimized momentarily, so it doesn't seem to be a timeout issue. Clicking or typing in the remote session still causes the remote computer to respond appropriately. Disconnecting from the session and reconnecting restores the whole screen image, as does clicking all over the place in the black image (causing each section to redraw).

    CONFIGURATION: This problem only happens for me when connecting to a particular computer (a W2K Server box configured to allow remote administration) and only with certain client computers. I've tried 7 different client computers with various versions of Remote Desktop (the OSes were: Win2K, Server 2003, Server 2008, Windows 7 RC, and 3 XP machines) and two of them exhibit the problem (one is one of the XP boxes and the other is Windows 7). Those same computers can RDP to other computers without problems.

    RESOLUTION ATTEMPTS: I have tried the following:

    1. Disabling the LOCAL screen saver, as mentioned on Technet
    2. Turning off bitmap caching in the client, as mentioned on many forums
    3. Updating to version 6.1 of the Remote Desktop client
    4. Using mRemote (I doubted this would work since it uses MS's code for connecting to RDP servers)
    5. Turning off all video acceleration

    QUESTION: Any ideas on what is causing this?

    Read the article

  • Steps after installing vCenter Server?

    - by goober
    I'm working with:

    - Two new ESX servers that I'm configuring
    - A new Server 2008 R2 machine that I'm using for vCenter

    I took the following steps:

    1. Installed the hypervisor on the 2 ESX machines
    2. Checked their setup/connectivity (appears to be fine; can ping, etc.)
    3. Installed vCenter Server on the Win2k8R2 box. This included the install of a SQL Express database (we're a small shop). FYI, I changed some of the ports (443 -> 8443, 80 -> 8080, etc.)
    4. Installed vCenter Web Client Server on the Win2k8R2 box

    Problems:

    - The vSphere Client on my desktop fails to connect. Part of this is that it asks me for a username and password, but I don't recall specifying one when I set up the install. I receive the error "vSphere Client could not connect to [machinename]. An unknown connection error occurred. (The request failed because of a connection failure. (Unable to connect to the remote server))". I have also tried to use local machine admin credentials, including the format machinename\localuseracct. I have also tried using my domain credentials, which are an admin for that box. I have also checked that the service is running. I also tried to connect via a vSphere Client locally installed on the server. It translates "localhost" to the correct name but gives the same error.
    - I cannot register the vCenter Server from the vCenter Web Client Server. I'm not sure if this is necessary, as they're both on the same machine, but it seems like the logical next step. I also receive a "failed to connect" error in this case as well.

    FYI, both the vCenter Server and the vCenter Web Client Server are installed on the same Win2k8R2 server. What am I missing here? What is the best way to test in this case?

    Read the article

  • How to configure TFTPD32 to ignore non PXE DHCP requests?

    - by Ingmar Hupp
    I want to give our Windows guy a way of easily PXE booting machines for deployment by plugging his laptop into one of our site networks. I've set up a TFTPD32 configuration which does just that, and our normal DHCP server ignores the PXE DHCP requests because of the magic flag they carry, so this part works as desired. However, I'm not sure how to configure TFTPD32 to only respond to PXE DHCP requests (the ones with the magic flag) and ignore all normal DHCP requests (so that production machines don't get a non-routed address from the PXE server). How do I configure TFTPD32 to ignore these non-PXE DHCP requests? Or, if it can't, is there another equally easy-to-use piece of software that he can run on his Windows laptop? Since the TFTPD part is working fine, a DHCP server with the ability to serve PXE only would do. Worst case, I'll have to set up a virtual machine with all this, but I'd much prefer a small, simple solution. I'm not interested in solutions that involve using the existing DHCP servers or separating machines on the network for deployment; the whole point is to be simple and stand-alone.
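
    If the virtual-machine fallback ever becomes necessary, dnsmasq in proxy-DHCP mode is one way to get the "PXE only, no leases handed out" behaviour: it answers the PXE options but leaves normal DHCP leases to the existing server. A minimal sketch (interface, subnet and paths are assumptions):

        dnsmasq --interface=eth0 \
                --port=0 \
                --dhcp-range=192.168.1.0,proxy \
                --pxe-service=x86PC,"Network boot",pxelinux \
                --enable-tftp \
                --tftp-root=/srv/tftp

    --port=0 disables the DNS side entirely, and the ,proxy keyword on --dhcp-range means dnsmasq never assigns addresses of its own, so production machines keep getting their leases from the normal DHCP server.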

    Read the article

  • Azure load-balancing strategy

    - by growse
    I'm currently building out a small web deployment using VM instances on MS Azure. The main problem I'm facing at the moment is trying to figure out how to get the load balancing to detect if a particular VM has failed and not route traffic to that VM. As far as I can tell, there are only two load-balancing options:

    1. Have multiple VMs (web01, web02, web03 etc.) within the same 'cloud service' behind a single VIP, and configure the endpoints to be load balanced.
    2. Create multiple 'cloud services', put a single web VM in each and create a Traffic Manager service across all these services.

    It appears that (1) is extremely simplistic and doesn't attempt to do any host failure detection, while (2) is much more varied but requires me to put all my web servers in their own individual cloud services. Traffic Manager appears to be much more directed at a geographic failover scenario, where you have multiple cloud services across different regions. This approach also has the disadvantage that my web servers won't be able to communicate with my databases on internal IP addresses, unlike scenario (1). What's the best approach here?

    Read the article

  • attach / detach mssql 2008 sql server manager [SOLVED]

    - by Tillebeck
    An external consultant wrote a guide on how to copy a database. Step two was to detach the database using SQL Server Manager. After the detach, the database was not visible in SQL Server Manager... Not much to do but write a mail to the service provider asking to have the database attached again. The service provider's answer: "Not possible to attach again since the SQL Server security has been violated". Rolling back to the last backup is not the option I want to use. Can anyone give feedback on whether it seems logical and reasonable to assume that a database detached from SQL Server 2008, accessed through SQL Server Manager, cannot be reattached? It was done by right-clicking the database and choosing detach.

    -- update --

    Based on the comments below, I am updating the question with the server setup. There are two dedicated servers:

    - srv1: web server with remote desktop and an SQL Server Manager
    - srv2: SQL server that can be accessed through the SQL Server Manager on the web server

    -- update2 --

    After a restart of the server the DBA could suddenly do the attachment of the database, and I guess that after the restart it was a simple task. So all of your answers were right! It seems that I can only mark one answer as correct, so I marked the first one, but all of them are correct. Thanks a lot. Without this thread we might have had to suffer through watching our database being restored from a backup :-) Thanks a lot. BR. Anders

    Read the article

  • Launching Installer Via Powershell and WinRM and Nothing Happens

    - by Nick DeMayo
    I'm currently working on a PowerShell script to run some Microsoft hotfix installers remotely on several Windows Server 2008 R2 servers that I manage. Basically, the script copies all the appropriate files up to the server, and then runs the installer via Invoke-Command, like so:

        function InstallCU {
            Write-Host "Installing June 2013 CU..."
            Invoke-Command -ComputerName $ServerName -ScriptBlock {
                Start-Process "c:\aaa\prjcusp2\ubersrvprj2010-kb2817530-fullfile-x64-glb.exe" -ArgumentList "/passive"
            }
        }

    If I run the Start-Process command locally on the server, the installer runs properly. However, when trying to run it remotely, nothing happens (actually, I can see the installer start up in Task Manager, but it closes a couple of seconds later and doesn't run). I've attempted giving the Invoke-Command -Credential, I've turned off UAC on the server, and I've ensured that my WinRM settings (running 'winrm quickconfig' and setting TrustedHosts to *) are correct. I've also tried having the Invoke-Command script run a local PowerShell script to run the installer, and changing the argument from '/passive' to '/quiet' (in case it can't remotely launch something that has a UI), but again, no dice. Is there anything else I can try, or am I just not going to be able to do this?

    Read the article

  • Purge print driver cache on windows 7 with powershell script

    - by Doltknuckle
    [Background] We have been having trouble with our network clients suddenly being unable to print. They get an odd error with a hex code. We determined that something in the driver was messed up, and we could resolve the issue by clearing the driver cache and reinstalling the driver. This happens to random computers every so often. We're assuming this is a bug in the latest Dell 2330dn driver, since that is the only model that has this problem.

    [Problem] What we are looking to do is write a PowerShell script that would clear the driver cache and redownload the driver. I see a ton of scripts out there to manage queues, servers, and ports, but nothing for local driver cache management.

    [Current Workaround] Since we have to do this manually, I'll write out the steps so you know what we want this script to replicate:

    1. Disable the print spooler
    2. Restart the machine
    3. Delete the contents of: C:\windows\system32\spool\drivers\w32x86
    4. Enable the print spooler and start the service
    5. Delete the network printer object and re-add the network printer from the server

    [Request] I'm good enough with PowerShell to translate the above workaround into a pair of scripts. I'd like to find a more elegant solution than my current workaround. Any suggestions?

    Read the article

  • Are relative-path symlinks reliable on Rackspace Cloud Sites?

    - by Jakobud
    Rackspace's Cloud Sites have a lot of stupid limitations. For example, no SSH (in or out), no shell, no rsync, etc... (even through cron). Recently I learned that you can't reliably use symlinks in Cloud Sites. Apparently this is because the absolute path of your sites could change at any moment, since it's a shared host environment split up between many disks/servers. I guess different accounts' sites get moved from disk to disk whenever Rackspace decides to, supposedly to increase efficiency across the board. So after talking with a Rackspace tech, he said they cannot guarantee that symlinks would always work. Obviously this is because if you have a symlink that uses an absolute path like this:

        /mnt/disk-34566/home/user34566/files/sites/www.mysite.com/mydir

    and your files get moved to a different disk (or whatever they do), then the absolute path would be different and the link would now be broken. That makes sense. So next, I asked the Rackspace tech if relative-path symlinks were reliable. So if I have the following link:

        files/sites/www.mysite.com/mylink --> ../www.myothersite.com/anotherdir

    You can see that the symlink simply points to a nearby directory's sub-directory. He said they cannot guarantee that even those would always work either. Since it uses a relative path to another nearby directory, I'm not sure how it could ever break from something Rackspace would do. Do relative symlinks somehow rely on absolute paths underneath? Or is Rackspace using some weird custom filesystem where they will break from absolute path changes? It seems like a relative-path symlink would be fine and would only break if the user did something to mess up the directories involved. But when the techs say that they "don't officially support symlinks of any kind", that makes me hesitant to use them for large commercial websites in Cloud Sites. Can anyone with Rackspace experience give input on this topic?
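
    For what it's worth, a symlink created with a relative target stores exactly that relative string on disk; it is resolved against the directory containing the link at lookup time, so moving the whole tree to a different mount point does not affect it. A quick way to confirm what is actually stored (the paths are just the ones from the question):

        cd files/sites/www.mysite.com
        ln -s ../www.myothersite.com/anotherdir mylink
        readlink mylink        # prints: ../www.myothersite.com/anotherdir
        ls -l mylink           # shows the same relative target

    None of this guarantees anything about how Rackspace migrates sites between servers (e.g. whether their copy process preserves symlinks at all), which is presumably why support won't commit to it, but the relative link itself does not depend on any absolute path.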

    Read the article

  • TeamCity sends inadequate responses after Selenium tests

    - by Dmitriy Sukharev
    I have TeamCity 7.0.2 on a CentOS 6.2 server without an X server. I've installed x11-fonts*, xvfb, firefox and xauth, exported the environment variable DISPLAY=localhost:1, and started Xvfb. After that I could start the Selenium tests using Maven. The tests are executed, but there's an issue with TeamCity: it usually starts behaving absolutely inadequately (it mixes up images on the page, sends XML or strange text with ampersands and numbers in responses, and is a bit slower), and the tests also run 4 times slower on the server (1h 15m) than on the tester's Windows 7 machine (25m). It is worth noting that the tests launch two Jetty servers for the tested application (one for the REST-services application and another for the client). In TeamCity I set the JVM command line parameters -Xms256m -Xmx1224m -XX:MaxPermSize=320m, and the additional Maven command line parameters end with "-DMAVEN_OPTS=-Xmx1024m" (without quotes). Both the web services and TeamCity use the same Oracle server (but different Oracle users). Finally, TeamCity and its build agent are on the same server. The server has only 4 GB of RAM, and during testing about 400 MB of RAM and 1.2 GB of swap are in use. TeamCity and Firefox use about 65% of the CPU during testing. There is no firefox process left after the end of testing. My knowledge of Selenium is weak; I only know that we use version 2.20.0 of the selenium-java Maven dependency. Please help me determine why TeamCity sends wrong responses after the Selenium tests. I've tried to give you all the information I have, but feel free to ask me for more.
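
    As a sanity check on the headless-display side, it may help to make sure the Xvfb instance and the DISPLAY value actually line up and that the virtual screen has a sensible size; a minimal sketch (the screen geometry is an assumption):

        # start a virtual framebuffer on display :1 with a realistic resolution
        Xvfb :1 -screen 0 1366x768x24 &

        # point the build agent's environment at it before the Maven/Selenium run
        export DISPLAY=:1

        mvn verify

    DISPLAY=:1 and DISPLAY=localhost:1 are not quite the same thing (the latter goes over TCP, which Xvfb may not be listening on unless started accordingly), so mismatches here are a common source of odd behaviour in headless Firefox runs. It would not explain TeamCity itself returning garbled responses, though; that sounds more like the server being starved of memory or CPU while the tests and both Jetty instances run on the same box.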

    Read the article

  • Windows 7 - Windows XP - sharing - why isn't working?

    - by durumdara
    Hi! This seems to be a "hardware" and not a "software"/"programming" question, but I need to use this share in my programs, so it is "close to programming". We had an XP-based wireless network: the server is XP Professional, the clients are XP Home (notebooks). This was working well with folder sharing (with user rights, not simple sharing). Then we replaced one of the notebooks with a Win7 x64 notebook. At first it could reach the server, and the other client too. Later I went to other sites and connected to other servers and other networks. Then, when I returned to this network, I saw that I could no longer connect to this server. I see none of its resources, and when I try to double-click on this computer I get a login window where I can type anything but can never log in... The interesting part is that:

    - Another XP Home machine can see the server and can log in as guest or as another user.
    - The server can see the XP Home notebook.
    - The Win7 machine can see the notebook's shared folders, and the XP Home machine can see the Win7 shared folders.
    - The server can see the Win7 folders, BUT: the Win7 machine cannot see the server's folders. It cannot see its resources either...

    The Win7 machine is in the "work" networking group; the group name is not MSHOME. I tried everything on the server: I tried to remove the MS client and restore it with simple sharing, set a guest password, etc., but I have lost the ability to access this server from Win7. Does anyone have any idea what I need to look at, or what I need to set, to access these resources and use them in my programs? Thanks for any info or links. dd

    Read the article
