
  • Cannot Connect To VMWare Guest OS Using Either RDP or VNC

    - by Humanier
    Hi, I have a PC (Windows XP SP3) with VMware Workstation 7 installed. The VMware guest runs Windows Server 2003 Enterprise Edition R2. RealVNC (4.1.3) is installed on both OSes, and both use Hamachi2. The host OS (WinXP) also runs the ZoneAlarm firewall, with the Hamachi network set as trusted. My goal is to allow RDP and VNC connections to the guest OS (Windows Server 2003). Both options work absolutely fine if I connect from the host OS. However, I have problems when other computers on our Hamachi network try to connect to the guest OS (Win2K3).
    1) RDP connections: the RDP window opens, shows black content, and after 15-20 seconds displays the following error: http://lh6.ggpht.com/_yQhsRRimgKU/TArRrtiteQI/AAAAAAAABZA/e96za-y9wzo/rdp_error.JPG
    2) RealVNC connections: users are able to connect, but all they see is a black screen inside the VNC window. At the same time, their input (keystrokes and mouse moves/clicks) is visible on the Win2K3 console window.
    I really appreciate any ideas on how to resolve these problems. Thank you in advance.
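
    A classic cause of this exact symptom pair (connection accepted, then a black screen or a mid-handshake error) is an MTU problem on the VPN path: small packets get through, full-size frames are dropped. A quick probe from a failing peer, offered as a sketch (the Hamachi address is a placeholder; the netsh syntax applies to Vista and later, XP needs a registry edit instead):

        rem Find the largest payload that survives unfragmented to the guest
        ping -f -l 1392 5.x.x.x
        rem If large pings fail, lower the MTU on the Hamachi adapter and retest
        netsh interface ipv4 set subinterface "Hamachi" mtu=1200 store=persistent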

    Read the article

  • User-friendly program to create editable & searchable PDF files, like tax & application forms and such

    - by Nick Gorbikoff
    Hello. Can somebody recommend a user-friendly program that will create (or convert from Excel, Word, or OpenOffice) editable PDF forms? You know, like tax forms, where you can create a form with a predefined format and stationery but let the user edit/fill out the fields. I need something user-friendly that a regular person can use. I'm NOT looking for a PDF library (I already use wkhtmltopdf for generating PDFs programmatically). The reason is that we have about 400 documents (internal expense forms, training forms, etc.) in .doc and .xls format that we want to convert to editable PDFs (so that people don't have to fill them out by hand). Coding 400 templates and then converting them using some lib or command-line tool is not my idea of fun, especially since those forms change all the time. I'd like to just give HR and the Quality department the tool, so that they can maintain those documents themselves. I looked at everything listed on this page (http://www.cogniview.com/convert-pdf-to-excel/post/pdf-editing-creation-50-open-sourcefree-alternatives-to-adobe-acrobat/) but can't find what I need. Thank you!!!

    Read the article

  • CentOS 6 LEMP update - dependency error issue

    - by Latheesan Kanes
    I set up a LEMP server following the guide "Install Nginx/PHP-FPM on Fedora 20/19, CentOS/RHEL 6.5/5.10". It's been a while since I did the setup, so I wanted to grab the latest updates from the REMI repository. I ran the following command:

        yum --enablerepo=remi,remi-php55 update

    I now get these dependency-related errors:

        # yum --enablerepo=remi update
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * base: mirror.nl.leaseweb.net
         * epel: mirror.1000mbps.com
         * extras: mirror.nl.leaseweb.net
         * remi: remi.schlundtech.de
         * updates: centos.mirror1.spango.com
        Setting up Update Process
        Resolving Dependencies
        --> Running transaction check
        ---> Package chkconfig.x86_64 0:1.3.49.3-2.el6 will be updated
        ---> Package chkconfig.x86_64 0:1.3.49.3-2.el6_4.1 will be an update
        ---> Package glibc.x86_64 0:2.12-1.107.el6_4.4 will be updated
        ---> Package glibc.x86_64 0:2.12-1.107.el6_4.5 will be an update
        ---> Package glibc-common.x86_64 0:2.12-1.107.el6_4.4 will be updated
        ---> Package glibc-common.x86_64 0:2.12-1.107.el6_4.5 will be an update
        ---> Package gnupg2.x86_64 0:2.0.14-4.el6 will be updated
        ---> Package gnupg2.x86_64 0:2.0.14-6.el6_4 will be an update
        ---> Package iputils.x86_64 0:20071127-17.el6_4 will be updated
        ---> Package iputils.x86_64 0:20071127-17.el6_4.2 will be an update
        ---> Package kernel.x86_64 0:2.6.32-358.23.2.el6 will be installed
        ---> Package kernel-firmware.noarch 0:2.6.32-358.18.1.el6 will be updated
        ---> Package kernel-firmware.noarch 0:2.6.32-358.23.2.el6 will be an update
        ---> Package libgcrypt.x86_64 0:1.4.5-9.el6_2.2 will be updated
        ---> Package libgcrypt.x86_64 0:1.4.5-11.el6_4 will be an update
        ---> Package mysql-libs.x86_64 0:5.1.69-1.el6_4 will be updated
        --> Processing Dependency: libmysqlclient.so.16()(64bit) for package: 2:postfix-2.6.6-2.2.el6_1.x86_64
        --> Processing Dependency: libmysqlclient.so.16(libmysqlclient_16)(64bit) for package: 2:postfix-2.6.6-2.2.el6_1.x86_64
        ---> Package mysql-libs.x86_64 0:5.5.34-1.el6.remi will be an update
        ---> Package nginx.x86_64 0:1.4.2-1.el6.ngx will be updated
        ---> Package nginx.x86_64 0:1.4.3-1.el6.ngx will be an update
        ---> Package php-pear.noarch 1:1.9.4-20.el6.remi will be updated
        ---> Package php-pear.noarch 1:1.9.4-23.el6.remi will be an update
        ---> Package php-pecl-jsonc.x86_64 0:1.3.2-1.el6.remi.1 will be updated
        ---> Package php-pecl-jsonc.x86_64 0:1.3.2-2.el6.remi will be an update
        --> Processing Dependency: php(zend-abi) = 20100525-x86-64 for package: php-pecl-jsonc-1.3.2-2.el6.remi.x86_64
        --> Processing Dependency: php(api) = 20100412-x86-64 for package: php-pecl-jsonc-1.3.2-2.el6.remi.x86_64
        ---> Package php-pecl-mongo.x86_64 0:1.4.3-1.el6.remi.1 will be updated
        ---> Package php-pecl-mongo.x86_64 0:1.4.4-1.el6.remi will be an update
        --> Processing Dependency: php(zend-abi) = 20100525-x86-64 for package: php-pecl-mongo-1.4.4-1.el6.remi.x86_64
        --> Processing Dependency: php(api) = 20100412-x86-64 for package: php-pecl-mongo-1.4.4-1.el6.remi.x86_64
        ---> Package php-pecl-sqlite.x86_64 0:2.0.0-0.3.svn313074.el6.remi.5 will be updated
        ---> Package php-pecl-sqlite.x86_64 0:2.0.0-0.4.svn332053.el6.remi.5.4 will be an update
        --> Processing Dependency: php(zend-abi) = 20100525-x86-64 for package: php-pecl-sqlite-2.0.0-0.4.svn332053.el6.remi.5.4.x86_64
        --> Processing Dependency: php(api) = 20100412-x86-64 for package: php-pecl-sqlite-2.0.0-0.4.svn332053.el6.remi.5.4.x86_64
        ---> Package postgresql-libs.x86_64 0:8.4.13-1.el6_3 will be updated
        ---> Package postgresql-libs.x86_64 0:8.4.18-1.el6_4 will be an update
        ---> Package remi-release.noarch 0:6-2.el6.remi will be updated
        ---> Package remi-release.noarch 0:6.4-1.el6.remi will be an update
        ---> Package rsync.x86_64 0:3.0.6-9.el6 will be updated
        ---> Package rsync.x86_64 0:3.0.6-9.el6_4.1 will be an update
        ---> Package selinux-policy.noarch 0:3.7.19-195.el6_4.12 will be updated
        ---> Package selinux-policy.noarch 0:3.7.19-195.el6_4.18 will be an update
        ---> Package selinux-policy-targeted.noarch 0:3.7.19-195.el6_4.12 will be updated
        ---> Package selinux-policy-targeted.noarch 0:3.7.19-195.el6_4.18 will be an update
        ---> Package setup.noarch 0:2.8.14-20.el6 will be updated
        ---> Package setup.noarch 0:2.8.14-20.el6_4.1 will be an update
        ---> Package tzdata.noarch 0:2013c-2.el6 will be updated
        ---> Package tzdata.noarch 0:2013g-1.el6 will be an update
        ---> Package xinetd.x86_64 2:2.3.14-38.el6 will be updated
        ---> Package xinetd.x86_64 2:2.3.14-39.el6_4 will be an update
        --> Running transaction check
        ---> Package compat-mysql51.x86_64 0:5.1.54-1.el6.remi will be installed
        ---> Package php-pecl-jsonc.x86_64 0:1.3.2-2.el6.remi will be an update
        --> Processing Dependency: php(zend-abi) = 20100525-x86-64 for package: php-pecl-jsonc-1.3.2-2.el6.remi.x86_64
        --> Processing Dependency: php(api) = 20100412-x86-64 for package: php-pecl-jsonc-1.3.2-2.el6.remi.x86_64
        ---> Package php-pecl-mongo.x86_64 0:1.4.4-1.el6.remi will be an update
        --> Processing Dependency: php(zend-abi) = 20100525-x86-64 for package: php-pecl-mongo-1.4.4-1.el6.remi.x86_64
        --> Processing Dependency: php(api) = 20100412-x86-64 for package: php-pecl-mongo-1.4.4-1.el6.remi.x86_64
        ---> Package php-pecl-sqlite.x86_64 0:2.0.0-0.4.svn332053.el6.remi.5.4 will be an update
        --> Processing Dependency: php(zend-abi) = 20100525-x86-64 for package: php-pecl-sqlite-2.0.0-0.4.svn332053.el6.remi.5.4.x86_64
        --> Processing Dependency: php(api) = 20100412-x86-64 for package: php-pecl-sqlite-2.0.0-0.4.svn332053.el6.remi.5.4.x86_64
        --> Finished Dependency Resolution
        Error: Package: php-pecl-jsonc-1.3.2-2.el6.remi.x86_64 (remi)
               Requires: php(zend-abi) = 20100525-x86-64
               Installed: php-common-5.5.4-1.el6.remi.x86_64 (@remi-test)
                   php(zend-abi) = 20121212-64
               Available: php-common-5.3.3-22.el6.x86_64 (base)
                   php(zend-abi) = 20090626
               Available: php-common-5.3.3-23.el6_4.x86_64 (updates)
                   php(zend-abi) = 20090626
               Available: php-common-5.4.21-1.el6.remi.x86_64 (remi)
                   php(zend-abi) = 20100525-x86-64
               Available: php-common-5.4.21-2.el6.remi.x86_64 (remi)
                   php(zend-abi) = 20100525-x86-64
        Error: Package: php-pecl-mongo-1.4.4-1.el6.remi.x86_64 (remi)
               Requires: php(zend-abi) = 20100525-x86-64
        Error: Package: php-pecl-jsonc-1.3.2-2.el6.remi.x86_64 (remi)
               Requires: php(api) = 20100412-x86-64
        Error: Package: php-pecl-sqlite-2.0.0-0.4.svn332053.el6.remi.5.4.x86_64 (remi)
               Requires: php(zend-abi) = 20100525-x86-64
        Error: Package: php-pecl-mongo-1.4.4-1.el6.remi.x86_64 (remi)
               Requires: php(api) = 20100412-x86-64
        Error: Package: php-pecl-sqlite-2.0.0-0.4.svn332053.el6.remi.5.4.x86_64 (remi)
               Requires: php(api) = 20100412-x86-64
        (each Error block lists the same Installed php-common-5.5.4 and the same Available php-common candidates as the first one, with php(api) = 20121113-64 installed versus 20100412-x86-64 required for the api errors)
         You could try using --skip-broken to work around the problem
         You could try running: rpm -Va --nofiles --nodigest

    Any idea how to solve these errors? Am I missing a package, or is this a bug?
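
    The errors reduce to one conflict: php-common 5.5.4 (ABI 20121212) is installed from @remi-test, while the php-pecl-* updates are being resolved from plain remi, whose builds want the PHP 5.4 ABI (20100525). Note that the failing transcript shows only --enablerepo=remi; the remi-php55 repo, which carries pecl builds matching the 5.5 ABI, was not enabled in that run. A sketch of the usual way out (availability of the 5.5 pecl builds in remi-php55 is an assumption):

        # Resolve the whole transaction against the PHP 5.5 repo
        yum --enablerepo=remi,remi-php55 update
        # If the 5.4-ABI pecl builds still win the depsolve, hold them back and update the rest
        yum --enablerepo=remi,remi-php55 --exclude='php-pecl-*' update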

    Read the article

  • Using radvd to advertise ipv6 over VPN connection using DD-wrt

    - by Sean Madden
    My ultimate goal is to allow VPN users to have access to my internal IPv6 network from across the intertubes. I've got a Linksys WRT54GSv2 running DD-WRT v24 SP1 and have configured the little guy as specified here: http://www.dd-wrt.com/wiki/index.php/IPv6. It works wonderfully over the br0 interface (the LAN/WLAN bridge). Here's the issue, though: when I add an additional interface to the radvd config file on the router (specifically ppp0, for the VPN traffic), radvd refuses to start. The kicker is that on DD-WRT it doesn't give an error message; it just fails outright. Any suggestions on where to proceed from here? /jffs/radvd.conf:

        interface br0 {
            AdvSendAdvert on;
            prefix 0:0:0:1::/64 {
                AdvOnLink on;
                AdvAutonomous on;
            };
        };
        interface ppp0 {
            AdvSendAdvert on;
            prefix 0:0:0:1::/64 {
                AdvOnLink on;
                AdvAutonomous on;
            };
        };

    The documentation I've found for radvd is slim, but if anyone has a decent idea on how to proceed I'd love to hear it.
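
    radvd dying silently at startup usually means either a config parse failure or an interface it cannot use yet; ppp0 in particular may not exist, or may lack a link-local address, until a VPN client actually connects. A sketch for surfacing the real error on the router shell (standard radvd flags; the config path is the one above):

        # Run in the foreground with verbose debugging instead of as a daemon
        radvd -C /jffs/radvd.conf -d 5 -m stderr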

    Read the article

  • how to authenticate once for multiple servers, using only apache configs?

    - by Wang
    My problem is, I have a number of prepackaged web apps (a print system, a wiki, a bug tracker, an email archive, etc.) running on different Mac OS X Leopard (soon to be SL) servers that each need to authenticate users from the internet at large. Right now every server presents an Apache basic authentication prompt, which takes a shared login, but it's apparently enough of an inconvenience to log in repeatedly that people are sending email without checking the wiki or bug tracker or archive. In the case of the bug tracker, a user might need to log in twice: once for Apache if he hasn't used any other protected service on that server, and once for the bug tracker itself so it can distinguish different people. Since the only common component to all these apps is Apache 2 itself, does it have any way of authenticating a user once, in some way that will be respected by other servers and various web apps? I looked at http://serverfault.com/questions/32421/how-is-session-stickiness-achieved-across-multiple-web-servers but it sounds like the answer assumes that I get to write my own web app. I also looked at Ian Bicking's blog, but it's four years old and recommends something available only for Apache 1.3, not Apache 2. Sorry not to hyperlink the second site; apparently I need 10 reputation points. Edit: Shibboleth does what I need, but I should have specified that I'm looking for a really dumb, really simple solution for in-house services that need to handle all of a dozen users, probably not more than three at a time.
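
    One "really dumb" approach that stays within stock Apache: browsers cache Basic credentials per hostname, so putting every app behind a single reverse-proxy hostname means one prompt per browser session. A sketch, with hostnames and paths as placeholders (assumes mod_proxy and mod_proxy_http are loaded):

        <VirtualHost *:443>
            ServerName apps.example.com
            # One shared credential file guards everything behind the proxy
            <Location />
                AuthType Basic
                AuthName "Internal services"
                AuthUserFile /etc/apache2/htpasswd
                Require valid-user
            </Location>
            ProxyPass        /wiki/ http://wiki.internal:8080/
            ProxyPassReverse /wiki/ http://wiki.internal:8080/
            ProxyPass        /bugs/ http://bugs.internal:8081/
            ProxyPassReverse /bugs/ http://bugs.internal:8081/
        </VirtualHost>

    This doesn't solve the bug tracker's second, app-level login; that part needs the app to trust something forwarded from Apache, such as REMOTE_USER.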

    Read the article

  • Regular Windows 7 BSOD with Shrew VPN client

    - by Junto
    The Shrew VPN client appears to be a good alternative to the Cisco VPN client on x64 Windows 7. However, since installing it I've seen fairly regular BSODs. Minidump attached:

        Microsoft (R) Windows Debugger Version 6.11.0001.404 AMD64
        Copyright (c) Microsoft Corporation. All rights reserved.
        Loading Dump File [D:\Temp\020810-23431-01.dmp]
        Mini Kernel Dump File: Only registers and stack trace are available
        Symbol search path is: SRV*d:\symbols*http://msdl.microsoft.com/download/symbols
        Executable search path is:
        Windows 7 Kernel Version 7600 MP (8 procs) Free x64
        Product: WinNt, suite: TerminalServer SingleUserTS
        Built by: 7600.16385.amd64fre.win7_rtm.090713-1255
        Machine Name:
        Kernel base = 0xfffff800`0285f000 PsLoadedModuleList = 0xfffff800`02a9ce50
        Debug session time: Mon Feb 8 18:08:12.887 2010 (GMT+1)
        System Uptime: 0 days 7:52:06.120
        Loading Kernel Symbols
        Loading User Symbols
        Loading unloaded module list
        *******************************************************************************
        *                            Bugcheck Analysis                                *
        *******************************************************************************
        Use !analyze -v to get detailed debugging information.
        BugCheck A, {0, 2, 0, fffff800028d50b6}
        Unable to load image \SystemRoot\system32\DRIVERS\vfilter.sys, Win32 error 0n2
        *** WARNING: Unable to verify timestamp for vfilter.sys
        *** ERROR: Module load completed but symbols could not be loaded for vfilter.sys
        Probably caused by : vfilter.sys ( vfilter+29a6 )
        Followup: MachineOwner

    The machine is a brand spanking new Dell Precision T5500. Superuser appears to have several recommendations for the Shrew VPN client as an alternative to the Cisco VPN client on 64-bit machines, so I wondered if anyone here has seen this problem and possibly found a solution? I've decided to run the VPN client under Windows compatibility mode (Vista SP2) with administrator privileges for the moment, to see if it makes a difference. Oddly, the VPN doesn't necessarily need to be connected. I've noticed it when browsing using Google Chrome and Internet Explorer, usually when I open up a new tab. If it carries on I think I'll be forced to shell out the 120 EUR for the NCP client instead.
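
    The dump already fingers vfilter.sys (Shrew's packet filter driver), and a stop 0xA (IRQL_NOT_LESS_OR_EQUAL) in a network filter fits the "crashes while browsing, VPN not even connected" pattern, since the filter sees all traffic. A sketch for confirming the culprit before paying for a replacement client (Driver Verifier ships with Windows):

        rem Stress the suspect driver, then reboot
        verifier /standard /driver vfilter.sys
        rem If it bugchecks in vfilter.sys again the driver is confirmed; then turn verifier off
        verifier /reset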

    Read the article

  • Finding ALL currently used IP addresses of Website

    - by Patrick R
    What steps would you take to discover all (or close to all) IP addresses that are currently used by a website? How would you be as exhaustive as possible without calling a website admin and asking for the list of IP addresses? ;) nslookup works, but results will vary based on the DNS server queried. whois is another good tool. dig: not bad. Let's use Facebook as an example. I'm blocking that site for the majority of our company's users, but some are approved for "research". I cannot easily use OpenDNS because we all appear to come from the same request IP address; I could change that, but I don't want to add more VLANs than I already have. I could also block something like regex facebook1 "facebook\.com" (I'm running a Cisco firewall), but that's pretty easy to sidestep. All that being said, I'm asking specifically about finding IP addresses for a domain, not about other methods by which I can block a domain name.
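
    Two sketches in that direction. Repeatedly sampling several resolvers accumulates the round-robin/geo DNS answers, though it is never exhaustive:

        for ns in 8.8.8.8 208.67.222.222; do
            dig +short facebook.com @$ns
            dig +short www.facebook.com @$ns
        done | sort -u

    The closer-to-exhaustive route is the routing registry: large sites announce their own address space, so pulling the prefixes registered to the site's ASN (AS32934 is Facebook's) covers addresses DNS sampling will never show:

        whois -h whois.radb.net -- '-i origin AS32934' | awk '/^route:/ {print $2}'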

    Read the article

  • Optimal Networking Setup for a 2-Story unit?

    - by user29336
    I am moving into a four-bedroom, two-story unit, roughly 2,200 sq ft. I want the absolute maximum throughput possible at all focal points. We're all in internet-related industries; between gaming and web development, latency and throughput are major factors for us. Here are our main focal points:
    1) Garage (office), downstairs
    2) Each bedroom (x4), upstairs
    3) Living room, downstairs
    The fastest line we can get is Comcast 50 Mb down / 5 Mb up (Wideband). I am looking for the best way to achieve wireless and wired performance for our setup. Our gaming computers may be in our bedrooms, and we also may bring them down to the office every now and then for "LAN" sessions. Most wireless use will happen downstairs with our laptops, but since we may do LAN sessions, hard-wired latency may be important there too. My concerns: if we go wireless-only, there may be too much latency for gaming, and I don't know if placing one D-Link DGL-4500 on the top floor would be enough; that's the router I currently own (http://dlink.com/us/en/home-solutions/support/product/dgl-4500-xtreme-n-gaming-router). As far as I'm aware, wireless signals propagate best from the top down. Would this wireless router on the top floor be enough on its own? My second strategy was a combination of wiring and wireless, but I'm not sure of the easiest way to do this. This is a place we're renting, so I'm not sure how much leeway we have with wiring, but we're all pretty competent... if we can't drill through a wall we can probably "stitch" cables across the edges wherever needed. Thoughts on the optimal way to do this?

    Read the article

  • hosts file seems to be ignored

    - by z4y4ts
    I have an almost fresh Ubuntu desktop box. The OS was installed two weeks ago and updated from the karmic repositories. Last week I had no problems with DNS, but this week something changed. I'm not sure what or when, and I'm not sure whether I changed any configs. So now I have a really weird situation. According to the configs, name resolution should work normally. /etc/hosts:

        127.0.0.1 localhost test
        127.0.1.1 desktop

    /etc/host.conf:

        order hosts,bind
        multi on

    /etc/resolv.conf:

        # Generated by NetworkManager
        search ...          (search servers obtained via DHCP)
        nameserver 192.168.0.3

    /etc/nsswitch.conf:

        passwd: compat
        group: compat
        shadow: compat
        hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4
        networks: files
        protocols: db files
        services: db files
        ethers: db files
        rpc: db files
        netgroup: nis

    But in fact it is not. Pinging is OK:

        user@test ~$ ping test
        PING localhost (127.0.0.1) 56(84) bytes of data.
        [skip]

    But 'host' resolves through DNS instead:

        user@test ~$ host test
        test.mydomain.com has address xx.xxx.161.201

    I suspect that NetworkManager might cause this misbehavior, but I don't know where to start checking. Any thoughts or suggestions?
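
    One detail worth knowing before blaming NetworkManager: host (like dig) talks straight to the DNS servers in resolv.conf and never consults /etc/hosts, so its output above doesn't by itself prove the hosts file is being ignored. The NSS path that ping and ordinary applications use can be tested directly, as a quick check:

        # Resolves through nsswitch.conf (files first), so this should print 127.0.0.1
        getent hosts test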

    Read the article

  • Performance monitoring on Linux/Unix

    - by ervingsb
    I run a few Windows servers and (Debian and Ubuntu) Linux and AIX servers. I would like to continuously monitor performance on these systems in order to easily identify bottlenecks, as well as to have an overview of the general activity on the servers. On Windows, I use Windows Performance Monitor (perfmon) for this. I set up these counters:
    For bottlenecks:
    - Processor utilization: System\Processor Queue Length
    - Memory utilization: Memory\Pages Input/Sec
    - Disk utilization: PhysicalDisk\Current Disk Queue Length\driveletter
    - Network problems: Network Interface\Output Queue Length\nic name
    For general activity:
    - Processor utilization: Processor\% Processor Time_Total
    - Memory utilization: Process\Working Set_Total (or per specific process)
    - Memory utilization: Memory\Available MBytes
    - Disk utilization: PhysicalDisk\Bytes/sec_Total (or per process)
    - Network utilization: Network Interface\Bytes Total/Sec\nic name
    (More information on the choice of these counters at: http://itcookbook.net/blog/windows-perfmon-top-ten-counters)
    This works really well. It allows me to look in one place and identify the most common bottlenecks. So my question is: how can I do something equivalent (or just very similar) on Linux servers? I have looked a bit at nmon (http://www.ibm.com/developerworks/aix/library/au-analyze_aix/), a free performance monitoring tool developed for AIX but also available for Linux. However, I am not sure whether nmon lets me set up the above counters, perhaps because Linux and AIX do not expose exactly the same measures. If so, which ones should I choose, and why? If nmon is not the tool to use for this, what do you recommend?
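
    For orientation, most of those counters have close analogues in the standard sysstat tooling; a rough sketch of the mapping (assuming the sysstat package is installed):

        vmstat 5       # 'r' (runnable) ~ Processor Queue Length; 'si'/'so' ~ pages swapped in/out
        iostat -x 5    # 'avgqu-sz' ~ Current Disk Queue Length; '%util' per device
        sar -n DEV 5   # per-NIC rxkB/s and txkB/s ~ Bytes Total/sec
        free -m        # ~ Available MBytes

    nmon shows essentially the same data interactively and can log to a spreadsheet-importable file for graphing, which is the closest thing to a perfmon data collector set.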

    Read the article

  • Set up linux box for secure local hosting a-z

    - by microchasm
    I am in the process of reinstalling the OS on a machine that will be used to host a couple of apps for our business. The apps will be local only; access from external clients will be via VPN only. The prior setup used a hosting control panel (Plesk) for most of the admin, and I was looking at using another similar piece of software for the reinstall, but I figured I should finally learn how it all works. I can do most of the things the software would do for me, but am unclear on the symbiosis of it all. This is all an attempt to further distance myself from the land of Configuration Programmer/Programmer, if at all possible. I can't find a full walkthrough anywhere for what I'm looking for, so I thought I'd put up this question, and if people can help me on the way I will edit this with the answers and document my progress/pitfalls. Hopefully someday this will help someone down the line.

    The details:
    - CentOS 5.5 x86_64
    - httpd: Apache/2.2.3
    - mysql: 5.0.77 (to be upgraded)
    - php: 5.1 (to be upgraded)

    The requirements:
    - SECURITY!!
    - Secure file transfer
    - Secure client access (SSL certs and CA)
    - Secure data storage
    - Virtualhosts/multiple subdomains
    - Local email would be nice, but not critical

    The Steps:

    Download the latest CentOS DVD iso (torrent worked great for me).

    Install CentOS: While going through the install, I checked the Server Components option, thinking I was going to be using another Plesk-like admin. In hindsight, considering I've decided to try to go my own way, this probably wasn't the best idea.

    Basic config: Set up users, networking/IP address, etc. Yum update/upgrade.

    Upgrade PHP/MySQL: To upgrade PHP and MySQL to the latest versions, I had to look to another repo outside CentOS. IUS looks great and I'm happy I found it! Add the IUS repository to our package manager:

        cd /tmp
        wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm
        rpm -Uvh epel-release-1-1.ius.el5.noarch.rpm
        wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1-4.ius.el5.noarch.rpm
        rpm -Uvh ius-release-1-4.ius.el5.noarch.rpm
        yum list | grep -w \.ius\.   # list all the packages in the IUS repository; use this to find the PHP/MySQL versions and libraries you want to install

    Remove the old version of PHP and install the newer version from IUS:

        rpm -qa | grep php   # list all of the installed php packages we want to remove
        yum shell            # open an interactive yum shell
        remove php-common php-mysql php-cli              # remove installed PHP components
        install php53 php53-mysql php53-cli php53-common # add packages you want
        transaction solve    # important!! checks for dependencies
        transaction run      # important!! does the actual installation of packages
        [control+d]          # exit yum shell
        php -v
        PHP 5.3.2 (cli) (built: Apr 6 2010 18:13:45)

    Upgrade MySQL from the IUS repository:

        /etc/init.d/mysqld stop
        rpm -qa | grep mysql   # see installed mysql packages
        yum shell
        remove mysql mysql-server                     # remove installed MySQL components
        install mysql51 mysql51-server mysql51-devel
        transaction solve    # important!! checks for dependencies
        transaction run      # important!! does the actual installation of packages
        [control+d]          # exit yum shell
        service mysqld start
        mysql -v
        Server version: 5.1.42-ius Distributed by The IUS Community Project

    Upgrade instructions courtesy of the IUS wiki: http://wiki.iuscommunity.org/Doc/ClientUsageGuide

    Install rssh (restricted shell) to provide scp and sftp access without allowing ssh login:

        cd /tmp
        wget http://dag.wieers.com/rpm/packages/rssh/rssh-2.3.2-1.2.el5.rf.x86_64.rpm
        rpm -ivh rssh-2.3.2-1.2.el5.rf.x86_64.rpm
        useradd -m -d /home/dev -s /usr/bin/rssh dev
        passwd dev

    Edit /etc/rssh.conf to grant SFTP access to rssh users; uncomment or add:

        allowscp
        allowsftp

    This allows me to connect to the machine via the SFTP protocol in Transmit (my FTP program of choice; I'm sure it's similar with other FTP apps). rssh instructions appropriated (with appreciation!) from http://www.cyberciti.biz/tips/linux-unix-restrict-shell-access-with-rssh.html

    Set up virtual interfaces:

        ifconfig eth1:1 192.168.1.3 up   # start up the virtual interface
        cd /etc/sysconfig/network-scripts/
        cp ifcfg-eth1 ifcfg-eth1:1   # copy the default script and match the name to our virtual interface
        vi ifcfg-eth1:1              # modify the eth1:1 script so it looks like this:

        DEVICE=eth1:1
        IPADDR=192.168.1.3
        NETMASK=255.255.255.0
        NETWORK=192.168.1.0
        ONBOOT=yes
        NAME=eth1:1

    Add more virtual interfaces as needed by repeating. Because of the ONBOOT=yes line in the ifcfg-eth1:1 file, this interface will be brought up when the system boots or the network starts/restarts.

        service network restart
        Shutting down interface eth0:       [ OK ]
        Shutting down interface eth1:       [ OK ]
        Shutting down loopback interface:   [ OK ]
        Bringing up loopback interface:     [ OK ]
        Bringing up interface eth0:         [ OK ]
        Bringing up interface eth1:         [ OK ]
        ping 192.168.1.3
        64 bytes from 192.168.1.3: icmp_seq=1 ttl=64 time=0.105 ms

    Virtualhosts: In the rssh section above I added a user to use for SFTP. In this user's home directory, I created a folder called 'https'. This is where the documents for this site will live, so I need to add a virtualhost that points to it. I will use the above virtual interface for this site (herein called dev.site.local). Edit /etc/httpd/conf/httpd.conf and add the following to the end:

        <VirtualHost 192.168.1.3:80>
            ServerAdmin [email protected]
            DocumentRoot /home/dev/https
            ServerName dev.site.local
            ErrorLog /home/dev/logs/error_log
            TransferLog /home/dev/logs/access_log
        </VirtualHost>

    I put a dummy index.html file in the https directory just to check everything out. I tried browsing to it and was met with permission denied errors. The logs only gave an obscure reference to what was going on:

        [Mon May 17 14:57:11 2010] [error] [client 192.168.1.100] (13)Permission denied: access to /index.html denied

    I tried chmod 777 et al., but to no avail. Turns out, I needed to chmod +x the https directory and its parent directories:

        chmod +x /home
        chmod +x /home/dev
        chmod +x /home/dev/https

    This solved that problem.

    DNS: I'm handling DNS via our local Windows Server 2003 box. However, the CentOS documentation for BIND can be found here: http://www.centos.org/docs/5/html/Deployment_Guide-en-US/ch-bind.html

    SSL: To get SSL working, I changed the following in httpd.conf:

        NameVirtualHost 192.168.1.3:443   # make sure this line is in httpd.conf
        <VirtualHost 192.168.1.3:443>     # change port to 443
            ServerAdmin [email protected]
            DocumentRoot /home/dev/https
            ServerName dev.site.local
            ErrorLog /home/dev/logs/error_log
            TransferLog /home/dev/logs/access_log
        </VirtualHost>

    Unfortunately, I kept getting (Error code: ssl_error_rx_record_too_long) errors when trying to access a page with SSL. As JamesHannah gracefully pointed out below, I had not set up the locations of the certs in httpd.conf, and thus the page itself was being served to the browser as the cert, making the browser balk. So first, I needed to set up a CA and make certificate files. I found a great (if old) walkthrough on the process here: http://www.debian-administration.org/articles/284. Here are the relevant steps I took from that article:

        mkdir /home/CA
        cd /home/CA/
        mkdir newcerts private
        echo '01' > serial
        touch index.txt   # this and the above command set up the database that keeps track of certs

    Create an openssl.cnf file in the /home/CA/ dir and edit it per the walkthrough linked above. (For reference, my finished openssl.cnf file looked like this: http://pastebin.com/raw.php?i=hnZDij4T)

        openssl req -new -x509 -extensions v3_ca -keyout private/cakey.pem -out cacert.pem -days 3650 -config ./openssl.cnf
        # creates cacert.pem, which gets distributed and imported into the browser(s)

    Modify openssl.cnf again per the walkthrough instructions, then:

        openssl req -new -nodes -out dev.req.pem -config ./openssl.cnf
        # generates the certificate request, and key.pem, which I renamed dev.key.pem

    Modify openssl.cnf again per the walkthrough instructions, then:

        openssl ca -out dev.cert.pem -config ./openssl.cnf -infiles dev.req.pem   # create and sign the certificate
        cp dev.cert.pem /home/dev/certs/cert.pem
        cp dev.key.pem /home/dev/certs/key.pem

    I updated httpd.conf to reflect the certs and turn SSLEngine on:

        NameVirtualHost 192.168.1.3:443
        <VirtualHost 192.168.1.3:443>
            ServerAdmin [email protected]
            DocumentRoot /home/dev/https
            SSLEngine on
            SSLCertificateFile /home/dev/certs/cert.pem
            SSLCertificateKeyFile /home/dev/certs/key.pem
            ServerName dev.site.local
            ErrorLog /home/dev/logs/error_log
            TransferLog /home/dev/logs/access_log
        </VirtualHost>

    Put the CA cacert.pem in a web-accessible place, and downloaded/imported it into my browser. Now I can visit https://dev.site.local with no errors or warnings. And this is where I'm at. I will keep editing this as I make progress. Any tips on how to configure SSL email would be appreciated.
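
    On the closing question, a minimal sketch for TLS on the stock CentOS Postfix, reusing the CA workflow above (the paths are assumptions; a cert whose CN matches the mail hostname would be generated the same way as dev.cert.pem):

        postconf -e 'smtpd_tls_cert_file = /etc/postfix/smtpd-cert.pem'
        postconf -e 'smtpd_tls_key_file = /etc/postfix/smtpd-key.pem'
        postconf -e 'smtpd_use_tls = yes'
        service postfix restart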

    Read the article

  • Why is my global security group being filtered out of my logon token?

    - by Jay Michaud
    While investigating the effects of filtered tokens on my file permissions, I noticed that one of my global security groups is being filtered in addition to the regular system-defined filtered groups. My Active Directory environment is a single-domain forest on the Windows Server 2003 functional level. I'll call the domain "mydomain.example.com". I am logged onto a Windows Server 2008 Enterprise Edition machine (not a domain controller) as a member of the "MYDOMAIN\Domain Admins" group and the "MYDOMAIN\MySecurityGroup" global security group (among others). When I run "whoami /groups" from an elevated command prompt, I see the full list of groups to which my account belongs, as expected. When I run "whoami /groups" from a regular, non-elevated command prompt, I see the same list of groups, but the following groups are described as "Group used for deny only":
    1. BUILTIN\Administrators
    2. MYDOMAIN\Schema Admins
    3. MYDOMAIN\Offer Remote Assistance Helpers
    4. MYDOMAIN\MySecurityGroup
    Numbers 1 through 3 above are expected based on Microsoft documentation; number 4 is not. The "MYDOMAIN\MySecurityGroup" global security group is a group that I created. It contains three non-built-in global security groups, and those security groups contain only non-built-in user accounts. (That is, I created all of the accounts and groups that are members of the "MYDOMAIN\MySecurityGroup" global security group.) There are other, similar groups of which my account is a member that are not being filtered out of my logon token, and this group is not granted any specific user rights in the security settings of this computer or in Group Policy. What would cause this one group to be filtered out of my logon token?
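
    One avenue worth ruling out: UAC token filtering marks deny-only any group that ends up, even through nesting, inside an administrative or otherwise privileged group. Dumping the group's own memberships (rather than its members) shows whether it is nested somewhere unexpected; a quick check with the AD command-line tools:

        dsquery group -samid MySecurityGroup | dsget group -memberof -expand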

    Read the article

  • How could Load average numbers from 'htop' exceed 100% CPU utilization?

    - by Joe Huang
    I use 'htop' to monitor my web server. It has recently been quite loaded, and the load average is showing something like this:

        Load average: 3.10 2.56 1.63

    I searched the web about these numbers and found an article about them: http://blog.scoutapp.com/articles/2009/07/31/understanding-load-averages. The article says that if I have 2 CPUs, 2.0 means 100% CPU utilization. My VPS has two CPUs, so what does 3.1 mean? How could it exceed 100% CPU utilization? And from these numbers, does it mean I should be wary about the load now? The performance seems totally fine, though, and this is a managed VPS; the hosting company has not sent me any warning about it. During the daytime the load average always shows these high numbers... here is another snapshot taken while writing:

        Load average: 3.03 2.77 1.97
        Load average: 0.41 1.29 1.60   <---- 5 minutes later

    So I am wondering how much room is left for this site to grow with the current configuration? What kind of proactive actions should I take in advance? I don't want to wait until the server bursts. Thanks.
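
    For intuition: on Linux the load average counts runnable tasks plus tasks blocked in uninterruptible I/O, not CPU time, so it can exceed the core count while the CPUs still have headroom (for example, during disk waits). A quick way to compare the number against the machine, as a sketch:

        nproc               # number of CPUs; sustained load above this means tasks are queuing
        cat /proc/loadavg   # the same three numbers htop shows, plus running/total task counts
        vmstat 5            # 'r' column shows the run queue directly; 'wa' shows I/O wait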

    Read the article

  • SQL Server 2008 Web VS SQL Server 2008 Enterprise

    - by Jeremy
    I wrote an application a few months ago and was hosting it out of our offices on a workstation with an Intel Core 2 Quad Q8200 @ 2.33 GHz, 8 GB RAM, Windows Server 2008 Enterprise, and SQL Server 2008 Enterprise. Both the web server and database server ran on the same machine. We had a huge influx of traffic, moved to ClubUptime.com, and got two of their top-tier Windows VMs. The database server runs Windows 2008 R2 Standard and SQL Server 2008 R2 Web with 8 GB RAM and an Intel Xeon E5620 @ 2.40 GHz. Ever since switching, the database, which used to run at around 400 MB in RAM, now runs at around 4-7 GB, and there haven't been any changes to it (other than a couple of columns here and there). Our traffic has quadrupled, and our DB is 6 GB on disk, so why would SQL Server take up 7 GB if the DB is only 6? And why would it be storing the ENTIRE database in memory? Another thing: if the database grew 4 times in size, why did its memory footprint grow 12 times? Last question: why does the CPU peg at 100% now when it didn't before? The design is simple: VERY few joins, NO subqueries. I am just at a loss, unless it is the SQL Server edition, or the fact that I moved from real hardware to a VM.
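
    On the memory question specifically: by design, SQL Server caches as much of the database and query workspace in RAM as it is allowed to, and only releases it under OS memory pressure, so "whole DB in memory plus overhead" is normal behavior rather than a leak. The ceiling is configurable; a sketch (the 4096 MB figure is an example, not a recommendation):

        -- Cap the buffer pool so the VM's other processes keep some RAM
        EXEC sys.sp_configure N'show advanced options', 1;
        RECONFIGURE;
        EXEC sys.sp_configure N'max server memory (MB)', 4096;
        RECONFIGURE;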

    Read the article

  • Windows 7: Event 55 "The file system structure on the disk is corrupt and unusable"

    - by Radio
    Here is a real bad one! Windows 7 RTM with SP1 installed [Version 6.1.7601]. Recently I tried to delete a folder on my hard drive, and Windows prompted "Error 0x80070570: The file or directory is corrupted and unreadable", and at the same time logged an Event 55: "The file system structure on the disk is corrupt and unusable. Please run the chkdsk utility on \Device\HarddiskVolume2." I ran chkdsk, first with the /f option, then with the /r option. The result in both cases was: no errors found, 0 bad sectors. chkdsk found no problems at all! I've been through Stack Exchange and Google and have spent over 6 hours on this, and I still cannot figure out the problem. Re-installing/re-formatting is not an option! What did I try:
    - Hotfix Windows6.1-KB982927-x64.msu gave me an error about incompatibility with my computer, even though it totally matches my system. The CRC of the hotfix was OK.
    - The Windows Repair Console found startup errors and fixed them, but this didn't help the issue, even after running chkdsk c: /R from it.
    - http://support.microsoft.com/kb/246026 does not promise anything good.
    - sfc /scannow does not help either.
    - Replaced the hard drive by cloning the old one using True Image, then repeated all the steps above.
    At the same time, some minor glitches have started to appear in my Windows, like the side panel and notification area settings getting reset. The goal is to delete the folder and get rid of Event 55. Sounds like an NTFS bug. Please help. This is completely ridiculous.

    Read the article

  • HAProxy appears to be "randomly" switching between the ACL-matched backend and the default backend

    - by Xoor
    I have HAProxy acting as a proxy in front of:
    - an NGinx instance
    - an in-house load balancer in front of multiple dynamic services exposed with socket.io (websockets)
    My problem is that from time to time my connections are proxied correctly to my socket.io cluster, and then randomly HAProxy falls back to routing to NGinx, which is obviously annoying and meaningless since NGinx isn't meant to handle the request. This happens when requesting URLs of the format http://mydomain.com/backends/*, and there's an ACL in the HAProxy config to match the '/backends/*' path. Here's a simplified version of my HAProxy config (extra unrelated entries removed and names changed):

        global
            daemon
            maxconn 4096
            user haproxy
            group haproxy
            nbproc 4

        defaults
            mode http
            timeout server 86400000
            timeout connect 5000
            log global

        # this frontend interface receives the incoming http requests
        frontend http-in
            mode http
            # process all requests made on port 80
            bind *:80
            # set a large timeout for websockets
            timeout client 86400000
            # Default Backend
            default_backend www_backend
            # Loadfire (socket cluster)
            acl is_loadfire_backends path_beg /backends
            use_backend loadfire_backend if is_loadfire_backends

        # NGinx backend
        backend www_backend
            server www_nginx localhost:12346 maxconn 1024

        # Loadfire backend
        backend loadfire_backend
            option forwardfor   # This sets X-Forwarded-For
            option httpclose
            server loadfire localhost:7101 maxconn 2048

    It's really quite confusing to me why the behaviour appears to be "random", and since it's hard to reproduce, it's hard to debug. I'd appreciate any insight on this.
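
    One explanation that fits the randomness, offered as a hypothesis: with HTTP keep-alive, HAProxy 1.4-era versions choose a backend from the first request on a client connection and keep routing that connection there, so a /backends/* request sent over a connection that first fetched an ordinary page lands on nginx. Forcing per-request routing in the frontend is the usual test:

        frontend http-in
            option httpclose   # or 'option http-server-close'; makes each request hit the ACLs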

    Read the article

  • Adding new SPNs to existing service ids

    - by jmh
    We have a Tomcat server using Spring Security Kerberos to authenticate users to the webpage against Active Directory. There are around 25 domain controllers. The site has two CNAME-based DNS aliases. The site currently has one service ID, with SPNs registered for the DNS A record as well as each of the CNAMEs. While everything is working right now, I don't know how to reliably change this configuration without possible downtime. The reason is that clients cache Kerberos tickets (http://www.juniper.net/techpubs/en_US/uac4.2/topics/concept/user-role-active-directory-about.html):
    "The 'kerbtray.exe' program is helpful for viewing and deleting Kerberos tickets on the endpoint. Old tickets must be purged from the endpoint if SPNs are updated or passwords are changed (assuming the endpoint still has a cached copy of the ticket from a prior SPNEGO request to the MAG Series device). During testing, you should purge tickets before each authentication request."
    A description of the "klist" program used to inspect/delete cached tickets: http://technet.microsoft.com/en-us/library/hh134826.aspx
    So if each of the clients (users running Windows) who connect to my web server has Kerberos tickets that become invalid as soon as I update the SPNs or change passwords, how do I ensure changes are seamless? Are there any operations that can be done safely? I can't just ask all of the users to install klist and delete their old tickets.
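
    A point in favor of "adding is safe": cached service tickets only break when the ticket no longer decrypts (a password/keytab change) or the client requests an SPN that no longer exists; registering a brand-new SPN for a new alias doesn't invalidate anything already cached. A sketch of a low-risk sequence (alias and account names are placeholders):

        rem 1. Register the new SPN alongside the old ones (no client impact)
        setspn -s HTTP/newalias.mydomain.example.com MYDOMAIN\serviceaccount
        rem 2. Verify there are no duplicate SPNs before pointing users at the alias
        setspn -x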

    Read the article

  • How can I download a copy of an S3 public data set?

    - by tripleee
    I was naively assuming I could do something like

        s3cmd sync s3://snap-d203feb5 /var/tmp/copy

    but I seem to have the wrong idea of how to go about this. I cannot even get a simple thing to work:

        vnix$ s3cmd ls s3://snap-d203feb5
        Bucket 'snap-d203feb5':
        ERROR: Bucket 'snap-d203feb5' does not exist

    I guess the identifier I have is not for a "bucket" but for a "public data set". How do I go from one to the other? Do I have to start up an EC2 instance and create a bucket for this? How? The instructions at http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/using-public-data-sets.html seem to assume I want to use the data in an EC2 instance, but in this case I'd just like to browse a bit, at least for a start. By the by, copy/pasting the "US Snapshot ID" causes a nasty traceback from Python; they publish the ID with a weird Unicode (I presume) dash which cannot be directly copy/pasted. Is there a mistake when I copy it? And what's the significance of "US" in there? Can't I use the data outside North America?
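
    For orientation: an identifier of the form snap-xxxxxxxx is an EBS snapshot, not an S3 bucket, which is why s3cmd can't see it; and snapshots live in a specific region, which is what the "US" designates (the data can be used from anywhere, but the volume has to be created in a region that holds the snapshot). A sketch of the usual route, using the modern CLI with example zone, IDs, and device names:

        # Turn the public snapshot into a volume, then attach it to a running instance
        aws ec2 create-volume --snapshot-id snap-d203feb5 --availability-zone us-east-1a
        aws ec2 attach-volume --volume-id vol-XXXXXXXX --instance-id i-XXXXXXXX --device /dev/sdf
        # On the instance: mount and browse (the kernel may expose the device as /dev/xvdf)
        sudo mount /dev/xvdf /mnt/dataset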

    Read the article

  • Using Confluence with virtual hosts and mod_proxy

    - by Marcus
    Hi @all, I have installed the latest version of Confluence on a test server and configured Apache with AJP. But I have a problem: when I log in to Confluence, I get the following error message:

        Not Found
        The requested URL / / homepage.action was not found on this server.

    The problem seems to be known; I found the following link: http://confluence.atlassian.com/display/DOC/Using+Apache+with+virtual+hosts+and+mod_proxy. But unfortunately the forwards have not helped; I still get the error messages. Does anyone have any idea how I could solve the problem? This is the Apache configuration I have set up:

        LoadModule proxy_module /usr/lib/apache2/modules/mod_proxy.so
        LoadModule proxy_http_module /usr/lib/apache2/modules/mod_proxy_http.so
        LoadModule proxy_ajp_module /usr/lib/apache2/modules/mod_proxy_ajp.so

        <IfModule proxy_http_module>
            ProxyRequests Off
            ProxyPreserveHost On
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            <Location />
                Order allow,deny
                Allow from all
            </Location>
            ProxyPass / http://localhost:8080/
            ProxyPassReverse / http://localhost:8080/
        </IfModule>
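
    One pattern from the linked Atlassian page that tends to avoid mangled redirects like the "/ / homepage.action" above, offered as a sketch: give Confluence an explicit context path and proxy to the identical path, so the redirects Tomcat generates match what Apache expects (assumes the Context path in Confluence's Tomcat server.xml is set to /confluence):

        ProxyPass /confluence http://localhost:8080/confluence
        ProxyPassReverse /confluence http://localhost:8080/confluence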

    Read the article

  • Any Recommendations for a Web Based Large File Transfer System?

    - by Glen Richards
    I'm looking for a server software product that:
    - Allows my users to share large files with:
      - the general public
      - securely, with 1 or more people (notification via email, optionally with a token that gives them x period of time to download)
    - Allows anyone in the general public to share files with my users, perhaps by invitation.
    - Is user-friendly enough to allow my users to use it without having to bug me as the admin.
    - Is a system that we can install on our own server (we don't want shared data sitting on anyone else's server).
    - Is a web-based solution; using some kind of secure comms channel would be good too, e.g., ssh.
    - Handles files over 1 GB.
    I found the question below; WebDAV does not sound user-friendly enough: http://serverfault.com/questions/86878/recommendations-for-a-secure-and-simple-dropbox-system
    I've done a lot of searching, but I can't get the search terms right. There are too many services that provide this, but I want something we can install on our own server. A last resort would be to roll my own. Any ideas appreciated. Glen
    EDIT: Sorry Tom and Jeff, but Glen specifically says that he's looking for a 'product', so given that I specialise in this field, I thought that my expertise in this area may have been of use to him. I don't see how him writing services is going to be easy for him to maintain going forward (large IT admin overhead) or simple for his users and the general public to work with.

    Read the article

  • Windows 8.1 - Won't boot (0xc000000f), bcdedit fails

    - by user3014097
    I’m pretty much stuck at this point. So, backstory: Windows was installed on one of the SSDs currently in my tower. I bought a new SSD to install Windows (8.1 64 bit) on. Windows installation went fine, booted up, and formatted the old SSD from within Windows (this seems to have been a mistake, but I didn’t realize that at the time). Despite formatting the old SSD, whenever I tried to boot I was told that there were 2 Windows installations. Apparently, when I formatted the old drive, not all of the partitions were removed. So, I booted up with the repair utility, went into cmd, and deleted the non-primary partitions on the old SSD (there were 2 – think they were system and recovery, although I’m forgetting now). Reboot – computer won’t boot. Getting the 0xc000000f “The boot selection failed because a required device is inaccessible” error. Troubleshooting so far: Automatic repair doesn’t fix anything (I’ve never had luck with it though) If I go to install a new version of Windows, the drives and partitions are all there. The SSD is functioning, I at least know that. I’ve essentially gone through this guide: https://neosmart.net/wiki/recovering-windows-bootloader/ Unfortunately, I’m not getting anywhere. I’m not even entirely sure how to describe the errors I’m getting, so I’ve just included pictures of every step (I can't actually post them though so I just included a photobucket link). http://s319.photobucket.com/user/DGalt11/library/Computer%20Issue Am I completely screwed here (i.e. reformat and reinstall?)? Thanks

    Read the article

  • Problems with connecting Thunderbird client to dovecot installed on Ubuntu

    - by Michael Omer
    I am trying to connect a Thunderbird client to my Dovecot server, which is installed on Ubuntu. I know that my server works (at least partially), since when I send a mail to a user on the server ([email protected]), I see the new file created in /home/feedback/Maildir/new. However, when I try to connect with Thunderbird to the server, it recognizes the server but informs me that my user/password is wrong (they are not wrong). The exact message is: "Configuration could not be verified - is the username or password wrong?" The server configuration it tries to connect to is: incoming - IMAP 143, outgoing - SMTP 587. The Dovecot configuration file is located here: dovecot.conf. My PAM configuration is:

        @include common-auth
        @include common-account
        @include common-session

    In the log, I see:

        May 23 06:07:20 misfortune dovecot: imap-login: Disconnected (no auth attempts): rip=77.126.236.118, lip=184.106.69.153

    dovecot -n gives me:

        log_timestamp: %Y-%m-%d %H:%M:%S
        protocols: pop3 pop3s imap imaps
        ssl: no
        login_dir: /var/run/dovecot/login
        login_executable(default): /usr/lib/dovecot/imap-login
        login_executable(imap): /usr/lib/dovecot/imap-login
        login_executable(pop3): /usr/lib/dovecot/pop3-login
        mail_privileged_group: mail
        mail_location: maildir:~/Maildir
        mbox_write_locks: fcntl dotlock
        mail_executable(default): /usr/lib/dovecot/imap
        mail_executable(imap): /usr/lib/dovecot/imap
        mail_executable(pop3): /usr/lib/dovecot/pop3
        mail_plugin_dir(default): /usr/lib/dovecot/modules/imap
        mail_plugin_dir(imap): /usr/lib/dovecot/modules/imap
        mail_plugin_dir(pop3): /usr/lib/dovecot/modules/pop3
        imap_client_workarounds(default): tb-extra-mailbox-sep
        imap_client_workarounds(imap): tb-extra-mailbox-sep
        imap_client_workarounds(pop3):
        auth default:
          passdb:
            driver: pam
          userdb:
            driver: passwd
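
    The log line is the clue: "no auth attempts" means Thunderbird never even sent a password. With ssl: no, Dovecot's default disable_plaintext_auth=yes refuses plaintext logins from non-localhost clients, and the client reports that as a bad username/password. Two sketches of a way forward in dovecot.conf (option names are the dovecot 1.2-style ones matching the output above; the second option is for testing only, as it sends passwords in the clear):

        # Preferred: enable TLS so port 143 can offer STARTTLS
        ssl = yes
        ssl_cert_file = /etc/ssl/certs/dovecot.pem
        ssl_key_file = /etc/ssl/private/dovecot.pem

        # Or, insecure but quick for a test:
        disable_plaintext_auth = no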

    Read the article

  • Azure's Ubuntu 12.04 fails to install PHP5

    - by Alex Kennberg
    Similar to this article from Azure themselves (http://www.windowsazure.com/en-us/manage/linux/common-tasks/install-lamp-stack/), I am trying to install PHP5 on an Ubuntu 12.04 virtual machine. However, it fails installing the ssl-cert package:

        $ sudo apt-get install php5
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        php5 is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 49 not upgraded.
        1 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Setting up ssl-cert (1.0.28) ...
        Could not create certificate. Openssl output was:
        Generating a 2048 bit RSA private key
        writing new private key to '/etc/ssl/private/ssl-cert-snakeoil.key'
        -----
        problems making Certificate Request
        140320238503584:error:0D07A097:asn1 encoding routines:ASN1_mbstring_ncopy:string too long:a_mbstr.c:154:maxsize=64
        dpkg: error processing ssl-cert (--configure):
        subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
        ssl-cert
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Any tips appreciated.
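
    The openssl error decodes to: the certificate's CN field is limited to 64 characters, and the snakeoil cert uses the machine's FQDN, so an over-long Azure-generated hostname breaks it. A sketch of the workaround (the short hostname below is an example):

        # Give the box a short hostname, regenerate the cert, then finish the install
        sudo hostname shortname
        sudo make-ssl-cert generate-default-snakeoil --force-overwrite
        sudo apt-get install -f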

    Read the article

  • Windows PE network setup

    - by microchasm
    I'm walking through the following step-by-step guide for deploying Windows 7 via the AIK: http://technet.microsoft.com/en-us/library/dd349348%28WS.10%29.aspx
    On step 4 (Capturing the Installation onto a Network Share), I run into a bit of a snag: attempting to connect to a network drive repeatedly fails. I'm using/deploying Dell OptiPlex 380 64-bit machines, and the network cards seem to be really wonky. On the machine that I'm using to run the AIK etc., the network driver wasn't found automatically; I had to go in and install the driver manually (it was found on the OEM installation media). I've since copied it to the USB key that I'm using for the Autounattend.xml, so it's handy. I think that because of this, the PE environment doesn't or can't instantiate the network device. Is there a way to install/configure the network device through the command prompt in PE? If not, I read about adding the path(s) to drivers in the answer file, but if I did it that way, would I have to start the process all over again (i.e., create a new Autounattend.xml with the PnPcustomizations path included, re-run the installation on the reference machine, install all the applications, re-make the PE iso, and reboot into the new PE iso)? Any shortcuts, direction, or advice would be appreciated. Thanks!
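
    On the "from the command prompt in PE" question: WinPE can load a NIC driver at runtime without rebuilding the image, which would skip the whole re-capture cycle. A sketch, with the .inf path on the USB key as a placeholder:

        rem Load the OEM NIC driver into the running PE session
        drvload E:\drivers\nic\netdriver.inf
        rem Re-run PE's network initialization, then map the share
        wpeinit
        net use Z: \\server\share /user:DOMAIN\user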

    Read the article

  • Permissions Required for Sharepoint Backups

    - by Wyatt Barnett
    We are in the process of rolling out an extranet for some of our partners, using WSS 3.0 as the platform. We already use it internally for a variety of things, and we use the following PowerShell script to back up the server:

        param( $url="http://localhost", $backupFolder="c:\" )

        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
        $site = new-Object Microsoft.SharePoint.SPSite($url)
        $names = $site.WebApplication.Sites.Names
        foreach ($name in $names) {
            $n2 = ""
            if ($name.Length -eq 0) {
                $n2 = "ROOT"
            } else {
                $n2 = $name
            }
            $tmp = $n2.Replace("/", "_") + ".sbk"
            $saveas = ""
            if ($backupFolder.Length -eq 0) {
                $saveas = $tmp
            } else {
                $saveas = join-path -path $backupFolder -childPath $tmp
            }
            $site.WebApplication.Sites.Backup($name, $saveas, "true")
            write-host "$n2 backed up to $saveas."
        }

    This script works perfectly on the current installation, running as our domain backup user. On the new box, it fails when run as the backup user, claiming "The web application located at http://extranet/ could not be found." That URL does, in fact, work, so I'm fairly certain it isn't anything that dumb; rather, it is some permissions issue, especially because the script works perfectly when executed from my security context. I have tried making the backup user a farm owner, as well as adding him to the various site collection admin groups on the extranet. The one major difference between the extranet and the intranet server is that the extranet has an alternate access mapping (for https://xnet.example.com) and also uses forms authentication for that mapping. Anyhow, what permissions (or other voodoo) do I need to set up to get this script to work properly?
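
    Worth noting as a hypothesis: "web application could not be found" from the object model is usually thrown when the account lacks rights on the configuration/content databases, not when the URL is wrong, so the fix tends to be SQL-side (for example, db_owner on the content databases) rather than site-collection groups. A quick probe under that account, using a tool that hits the same object model as the script:

        # Run while logged on as the backup user
        stsadm -o backup -url http://extranet -filename c:\extranet.sbk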

    Read the article
