Search Results

Search found 6069 results on 243 pages for 'tvlife admin'.


  • Java web app deployment and ControlTier adoption

    - by Ran
    I've been searching for a configuration and deployment manager for my Java/Linux-based web service and have been looking mainly at ControlTier (http://controltier.org). We operate at a medium scale (hundreds of hosts, multiple DCs, dozens of services). There seem to be plenty of lower-level system admin tools such as Chef, Puppet, CFEngine, Bcfg2 and more; the reason I call them "low level" is that they are great for system-level administration tasks such as setting up a mount, file permissions, users and so on, but they aren't designed, for example, for Java deployments, which usually come with a build process and special Java semantics. Any tool can be bent to do almost anything, but if it wasn't designed for the task it gets uncomfortable. ControlTier, on the other hand, seems to have been designed exactly for Java application deployments - at least that's what all the tutorials on their site demonstrate. Here's the problem: the wiki at http://controltier.org/wiki/ is pretty good and stuffed with examples, and the company behind the open source CT product is very responsive (pushy, even), yet I have not seen any material from third-party users on the net. No success stories, no detailed blog posts, no best practices, no cheat sheets, not even hate mail - nothing. This reflects badly on DTO Solutions, CT's sponsor, for two reasons: first, it makes me wonder what explains the poor adoption, and second, what do I do if I get stuck, there's no help page on CT's wiki and the mailing list is too slow to answer? I'd be stuck with a "free" product that a consultancy is pushing. So my question: I'd be interested in hearing whether anyone has real-world experience with CT for Java-based web app deployments and would give the product a thumbs up. Any other comments that may enlighten me are of course welcome.

    Read the article

  • Painfully slow login to AD bound Mac OS X Leopard machine when off home network

    - by GeeBee
    Dear all, just looking for a little help with this problem, which seems to trip a lot of people up and is causing me no end of grief. I have a number of fully patched OS X Leopard machines that are bound to my AD (Server 2003). On the home network, logging in is swift and works as expected. When users take the machines off site, login can take 5 minutes or more: the user enters correct credentials but the desktop does not appear for a very long time. Outside the office, I have tried logging in with a local admin account, switching off AirPort and then logging in with an AD account; in that situation login is immediate again. It seems as if Leopard finds a suitable wireless network, spends far too long looking for the domain, and only eventually gives up and uses the cached credentials instead. I have read that disabling Bonjour on the machine will stop this problem (I have not yet tested it): http://www.macwindows.com/leopardAD.html#111607z ...but I am reluctant to use this "solution" as I would like to keep using Bonjour on the local network as well as having AD-bound machines. Is disabling Bonjour really the only answer? Is there not some timeout setting somewhere that could be amended to stop Leopard spending forever looking for home? Any help would be very gratefully received. Thanks, Gordon

    Read the article

  • Installing WinPcap on Windows 8

    - by Dave Robinson
    I know there has been lots of discussion already about installing WinPcap on Windows 8. I'm running the RTM version. I was able to install WinPcap without a hitch by using Windows 7 compatibility mode. Since then, I've noticed that WinPcap has stopped running and is actually no longer even installed. I tried installing it again, but now it keeps telling me that WinPcap does not work with my version of Windows; compatibility modes and admin privileges make no difference. The only thing I remember doing to my system was installing about 900 MB of Windows updates. Does anyone have any ideas about what I might do to get WinPcap installed? I've already ensured that the compatibility mode settings I changed are in effect for all users, and that "run this program as an administrator" is checked on the Compatibility tab for all users. I've also tried installing WinPcap 4.1.2 and 4.1.1, with no success with either.

    Read the article

  • Cannot connect with Cisco VPN but can connect with ShrewSoft VPN

    - by rodey
    EDIT: We connected an air card to the computer to use a different Internet connection and, using the Cisco software, we were able to successfully connect to our VPN server. I just don't understand why the ShrewSoft VPN client would connect over the original connection but the Cisco client won't. I'm not our network admin, so sorry if I butcher some of the terminology. I have a computer at a remote site that connects to our network through the Cisco VPN software. The problem is that this computer cannot connect to our VPN because it gets the error "Reason 412: The remote peer is no longer responding." To see if something on their network was blocking the connection, I installed the ShrewSoft VPN client on the computer, imported our .pcf file, and connected with no problem. I have tried two different versions of the Cisco VPN software (4.8.0.* and 5.0.03.*) and have the same problem with both. I installed Wireshark on the computer and confirmed (while trying to connect through Cisco) that the computer is trying to contact the VPN server but is not receiving a response. We are not having any other problems with users being unable to connect. I'm at a loss at what else to check. I'll be monitoring this and have access to the computer at any time.

    Read the article

  • Cannot increase Datastore

    - by k4w4zz
    Hello, we have an ESX 4.0 cluster with 2 hosts and EMC CLARiiON SAN storage with 10 LUNs. We have added 2 new 400 GB LUNs, and all the LUNs are visible from both hosts. I have extended an existing 500 GB datastore with one of these 400 GB LUNs - the new datastore size is now 900 GB. I'd like to do the same operation with the second 400 GB LUN to extend another existing datastore, but I'm not able to: the LUN is available to create a brand new datastore but is not visible when extending an existing one. I don't understand why everything was fine with the first LUN and why I can't do the same exact operation with this one. The result is the same on both hosts. The SAN admin has erased and re-created this LUN several times, and I have rescanned the HBAs each time. Attached you can find the output of the esxcfg-mpath -l and fdisk -l commands on both servers. Does anybody have an idea, please?
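    A few service-console commands that may help compare how each host sees the troublesome LUN (a hedged sketch; the vmhba number is a placeholder, and whether any of these reveals the cause is an assumption):

      # Rescan the HBA and list every SCSI device the host now sees
      esxcfg-rescan vmhba1
      esxcfg-scsidevs -c

      # List VMFS volumes the host considers snapshots/unresolved - a LUN flagged here
      # behaves differently in the datastore wizards than a clean one
      esxcfg-volume -l

    Comparing that output (and the esxcfg-mpath -l you already gathered) between the LUN that extended cleanly and the one that won't may show where the two differ.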

    Read the article

  • .htaccess rewrite rule to ignore a directory

    - by Kirk Strobeck
    I am running a Symphony installation out of the directory symphony, but I want to remove that word from the URL in specific cases. When a user visits http://domain.com/demo it should go to http://domain.com/symphony/demo, because I've added a specific rule for demo. If I haven't added a specific rule for demo in the .htaccess, then it should resolve to http://domain.com/demo as typed, which will route it to another part of our app. Here is my current rewrite rule:

      ### Symphony 2.3.x ###
      Options +FollowSymlinks -Indexes

      <IfModule mod_rewrite.c>
        RewriteEngine on
        RewriteBase /

        ### SECURITY - Protect crucial files
        RewriteRule ^manifest/(.*)$ - [F]
        RewriteRule ^workspace/(pages|utilities)/(.*)\.xsl$ - [F]
        RewriteRule ^(.*)\.sql$ - [F]
        RewriteRule (^|/)\. - [F]

        ### DO NOT APPLY RULES WHEN REQUESTING "favicon.ico"
        RewriteCond %{REQUEST_FILENAME} favicon.ico [NC]
        RewriteRule .* - [S=14]

        ### IMAGE RULES
        RewriteRule ^image\/(.+\.(jpg|gif|jpeg|png|bmp))$ extensions/jit_image_manipulation/lib/image.php?param=$1 [B,L,NC]

        ### CHECK FOR TRAILING SLASH - Will ignore files
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_URI} !/$
        RewriteCond %{REQUEST_URI} !(.*)/$
        RewriteRule ^(.*)$ $1/ [L,R=301]

        ### URL Correction
        RewriteRule ^(symphony/)?index.php(/.*/?) $1$2 [NC]

        ### ADMIN REWRITE
        RewriteRule ^symphony\/?$ index.php?mode=administration&%{QUERY_STRING} [NC,L]
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^symphony(\/(.*\/?))?$ index.php?symphony-page=$1&mode=administration&%{QUERY_STRING} [NC,L]

        ### FRONTEND REWRITE - Will ignore files and folders
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*\/?)$ index.php?symphony-page=$1&%{QUERY_STRING} [L]
      </IfModule>
      ######

    How would I change the rewrite rule to support those cases?
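    One possible approach (a minimal sketch, not the official Symphony rules: it assumes mod_rewrite, that the rewrite happens in an .htaccess at the document root, and that you maintain an explicit whitelist of pages - "demo" and "another-page" are just illustrative names):

      # Hypothetical whitelist: only these paths are routed into the symphony directory
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(demo|another-page)(/.*)?$ symphony/$1$2 [L]

    Anything not on the whitelist never matches this rule and falls through to whatever handles the rest of the app, which matches the "resolve to /demo as typed" behaviour described above. Each new Symphony page would need to be added to the alternation (or the list could be generated).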

    Read the article

  • no mails routed to/from new Exchange 2010

    - by Michael
    I have an Exchange Server 2003 that has been up and running for years. Now I am in the middle of a transition to Exchange Server 2010: I have already installed it, put the latest service pack on it, and everything seems fine, BUT mails do not get delivered to mailboxes on the new Exchange 2010. For example, when I create a new mailbox on the old server, email in and out of it works like a charm, but as soon as I move it to the new server, emails get stuck: nothing is delivered from outside or from old mailboxes, and nothing is sent out from the new server to anywhere. Sending between mailboxes on the new server does work. I can see the connectors between the old and new server in the Exchange 2003 admin tool, but I cannot find them anywhere on the new server. I have also set up send connectors on the new server to send out mail directly, but that does not work either. In all other areas the servers work together perfectly - moving mailboxes between them, seeing each other, etc. - they "just" don't exchange (!) any emails. Any ideas what I missed? I also followed the hints from "Upgrading from Exchange 2003 to Exchange 2010, routing works in one direction only"; there, emails were transported at least in one direction, whereas in my case they are not transported at all. Both my connectors are up and valid and have the correct source/target shown by Get-RoutingGroupConnector | FL. Kind regards, Michael
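    A few Exchange Management Shell checks that may help localize where the mail is getting stuck (a hedged sketch; these are standard Exchange 2010 cmdlets, run on the new server):

      # Show messages queued on the 2010 Hub Transport and why they are not moving
      Get-Queue | Format-List Identity,DeliveryType,Status,MessageCount,NextHopDomain,LastError

      # Verify the routing group connector endpoints between the 2003 and 2010 routing groups
      Get-RoutingGroupConnector | Format-List Name,SourceTransportServers,TargetTransportServers,Cost

      # List send connectors and their address spaces / source servers
      Get-SendConnector | Format-List Name,AddressSpaces,SourceTransportServers,Enabled

    If messages for the new mailboxes sit in a queue with a permanent LastError, that error text usually narrows the problem down considerably.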

    Read the article

  • Why would Windows Task Scheduler spawn multiple instances of the same task that run into each other?

    - by swagner88
    Overview: I use Windows Task Scheduler to run automated tasks. Occasionally I see that a task has randomly failed to perform its duties. When I check Task Scheduler's history log, I see that for some reason, when a task is triggered on its schedule, it spawns several instances of itself simultaneously, which turns into a train wreck: the task either kills the other instances and tries to run the "first" one, or it does not run at all because it believes another instance of itself is already running. Sometimes this occurs repeatedly in the same tasks, and occasionally it happens with others. The fix is to end all instances and start the task manually. Question: why would one single task with one single schedule spawn multiple instances of itself simultaneously? Note: I've got a separate user account set up to run the tasks instead of myself. That user is indeed an admin on the machine that runs the tasks, and the tasks are set to run whether or not the user is logged on. Also, the machine is Windows Server 2008 R2.
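    If the extra instances turn out to be legitimate re-triggers (for example a repetition interval combined with "run task as soon as possible after a scheduled start is missed"), one way to make the behaviour explicit is the task's multiple-instances policy. A hedged sketch using schtasks and the task XML ("Nightly Job" is a placeholder task name):

      rem Export the task definition and inspect its Settings section
      schtasks /Query /TN "Nightly Job" /XML > nightly-job.xml

      rem In the XML, set <MultipleInstancesPolicy>IgnoreNew</MultipleInstancesPolicy>
      rem (other values: Parallel, Queue, StopExisting), then re-import:
      schtasks /Create /TN "Nightly Job" /XML nightly-job.xml /F

    This does not explain the duplicate triggers by itself, but it stops them from colliding while you compare the task's triggers and "missed start" settings against the history log.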

    Read the article

  • dhclient requests filling memory?

    - by shanethehat
    Dammit Jim, I'm a web developer, not a sys admin. With that out of the way: my client has a CentOS (6.2) server that is only serving a single Magento site (and the associated MySQL server), and it is frequently running out of memory, despite the site currently being open to only 5 users. I'm investigating the logs to try to figure out why the memory usage is so high, but I don't really know what I'm looking at. It seems that there are a lot of entries in /var/log/messages concerning DHCP requests, approximately one every 15 seconds, that look like this:

      Apr 7 14:23:06 s15940039 dhclient[815]: DHCPREQUEST on eth0 to 172.30.102.85 port 67 (xid=0x6b5cd2a7)

    Is this normal? I don't see anything else in there that I don't recognise, but then I'm not sure I'd know the problem if I did see it. 4 days ago the server ran out of memory completely and locked up, requiring a restart. The DHCP messages did not start up again for 23 hours, but then carried on as before. I have read this question, which describes the same issue, but in my case a fresh DHCP lease never seems to be issued. Is this something I should push back to the hosting provider, or have I not yet found the source of the memory problem?
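    A repeating DHCPREQUEST every 15 seconds usually just means the client never gets a DHCPACK; on its own it is unlikely to account for significant memory. A hedged sketch of commands to separate the two symptoms (the lease file path follows the usual CentOS 6 layout and may differ on your box):

      # What lease, if any, dhclient last obtained for eth0
      cat /var/lib/dhclient/dhclient-eth0.leases

      # Who is actually using the memory right now
      free -m
      ps aux --sort=-rss | head -15

      # How often the requests really occur
      grep -c DHCPREQUEST /var/log/messages

    If the top of the ps output is dominated by httpd/php or mysqld rather than dhclient, the DHCP noise and the memory exhaustion are probably separate issues, and the unanswered DHCPREQUESTs are worth raising with the hosting provider.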

    Read the article

  • smbclient timing out

    - by Sam Lee
    I am trying to set up a Samba share on a CentOS machine. I want to connect to this server using smbclient on OS X. Here is what happens:

      > smbclient -L X.X.X.X
      timeout connecting to X.X.X.X:445
      timeout connecting to X.X.X.X:139
      Error connecting to X.X.X.X (Operation already in progress)
      Connection to X.X.X.X failed

    What could be going wrong? Here is my iptables dump on the CentOS machine (the server):

      > iptables -L -n
      Chain INPUT (policy ACCEPT)
      target     prot opt source               destination
      ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
      REJECT     all  --  0.0.0.0/0            127.0.0.0/8         reject-with icmp-port-unreachable
      ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           state RELATED,ESTABLISHED
      ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:445
      ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:3000
      ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:80
      ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:443
      ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:22
      ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0           icmp type 8
      REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-port-unreachable
      ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:3000

    And finally, my smb.conf:

      [global]
      workgroup = workgroup
      security = SHARE
      load printers = No
      default service = global
      path = /home
      available = No
      encrypt passwords = yes

      [share]
      writeable = yes
      admin users = myusername
      path = /home/myhome/
      force user = root
      valid users = myusername
      public = yes
      available = yes
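    The iptables dump accepts TCP 445 but has no rules for TCP 139 or UDP 137-138 ahead of the final REJECT, so 139 is blocked on the server regardless of any other problem. A hedged sketch of things to try on the CentOS side (the rule position is illustrative; adjust it to where your REJECT sits):

      # Open the remaining NetBIOS/SMB ports ahead of the catch-all REJECT
      iptables -I INPUT 5 -p tcp --dport 139 -j ACCEPT
      iptables -I INPUT 5 -p udp --dport 137:138 -j ACCEPT
      service iptables save

      # Sanity-check smb.conf and confirm smbd/nmbd are actually listening
      testparm -s
      netstat -tlnp | grep -E ':(139|445)'
      smbclient -L localhost

    Note that the client reports a timeout on 445 even though 445 is accepted, and a REJECT would normally produce a refusal rather than a timeout - so it is also worth checking whether something upstream (a hosting firewall, for instance) filters SMB before the packets ever reach this host.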

    Read the article

  • Setup Version Control on Dreamweaver

    - by John Isaacks
    I have a Windows computer on the network called WIN2K8FS1. I have TortoiseSVN on a Windows computer, and when I go to check out a repository with Tortoise it asks me for the URL of the repository. I put in file://WIN2K8FS1/Media/SVN_repo and it creates the working copy. I am now trying to set up Dreamweaver CS5 to work with Subversion. I create a new site, go to the Version Control tab, and it asks for a lot of info. First is Access: I choose Subversion, since that is the only option. Second is Protocol: not sure which I need, so I go with HTTP. Third is Server Address: I am assuming this is the name of the computer with the repository, so I put in \\WIN2K8FS1\. Fourth is Repository Path: I put in /Media/SVN_repo. Fifth is Port, which I leave at the default of 80. Then it asks for a username and password. I never set one up for anything, so I put in my domain username and password. I click Test and it tells me: "Server and project are not accessible!" I am not sure what I am doing wrong. I am not the server admin, but I did create the repository and have access to it via Tortoise. So I am not sure what I am doing wrong in Dreamweaver.
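    Dreamweaver's Subversion support only speaks the network protocols (HTTP/HTTPS/SVN), so a repository that has only ever been reached via file:// needs something serving it. A hedged sketch of the lightweight option, svnserve, run on the file server (it assumes the Media share corresponds to C:\Media on that machine; adjust to the real folder):

      rem On WIN2K8FS1, serve every repository under the Media folder on the default port 3690
      svnserve -d -r C:\Media

    Dreamweaver would then be pointed at protocol SVN, server address WIN2K8FS1, repository path /SVN_repo and port 3690, with a user defined in the repository's conf\svnserve.conf and passwd files rather than a domain account. Choosing HTTP instead would require Apache with mod_dav_svn in front of the repository.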

    Read the article

  • Configuring https access on HP A5120 Switch

    - by GerryEgan
    I am trying to configure HTTPS management on an HP A5120 switch running version 5.20.99, Release 2215, and not having much luck. I have followed the manual by creating an SSL policy first and then enabling the HTTPS server with that SSL policy:

      ssl server-policy sslpol
      ip https ssl-server-policy sslpol
      ip https enable

    When I try to log onto the switch with Google Chrome I get the following error: "Error 107 (net::ERR_SSL_PROTOCOL_ERROR): SSL protocol error." When I look this up I find references to errors caused by TLS being used with SSL, but I can find no way to specify the SSL version in the server policy. The manual has a configuration example that uses MSCEP to retrieve a certificate, but in Windows 2008 R2 that feature is only available in the Enterprise and Datacenter editions, which I don't have. I have SSH configured and it is using a locally generated certificate; I'm not sure if I can use that, but I'd like to if possible. Has anybody been able to set up HTTPS management on HP A-series switches without MSCEP? Any and all help appreciated! Here is a copy of my config with the interfaces removed:

      version 5.20.99, Release 2215
      #
      sysname MYSYSNAME
      #
      irf domain 10
      irf mac-address persistent timer
      irf auto-update enable
      undo irf link-delay
      #
      domain default enable system
      #
      telnet server enable
      #
      vlan 1
      #
      vlan 100
       description Management
      #
      radius scheme system
       primary authentication 127.0.0.1 1645
       primary accounting 127.0.0.1 1646
       user-name-format without-domain
      #
      domain system
       access-limit disable
       state active
       idle-cut disable
       self-service-url disable
      #
      user-group system
       group-attribute allow-guest
      #
      local-user admin
       password cipher
       authorization-attribute level 3
       service-type ssh telnet terminal
       service-type web
      #
      stp enable
      #
      ssl server-policy sslpol
       pki-domain MYDOMAIN
      #
      interface NULL0
      #
      interface Vlan-interface199
       ip address 192.168.199.140 255.255.255.0
      #
      interface GigabitEthernet1/0/1
       poe enable
       stp edged-port enable
      #
      interface Ten-GigabitEthernet2/1/2
      #
      dhcp-snooping
      #
      ntp-service unicast-server 192.168.1.71
      #
      ssh server enable
      #
      ip https ssl-server-policy sslpol
      ip https enable
      #
      load xml-configuration
      #
      user-interface aux 0 1
      user-interface vty 0 15
       authentication-mode scheme

    Read the article

  • Apache2 shared server: default webpage

    - by Eamorr
    Greetings, I have an apache2 server with 4 domain names pointing to my server's single IP address. When I type in www.site1.com it serves pages from /home/eamorr/site1/index.php, and the same goes for www.site2.com, www.site3.com and www.site4.com. However, when I type a domain into the browser's address bar without the www, it always redirects to site1.com:

      site1.com -> site1.com
      site2.com -> site1.com
      site3.com -> site1.com
      site4.com -> site1.com

    How do I configure Apache to do the following instead?

      site1.com -> site1.com
      site2.com -> site2.com
      site3.com -> site3.com
      site4.com -> site4.com

    Here is my default config:

      ServerAdmin [email protected]
      ServerName www.site1.com
      DocumentRoot /home/eamorr/sites/site1.com/www
      DirectoryIndex index.php index.html
      <Directory /home/eamorr/sites/site1.com/www>
        Options Indexes FollowSymLinks MultiViews
        Options -Indexes
        AllowOverride all
        Order allow,deny
        allow from all
        php_value session.cookie_domain ".site1.com"
        #Added by EOH for redirection
        RewriteEngine on
        RewriteRule ^([^/.]+)/?$ driver.php?uname=$1 [L]
      </Directory>
      ErrorLog /var/log/apache2/error.log
      # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
      LogLevel warn
      CustomLog /var/log/apache2/access.log combined

    I'd like Apache to look at the domain name and then redirect to www.sitex.com. Is there an Apache rule to do this? I hope someone can help; my sysadmin/apache2 config skills aren't the best. Many thanks in advance,
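    The usual reason for this behaviour is that Apache falls back to the first defined virtual host when no vhost matches the bare domain. A hedged sketch (standard Apache directives; the vhost boundaries are assumed, since they are not visible in the snippet above) that both catches the bare domain and redirects it to the www form:

      <VirtualHost *:80>
        ServerName www.site2.com
        ServerAlias site2.com
        DocumentRoot /home/eamorr/sites/site2.com/www
        # Redirect bare-domain requests to the www hostname
        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^site2\.com$ [NC]
        RewriteRule ^(.*)$ http://www.site2.com$1 [R=301,L]
      </VirtualHost>

    Repeating the ServerAlias/redirect pair in each site's vhost (site3, site4, ...) keeps every bare domain attached to its own site instead of the default one.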

    Read the article

  • Looking for suitable backup solution Mac OS X to offsite Centos 6 server 1TB of working data

    - by Brady
    I'll start by saying what we have in place currently:

      - An on-site file server (Mac OS X Server) used by GFX designers, with a working set of 1TB of data.
      - An offsite server with 2TB of available storage (CentOS 6).
      - The Mac OS X server rsyncs data to the offsite server every 6 hours (rsync -avz --delete --progress -e ssh ...).
      - The Mac OS X server does a full data backup to LTO-4 tape on a 10-day recycle (Mon-Fri for 2 weeks).
      - rsync pushes about 60GB of file changes a day.

    The problem: the onsite tape backup is failing, as 1TB of graphics files doesn't compress well enough to fit onto an 800GB LTO-4 tape. A full backup is incredibly slow, and it's a pain getting people to remember to change the tape - it often gets forgotten. The quick solution would be to buy an LTO-5 drive and tapes, but this has been turned down because of the cost. What I would like:

      - Something that works the way rsync works: only changed data is sent over the wire, and it can be scheduled to run multiple times during the day.
      - Data that is sent is compressed and goes over SSH.
      - Something that keeps a 14-day retention but doesn't keep duplicate data. As an example, if I have 1TB of working data and 60GB of changes are made each day, then I expect around 1.84TB of data to be stored on the offsite server.
      - Works with the Mac OS X server and the CentOS 6 server.
      - Doesn't cost an arm and a leg - it must be cheaper than buying an LTO-5 drive with tapes (around £1500).
      - Can be set up to run autonomously.
      - Has some sort of control panel that allows an admin to easily restore a file/folder.

    Any recommendations?
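    One low-cost approach that fits most of these points is rsync with hard-linked snapshots (the pattern rsnapshot automates): each day looks like a browsable full copy, but unchanged files are hard links, so only the ~60GB of daily changes consumes new space. A hedged sketch of the core command, run from the Mac (the host name, paths and use of yesterday's snapshot as the link target are all assumptions):

      #!/bin/sh
      # Daily snapshot to the offsite CentOS box; unchanged files hard-link to yesterday's copy
      TODAY=$(date +%Y-%m-%d)
      YESTERDAY=$(date -v-1d +%Y-%m-%d)          # BSD/macOS date syntax
      rsync -az --delete -e ssh \
        --link-dest=/backups/gfx/$YESTERDAY \
        /Volumes/Work/ backup@offsite:/backups/gfx/$TODAY/

      # Expire snapshots older than 14 days on the CentOS side, e.g. from cron there:
      #   find /backups/gfx -maxdepth 1 -type d -mtime +14 -exec rm -rf {} \;

    rsnapshot wraps this pattern (including rotation) in a config file, and something like BackupPC would add the web control panel for restores.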

    Read the article

  • Export-Mailbox Error

    - by tuck918
    All, I am using export-mailbox to move some data and it works fine until I get this error:

      StatusMessage : Error occurred in the step: Moving messages. Failed to copy messages to the destination mailbox store with error: MAPI or an unspecified service provider. ID no: 00000000-0000-00000000

    This is the command I am using:

      export-mailbox -identity mailboxA -targetmailbox mailboxB -targetfolder folderA -allowmerge

    We are on SP2 and I am running this under an account that is not a domain or enterprise admin. The account has the Exchange Server Administrator permission on both the source and target Exchange mailbox servers, is a member of the local Administrators group on both servers, and has Full Access permission on both the source and target servers. The issue happens at any time, and I am only trying to run this on one mailbox - the only mailbox I need to run it on. The event log shows "Error Exchange Migration Export Mailbox Event 1008". The migration log just shows that it was running okay and then gives the same error as above: "Error was found for mailboxA ([email protected]) because: Error occurred in the step: Moving messages. Failed to copy messages to the destination mailbox store with error: MAPI or an unspecified service provider. ID no: 00000000-0000-00000000, error code: -1056749164". Any ideas on what to do/try?
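    One common cause of MAPI errors during the "Moving messages" step is corrupted items in the source mailbox that the copy cannot handle - whether that is the cause here is an assumption. A hedged sketch of a retry that skips such items instead of aborting (both parameters are standard on Export-Mailbox; the limit value is arbitrary):

      # Allow a number of bad/corrupted items to be skipped instead of failing the whole move
      export-mailbox -identity mailboxA -targetmailbox mailboxB -targetfolder folderA `
        -allowmerge -baditemlimit 50 -verbose

    If it still fails at the same point, the verbose output and the full migration log it references usually show which folder or item was being copied when it stopped.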

    Read the article

  • What does a connection timeout indicate when performing an NFS mount?

    - by DeeDee
    We have a shiny new QNAP NAS (TS-879U-RP), and I'm trying to mount it on our big ol' RHEL server in the same manner as our other two QNAP NAS devices. The IT department won't give me root privileges on the NAS, so I can't SSH in (I know, I know). The first thing I did was, via the QNAP web admin interface, create a network share named "Runs". I then added the IP of the RHEL server to the share's permissions list. On the RHEL server, I then added the following line to /etc/fstab:

      [IP of NAS]:/Runs  /mnt/gsrnas3  nfs  defaults  0 0

    Aside from the IP and the specific mount directory name, this is how I mounted the other two NAS devices. I then created the gsrnas3 directory under /mnt/ and ran "mount /mnt/gsrnas3", and got the following error:

      mount.nfs: Connection timed out

    My first thought is that it's a ports issue, but I don't have enough specific experience with this issue to know for sure. I have two other NAS devices by the same manufacturer already mounted on this RHEL server, which leads me to believe the configuration issue is on the NAS side of things. I can ping the NAS device successfully from the RHEL server. Not being able to SSH into said NAS is a huge hassle, though. Any ideas?
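    A connection timeout on an NFS mount generally means the TCP connection to the NFS/portmapper ports is never answered, rather than an export-permissions problem (those tend to produce "access denied by server"). A hedged sketch of checks from the RHEL side that don't require shell access to the NAS:

      # Does the NAS answer RPC queries at all, and which NFS versions/ports does it offer?
      rpcinfo -p <IP of NAS>

      # Does it export /Runs to this client?
      showmount -e <IP of NAS>

      # Try the mount by hand with explicit options and verbose output
      mount -t nfs -o vers=3,tcp -v "<IP of NAS>:/Runs" /mnt/gsrnas3

    If rpcinfo or showmount also time out while the same commands succeed against the two working QNAPs, the new NAS's NFS service or its allowed-hosts/network settings are the likely culprit - something the NAS admins would need to look at.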

    Read the article

  • APC uptime 0 because of Fast

    - by demlasjr
    I have a VPS using Parallels/Plesk (11.0.9 Update #22, last updated Oct 31, 2012 03:33 AM) on CentOS 6.3 (Final) x86_64. I have Apache (CGI/FastCGI) installed and nginx as a reverse proxy, and everything is working just fine. I installed APC for caching, but the issue is that its uptime is always 0 - it restarts every 15 seconds or so. I have checked everywhere and can't find a solution. The server has graceful restart enabled, but only every 6 hours, which shouldn't influence the APC uptime. Searching Google, I found that this could be related to Apache running with mod_fcgid instead of FastCGI. Plesk/Apache is using this config file: /usr/local/psa/admin/conf/templates/default/service/php_over_fastcgi.php, whose content is:

      <IfModule mod_fcgid.c>
        <Files ~ (\.php)>
          SetHandler fcgid-script
          FCGIWrapper <?php echo $VAR->server->webserver->apache->phpCgiBin ?> .p$
          Options +ExecCGI
          allow from all
        </Files>

    Is the issue here or elsewhere? How can I fix this to work with FastCGI and make APC work properly? I forgot to mention that even though the uptime is below one minute, APC is doing a pretty good job caching (92% of lookups are hits).
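    APC keeps its cache inside each PHP process, so the reported uptime resets whenever mod_fcgid recycles the php-cgi processes - which it does after a fairly small number of requests by default. A hedged sketch of directives that keep the processes alive longer (names are from mod_fcgid 2.3.x; where Plesk lets you set them, and the matching PHP_FCGI_MAX_REQUESTS value, is installation-specific):

      <IfModule mod_fcgid.c>
        # Let each php-cgi process serve many requests before being recycled
        FcgidMaxRequestsPerProcess 1000
        # Keep idle processes around instead of killing them between requests
        FcgidMinProcessesPerClass 1
        FcgidIdleTimeout 600
        FcgidProcessLifeTime 3600
      </IfModule>

    Note that PHP_FCGI_MAX_REQUESTS inside the wrapper script must be at least as large as FcgidMaxRequestsPerProcess, otherwise PHP exits first and the cache is lost anyway. Switching the domain to php-fpm (where the Plesk version offers it) avoids the problem entirely, since the FPM pool is long-lived.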

    Read the article

  • PHP-FPM issue on LEMP Stack and WordPress

    - by jw60660
    I'm very much an NGINX and server admin beginner. I used this tutorial to install NGINX / PHP / MySQL / WordPress: C3M Digital Tutorial. In this tutorial the backend php-cgi setup is configured using FastCGI, and php5-fpm was installed as part of it:

      apt-get install nginx-full php5-fpm php5 php5-mysql php5-apc php5-mysql php5-xsl php5-xmlrpc php5-sqlite php5-snmp php5-curl

    After reading that the NGINX configuration in the WordPress Codex was more secure than most tutorials, I decided to use the Codex configuration: WordPress NGINX configuration in Codex. The Codex configuration uses php-fpm for the backend php-cgi. When opening the browser I got a 502 Bad Gateway error. The error log showed:

      2012/06/10 21:18:27 [crit] 14009#0: *4 connect() to unix:/tmp/php-fpm.sock failed (2: No such file or directory) while connecting to upstream, client: 12.3.456.789, server: mywebsite.com, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/tmp/php-fpm.sock:", host: "mywebsite.com"

    In the main NGINX configuration file supplied by the Codex I noticed the line starting with "server unix:" in the upstream php block, which points at /tmp/php-fpm.sock:

      # Upstream to abstract backend connection(s) for PHP.
      upstream php {
        server unix:/tmp/php-fpm.sock;
        # server 127.0.0.1:9000;
      }

    I checked the folder at /tmp and it was empty. It seems I missed configuring php-fpm to play with NGINX. Can someone point me in the right direction? Much appreciated!
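    The mismatch is between where nginx expects the PHP-FPM socket (/tmp/php-fpm.sock) and where the Debian/Ubuntu php5-fpm package actually listens (a TCP port or its own socket path, set in the pool config). A hedged sketch of aligning the two - the pool file path follows the usual php5-fpm layout and may differ on your install:

      ; /etc/php5/fpm/pool.d/www.conf - make php-fpm listen where the Codex config expects it
      listen = /tmp/php-fpm.sock

    then restart php-fpm and reload nginx:

      service php5-fpm restart
      service nginx reload

    The alternative is to leave php-fpm alone and flip the upstream block instead: comment out the unix: line and uncomment "server 127.0.0.1:9000;" (or whatever "listen" is set to in www.conf). Either way, nginx's upstream and php-fpm's listen directive have to name the same socket or address.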

    Read the article

  • Sharepoint (WSS 3.0) on SBS 2008 broken.

    - by tcv
    I recently ran the SharePoint Products and Technologies Wizard. I had hoped this would bring up SharePoint and allow me to access it so I could begin to learn, but it's not working. Here is some data that I hope is relevant. I am doing all my testing on the SBS 2008 server itself. I changed the host header in IIS to reflect an external FQDN (.net) I plan to deploy. The SBS server is remote and there are no domain-connected workstations. If I browse to "localhost" over SSL, I can get to the site, albeit with a self-signed certificate warning. If I attempt to connect via SSL using either the internal FQDN (.local), the external FQDN (.net) or any other permutation thereof, I am prompted for credentials three times but am not allowed access, even though my account is a domain admin. The site is inaccessible over port 80 whether I use localhost, the internal FQDN (.local), or the external FQDN (.net). Right now I suspect my problem is within IIS, but I don't know. My plan is to publish the SharePoint site to the web so my partner and I can check documents in/out. Can someone help me get pointed in the right direction?
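    The "prompted three times, then denied" pattern when browsing the server by its own FQDN (while localhost works) is often the Windows loopback check rejecting the custom host header rather than a SharePoint problem. A hedged sketch of the common workaround on the server itself - this is a documented Windows setting, but whether it is the cause here is an assumption:

      rem Disable the loopback check so local requests to the FQDN can authenticate
      reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v DisableLoopbackCheck /t REG_DWORD /d 1 /f

    A reboot (or at least an IIS restart) is usually needed afterwards. If external clients also fail once the site is published, the new host header would additionally need a matching alternate access mapping in SharePoint Central Administration.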

    Read the article

  • Access keystore on Sun ONE Webserver 6.1 for 2048 bit key length SSL

    - by George Bailey
    We want to generate CSRs with a 2048-bit key length. The browser-based GUI provides us with a 1024-bit CSR and I don't know how to change that. It seems that 1024-bit key lengths will no longer be supported by SSL companies (the lower-cost options only support 2048-bit; Thawte, which is much more expensive, says it accepts 1024-bit only for one- or two-year certificates, not three). The legacy systems in question are running Sun ONE Web Server 6.1. Upgrading would be time consuming and we would rather not have to do that right now; we will be phasing these out, but it will take a while, so...

    Got it!! http://middlewarekb.wordpress.com/2010/06/30/how-to-generate-2048-bit-keypair-using-sun-one-or-iplanet-6-1-servers/ - it is for the same version of the web server I am using:

      /opt/SUNWwbsvr/bin/https/admin/bin/certutil -R \
        -s "CN=sub.domain.ext,OU=org unit,O=company name,L=city,ST=spelled state,C=US,E=email" \
        -a -k rsa -g 2048 -v 12 \
        -d /opt/SUNWwbsvr/alias \
        -P https-sub.domain.ext-hostname- \
        -Z SHA1

    Previous efforts edited out.

    Read the article

  • Domain Controller died, now get authentication boxes in IE for SDL Tridion 2009

    - by Rob Stevenson-Leggett
    We had a major network issue where our secondary domain controller (responsible for Win2k3 boxes) died and had to be rebuilt (I believe this is what happened; I am a developer, not a network admin). Anyway, I am working remotely via VPN at the moment, and since this happened I get an authentication box when trying to access certain areas of SDL Tridion via IE (Tridion 2009 SP1 is IE-only). It seems like somewhere my credentials are not being passed correctly, or the ones cached on my laptop do not match the ones the domain controller has. This only seems to affect Windows 2003 servers. Our IT support thinks that the only way to sort it out is to connect my laptop directly to the network. I am not planning to go to the office for a few weeks at least, and this issue means I have to work with Tridion via Remote Desktop. We thought changing the password on my account might work, but this didn't help. So basically my question is: is there any way I can reset my credential cache without having to reconnect to the physical network? Or is IE the problem, perhaps, since I can RDP to servers and use Tridion 2011 instances in other browsers fine? I am on Windows 7 using the SonicWall VPN client.
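    Two things that can be tried from the laptop over VPN, without touching the office network (a hedged sketch - both tools ship with Windows 7, but whether stale cached credentials are actually the cause here is an assumption):

      rem Throw away any Kerberos tickets obtained against the old DC
      klist purge

      rem See which credentials Windows has stored for the Tridion servers,
      rem and remove any stale entries (using the target name shown by /list)
      cmdkey /list
      cmdkey /delete:<target-name-from-the-list>

    After that, browsing to the Tridion 2009 site again should renegotiate authentication; if IE still loops on the credential prompt, adding the server to the Local intranet zone (so integrated Windows authentication is attempted) is the other common lever.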

    Read the article

  • svn post-commit not performing

    - by davin
    I've been sitting on this for about 7 hours, and I've aged close to 7 years... ahhh, server admin does that to me. I have svn wired through apache2 with WebDAV in the usual manner (basically like http://www.howtoforge.com/setting-up-subversion-with-webdav-post-commit-hook-and-multiple-sites-on-jaunty-jackalope-ubuntu-9.04). I've had endless problems with this (I didn't on my previous Ubuntu server install, although this is Ubuntu 10.10): this happened, and was fixed like in the post http://stackoverflow.com/questions/2547400/how-do-you-fix-an-svn-409-conflict-error ; this looks like my issue, although its solution is not mine: http://serverfault.com/questions/135494/apache-svn-on-ubuntu-post-commit-hook-fails-silently-pre-commit-hook-permis . My commit to svn works (finally), although the post-commit hook, which is supposed to svn update the working copy of the repo on the server, doesn't work. The post-commit hook itself executes and has sudo permissions (as in the setup URL above; testing with "whoami > somelogfile.log" or "sudo whoami > somelogfile.log" shows www-data and root, respectively), although it won't perform the svn update ("sudo svn update /var/www/gameServer >> /var/svn/gameServer.log"). Similar to the serverfault URL above, when I perform the exact command by hand it does update the working copy to the latest revision, just not through the post-commit hook. An age-old question that is 90% of the time a permissions issue, but in pure frustration I chmod 777'd lots of stuff, not to mention the fact that www-data is in /etc/sudoers, so it shouldn't even need that. I'm collapsing in front of the screen, partly out of frustration and partly out of sleepiness. Any direction would be appreciated.
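    When a hook runs the command fine by hand but silently does nothing from Subversion, the usual suspect is the hook's environment: hooks run with an empty environment (no PATH, no HOME), so anything not using absolute paths, or writing errors nowhere, just vanishes. A hedged sketch of a post-commit that makes failures visible (paths mirror the ones above; the --non-interactive flag and the stderr redirect are the important parts):

      #!/bin/sh
      REPOS="$1"
      REV="$2"

      # Absolute paths everywhere; capture stderr so failures stop being silent
      /usr/bin/sudo /usr/bin/svn update /var/www/gameServer \
        --non-interactive >> /var/svn/gameServer.log 2>&1

    If the log then shows a prompt about accepting the server certificate or asks for credentials, running the update once interactively as www-data (sudo -u www-data svn update ...) and caching them usually clears it.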

    Read the article

  • HUDSON: how to manually encode the LDAP managerPassword?

    - by user64204
    I need to know how to manually encode the LDAP managerPassword which controls authentication to Hudson:

      <securityRealm class="hudson.security.LDAPSecurityRealm">
        <server>ldap.example.org</server>
        <rootDN>dc=example,dc=org</rootDN>
        <userSearchBase>ou=People</userSearchBase>
        <userSearch>uid={0}</userSearch>
        <groupSearchBase>ou=Groups</groupSearchBase>
        <managerDN>cn=admin,dc=example,dc=org</managerDN>
        <managerPassword>{HOW DO I ENCODE THIS?}</managerPassword>
      </securityRealm>

    This question has already been raised here: http://jenkins.361315.n4.nabble.com/How-to-encode-the-LDAP-managerPassword-td2295570.html. The answer there was to configure the managerPassword field via the Hudson web interface. The problem we have is that in order to configure LDAP one must be authenticated to Hudson, which we cannot do because our LDAP authentication is currently broken (password mismatch between LDAP and the Hudson configuration). Can someone explain how to manually encode the LDAP managerPassword? Thanks
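    For what it's worth, in Hudson versions of that era the managerPassword in config.xml is only scrambled, not hashed: hudson.util.Scrambler simply Base64-encodes the UTF-8 password. That claim is worth verifying against a config.xml where the password is known, but under that assumption the value can be produced by hand:

      # Base64-encode the manager bind password (-n avoids encoding a trailing newline)
      echo -n 'MySecretBindPassword' | base64

    Paste the output between the managerPassword tags and restart Hudson. If login still fails, the scrambling scheme of your specific version differs, and the fallback is temporarily setting <useSecurity>false</useSecurity> in config.xml to get into the web interface and reconfigure LDAP there.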

    Read the article

  • XP SP2 Event log not logging events

    - by Weedfreer
    I have a problem whereby a terminal appears not to be logging events correctly and occasionally appears to have problems communicating across the network. The terminal was previously infected with a virus which appears to have 'played' with the default group policy in the standard user profile. Although, outwardly, the terminal appears to be working normally, I still have a nagging feeling that it isn't quite back to the way it was. It was infected by a user plugging in a USB stick while the company was using an older version of the AV software - typically a week or so before it was updated. I have configured the event logs to overwrite as required and to be 5056KB in maximum size. I have also attempted:

      - disabling the Event Log service and restarting;
      - renewing the EVT files in the Windows\system32\config directory;
      - restarting the Event Log service and restarting the machine;
      - clearing the event log in the Services MMC;
      - resetting the filters to default in the Services MMC;
      - using the EVENTCREATE command remotely from a CMD window on the server to force an event creation.

    So far the only operation to have had any sort of success is the remote EVENTCREATE command from a CMD window on the server. As it stands, the only other time the computer manages to create events is while it is being restarted. Has anyone got any ideas on how to proceed? I'm considering a refresh of the 'Windows\system32\config\SystemProfile' folder. I'm also thinking about running a tool such as Malwarebytes, but this could be slightly controversial as the system needs to stay up for as long as possible. I'm also wondering whether anyone knows of any Windows admin tools that would let me control the event logging options or default security options so that I could get it back to some sort of standard. What I'm trying to avoid is a complete re-imaging of the terminal; although this is an option, I don't really want to take it if I don't need to. Many thanks in advance for any suggestions anyone may be able to provide.

    Read the article

  • Firebox 1250e Core Failing?

    - by Noah
    We have 2 Firebox 1250e Core firewall boxes in our production environment, serving in active and passive modes. A few months back the active box was flashing a warning light, so our consultant removed it and plugged it into a test network. Everything appeared to be working fine, so he reloaded it into the production environment, and we didn't see any other issues. Fast forward to last week: our network was constantly dropping connections over RDC, timing out, and performing as if there was a traffic issue. I turned off that box and everything began to work fine immediately. At this point, though, I'm not sure how to proceed. Should the box be completely replaced? Is there any recommended testing we could do to determine if there is a failure of some type in this device? Should we try upgrading the software on it? I know the environment isn't the issue, since the passive box (which is now the active one) is working fine. We'd like to have 2 in production, though, for failover purposes. I am not a network admin, but am hoping someone here might be able to provide some guidance.

    Read the article
