Search Results

Search found 10548 results on 422 pages for 'standard deviation'.


  • Error 0x80300001 Installing Windows Server 2008 R2 64bit on FastTrak TX4660 RAID volume

    - by Konstantin Boyandin
    I am trying to install Windows Server 2008 R2 Enterprise 64-bit on the following hardware:

    - motherboard: Intel DBS1200BTL
    - Promise FastTrak TX4660 RAID controller
    - 4 disks set up in two RAID1 arrays (handled by the FastTrak)

    I am trying to install Windows so that it boots from a RAID1 volume created with the FastTrak controller. The installation goes as described in the manual: I insert the disc with the driver, select 'Browse' and specify the correct driver, and it finds all the RAID arrays, but it reports error 0x80300001 and says Windows can't be installed on the listed RAID volumes, since they may not be bootable (even though the target RAID volume is first in the boot options list). If I proceed with the installation, Windows copies and unpacks itself and performs the other standard actions after that. After the computer is restarted, it won't boot: Windows Boot Manager appears in the boot devices list, but neither it nor the RAID volume itself boots. Is this a known problem? I can attach the boot disks to the motherboard and use its RAID capabilities instead, but I'd prefer the FastTrak ones. The driver version is 1.3.0.4. Thanks.
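
    A quick way to see what Setup actually thinks of the array is to open a command prompt during installation (Shift+F10) and inspect the FastTrak volume with diskpart; a minimal sketch, assuming the RAID1 target shows up as disk 0:

        rem checkraid.txt - diskpart script; run with: diskpart /s checkraid.txt
        rem from the Setup command prompt (Shift+F10)
        list disk
        rem assumption: the RAID1 target is disk 0; adjust after reading "list disk"
        select disk 0
        list partition
        detail disk

    Leftover partitions from earlier attempts, or the array not appearing at all, would each point to a different fix than the boot-order angle.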

    Read the article

  • Ubuntu software stack to mimic Active Directory auth

    - by WickedGrey
    I'm going to have an Ubuntu 11.10 box in a customer's data center running a custom webapp. The customer will not have ssh access to the box, but will need authentication and authorization to access the webapp. The customer needs the option of either pointing the webapp at something that we've installed locally on the machine, or using an Active Directory server that they have. I plan on using a standard "users belong to groups; groups have sets of permissions; the webapp requires certain permissions to respond" auth setup.

    What software stack can I install locally that will allow an easy switch to and from an Active Directory server, while keeping the configuration as simple as possible (both for me and the end customer)? I would like to use as much off-the-shelf software for this as possible; I do not want to be in the business of keeping user passwords secure. I could see handling the user/group/permission relationships myself if there is not a good out-of-the-box solution (but that seems highly unlikely).

    I will accept answers in the form of links to "here is what you need" pages, but not "here is what Kerberos does" unless that page also tells me whether it's required for my use case (essentially, I know that AD can speak Kerberos, but I can't tell if I need it to, or if I can just use LDAP, or...).
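
    Since plain LDAP is the one protocol that both a local OpenLDAP install and Active Directory speak, one low-commitment test is to point the same style of query at either backend; a hedged sketch (hostnames, bind DNs and attribute names are placeholders, not known values):

        # Local OpenLDAP (assumed local directory install):
        ldapsearch -H ldap://localhost -x -D "cn=webapp,dc=example,dc=com" -W \
            -b "ou=users,dc=example,dc=com" "(uid=jsmith)" memberOf

        # Same query shape against the customer's AD (AD keys users on sAMAccountName):
        ldapsearch -H ldap://ad.customer.example -x -D "webapp@customer.example" -W \
            -b "dc=customer,dc=example" "(sAMAccountName=jsmith)" memberOf

    If the webapp only ever issues bind-and-search LDAP operations, switching backends reduces to swapping the connection settings.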

    Read the article

  • Configuration management in support of scientific computing

    - by Sharpie
    For the past few years I have been involved with developing and maintaining a system for forecasting near-shore waves. Our team has just received a significant grant for further development, and as a result we are taking the opportunity to refactor many components of the old system. We will also be receiving a new server to run the model, so I am taking this opportunity to consider how we set up the system. Basically, the steps that need to happen are:

    - Some standard packages and libraries, such as compilers and databases, need to be downloaded and installed.
    - Some custom scientific models need to be downloaded and compiled from source, as they are not commonly provided as packages.
    - New users need to be created to manage the databases and run the models.
    - A suite of scripts that manage model-database interaction needs to be checked out from source code control and installed.
    - Crontabs need to be set up to run the scripts at regular intervals in order to generate forecasts.

    I have been pondering applying tools such as Puppet, Capistrano or Fabric to automate the above steps. It seems perfectly possible to implement most of the above functionality, except there are a couple of use cases that I am wondering about:

    - During my preliminary research, I have found few examples and little discussion on how to use these systems to abstract and automate the process of building custom components from source.
    - We may have to deploy on machines that are isolated from the Internet, i.e. all configuration and setup files will have to come in on a USB key that can be inserted into a terminal that can connect to the server that will run the models.

    I see this as an opportunity to learn a new tool that will help me automate my workflow, but I am unsure which tool I should start with. If any member of the community could suggest a tool that would support the above workflow and the issues specific to scientific computing, I would be very grateful. Our production server will be running Linux, but support for OS X would be a bonus, as it would allow the development team to set up test installations outside of VirtualBox.
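
    For the air-gapped, build-from-source cases, even a plain idempotent shell script carried in on the USB key covers all five steps; a minimal sketch, where the package names, paths and model tarball are assumptions for illustration:

        #!/bin/sh
        # bootstrap.sh - run from the USB key; written to be safe to re-run
        set -e
        KEY=/media/usb

        # 1. standard packages, from .deb files mirrored onto the key
        dpkg -i "$KEY"/debs/*.deb || apt-get -f install -y

        # 2. custom model, compiled from a source tarball on the key
        tar -xzf "$KEY/src/wavemodel-1.2.tar.gz" -C /usr/local/src
        ( cd /usr/local/src/wavemodel-1.2 && ./configure && make && make install )

        # 3. service user that owns the databases and model runs
        id modelrun >/dev/null 2>&1 || useradd -r -m modelrun

        # 4. management scripts, exported from source control onto the key
        cp -r "$KEY/scripts" /opt/forecast

        # 5. crontab to generate forecasts on a schedule
        echo '0 */6 * * * /opt/forecast/run_forecast.sh' | crontab -u modelrun -

    Puppet and friends add convergence and reporting on top of this, but the script form makes the offline constraint trivial: everything the build needs travels with the key.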

    Read the article

  • lighttpd: Backend is overloaded + fcgi-server re-enabled + all handlers are down

    - by AbuZubair
    We have a standard lighttpd deployment with PHP-CGI, and our error logs are flooding with the following. This is causing a huge problem because we keep returning 500s to our clients:

        2012-10-14 14:28:38: (mod_fastcgi.c.3001) backend is overloaded; we'll disable it for 1 seconds and send the request to another backend instead: reconnects: 0 load: 36
        2012-10-14 14:28:38: (mod_fastcgi.c.2764) fcgi-server re-enabled: 0 /tmp/php-7735.socket
        2012-10-14 14:28:39: (mod_fastcgi.c.2764) fcgi-server re-enabled: 0 /tmp/php-7735.socket
        2012-10-14 14:28:40: (mod_fastcgi.c.3001) backend is overloaded; we'll disable it for 1 seconds and send the request to another backend instead: reconnects: 0 load: 37
        2012-10-14 14:28:40: (mod_fastcgi.c.2764) fcgi-server re-enabled: 0 /tmp/php-7735.socket
        2012-10-14 14:28:41: (mod_fastcgi.c.3001) backend is overloaded; we'll disable it for 1 seconds and send the request to another backend instead: reconnects: 0 load: 57
        2012-10-14 14:28:41: (mod_fastcgi.c.3001) backend is overloaded; we'll disable it for 1 seconds and send the request to another backend instead: reconnects: 0 load: 57
        2012-10-14 14:28:42: (mod_fastcgi.c.3597) all handlers for /index.php? on .php are down.

    Does anyone have any clue as to what is going on? We restarted all PHP- and lighttpd-related processes and that didn't fix the problem. We ended up rebooting the whole box and now it's gone away, although we fear it may come back later. In general our deployment has been doing fine for a long time, and this is the first time this has happened.
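
    "Backend is overloaded" means every PHP child on that socket was busy, so one knob worth checking is the process and children limits in the fastcgi configuration; a hedged sketch of the relevant lighttpd stanza (the socket path and numbers here are examples, not the poster's values):

        fastcgi.server = ( ".php" => ((
            "socket"          => "/tmp/php.socket",
            "bin-path"        => "/usr/bin/php-cgi",
            "max-procs"       => 4,          # supervisor processes to spawn
            "bin-environment" => (
                "PHP_FCGI_CHILDREN"     => "16",    # children per supervisor
                "PHP_FCGI_MAX_REQUESTS" => "10000"  # recycle children to contain leaks
            )
        )))

    If the limits are already generous, the next suspect is a slow downstream dependency (database, remote API) making every child block at once, which produces exactly this flood-then-recover pattern.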

    Read the article

  • How to auto-mount encfs volume on login in ubuntu 9.10

    - by xzenox
    Hi, previously, in 9.04, I was using pam_mount in conjunction with encfs to mount an encrypted volume at login. This worked perfectly, and since the password was the same as my user password, none was entered besides the initial login one. Now in 9.10, using the same setup and the same volume line in pam_mount's config file, the volume will not mount; the folder does not even get created for the mount point. I am thinking this might be caused by the fact that I have now switched to using an encrypted /home directory (previously left unencrypted on 9.04). To encrypt it, I used the standard /home encryption setup from the 9.10 fresh install. I am thinking that perhaps pam_mount tries to mount the volume before /home is mounted, and fails. Is there a log file I could look into/post here? Note that mounting manually works fine using the same paths; writing full paths does not help, nor does removing the options attribute. Here's my volume entry:

        <volume user="nicholas" fstype="fuse" path="encfs#~/.dropbox_dir/Dropbox/encrypted" mountpoint="~/Dropbox" options="nonempty" />
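
    pam_mount can log what it decides to syslog, which would show whether it fires before the encrypted /home is available; a sketch of turning that on (file locations are the usual Ubuntu defaults, worth verifying):

        # In /etc/security/pam_mount.conf.xml, enable debug output:
        #     <debug enable="1" />

        # Then log in on another console and watch the authentication log:
        sudo tail -f /var/log/auth.log | grep -i pam_mount

    If the log shows the mount attempt running before the encrypted home is unwrapped, reordering the pam_mount entries in /etc/pam.d relative to the ecryptfs module would be the thing to experiment with.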

    Read the article

  • Simple image editor to select area of image as wallpaper

    - by Kevin
    I've spent way too many hours looking for software to do the following simple task, so now I'll ask. I need software that will open an image and put a 'crop box' on it:

    - You can set the 'crop box' to standard screen resolutions (1024x768) or define a custom one.
    - You can move the 'crop box' around on the image to select the area you want.
    - You can re-size the 'crop box' (selecting a corner and dragging with the mouse) and it maintains the correct aspect ratio.
    - You can save the area in the 'crop box' to use as the Windows background. (The software doesn't need to set it as the background; I can do that myself in Windows XP.)

    The free software sites (CNET.com, etc.) have lots of image editing software that does things much more complicated than this simple task. I've spent too many hours downloading them to see if they will do this particular task. The ones I've tried would require manual trial and error to get the part of an image that I want saved as an image, with the correct aspect ratio so Windows doesn't screw with it (stretch, tile, crop) when I select it as the wallpaper.
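
    If a non-interactive fallback is acceptable, ImageMagick can at least handle the aspect-ratio half of this; a hedged one-liner that fills a 1024x768 box and center-crops the overflow (it picks the center for you rather than letting you choose the area):

        convert photo.jpg -resize "1024x768^" -gravity center -extent 1024x768 wallpaper.bmp

    The "^" geometry flag means "resize to fill, not fit", so the output is always exactly 1024x768 with no stretching; quoting it avoids shell escaping trouble.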

    Read the article

  • Hide an account from the login screen but keep it usable in UAC

    - by tvanover
    So I have a Windows 7 Home machine with two user accounts: one standard user account and one administrator account. This machine is going to be put in the hands of a very low-tech user, so I don't want them to be able to see the administrator account on logon, but they want to have a password to prevent someone else from using the machine. My goal is that when the user turns on the computer, they are presented with their login. After logging in to their non-administrator account, if something needs to be installed, the administrator account can be used through UAC.

    I have tried creating the reg key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList and adding a DWORD named for the account, set to 0. It succeeded in hiding the account from the login screen, but it hid it from UAC as well, so it fails the second requirement of being able to run things as administrator through UAC. Also, since I didn't set an administrator password (left it blank), it seems that I have completely locked myself out of the machine, since runas doesn't accept blank passwords. So I also cannot undo it, and have quite effectively bricked the install, prompting an OS reinstall. This is Windows 7 Home, so there is no Users management console.
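
    For reference, the registry change described above can be applied and reverted from an elevated prompt; a sketch, with "HiddenAdmin" standing in for the real account name:

        rem Hide the account from the Welcome screen (DWORD 0 = hidden):
        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList" ^
            /v HiddenAdmin /t REG_DWORD /d 0 /f

        rem Revert (delete the value), e.g. from Safe Mode, if elevation breaks:
        reg delete "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList" ^
            /v HiddenAdmin /f

    Setting a password on the admin account before hiding it keeps runas available as an escape hatch either way.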

    Read the article

  • Apache not Forwarding Client x509 Certificate to Tomcat via mod_proxy

    - by hooknc
    Hi everyone, I am having difficulties getting a client x509 certificate to be forwarded to Tomcat from Apache using mod_proxy. From observations and reading a few logs, it does seem as though the client x509 certificate is being accepted by Apache. But when Apache makes an SSL request to Tomcat (which has clientAuth="want"), it doesn't look like the client x509 certificate is passed during the SSL handshake. Is there a reasonable way to see what Apache is doing with the client x509 certificate during its handshake with Tomcat?

    Here is the environment I'm working with: Apache/2.2.3, Tomcat/6.0.29, Java/6.0_23, OpenSSL 0.9.8e.

    Here is my Apache VirtualHost SSL config:

        <VirtualHost xxx.xxx.xxx.xxx:443>
            ServerName xxx
            ServerAlias xxx
            SSLEngine On
            SSLProxyEngine on
            ProxyRequests Off
            ProxyPreserveHost On
            ErrorLog logs/ssl_error_log
            TransferLog logs/ssl_access_log
            LogLevel debug
            SSLProtocol all -SSLv2
            SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW
            SSLCertificateFile /usr/local/certificates/xxx.crt
            SSLCertificateKeyFile /usr/local/certificates/xxx.key
            SSLCertificateChainFile /usr/local/certificates/xxx.crt
            SSLVerifyClient optional_no_ca
            SSLOptions +ExportCertData
            CustomLog logs/ssl_request_log \
                "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
            <Proxy *>
                AddDefaultCharset Off
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass / https://xxx.xxx.xxx.xxx:8443/
            ProxyPassReverse / https://xxx.xxx.xxx.xxx:8443/
        </VirtualHost>

    Then here is my Tomcat SSL connector:

        <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
                   address="xxx.xxx.xxx.xxx" maxThreads="150" scheme="https" secure="true"
                   keystoreFile="/usr/local/certificates/xxx.jks" keypass="xxx_pwd"
                   clientAuth="want" sslProtocol="TLSv1"
                   proxyName="xxx.xxx.xxx.xxx" proxyPort="443" />

    Could there possibly be issues with SSL renegotiation? Could there be problems with the truststore in our Tomcat instance? (We are using a non-standard truststore that has partner organization CAs.) Is there better logging for what is happening internally with Apache for SSL, like what is happening to the client cert or why Apache isn't forwarding the certificate when Tomcat asks for one? Any reasonable assistance would be greatly appreciated. Thank you for your time.
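
    One structural point: Apache cannot replay the client's certificate in its own TLS handshake with Tomcat, because it never holds the client's private key; SSLProxyEngine only authenticates Apache itself to the backend. The common workaround is to hand the certificate to the backend in a request header; a hedged sketch of the Apache side (needs mod_headers, and the header name is just a convention):

        # Inside the VirtualHost, alongside SSLOptions +ExportCertData:
        RequestHeader set X-Client-Cert "%{SSL_CLIENT_CERT}s"

    Tomcat then needs a valve or filter that trusts and parses that header; that part is application-specific, and the header must be stripped from untrusted traffic so clients cannot forge it.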

    Read the article

  • Access Control Lists in Debian Lenny

    - by arbales
    So, for my clients who have sites hosted on my server, I create user accounts, with standard home folders inside /home. I set up an SSH jail for all the collective users, because I really am against using a separate FTP server. Then I installed ACL support and added acl to my /etc/fstab; all good. I cd into /home and chmod 700 ./*. At this point users cannot see into other users' home directories (yay), but Apache can't see them either (boo). I ran setfacl u:www-data:rx ./*, and I also tried individual directories. Now Apache can see the sites again, but so can all the users. ACL changed the permissions of the home folders to 750. How do I set up ACLs so that 1. Apache can see the sites hosted in users' home folders, AND 2. users can't see outside their home and into others' files?

    Edit, more details. Output after chmod -R 700 ./*:

        sh-3.2# chmod 700 ./*
        sh-3.2# ls -l
        total 72
        drwx------+ 24 austin austin     4096 Jul 31 06:13 austin
        drwx------+  8 jeremy collective 4096 Aug  3 03:22 jeremy
        drwx------+ 12 josh   collective 4096 Jul 26 02:40 josh
        drwx------+  8 joyce  collective 4096 Jun 30 06:32 joyce

    (Not accessible to other users OR Apache.)

        setfacl -m u:www-data:rx jeremy

    (Now accessible to Apache and to members of collective; why collective, too?)

        sh-3.2# getfacl jeremy
        # file: jeremy
        # owner: jeremy
        # group: collective
        user::rwx
        user:www-data:r-x
        group::r-x
        mask::r-x
        other::---

    Solution: ultimately, what I did was:

        chmod 755 *
        setfacl -R -m g::--- *
        setfacl -R -m u:www-data:rx *
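
    The reason collective kept access is visible in the getfacl output: setfacl -m u:www-data:rx adds a named-user entry but leaves the owning-group entry (group::r-x) alone, and the mask is recalculated to cover both. A tighter sketch that grants only www-data access while blanking the group entry (directory name taken from the listing above):

        # Give Apache read+traverse and remove the owning group's access in one call:
        setfacl -m u:www-data:rx -m g::--- /home/jeremy

        # Verify: group::--- and a mask that now only has to cover www-data
        getfacl /home/jeremy

    The same pair of -m clauses can be applied with -R across /home once it behaves on one directory.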

    Read the article

  • How can I use Linksys WPSM54G print server as a bridge for another machine AND also share the printer?

    - by user26453
    I have a Linksys WPSM54G currently sharing a printer via the USB port with the rest of my network via the wireless. Is there any way to set it up so that the ethernet port is bridged over the wireless adapter portion, i.e. to be able to uplink another machine or switch into the network via the WPSM54G's ethernet port?

    Update: the network architecture is as follows:

    - (1) Linksys WRT54G router that serves as router, DHCP server, and wireless access point for the network; fairly standard configuration
    - (3) laptops that are used throughout the house via wifi
    - (1) Linksys WPSM54G print server that connects via wireless to the router, in a separate room, with a printer attached to the print server's USB port, along with
    - (1) un-networked desktop in the same room

    Since the printer is plugged into the USB port of the WPSM54G, I am wondering if I can connect the desktop to the ethernet port of the WPSM54G and have it bridged over the wifi to the router. The twist here is that the ethernet port is initially used to connect the wireless print server to the router (for configuration; you can't configure it wirelessly if you are initially on an encrypted network). Now, instead of using that ethernet port as a way to connect the print server to the network (via the router), I want to use it as a way to connect another computer to the network, in effect bridging into the router via the print server, while still sharing the printer (attached via USB) through the print server. If this is not clear, please comment. To be clear, the computer I want to connect/bridge into the network does not have a wireless card, is far from the router, and I do not want to lay ethernet cable to connect it. While I could certainly buy a legitimate wireless bridge to accomplish this, I figured that since the print server already has an ethernet port, I'd see if I can't use that.

    Read the article

  • SAN Replication for Fault tolerance using EVA4400

    - by Sergei
    Hi everyone, I hope that someone can point me in the correct direction; it looks like I do not have enough knowledge of the subject, and the timeframes are too tight for me to explore different scenarios in depth.

    We have two datacenters a few miles away from each other, connected by a 100 Mbps link. Each datacenter will have 5 BL490 blades with ESX Standard hosting about 50 VMs. Each site has an HP EVA4400 SAN with SAN replication set up. VC is going to be in the first datacenter, and both datacenters are networked. SAN replication is block-level, so it seems I cannot replicate just the changes: all writes would have to be replicated. This should not be a problem, as the link can sustain about 1.8 TB a day and data can be buffered.

    I am having trouble, however, envisioning how recovery would work in this case. We don't need instant recovery; I would say 4 hours recovery time is accepted, so a fancy automatic SRM-like DR scenario would not be easily accepted for financial reasons, though any comments are welcome.

    The current idea is the following: replicate LUNs from the primary site to the secondary. When disaster strikes, IT personnel switch on the ESX hosts on the remote side, connect the replicated LUNs to them, then register the VMs and change IP addresses. I understand that this seems like a horribly manual process, and I am almost sure I have missed some obvious pitfalls here. Could someone let me know what direction I should go? Any articles regarding the subject? This is a brand new setup, and we would rather build a basic recovery process and scale it later; I just need the right direction to allow for such scalability. Thank you very much in advance!
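
    The VM re-registration step, at least, can be scripted on each recovery host; a hedged sketch using the ESX host's vim-cmd (the datastore name is a placeholder, and this assumes the replicated LUN has already been presented to the host and mounted as a datastore):

        # On a recovery host, register every VM found on the replicated datastore:
        for vmx in /vmfs/volumes/replica_ds/*/*.vmx; do
            vim-cmd solo/registervm "$vmx"
        done

        # Verify the resulting inventory:
        vim-cmd vmsvc/getallvms

    Scripting this and the IP changes turns the "horribly manual" runbook into something one person can execute inside the 4-hour window, without SRM-level spending.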

    Read the article

  • Could not continue scan with NOLOCK due to data movement during installation

    - by dbdev1
    Hi, I am running Windows Server 2008 R2 Standard Edition x64, and I installed SQL Server 2008 Developer Edition. All of the preliminary checks run fine (apart from a warning about Windows Firewall and opening ports, which is unrelated to this and shouldn't be an issue; I can open those ports). Halfway through the actual installation, I get a popup with this error:

        Could not continue scan with NOLOCK due to data movement.

    The installation still runs to completion when I press OK. However, at the end, it states that the following services "failed":

    - Database Engine Services
    - SQL Server Replication
    - Full-Text Search
    - Reporting Services

    How do I know if this actually means that anything from my installation (which is on a clean Windows Server setup: nothing else on there, no previous SQL Servers, no upgrades, etc.) is missing? I know from my programming experience that locks are for concurrency control, and the Microsoft help on this issue points to changing my query's lock/transaction behavior in a certain way to fix the issue. But I am not touching any queries.

    Also, now that I have installed the app, when I log in, I keep getting this message:

        TITLE: Connect to Server
        Cannot connect to MSSQLSERVER.
        ADDITIONAL INFORMATION:
        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 67)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=67&LinkId=20476
        BUTTONS: OK

    I went into the Configuration Manager and enabled named pipes and restarted the service (this is something I have done before, as this message is common and not serious). I have disabled Windows Firewall temporarily. I have checked the instance name against the error logs. Please advise on both of these errors; I think the two are related. Thanks
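
    One detail worth checking before reinstalling: "Cannot connect to MSSQLSERVER" suggests the instance name is being typed in as the server name. For a default instance, the server name is the machine name (or a dot for the local machine), not MSSQLSERVER. A quick hedged check from a command prompt:

        rem Is the default-instance service actually running?
        sc query MSSQLSERVER

        rem Connect to the DEFAULT instance locally; "." means this machine:
        sqlcmd -S . -E -Q "SELECT @@VERSION;"

    If that connects, the engine installed fine, and the remaining question is what the "failed" entries in the setup summary refer to; the detailed setup logs under the SQL Server Setup Bootstrap folder spell that out.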

    Read the article

  • How do I host multiple independent, secured SharePoint sites (WSS 3.0) without using Active Directory?

    - by Kyle Noland
    I have a SharePoint site set up on one of my networks to service Active Directory users. To be clear, this is a Windows SharePoint Services 3.0 installation running on Windows Server 2003 Standard. It is not an option to upgrade the server or SharePoint version. Management would like to create several new sites, one for each of a handful of clients. These sites will be used like "dropboxes" or FTP sites so that my company can make large files available to outside contacts, and vice versa.

    Here are my requirements:

    - I do not want to have to create Active Directory accounts for each external contact.
    - If possible, I would like to store the external usernames and passwords in a database that I can write a small GUI for, so that management can handle adding their own external contacts.
    - Each client site must be sandboxed from the others and from my main company SharePoint site.
    - I would like to keep everything running on port 80 and be able to access the sites as either clientname.mycompany.com or www.mycompany.com/clientname.

    If anybody has ever done this, I would really appreciate hearing about any lessons you learned and suggestions for how to set this up. Kyle
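
    WSS 3.0 can authenticate a web application against an ASP.NET membership database instead of AD (forms-based authentication), which lines up with the "usernames and passwords in a database plus a small GUI" requirement. A hedged sketch of the web.config fragment involved; the names and connection string are placeholders:

        <!-- membership schema created beforehand with: aspnet_regsql.exe -E -S dbhost -A m -->
        <connectionStrings>
          <add name="FbaDb" connectionString="Server=dbhost;Database=FbaUsers;Integrated Security=SSPI" />
        </connectionStrings>
        <system.web>
          <membership defaultProvider="FbaMembers">
            <providers>
              <add name="FbaMembers" connectionStringName="FbaDb"
                   type="System.Web.Security.SqlMembershipProvider, System.Web,
                         Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
            </providers>
          </membership>
        </system.web>

    Each client site can then be a separate web application (its own host header covers the clientname.mycompany.com requirement) pointed at the same membership database.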

    Read the article

  • Backup program for Windows using non-proprietary format?

    - by Cristi Diaconescu
    I'm looking at the various local backup programs for Windows, and I was wondering which of them use a non-proprietary backup format. By non-proprietary, I mean I want to be able to access at least the latest version of the backed-up files either directly, or by using an open-standard format like zip/7z/rdiff. The other thing I'm looking for in a backup program is the ability to create incremental backups.

    What I have found so far:

    - SyncBack copies files as-is, using separate directories for versioning
    - pretty much the same for all the 'roll your own' task scheduler + rsync/xcopy32/robocopy/MS SyncToy/etc. solutions
    - GFI Backup appears to be using zip files, at least in their 'Business' version; not sure about the free 'Home' version. Didn't try it yet, but it's next on my list.
    - Mozy (!) supports local backup starting with v2.0 and basically provides a second local copy on a separate partition. Subjectively, it feels slow and resource-intensive (I think it took more than a week to finish the first local backup of ~300 GB), and it does not appear to offer file versioning (arguably, you can get older file versions online). On the positive side, it looks like the local backup is integrated into the restore process, which has traditionally been a masochistic experience (and this goes for any online backup provider).

    Other suggestions? I favor ease of use over tons of options (e.g. SyncBack is very flexible, but it offers sooo many ways to shoot yourself in the foot...)
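
    Since rdiff is already on the acceptable-formats list: rdiff-backup keeps the most recent version as a plain, directly browsable mirror and stores older versions as reverse increments, which covers both stated requirements; a short sketch (paths are examples):

        # Latest version stays an ordinary directory tree; history goes in increments:
        rdiff-backup "C:/Users/me/Documents" "D:/backups/documents"

        # Pull back a file as it existed seven days ago:
        rdiff-backup -r 7D "D:/backups/documents/report.doc" "C:/restore/report.doc"

    The trade-off is that only the newest version is directly accessible; older versions need the tool (or anything else that understands the rdiff delta format).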

    Read the article

  • Best Practice: Apache File Upload

    - by matnagel
    I am looking for a solution for trusted users to upload PDF files via HTML forms (with maybe PHP involved). This is a quite standard Ubuntu Linux server with Apache 2.x and PHP 5. I am wondering what the benefits of the Apache file upload module are. There have been no updates for some time; is it actively maintained? What are the advantages over a traditional PHP upload with Apache 2 without this module? http://commons.apache.org/fileupload

    I remember that traditional PHP file upload is difficult, with some pitfalls; will the Apache file upload module improve the situation? The solution I am looking for will be part of an existing website and be integrated into the admin web frontend. Things I am not considering are WebDAV, ssh, ftp, ftps, and ftp over ssh. It should work with a browser and without installing special client software, so I am asking about a browser-based upload without special client-side requirements. I can require a modern browser like Firefox >= 3.5 or a modern WebKit browser like Chrome or Safari from the users.
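
    For the traditional PHP route, most of the classic pitfalls are php.ini limits rather than code; a quick hedged check of the values that most often break PDF-sized uploads:

        # Settings that commonly cap or silently break form uploads:
        php -i | grep -E 'file_uploads|upload_max_filesize|post_max_size|max_execution_time'

    In the handler itself, the other classic rule is to move the file with move_uploaded_file() rather than copy() or rename(), since that call verifies the file really came from this request's upload rather than from an attacker-supplied path.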

    Read the article

  • weblogic plug-in apache http server location directive question

    - by user39510
    We are using WebLogic Portal and Apache 2.x HTTP server with the WebLogic plug-in for Apache for load balancing. We have an application that right now can only be accessed from one of our managed servers. What I would like to do is use the Location directive to direct all requests for that page to the one managed server, and I can't get it to work. The context that the portal tries to forward to is something like /MyWebApp?portalusername=<username>, where <username> is a legitimate user; for example, /MyWebApp?portalusername=joesmith. All other applications and the plug-in are load balancing as expected, because every now and then you'll get sent to the second managed server for this particular application, and it's not deployed there. I tried various things in the Apache httpd.conf like the following, but can't seem to get it to work. Any suggestions? The following is a snippet of the httpd.conf; it's a standard out-of-the-box httpd.conf file with the WebLogic plug-in configuration.

        <Location /MyWebApp>
            SetHandler weblogic-handler
            WebLogicCluster myserver:7011
        </Location>

        <Location /?>
            SetHandler weblogic-handler
            WebLogicCluster myserver:7011, myserver2:7012
        </Location>
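
    Two Apache behaviors are relevant here: <Location> matches only the URL path (the query string never participates in the match, so ?portalusername=... can be ignored), and when several sections match the same request, sections appearing later in the file override earlier ones. A hedged rearrangement along those lines (server names copied from the snippet above):

        # General case first: everything load-balances across the cluster.
        <Location />
            SetHandler weblogic-handler
            WebLogicCluster myserver:7011,myserver2:7012
        </Location>

        # Specific override last: this app exists only on the first managed server.
        <Location /MyWebApp>
            SetHandler weblogic-handler
            WebLogicCluster myserver:7011
        </Location>

    Note that in <Location>, ? is a wildcard matching a single character, so "/?" only matches one-character paths; a bare "/" is the prefix catch-all.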

    Read the article

  • signed applet automatically running as insecure

    - by Terje Dahl
    My application is deployed as a self-signed applet to several thousand users at more than 50 schools across the country (in Norway). The user is presented with the standard Java security warning asking if they will accept the signature. When they do, the applet runs perfectly. However, about half a year ago a group of 7 schools, all under a common IT department, stopped getting the security warning. Instead, the applet loads and starts running in untrusted mode, without first giving the user an option to accept or reject the signature. The problem occurs on Windows machines, and only when the machine is connected to the school's network. If they take the same machine home with them, the program functions as it should, with security warnings and everything. I know little about Windows systems in general, but I would think it would be some sort of policy file or something that is loaded when a machine hooks up to/through the school's network. Furthermore, the problem only started occurring in these 7 schools after changes made following a security breach they had a while back. The IT department is stumped. I am stumped. Any thoughts, comments, suggestions?
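
    One place worth checking on an affected machine is the system-wide Java deployment configuration, which an IT department can push centrally; a hedged sketch of where to look (paths vary by Java version, and the property shown is an example of the kind of setting that suppresses the grant dialog, not a confirmed culprit):

        rem System-wide deployment config, if one is being pushed:
        type "%WINDIR%\Sun\Java\Deployment\deployment.config"
        type "%WINDIR%\Sun\Java\Deployment\deployment.properties"

        rem Example of a property that would silence the signature prompt:
        rem     deployment.security.askgrantdialog.show=false

    Comparing those files on the same laptop at school and at home would show quickly whether the network login is swapping the deployment configuration.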

    Read the article

  • Always failing to connect to Outlook Anywhere through TMG 2010 with a certificate?

    - by Albert Widjaja
    Hi, I have successfully published Exchange ActiveSync using TMG 2010, and OWA internally only, but somehow when I tried to publish Outlook Anywhere it failed (as can be seen from https://www.testexchangeconnectivity.com).

    Settings: in IIS 7, I have unchecked "Require SSL" and set the client-certificates option to "Ignore".

    Exchange CAS settings:

        ServerName                 : ExCAS02-VM
        SSLOffloading              : True
        ExternalHostname           : activesync.domain.com
        ClientAuthenticationMethod : Basic
        IISAuthenticationMethods   : {Basic}
        MetabasePath               : IIS://ExCAS02-VM.domainad.com/W3SVC/1/ROOT/Rpc
        Path                       : C:\Windows\System32\RpcProxy
        Server                     : ExCAS02-VM
        AdminDisplayName           :
        ExchangeVersion            : 0.1 (8.0.535.0)
        Name                       : Rpc (Default Web Site)
        DistinguishedName          : CN=Rpc (Default Web Site),CN=HTTP,CN=Protocols,CN=ExCAS02-VM,CN=Servers,CN=Exchange Administrative.......
        Identity                   : ExCAS02-VM\Rpc (Default Web Site)
        Guid                       : 59873fe5-3e09-456e-9540-f67abc893f5e
        ObjectCategory             : domainad.com/Configuration/Schema/ms-Exch-Rpc-Http-Virtual-Directory
        ObjectClass                : {top, msExchVirtualDirectory, msExchRpcHttpVirtualDirectory}
        WhenChanged                : 18/02/2011 4:31:54 PM
        WhenCreated                : 18/02/2011 4:30:27 PM
        OriginatingServer          : ADDC01.domainad.com
        IsValid                    : True

    Test-OutlookWebServices output:

        1013 Error: When contacting https://activesync.domain.com/Rpc received the error: The remote server returned an error: (500) Internal Server Error.
        1017 Error: [EXPR]-Error when contacting the RPC/HTTP service at https://activesync.domain.com/Rpc. The elapsed time was 0 milliseconds.

    https://www.testexchangeconnectivity.com testing result:

        Checking the IIS configuration for client certificate authentication.
        Client certificate authentication was detected.
        Additional Details: Accept/Require client certificates were found. Set the IIS configuration to Ignore Client Certificates if you aren't using this type of authentication.

    Environment:

    - Windows Server 2008 (HT-CAS)
    - Exchange Server 2007 SP1
    - TMG 2010 Standard
    - Outlook 2007 client SP2

    Any kind of help would be greatly appreciated. Thanks.
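
    Since the remote connectivity test still detects Accept/Require client certificates somewhere in the chain, one hedged check is the effective sslFlags on the /Rpc virtual directory from the command line (IIS 7 appcmd; "Default Web Site" is assumed to be the site name):

        rem Show the effective SSL / client-certificate flags on the Rpc vdir:
        %windir%\system32\inetsrv\appcmd list config "Default Web Site/Rpc" -section:access

        rem Require SSL but ignore client certificates (no SslNegotiateCert flag set):
        %windir%\system32\inetsrv\appcmd set config "Default Web Site/Rpc" -section:access -sslFlags:Ssl -commit:apphost

    With SSL offloading enabled on the TMG side, the published rule and the vdir flags also have to agree; a mismatch between the two is worth ruling out as a source of the 500s from /Rpc.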

    Read the article

  • Accidentally overwrote system.dbf - What now?

    - by Filip Ekberg
    I accidentally overwrote system.dbf in /usr/lib/oracle/xe/oradata/XE/system.dbf. Well, I did not actually do it accidentally; I overwrote it because of other failures in the database. When I try running the following:

        SQL> shutdown
        ORA-01109: database not open
        Database dismounted.
        ORACLE instance shut down.
        SQL> startup
        ORACLE instance started.
        Total System Global Area  289406976 bytes
        Fixed Size                  1258488 bytes
        Variable Size              92277768 bytes
        Database Buffers          192937984 bytes
        Redo Buffers                2932736 bytes
        Database mounted.
        ORA-01589: must use RESETLOGS or NORESETLOGS option for database open

    Now I want to try to recover the database, because starting it mounted or normally surely doesn't work:

        SQL> recover database using backup controlfile;
        ORA-00283: recovery session canceled due to errors
        ORA-01110: data file 1: '/usr/lib/oracle/xe/oradata/XE/system.dbf'
        ORA-01122: database file 1 failed verification check
        ORA-01110: data file 1: '/usr/lib/oracle/xe/oradata/XE/system.dbf'
        ORA-01206: file is not part of this database - wrong database id

    How do I solve this? Is it even possible? My "real" problem was that I ran /etc/init.d/oracle-xe configure and it overwrote my old configuration and probably removed passwords and such, so my tables were gone. However, I found the mytablespace.dbf, so I hope that it is possible to recover? Please shed some light on this.
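
    ORA-01206 is saying that the file now sitting at datafile 1 belongs to a different database (wrong database id), so the overwritten SYSTEM datafile cannot simply be re-adopted. If any RMAN backup of this XE database exists, restoring just datafile 1 is the standard path; a minimal sketch, assuming a usable backup and the original control file:

        $ rman target /
        RMAN> startup mount;
        RMAN> restore datafile 1;
        RMAN> recover datafile 1;
        RMAN> alter database open;

    Without a backup of SYSTEM, the surviving mytablespace.dbf still holds its rows, but extracting them without a working data dictionary is specialist data-unloader territory rather than a recover command.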

    Read the article

  • So Close: How to get this SSH login working (.bashrc)

    - by This_Is_Fun
    Objective: SSH login (+ eliminate the warning message), run 2 commands, and stay logged in.

    EDIT: Oops, I made a mistake (see below). This code does ~95% of what I wanted to do:

        # .bashrc
        # Run two commands and stay logged in to new server.
        alias gr='ssh -t -p 5xx4x [email protected] 2> /dev/null "cd /var; ls; /bin/bash -i"'

    Now, after a successful login, verifying who is logged in gives root pts/0 2011-01-30 22:09, but trying to 'logout' gives: bash: logout: not login shell: use `exit'. I seem to have full root access without being logged into the shell? (The "/bin/bash -i" was added to stay logged in, but doesn't work quite as expected.) FYI: the question is "How to get this SSH login working" and it is mostly solved; sorry I made a mess...

    Original question here:

        # .bashrc
        # Run two commands and stay logged in to new server.
        alias gr='ssh -t -p 5xx4x [email protected] "cd /var; ls; /bin/bash -i"'

        # (hack) Hide "map back to the address - POSSIBLE BREAK-IN ATTEMPT!" message.
        alias gr='ssh -p 5xx4x [email protected] 2> /dev/null'

    Both examples 'work' as shown. When I try to add the 2> /dev/null to the first example, the whole thing breaks. I'm out of time trying to solve the warning message other ways, so is it possible to combine both examples to make example #1 work without the warning message? Thank you.

    ps. If you also know a proper way to kill the login warning message, please do tell (the 'standard' "edit host file" advice isn't working for me).
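
    For what it's worth, the two pieces should not conflict as long as the redirect sits on the ssh command itself, outside the quoted remote command; a hedged combined version (host and port kept as the poster's placeholders), using a login shell so logout works too:

        # Run two commands, silence ssh's stderr warnings, stay in a login shell:
        alias gr='ssh -t -p 5xx4x [email protected] 2>/dev/null "cd /var && ls && exec bash -l"'

    exec bash -l replaces the command shell with a login shell, which is why logout is then accepted; bash -i alone gives an interactive but non-login shell, which is exactly what the "bash: logout: not login shell" complaint means.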

    Read the article

  • SQL Server 2008 R2 transactional replication over VPN

    - by enashnash
    I'm having difficulty setting up replication over a VPN. I have a SQL Server 2008 R2 Enterprise Edition database on a Windows 2008 R2 server. SQL Server is running on a non-standard port. I have set it up so that it is acting as its own distributor, and I have configured a publisher on this server. It is set as an updatable transactional publication (yes, this is necessary). On this server, I have Routing and Remote Access enabled in order to be able to establish VPN connections. It is configured with a static IP address pool, of which the first in the range is always assigned to the server. I have assigned a test user a static address within this range (I don't know if this is necessary or not). All clients will be 2008 R2 versions, but could be SQL Express or standalone developer instances of the full product.

    I can establish a VPN connection from the client without problems, and I can see that the correct IP addresses are allocated. After connecting to the database to test that I could establish a connection, I realised that I needed to be able to connect to the database using the server name rather than an IP address (required for replication), which wouldn't work initially. I created an entry in the hosts file on the client for the server, using the NETBIOS name of the server, and now I can connect to the server, from the client, using the SERVER\INSTANCE, PORT syntax, over the VPN. As it is the default instance on the server, I can also connect with the simple SERVER, PORT syntax.

    After all that, I still get the following dreaded error:

        SQL Server replication requires the actual server name to make a connection to the server. Connections through a server alias, IP address, or any other alternate name are not supported. Specify the actual server name, 'SERVER\INSTANCE'. (Replication.Utilities)

    What have I missed? How do I get this to work? TIA
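
    That error compares the name used for the connection against the server's own idea of its name, so a hedged first check is what the instance reports and whether the two match; a sketch with sqlcmd (SERVER, PORT and OLDNAME are placeholders):

        rem What does the instance believe it is called?
        sqlcmd -S SERVER,PORT -E -Q "SELECT @@SERVERNAME;"

        rem If @@SERVERNAME is stale (machine renamed at some point), fix and restart SQL Server:
        sqlcmd -S SERVER,PORT -E -Q "EXEC sp_dropserver 'OLDNAME'; EXEC sp_addserver 'SERVER', 'local';"

    Replication then needs the connection string to use exactly that @@SERVERNAME value, which is also why a hosts-file entry has to resolve the real name rather than invent an alias.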

    Read the article

  • What's the risk of running a Domain Controller so that it is accessible from the internet?

    - by Adrian Grigore
    I have three remote dedicated web servers at different web hosts. Adding them to a common domain would make a lot of administration tasks much easier. Since two of the servers are running Windows 2008 R2 Standard, I thought about promoting them to domain controllers in order to set up the Windows domain. There's another thread at Server Fault that recommends this. At the same time, I've read in many places that this is not a good idea, because a domain controller should always be behind a firewall on a LAN. But I can't set up something like this, because I don't have a LAN with a static IP accessible from the internet. In fact, I don't even have a Windows server in my LAN.

    What I have not found out is why exposing a DC to the Internet would be a bad idea. The only risk I can see is that if someone penetrates one of my web servers, it should be much easier to penetrate the others as well. But as far as I can see, that's the worst-case scenario, since I am only joining my web servers to that domain, not any computers from my local network. Is this the only downside, or does it also make it easier to penetrate one of my web servers in the first place?

    Thanks, Adrian

    Read the article

  • Fix stubborn 'Setting locale failed.'

    - by plua
    I have a very stubborn, well-known locale error on Ubuntu 9.10:

        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings:
                LANGUAGE = (unset),
                LC_ALL = (unset),
                LC_TIME = "custom.UTF-8",
                LANG = "en_US.UTF-8"

    I have tried the following:

    - Added LANG=en_US.UTF-8 and LC_ALL=en_US.UTF-8 to /etc/environment
    - Ran apt-get install --reinstall locales (error: perl: warning: Falling back to the standard locale ("C"). /usr/bin/mandb: can't set the locale; make sure $LC_* and $LANG are correct)
    - Ran sudo dpkg-reconfigure locales. Result: Cannot set LC_ALL to default locale: No such file or directory, and then it updates all locales, including en_US.UTF-8
    - sudo locale-gen updates all locales successfully, including en_US.UTF-8
    - sudo locale-gen un_US en_US.UTF-8 gives no error nor other output
    - /etc/default/locale says LANG="en_US.UTF-8"
    - echo $LANG gives en_US.UTF-8
    - /var/lib/locales/supported.d/local says en_US.UTF-8 UTF-8
    - locale -a gives me:

        C
        en_AG
        en_AU.utf8
        en_BW.utf8
        en_CA.utf8
        en_DK.utf8
        en_GB.utf8
        en_HK.utf8
        en_IE.utf8
        en_IN
        en_NG
        en_NZ.utf8
        en_PH.utf8
        en_SG.utf8
        en_US.utf8
        en_ZA.utf8
        en_ZW.utf8
        POSIX

    So well... I am pretty much out of options I can think of. Anybody any idea? Thanks!
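
    The one odd value in that warning is LC_TIME = "custom.UTF-8", which is not a locale the system generates, so something in the environment is exporting it; a hedged way to hunt it down and override it:

        # Find whatever sets the bogus LC_TIME (add other dotfiles as needed):
        grep -rn "custom.UTF-8" /etc/environment /etc/default/locale ~/.profile ~/.bashrc 2>/dev/null

        # Point LC_TIME at a locale that exists; update-locale rewrites /etc/default/locale:
        sudo update-locale LC_TIME=en_US.UTF-8

    Once nothing exports custom.UTF-8 any more, the perl warning should disappear on the next login.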

    Read the article

  • Server 2008 Disk Management Hangs

    - by Payson Welch
    So I have looked everywhere for the solution to this and have tried many things. There is one post on SE related to this, and I tried the suggested answer, but I am still having problems. We have a server running Server 2008 R2 Standard x64. I need to increase the space of C:, since the free space is running very low. However, when I open Server Manager and try to go to the "Disk Management" snap-in, it just hangs, with a status message on the bottom of the window that says "Connecting to Virtual Disk Service...".

    Here are the steps I have taken:

    - Ran sfc /scannow
    - Set all of the drives to be dirty and rebooted so that they would be scanned
    - Executed chkdsk /f /r /b /v on all of the drives
    - Checked for Windows updates (none)
    - Verified that the services "Virtual Disk", "RPC Procedures" and "Plug and Play" are all running

    One symptom is that the "Virtual Disk" service does not shut down cleanly: I receive a message about the process being unexpectedly terminated when I try to stop or restart it. Also, I cannot find anything relevant in the event logs. Any ideas or suggestions?
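
    Even with the snap-in wedged, the resize itself can be attempted from diskpart; diskpart also talks to the Virtual Disk Service, so if it hangs the same way, that narrows the fault to VDS itself rather than the MMC snap-in. A careful sketch (the volume letter is assumed):

        rem extend.txt - diskpart script; run with: diskpart /s extend.txt
        list volume
        rem assumption: C: is the volume to grow, with unallocated space adjacent to it
        select volume C
        extend

    extend only works when the unallocated space sits immediately after the C: partition, so the "list volume" and "list disk" output is worth reading before committing.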

    Read the article
