Search Results


  • Active DFS node did not restore after failure

    - by Mark Henderson
    On Tuesday we had a Server 2008 R2 DFS-R node go offline unexpectedly. DFS did the right thing and started routing requests to a different node, which was in a remote site. This is by design, because even though it's slow, at least it's still working. We had the local DFS-R node back online within an hour, and it had synced all its changes 10 minutes after that. Three of the five terminal servers reset themselves to the local DFS node, but the other two stayed pointing at the remote DFS node for three days, until someone finally piped up about how slow requests were. What reasons could there be why some, but not all, of the servers reverted? Is the currently active DFS node for a namespace exposed anywhere in the OS (WMI, or even scripts) so that we can monitor the active nodes?
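
    A minimal sketch of one way to inspect this from the client side, assuming dfsutil.exe is available on the terminal servers (it ships with the File Services admin tools); the referral cache it dumps marks which target is currently ACTIVE:

        :: Run on each terminal server: dump the DFS client referral cache.
        :: The target tagged ACTIVE is the one the client is actually using.
        dfsutil cache referral

        :: Clearing the cache forces the client to re-select a target,
        :: which should send it back to the site-local node:
        dfsutil cache referral flush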

  • Only 192.168.0.3 can request most files, but anyone can request /public/file.html

    - by mattalexx
    I have the following virtual host on my development server:

        <VirtualHost *:80>
            ServerName example.com
            DocumentRoot /srv/web/example.com/pub
            <Directory /srv/web/example.com/pub>
                Order Deny,Allow
                Deny from all
                Allow from 192.168.0.3
            </Directory>
        </VirtualHost>

    The "Allow from 192.168.0.3" part is there to allow requests only from my workstation. I want to tweak this to allow anyone to request a certain URL: http://example.com/public/file.html. How do I change this to let /public/file.html requests through from anyone? Note: /public/file.html doesn't actually exist as a file on the server; I redirect all incoming requests through a single index file using mod_rewrite.
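
    A minimal sketch of one approach, given that it's the URL path (not a file on disk) that needs opening up: a <Location> block matches the request URL, and its access rules are merged after <Directory> ones, so it can selectively override the deny:

        # Hedged sketch: <Location> matches the URL space, so it applies even
        # though /public/file.html has no corresponding file on disk.
        <Location /public/file.html>
            Order Allow,Deny
            Allow from all
        </Location>

    Placed inside the same <VirtualHost>, this would leave everything else restricted to 192.168.0.3.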

  • PHP require and include not functioning with Nginx FPM/FastCGI

    - by Vince Kronlein
    I've split up my FPM pools so that PHP runs under each individual user, and I've set the routing correctly in my vhost.conf files to pass the proper port number. But I must have something incorrect in my environment, because on this new domain I set up, require, require_once, include, and include_once do not function; or rather, they may not be getting passed up to the interpreter to be rendered as PHP. Since I already have a WordPress install on this server that runs perfectly, I'm pretty sure the error is in my server block for nginx:

        server {
            server_name www.domain.com;
            rewrite ^(.*) http://domain.com$1 permanent;
        }

        server {
            listen 80;
            server_name domain.com;
            client_max_body_size 500M;
            index index.php index.html index.htm;
            root /home/username/public_html;

            location / {
                try_files $uri $uri/ index.php;
            }

            location ~ \.php$ {
                if (!-e $request_filename) {
                    rewrite ^(.*)$ /index.php?name=$1 break;
                }
                fastcgi_pass 127.0.0.1:9002;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }

            location ~ /\.ht {
                deny all;
            }
        }

    The problem, I think, is that there are dynamic calls to the doc-root index file, while all calls to anything within a sub-folder should be routed as normal, i.e. NOT passed to index.php. I can't seem to find the right mix here. It should run like so:

        domain.com/cindy (file doesn't exist)          --> index.php?name=$1
        domain.com/admin/anyfile.php (files DO exist)  --> admin/anyfile.php?$args
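
    A minimal sketch of the try_files-based alternative, the idiomatic way in nginx to fall back to a front controller, which avoids the "if" block entirely (port and root taken from the question):

        location / {
            # Serve real files and directories; everything else becomes a
            # front-controller request.
            try_files $uri $uri/ /index.php?name=$uri&$args;
        }

        location ~ \.php$ {
            # Only hand real .php files to FPM; missing ones 404 instead of
            # being silently rewritten.
            try_files $uri =404;
            fastcgi_pass 127.0.0.1:9002;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

    With this, /admin/anyfile.php exists and is executed directly, while /cindy falls through to index.php.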

  • How to add exception for backup MX to tumgreyspf?

    - by Waleed Hamra
    I have an Ubuntu Raring server running Postfix/Dovecot as an email server, with tumgreyspf doing greylisting and SPF checks. My problem is that I also have a backup MX server that is supposed to store my emails temporarily, should my main server ever fail. It usually declines to receive emails while it finds the main server online and functional. The problem is that when it does need to do its job, tumgreyspf rejects all emails from the backup MX with an error like this:

        Jun 27 16:18:13 hamra postfix/smtpd[28732]: NOQUEUE: reject: RCPT from
        mxbackup.mydomain.com[x.x.x.x]: 550 5.7.1 <[email protected]>: Recipient
        address rejected: QUEUE_ID="" SPF Reports: 'SPF fail - not authorized';
        from=<[email protected]> to=<[email protected]> proto=SMTP
        helo=<mxbackup.mydomain.com>

    Any ideas?
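
    A minimal sketch of one workaround on the Postfix side, assuming tumgreyspf is hooked in via check_policy_service under smtpd_recipient_restrictions (the policy service path below is illustrative): whitelist the backup MX with an access map evaluated before the policy check, so its mail is accepted without an SPF verdict.

        # /etc/postfix/main.cf (excerpt) - ordering matters: the access check
        # must come before the tumgreyspf policy service.
        smtpd_recipient_restrictions =
            permit_mynetworks,
            check_client_access hash:/etc/postfix/backup_mx_access,
            check_policy_service unix:private/tumgreyspf

        # /etc/postfix/backup_mx_access
        mxbackup.mydomain.com    OK

    Then run postmap /etc/postfix/backup_mx_access and reload Postfix. SPF failing for a backup MX is expected, since it re-sends other domains' mail from its own IP, so exempting it from the SPF check is the usual fix.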

  • Gmail won't forward mail sent to myself.

    - by BHare
    I own a dedicated server with a domain; we'll say foobar.com. I use Google Apps to manage my email and SMTP servers. Now, I don't want to check two Gmail inboxes: I have my own personal one, and then I have foobar.com's inbox from Google Apps. Naturally the easiest thing to do is have all of foobar's email forwarded to my personal one, so that I am only checking one inbox. This is all fine and dandy. I use msmtp with a wrapper that uses /etc/aliases. I have it set so any mail attempting to go to root (things from cron, etc.) will go to support@foobar.com. But when Google Apps (foobar.com) gets an email from the address I have set up with it ([email protected]), it automatically doesn't forward the message. This is a "feature" of Gmail/Google Apps, I suppose. How do I get around it? Workarounds? I could just point my alias at my personal email, but I wanted a place where all foobar-related emails are archived together (Google Apps).

  • IIS 6 ASP.NET default handler-mappings and virtual directories

    - by Mark Lauter
    I'm having a problem setting a default mapping in IIS 6. I want to secure *.html files with ASP.NET forms authentication. The problem seems to have something to do with using virtual directories to hold the html files. Here's how it's set up.

    Sample directory tree:

        c:/inetpub/  (nothing in here)
        d:/web_files/my_web_apps
        d:/web_files/my_web_apps/app1/
        d:/web_files/my_web_apps/app2/
        d:/web_files/my_web_apps/html_files/

    app1 and app2 both access the same html_files directory, so html_files is set as a virtual directory in the web apps in IIS. Sample web directory tree:

        //app1/html_files/  (points to physical directory: d:/web_files/my_web_apps/html_files/)
        //app2/html_files/  (points to physical directory: d:/web_files/my_web_apps/html_files/)

    If I put a file called test.html in the root of //app1/, add the default mapping to the ASP.NET DLL, and set up my security on the root folder with deny="?", then accessing test.html works exactly as expected: if I'm not authenticated, it takes me to the login.aspx page, and if I am authenticated it displays test.html. If I put the test.html file in the html_files directory I get totally different behavior. The login.aspx page loads, and I stuck some code in to check whether I was still authenticated:

        <p>authenticated: <%=User.Identity.IsAuthenticated%></p>

    I figured it would say false, because why else would it bother to load the login page? Nope, it says true. So it knows I'm authenticated, but it won't give me access to the test.html file. I've spent several hours on this and haven't been able to solve it. I'm going to spend some more time on Google to see if I've missed something. Fingers crossed.
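
    A hedged sketch of one thing worth checking, on the assumption that the virtual directory isn't picking up the app's mapping and authorization: in IIS 6 the .html-to-aspnet_isapi.dll mapping has to be applied on the html_files virtual directory itself (its Application Configuration), and the authorization rule can be pinned there with a web.config in that folder:

        <!-- web.config dropped in d:/web_files/my_web_apps/html_files/ -->
        <configuration>
          <system.web>
            <authorization>
              <!-- deny anonymous users; authenticated ones fall through -->
              <deny users="?" />
            </authorization>
          </system.web>
        </configuration>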

  • Windows Server 2003 - passwordless access to \\myhost\ but not \\myhost.mydomain.net\

    - by Charles Duffy
    I have a Windows Server 2003 system on which passwordless access to local UNC paths is possible using the server's unqualified hostname or its IP address, but not via its FQDN, even when the hosts file is used to map that FQDN directly to 127.0.0.1. That is:

        \\127.0.0.1\              - passwordless
        \\myhost\                 - passwordless
        \\myhost.mydomain.com\    - brings up an authentication dialog

    Unfortunately, I have a local application trying to resolve UNC paths that include the host's FQDN. I've tried resolving myhost.mydomain.com to 127.0.0.1 in both hosts and lmhosts, and running ping myhost.mydomain.com at the command prompt gives the appearance that this resolution has taken effect; even so, attempting to open \\myhost.mydomain.com\ from Windows Explorer brings up a password prompt, while \\127.0.0.1\ does not. The system is using an OpenDirectory server (Apple's Kerberos+LDAP directory service) for authentication.
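
    This symptom pattern (short name and loopback IP fine, FQDN prompting) matches the loopback-check protection added in Windows Server 2003 SP1, so here is a hedged sketch of the usual registry workarounds, assuming that is indeed the cause:

        :: Option 1 (narrower): whitelist the specific FQDN.
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" ^
            /v BackConnectionHostNames /t REG_MULTI_SZ /d myhost.mydomain.com

        :: Option 2 (broader): disable the loopback check entirely.
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" ^
            /v DisableLoopbackCheck /t REG_DWORD /d 1

        :: A reboot (or at least restarting the affected services) may be
        :: needed for the change to take effect.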

  • Computer spontaneously reboots when doing heavy file copy to/from disk

    - by Mark Hosang
    I've been fighting this problem for the last three weeks: my machine will just instantly reboot. No BSOD, and when I check the event log, all that is reported is the generic "Kernel-Power" error, with the detailed information pointing to a hard crash. This is a machine that worked for 18 months before these crashes started happening. They started after I added 3 HDs in a RAID-5, upped the memory to 12 GB, moved to a new house, added an SSD, and added about 5 case fans. I have since eliminated the RAID, and determined that the SSD was not the cause (it was still crashing even with the SSD disconnected). I've run memtest several times overnight with no memory problems showing up. I've run IntelBurnTest to max out the CPU to see if it was a heat issue; at full tilt after 20 minutes it was only at 85C and the machine didn't crash. I also watched the voltages during that test. I've ruled out a software issue by reinstalling Windows 7 Ultimate x64 a total of 5 times, but it even crashes during the install: sometimes during file copying at the beginning, during uncompressing files, or while running Windows Update. The only discernible pattern I can see is that it seems to crash when hard disks might be spinning up or when they are accessed heavily by large file transfers. My current guess is that it is probably an issue with the MB, PSU, or the power coming through the outlet. Any suggestions of what I could try to troubleshoot, or what may be wrong?

    Specs:

        PSU: Seasonic M12 700W
        Mem: 12 GB
        CPU: i7-920 with stock heatsink
        MB:  Asus P6T
        HDs: 3 green WD and 1 Corsair Force 3 120GB with 1.3.3 firmware

    (Screenshots: voltages running full tilt; voltages idling.)
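
    A hedged sketch of two quick settings that can narrow this down, assuming a stock Windows 7 install:

        :: Keep any blue screen on screen instead of auto-restarting, and make
        :: sure a minidump is written for later analysis (WinDbg/BlueScreenView):
        wmic recoveros set AutoReboot = False
        wmic recoveros set DebugInfoType = 3

    If the machine still drops with no BSOD and no dump in %SystemRoot%\Minidump after this, that points away from Windows and toward PSU/motherboard/power, since a kernel crash would normally leave at least a dump behind.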

  • Get Jira to run on a shared Windows server on port 80

    - by codeulike
    I know this can be done with JIRA on Linux, using mod_proxy, but I'm not sure if it's possible on Windows. Say we have a Windows server running IIS 7.0 and serving up pages on port 80, via an address like:

        http://twiddle.something.com

    We then install JIRA on the server; it uses its bundled Tomcat web server to serve stuff up on port 8080, like this:

        http://twiddle.something.com:8080

    Is there a way to configure IIS and Tomcat so that JIRA runs off a port-80 folder, as in:

        http://twiddle.something.com        still hits IIS
        http://twiddle.something.com/Jira   hits JIRA on Tomcat

    Thanks. Edit: I guess we might also want to throw SSL into the mix for JIRA too...
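
    A minimal sketch of the IIS-side equivalent of the mod_proxy setup, assuming IIS 7.0 with the URL Rewrite module plus Application Request Routing (ARR) installed and proxying enabled in ARR; the /Jira path and port 8080 are taken from the question:

        <!-- web.config at the IIS site root -->
        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="Jira reverse proxy" stopProcessing="true">
                  <match url="^Jira/(.*)" />
                  <action type="Rewrite" url="http://localhost:8080/{R:1}" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>

    For JIRA's own links to come out right, its Tomcat connector would also need its context path set to /Jira; everything not matching /Jira/* keeps hitting IIS as before.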

  • How do you determine the OWNER of an Oracle database?

    - by Kwang Mark Eleven
    When you install an Oracle database on a Unix server, the Unix user ID you use for the installation becomes the OWNER of the database. What is the most reliable and general way of determining, in a shell script, which Unix user is the owner of an Oracle installation? I mean, can you grep a file created by the installation to find this information, or should I resort to using the ls command on a specific file in a specific directory? If the name of the file to be checked is also variable, I would need a way of determining the name and path of the file. Thanks in advance for your time.
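
    A hedged sketch of the ls-based route, assuming /etc/oratab exists (its location varies; e.g. /var/opt/oracle/oratab on Solaris): the oracle binary under $ORACLE_HOME/bin is owned by the installing user, so its owner is a reasonable proxy for the installation owner.

        #!/bin/sh
        # Pull the first ORACLE_HOME out of oratab (format: SID:ORACLE_HOME:Y|N).
        ORACLE_HOME=$(awk -F: '!/^#/ && NF >= 2 { print $2; exit }' /etc/oratab)

        # The owner of $ORACLE_HOME/bin/oracle is the installation owner.
        ls -l "$ORACLE_HOME/bin/oracle" | awk '{ print $3 }'

        # Alternative when an instance is running: the owner of the pmon process.
        ps -eo user,comm | awk '$2 ~ /^ora_pmon_/ { print $1; exit }'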

  • Windows 7 PC browsers having trouble opening certain sites but not others

    - by user55345
    Hi, my Win7 PC has been acting very strangely these days. Examples:

        - It cannot open bing.com, alexa.com, or msdn.microsoft.com (blank page, or "page cannot be found");
        - It can open zdnet.com, but all the CSS layout and pictures are lost;
        - It can open Stack Overflow and Google with no problem.

    I tried all three browsers (IE8, Chrome, Firefox 3.6): same thing. I didn't change anything on this PC in days. The PC does have antivirus installed and updated. I'm also pretty sure it's not a network problem, because my other laptops are fine at the same time. The only thing I can think of is that some auto-update might have happened underneath without my knowledge, but I have no idea where to troubleshoot. Please shed some light on this for me. Thanks a lot.
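
    A hedged sketch of some first-line local checks, given that other machines on the same network are fine:

        :: Rule out a stale or poisoned DNS cache:
        ipconfig /flushdns
        nslookup bing.com

        :: Look for hijack entries in the hosts file:
        notepad %SystemRoot%\System32\drivers\etc\hosts

        :: Undo Winsock/LSP tampering (reboot afterwards):
        netsh winsock reset

    Partially loading pages with missing CSS across all three browsers is consistent with something system-wide (hosts file, Winsock LSP, proxy setting, or the antivirus's web filter) rather than any one browser.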

  • WordPress - Open External Links in a New Window by Default

    - by ninjaboi21
    Hey SuperUsers, is it possible to make some change, or add a plugin, that allows me to make every external link open in a new window? Just an example: if my blog were http://timmy.com/ and I linked to http://tom.com/, it would open in a new window instead of the same window; but a link from http://timmy.com/ to http://timmy.com/somewhere/ would open in the same window. So I want all external links from my website to another site to open in a new window/tab, but every internal link, from my website to somewhere else on my website, to stay in the same window/tab. Is this possible?
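
    A minimal sketch of the no-plugin route, assuming a few lines of JavaScript can be added to the theme (e.g. in footer.php); the hostnames follow the example above:

        // Open off-site links in a new tab; internal links are left alone.
        document.addEventListener('DOMContentLoaded', function () {
            var links = document.querySelectorAll('a[href^="http"]');
            for (var i = 0; i < links.length; i++) {
                if (links[i].hostname !== window.location.hostname) {
                    links[i].target = '_blank';
                }
            }
        });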

  • Creating two IIS ASP.NET applications and making one the root

    - by Shawn Mclean
    I'm using GoDaddy. I have two applications I want to install on the server, one named en and the other fr (English and French). On the en application, I checked "Make Application Root". I assumed I could then go to www.mysite.com and it would automatically load www.mysite.com/en/, but I still have to go to the directory manually. How do I fix this? My file structure is: www.mysite.com/en for the en app and www.mysite.com/fr for the fr app.
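
    A hedged sketch of one workaround that fits shared hosting: a single-file Default.aspx dropped in the site root, sending bare requests on to /en/ (this assumes .aspx default documents are enabled, which they normally are on IIS):

        <%@ Page Language="C#" %>
        <script runat="server">
            // Redirect visitors of the bare root to the English application.
            protected void Page_Load(object sender, EventArgs e)
            {
                Response.Redirect("~/en/");
            }
        </script>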

  • SonicWall HA "gotchas"?

    - by Mark Henderson
    We're looking to move away from pfSense and CARP to a pair of SonicWall NSA 2400s¹ configured in Active/Passive for High Availability. I've never dealt with SonicWall before, so is there anything I should know that their sales guy won't tell me? I'm aware that they had an issue with a lot of their devices shutting down connectivity because of a licensing fault, and that they have an overtly complex management GUI (on the older devices at least), but are there any other big "gotchas" I need to be aware of before committing a not-insubstantial amount of money towards these devices?

    ¹ If you're outside the US, the SonicWall global sites suck balls. Use the US site for all your product research, and then use your local site when you're after local information.

  • NTBackup (on WS2k3) fails to back up remote server (WS2k8R2) with "… is not a valid drive, or you do not have access."

    - by Mark A
    We run an NTBackup job on Windows Server 2003 R2 SP2 with all updates (as of Q4 2011). It works well backing up two WS2k3 servers as well as the backup server itself. However, we have been unable to successfully back up our Windows Server 2008 R2 machine ("G5-01"). It often runs for about 2 GB worth of backup and then dies out with one of the error messages below. It should be more like 20 GB for the full server. We have tried using the admin share (C$), an explicitly shared drive, UNC paths, and mapped drives. The result is the same each time; the only thing that varies is the amount of stuff backed up before it chokes. We've also run NTBackup from the UI, from the command line, and as a scheduled task. We are backing up to 400/800 GB tapes and they have plenty of space available on them (blank media).

        Error: \\G5-01\c is not a valid drive, or you do not have access.
        Error: \\G5-01\c$ is not a valid drive, or you do not have access.
        Error: Y: is not a valid drive, or you do not have access.
        Error: Could not access or create backup catalog files. Verify that you
        have full access to the working folder and there is disk space available.

    The job is run as Administrator and we have no problems logging onto the server and transferring files. The Event Log on the WS2k8 machine is not much help, as it has success audits for each login. All of the hardware involved (HP DL360 G3, HP LTO Ultrium 3, Adaptec 39320A) has the latest supported drivers. We've seemingly tried a bunch of different options and are wondering where to look next to resolve the backup issue. We've been super happy with our reliable scheduled task for years, but this one is stumping us!

  • SSL connection hangs at Client Hello (curl, openssl client, apt-get, wget, everything)

    - by Niklas B
    Hi, I've run into a problem on my Debian VPS (a Xen domU) regarding SSL: almost all SSL connections hang at Client Hello. For example:

        # curl -vI https://graph.facebook.com
        About to connect() to graph.facebook.com port 443 (#0)
        Trying 66.220.146.48... connected
        Connected to graph.facebook.com (66.220.146.48) port 443 (#0)
        successfully set certificate verify locations:
          CAfile: none
          CApath: /etc/ssl/certs
        SSLv3, TLS handshake, Client hello (1):

    It's the same when using the openssl client. However, some SSL traffic works (for example https://www.nordea.se).

    Server:

        # uname -a
        Linux server.com 2.6.26-1-xen-amd64 #1 SMP Fri Mar 13 21:39:38 UTC 2009 x86_64 GNU/Linux

    It does, however, work on my dom0 (the main Xen host).

    Apt-get: I can't even run apt-get update with the Debian security sources (it hangs on reading headers).

    OpenSSL: At the beginning I thought I had an old openssl client (0.9.8o-4), since I appeared to have a newer one on the dom0 (0.9.8g-15+lenny8), but manually updating the openssl deb didn't help.

    OpenSSL client: this is the full output of where the openssl client hangs: http://pastebin.com/PAjwMap9

    Closing thoughts: I've Googled the crap out of this and I'm not getting any further. I've seen problems with curl, apt-get, etc., but they are all specific to the particular application, not general to the system. Any thoughts?
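
    A hedged sketch of one well-known culprit for exactly this symptom on Xen domUs: TCP checksum/segmentation offload on the virtual NIC mangling outbound packets such as the Client Hello. Assuming the interface is eth0:

        # Show current offload settings
        ethtool -k eth0

        # Temporarily disable TX checksum and segmentation offload
        ethtool -K eth0 tx off tso off
        # On some kernels generic segmentation offload matters too:
        # ethtool -K eth0 gso off

        # Retry the failing request
        curl -vI https://graph.facebook.com

    If that cures it, the setting can be made persistent (e.g. via post-up lines in /etc/network/interfaces), or the dom0's netback configuration revisited.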

  • Apache Redirect from https to https

    - by Nikolaos Kakouros
    I am trying to redirect, without a rewrite rule, from e.g. https://www.domain.com to https://www.domain.net. I have a wildcard certificate for *.domain.net. This yields the following warning in my error_log:

        [warn] RSA server certificate wildcard CommonName (CN) `*.domain.net'
        does NOT match server name!?

    This makes sense, and I understand why the warning appears. I would like to ask if there is a way to use the Redirect directive to accomplish the above without the warnings. Here are my virtual hosts in ssl.conf:

        <VirtualHost *:443>
            SSLEngine on
            ServerName www.domain.net
            DocumentRoot /var/www/html/domain
            SSLOptions -FakeBasicAuth -ExportCertData +StrictRequire +OptRenegotiate -StdEnvVars
            SSLStrictSNIVHostCheck off
        </VirtualHost>

        <VirtualHost *:443>
            SSLEngine on
            ServerName www.domain.com
            ServerAlias www.domain.info
            Redirect permanent / https://www.domain.net
        </VirtualHost>

    Also, if there is a solution, can it be used for redirection from https://domain.com to https://www.domain.com? Thanks a lot!

  • Losing Windows Authentication intermittently

    - by Mark Robinson
    I'm running a website on IIS6 / Server 2003 which uses Integrated Windows Authentication on a local intranet. I can browse to the site, but I get intermittent "object null" errors from the following C# code, which is called on every request (apologies for the code sample; I was torn between SF and SO, but I think this is more to do with server config):

        .... GetUserIdFromPrincipal(User) ....

        public static string GetUserIdFromPrincipal(IPrincipal principal)
        {
            return principal.Identity is WindowsIdentity
                ? (principal.Identity as WindowsIdentity).User.Value
                : principal.Identity.Name;
        }

    So, as the error is intermittent, clearly Windows Auth is working on some level; but after navigating around the site for several clicks I get the null reference error, meaning IPrincipal is null (I thought this could never be null in ASP.NET). The error only happens on this newly built VM; the code is fine on other machines. Does IIS request the Windows Auth details on each request? What would cause such an intermittent problem? Any help or suggestions would be much appreciated.
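
    As an aside, a hedged defensive rewrite of the helper that would at least reveal which link in the chain is null; in .NET, WindowsIdentity.User (the SID) is itself null for anonymous identities, which is another possible source of the NullReferenceException besides a null principal:

        public static string GetUserIdFromPrincipal(IPrincipal principal)
        {
            if (principal == null || principal.Identity == null)
                throw new InvalidOperationException(
                    "No authenticated principal on this request.");

            var windowsIdentity = principal.Identity as WindowsIdentity;
            if (windowsIdentity != null)
            {
                // User (the SID) is null for anonymous WindowsIdentity instances.
                return windowsIdentity.User != null
                    ? windowsIdentity.User.Value
                    : windowsIdentity.Name;
            }
            return principal.Identity.Name;
        }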

  • Managing multiple independent domains with Google Apps

    - by Saif Bechan
    I am currently running a server where I have multiple domains, each running its own mail server. My plan is to outsource this whole email service and have Google, or a competitor, do it for me. Let me start by describing the setup I have now and want to migrate to Google.

    Initial setup: I have a main domain where I run my server and my nameserver. This is an important domain because it holds the connection with all my internal applications. For example, log messages, cron job messages, and virus-scan messages are sent to this domain. This email address is also registered at my registrar, and I use it to communicate with my ISP. Next, I run a few independent websites that all need their own independent email addresses. This can be on shared space, I don't mind; 1 GB will be enough for everything I am going to do. Summary:

        superdomain.com (which only has a catch-all, for internal use and communication with my ISP)
        cars.com (independent)
        flowers.com (independent)
        foods.com (independent)

    I am going to be the admin for all of this. The independent domains don't need their own admin panels; they just need email addresses like info@, support@, etc. I do all the managing, and they just send and receive email using the accounts I give them. Each website's staff use their own accounts.

    Tried so far: I have registered my superdomain, but I can only add aliases to the main domain. If I make all the other domains aliases, email to info@cars.com and info@flowers.com will share the same inbox, and I want them separate. Is the only way to achieve this to create an account for each domain? And if so, is there no way of creating a superdomain account where I can manage all these accounts easily, without having to log in to four different places to get my work done? I have searched the Google help forums and posted questions, but without any results so far.

    Questions: Can anyone please give me some advice on what to do? I currently use the free program Google offers.

  • Configuring htaccess to show authentication prompt only for subdomain

    - by Philipp Lenssen
    How do I write the .htaccess so that it will only require authentication on admin.example.com, but not on www.example.com (as with some if-else clause)? Background: I have a site running in two modes. The admin mode should be reached at something like admin.example.com, whereas the normal mode is www.example.com; both should point to the same directory and scripts within it (the scripts then turn on certain editing features by checking whether they are accessed via the admin subdomain). Edit: I can now see this has been asked and answered at Stack Overflow, though I can't get the top answer to work for me...
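
    A minimal sketch of the SetEnvIf pattern for Apache 2.2-style auth, assuming Basic auth and an existing htpasswd file (path illustrative):

        # Flag requests whose Host header is the admin subdomain.
        SetEnvIfNoCase Host ^admin\.example\.com$ require_auth

        AuthType Basic
        AuthName "Admin area"
        AuthUserFile /path/to/.htpasswd
        Require valid-user

        # Let non-admin hosts through without credentials; admin requests
        # fail the Allow and must satisfy Require valid-user instead.
        Order Deny,Allow
        Deny from all
        Allow from env=!require_auth
        Satisfy any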

  • Setting up a subdomain on Windows hosting

    - by jason
    I'm trying to set up a subdomain for development on a Windows server, and I'm having problems setting the correct details in the httpd.ini file; I hoped someone could help. I have set up the subdomain http://dev.website.com. The files that I want to use for this subdomain are on the server in a folder called development (http://www.website.com/development); in the directory structure they are in /htdocs/development. What do I need to add to the httpd.ini file to point http://dev.website.com to the files located in the /htdocs/development folder on the server?
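
    An httpd.ini implies ISAPI_Rewrite version 2, so here is a hedged sketch in that syntax, assuming dev.website.com already resolves to the same site:

        [ISAPI_Rewrite]

        # Serve requests for the dev host out of /development.
        RewriteCond Host: ^dev\.website\.com$
        RewriteRule (.*) /development$1 [I,L]

    The [I] flag makes the match case-insensitive and [L] stops further rule processing; the rewrite is internal, so the /development prefix never shows in the browser.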

  • SCCM SP2 - OOB Management Certificates Problems

    - by Achinoam
    I have a vPro client computer with AMT 4.0. It was imported successfully via the Import OOB Computers wizard, and after sending a "hello" packet it became provisioned (the SCCM GUI displays AMT Status: Provisioned). But when I try to perform power operations on this machine, they always fail, with the following lines in the log:

        AMT Operation Worker: Wakes up to process instruction files  7/29/2009 10:59:29 AM  2176 (0x0880)
        AMT Operation Worker: Wait 20 seconds...  7/29/2009 10:59:29 AM  2176 (0x0880)
        Auto-worker Thread Pool: Work thread 3884 started  7/29/2009 10:59:29 AM  3884 (0x0F2C)
        session params : https://amt4.domaindemo.com:16993 , 11001  7/29/2009 10:59:29 AM  3884 (0x0F2C)
        ERROR: Invoke(invoke) failed: 80020009argNum = 0  7/29/2009 10:59:31 AM  3884 (0x0F2C)
        Description: A security error occurred  7/29/2009 10:59:31 AM  3884 (0x0F2C)
        Error: Failed to Invoke CIM_BootConfigSetting::ChangeBootOrder_INPUT action.  7/29/2009 10:59:31 AM  3884 (0x0F2C)
        AMT Operation Worker: AMT machine amt4.domaindemo.com can't be waken up. Error code: 0x80072F8F  7/29/2009 10:59:31 AM  3884 (0x0F2C)
        Auto-worker Thread Pool: Warning, Failed to run task this time. Will retry(1) it  7/29/2009 10:59:31 AM  3884 (0x0F2C)

    After investigation, I've seen that the problem occurs as early as the second stage of provisioning:

        Start 2nd stage provision on AMT device amt4.domaindemo.com.  8/2/2009 4:55:12 PM  2944 (0x0B80)
        session params : https://amt4.domaindemo.com:16993 , 11001  8/2/2009 4:55:12 PM  2944 (0x0B80)
        Delete existing ACLs...  8/2/2009 4:55:12 PM  2944 (0x0B80)
        ERROR: Invoke(invoke) failed: 80020009argNum = 0  8/2/2009 4:55:14 PM  2944 (0x0B80)
        Description: A security error occurred  8/2/2009 4:55:14 PM  2944 (0x0B80)
        Error: Cannot Enumerate User Acl Entries.  8/2/2009 4:55:14 PM  2944 (0x0B80)
        Error: CSMSAMTProvTask::StartProvision Fail to call AMTWSManUtilities::DeleteACLs  8/2/2009 4:55:14 PM  2944 (0x0B80)
        Error: Can not finish WSMAN call with target device.
          1. Check if there is a winhttp proxy to block connection.
          2. Service point is trying to establish connection with wireless IP address of AMT firmware but wireless management has NOT enabled yet. AMT firmware doesn't support provision through wireless connection.
          3. For greater than 3.x AMT, there is a known issue in AMT firmware that WSMAN will fail with FQDN longer than 44 bytes. (MachineId = 17)  8/2/2009 4:55:14 PM  2944 (0x0B80)
        STATMSG: ID=7208 SEV=E LEV=M SOURCE="SMS Server" COMP="SMS_AMT_OPERATION_MANAGER" SYS=JE-DEV-MS0 SITE=JR1 PID=1756 TID=2944 GMTDATE=Sun Aug 02 14:55:14.281 2009 ISTR0="amt4.domaindemo.com" ISTR1="amt4.domaindemo.com" ISTR2="" ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=0  8/2/2009 4:55:14 PM  2944 (0x0B80)

    This error is consistent across all the other second-stage provisioning tasks (Add ACLs, Enable Web UI, etc.). I've opened the certification authority, and I see that the certificates were issued to the SCCM site server instead of the AMT client! What could be the reason for this failure? What is the problematic definition for the certificate? Thank you in advance!

  • DNS configuration for external/internal resolution

    - by FerranB
    Hi, we have an internal web server which is reachable from the Internet and from the local network; the server itself sits on the local network. The current configuration is the following:

        To access it through the Internet, you use http://webexample.com
        To access it through the local network, you use http://myweb

    The main problem is that local users cannot share links with external users. That's a problem for us. I want to set up the following configuration:

        All users (local and Internet) access through http://webexample.com
        The local DNS server resolves webexample.com to the local network IP (i.e. 192.168.2.100)

    Any other suggestion? Which is the best way to override webexample.com resolution in Windows Server? Can it be done on the DNS server, or does it have to be done in the hosts file?
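
    A hedged sketch of the DNS-server route (split-brain DNS) using dnscmd on the internal Windows DNS server; the names and address follow the question:

        :: Create an internal authoritative copy of the public zone...
        dnscmd /zoneadd webexample.com /primary /file webexample.com.dns

        :: ...and point the name(s) at the LAN address.
        dnscmd /recordadd webexample.com @   A 192.168.2.100
        dnscmd /recordadd webexample.com www A 192.168.2.100

    The caveat with hosting the zone internally is that every public record of webexample.com (MX, other hosts) must be duplicated in the internal copy, since internal clients will no longer see the public zone. Per-machine hosts-file entries avoid that, but have to be maintained on every client.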

  • Can a website have too many bindings?

    - by justSteve
    IIS 7.x on a Win 2008 Web edition dedicated server. I have a site that's serving a few dozen affiliates, many of which are hitting me via a subdomain of their own root domain, and all of which have a subdomain specific to their account. E.g., my affiliate named 'Acme' hits my site via:

        myApp.Acme.com    (his root, my app)
        Acme.MyDomain.com (his account within my root domain)

    Currently I'm adding each of these as a binding entry in IIS (targeting a discrete IP, not '*'). As I ramp this up to include more affiliates, I'm wondering if I should be concerned about how many bindings this site handles. Probably, in Acme's case, I can do without the Acme.MyDomain.com binding because, in reality, all traffic arrives via myApp.Acme.com. Mine is a niche site, very low volume compared to most. At what point do I worry about all those bindings? Thx.

  • Read NTFS partition on RHEL 5.8

    - by Alex Farber
    I have RHEL 5.8 64-bit, and an NTFS partition on the same disk. How can I get access to this partition? This answer (Unable to mount NTFS drive with RHEL 6) doesn't work for me:

        [root@localhost alex]# rpm -Uvh http://download.fedora.redhat.com/pub/epel/6/i386/epel-release-6-5.noarch.rpm
        Retrieving http://download.fedora.redhat.com/pub/epel/6/i386/epel-release-6-5.noarch.rpm
        error: skipping http://download.fedora.redhat.com/pub/epel/6/i386/epel-release-6-5.noarch.rpm - transfer failed - Unknown or unexpected error
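
    That rpm URL is the EPEL 6 release package, which targets RHEL 6 (and the i386 path, on a 64-bit box), so here is a hedged sketch of the RHEL 5 equivalent; the exact epel-release file name changes over time and may need checking against the mirror's directory listing:

        # Install the EPEL 5 repo, then ntfs-3g from it.
        rpm -Uvh http://download.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm
        yum install ntfs-3g

        # Mount the NTFS partition (device name illustrative - check fdisk -l).
        mkdir -p /mnt/windows
        mount -t ntfs-3g /dev/sda3 /mnt/windows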
