Search Results


  • How to upgrade OS on Mac Mini with external USB Drive?

    - by David
    We have a G4 Mac Mini, circa 2005, running Mac OS X 10.4 Tiger, and we want to upgrade to 10.5 Leopard. We have a Leopard install disk, but the optical drive in this mini is broken. So we transferred the install disk image to a USB HDD, but now we can't figure out how to boot off it. From what I've read in Mac forums, some PPC Macs, including some G4's, have been able to boot from USB, even though it sounds like this wasn't officially supported, and it may well depend on the specific model of USB drive and Mac. My Mac says CPU is "PowerPC G4 (1.2)" and Boot ROM is "4.8.9f4". I was hoping I might just find somebody here who had that same Mac Mini and find out if they could make it work. I'd especially like to know any specifics about the USB drive they found success with. Any insights at all would be appreciated. Thanks.
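
    On PowerPC Macs that can boot from USB at all, the usual route is through Open Firmware rather than the Startup Disk preference pane. A rough sketch of the commands involved, assuming a Leopard install volume on the USB disk; the device alias and partition number vary by machine and drive, so treat them as placeholders:

        \ hold Cmd+Option+O+F at power-on to reach the Open Firmware prompt
        devalias                        \ list device aliases; look for usb0 / usb1
        dev usb0                        \ switch to the first USB bus...
        ls                              \ ...and check whether a disk node shows up under it
        boot usb0/disk@1:2,\\:tbxi      \ try booting the blessed System Folder on partition 2

    If the firmware can see the drive but refuses to boot it, trying the other usb alias or a different partition number in the boot path is the usual next step.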

    Read the article

  • My Reverse DNS PTR record seems to be right, but email is still bouncing

    - by johnbr
    I have a service (statusme.com) where I let people know (for example) that their kids' soccer games are cancelled because of bad weather. We send out emails to the people who have registered. I have a second server as a backup (vps.statusme.com), and I've set up the application to send some of the email through the second server. But I'm getting complaints from various recipient SMTP servers that the email is considered spam. So I did some investigating, and it appears they think my reverse DNS record isn't correct. But when I look at it via various rDNS websites and instructions I found elsewhere on ServerFault, everything looks correct:

        jb$ host -t a vps.statusme.com 8.8.8.8
        Using domain server:
        Name: 8.8.8.8
        Address: 8.8.8.8#53
        Aliases:

        vps.statusme.com has address 66.84.8.246

        jb$ host -t ptr 246.8.84.66.in-addr.arpa 8.8.8.8
        Using domain server:
        Name: 8.8.8.8
        Address: 8.8.8.8#53
        Aliases:

        246.8.84.66.in-addr.arpa domain name pointer vps.statusme.com.

    I'm confused about what I'm doing wrong. Thanks for any suggestions.
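
    Forward-confirmed reverse DNS is only one of the checks receivers run; SPF, the HELO name, and IP reputation matter too, so if the PTR really does round-trip it is worth checking those next. A few hedged dig commands for the same checks from the command line (dig against the default resolver; flags are standard):

        dig +short -x 66.84.8.246          # PTR for the sending IP
        dig +short a vps.statusme.com      # does that name resolve back to the same IP?
        dig +short txt statusme.com        # is there an SPF record that covers vps.statusme.com?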

    Read the article

  • What's wrong with this vcl config for varnish-cache as load balancer?

    - by dabito
    I have the following configuration active in the default.vcl varnish file on the machine that balances the load for the other two machines (the other two machines also have varnish active). My intention is to have this server do only the load balancing and the other machines do the processing and also their own caching. My problem is that even while just testing the config (not even a stress test or anything, just a few requests a minute) I get the guru meditation error and have to restart varnish. This is the default.vcl for the load balancing server:

        backend vader {
            .host = "app1.server.com";
            .probe = {
                .url = "/";
                .interval = 10s;
                .timeout = 4s;
                .window = 5;
                .threshold = 3;
            }
        }

        backend malgus {
            .host = "app2.server.com";
            .probe = {
                .url = "/";
                .interval = 10s;
                .timeout = 4s;
                .window = 5;
                .threshold = 3;
            }
        }

        director dooku round-robin {
            { .backend = vader; }
            { .backend = malgus; }
        }

        sub vcl_recv {
            if (req.http.host ~ "^balancer.server.com$") {
                set req.backend = dooku;
            }
        }

    Am I doing something wrong or missing something?

    EDIT: This is varnishlog's output:

        0 CLI - Rd ping
        0 CLI - Wr 200 19 PONG 1345839995 1.0
        0 CLI - Rd ping
        0 CLI - Wr 200 19 PONG 1345839998 1.0
        0 CLI - Rd ping
        0 CLI - Wr 200 19 PONG 1345840001 1.0
        0 Backend_health - malgus Still sick 4--X--- 0 3 5 0.000000 3.846876
        0 Backend_health - vader Still sick 4--X--- 0 3 5 0.000000 3.839194
        0 CLI - Rd ping
        0 CLI - Wr 200 19 PONG 1345840004 1.0
        14 SessionOpen c 10.150.7.151 38272 :80
        14 ReqStart c 10.150.7.151 38272 458200540
        14 RxRequest c GET
        14 RxURL c /
        14 RxProtocol c HTTP/1.1
        14 RxHeader c Host: dooku-dev.excelsior.com
        14 RxHeader c Connection: keep-alive
        14 RxHeader c User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11
        14 RxHeader c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        14 RxHeader c Accept-Encoding: gzip,deflate,sdch
        14 RxHeader c Accept-Language: en-US,en;q=0.8,es-419;q=0.6,es;q=0.4
        14 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
        14 RxHeader c Cookie: SESSa87d6c6da0c61037a9169122dc5e4a19=HR_0Srhgc-uDArT3aJFzOBy31FtzneTXg38byr1eGMU; __atuvc=4%7C33
        14 VCL_call c recv pass
        14 VCL_call c hash
        14 Hash c /
        14 Hash c dooku-dev.excelsior.com
        14 VCL_return c hash
        14 VCL_call c pass pass
        14 FetchError c no backend connection
        14 VCL_call c error deliver
        14 VCL_call c deliver deliver
        14 TxProtocol c HTTP/1.1
        14 TxStatus c 503
        14 TxResponse c Service Unavailable
        14 TxHeader c Server: Varnish
        14 TxHeader c Content-Type: text/html; charset=utf-8
        14 TxHeader c Retry-After: 5
        14 TxHeader c Content-Length: 418
        14 TxHeader c Accept-Ranges: bytes
        14 TxHeader c Date: Fri, 24 Aug 2012 20:26:44 GMT
        14 TxHeader c X-Varnish: 458200540
        14 TxHeader c Age: 0
        14 TxHeader c Via: 1.1 varnish
        14 TxHeader c Connection: close
        14 Length c 418
        14 ReqEnd c 458200540 1345840004.916415691 1345840004.965190172 0.020933390 0.048741817 0.000032663
        14 SessionClose c error
        14 StatSess c 10.150.7.151 38272 0 1 1 0 1 0 256 418
        14 SessionOpen c 10.150.7.151 38273 :80
        14 ReqStart c 10.150.7.151 38273 458200541
        14 RxRequest c GET
        14 RxURL c /favicon.ico
        14 RxProtocol c HTTP/1.1
        14 RxHeader c Host: dooku-dev.excelsior.com
        14 RxHeader c Connection: keep-alive
        14 RxHeader c Accept: */*
        14 RxHeader c User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11
        14 RxHeader c Accept-Encoding: gzip,deflate,sdch
        14 RxHeader c Accept-Language: en-US,en;q=0.8,es-419;q=0.6,es;q=0.4
        14 RxHeader c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
        14 RxHeader c Cookie: SESSa87d6c6da0c61037a9169122dc5e4a19=HR_0Srhgc-uDArT3aJFzOBy31FtzneTXg38byr1eGMU; __atuvc=4%7C33
        14 VCL_call c recv pass
        14 VCL_call c hash
        14 Hash c /favicon.ico
        14 Hash c dooku-dev.excelsior.com
        14 VCL_return c hash
        14 VCL_call c pass pass
        14 FetchError c no backend connection
        14 VCL_call c error deliver
        14 VCL_call c deliver deliver
        14 TxProtocol c HTTP/1.1
        14 TxStatus c 503
        14 TxResponse c Service Unavailable
        14 TxHeader c Server: Varnish
        14 TxHeader c Content-Type: text/html; charset=utf-8
        14 TxHeader c Retry-After: 5
        14 TxHeader c Content-Length: 418
        14 TxHeader c Accept-Ranges: bytes
        14 TxHeader c Date: Fri, 24 Aug 2012 20:26:45 GMT
        14 TxHeader c X-Varnish: 458200541
        14 TxHeader c Age: 0
        14 TxHeader c Via: 1.1 varnish
        14 TxHeader c Connection: close
        14 Length c 418
        14 ReqEnd c 458200541 1345840005.226389885 1345840005.226457834 0.000026941 0.000043154 0.000024796
        14 SessionClose c error
        14 StatSess c 10.150.7.151 38273 0 1 1 0 1 0 256 418
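
    Two things stand out in that output, for what it's worth: both Backend_health records say "Still sick", so the round-robin director has no healthy backend to hand requests to, which is exactly what the later "FetchError no backend connection" and the 503/guru meditation pages reflect; in other words, the probes to / on app1/app2 are not returning 200 within the 4-second timeout. Also, the logged requests carry Host: dooku-dev.excelsior.com, which does not match the "^balancer.server.com$" test in vcl_recv, so that branch never selects the director for them. A hedged sketch of a backend with a slightly more forgiving probe; the .port, .expected_response and .initial values are illustrative assumptions, not values from the original config:

        backend vader {
            .host = "app1.server.com";
            .port = "80";
            .probe = {
                .url = "/";                  # must answer 200 quickly on the app server itself
                .expected_response = 200;
                .interval = 10s;
                .timeout = 4s;
                .window = 5;
                .threshold = 3;
                .initial = 3;                # treat the backend as healthy at startup
            }
        }

    Curling the probe URL from the balancer (for example, curl -sI http://app1.server.com/) is a quick way to confirm whether the app servers really answer 200 to the balancer within the timeout.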

    Read the article

  • Can only access asp.net app on localhost

    - by Kevin Donn
    I'm trying to get an ASP.NET application up on IIS on a Windows Server 2008 machine. I can hit the app from localhost, no problem. But I can't access the app using the server's domain name, either locally or from another machine on the network. Here's the odd part: I can access a normal file on IIS using the domain name, both from a browser running on the server and from a browser running on another machine on the network. Here's a synopsis ("http" converted to "htp" below because I don't have enough points to have all these links in my message):

        From IE on the server itself:
        works  htp://localhost/foo.htm
        works  htp://localhost/App
        works  htp://test.foo.com/foo.htm
        dead   htp://test.foo.com/App

        From IE on another machine (inside or outside my subnet):
        works  htp://test.foo.com/foo.htm
        dead   htp://test.foo.com/App

    And when I say "dead" I mean the request times out. Any ideas?
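
    While narrowing this down, one useful check is whether the timed-out requests are reaching IIS at all: compare the configured bindings, then look for the hostname in the site's log. The commands below use IIS defaults (site id 1, default log path), so adjust to the actual site; this is just a diagnostic sketch:

        REM list configured sites and their host-header bindings
        %windir%\system32\inetsrv\appcmd.exe list site

        REM search the current IIS logs for requests that carried the test.foo.com host header
        findstr /i "test.foo.com" %SystemDrive%\inetpub\logs\LogFiles\W3SVC1\u_ex*.log

    If the /App requests never appear in the log for the hostname, the problem is upstream of the application (bindings, proxy, firewall); if they do appear, the application or its authentication is the place to look.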

    Read the article

  • chef apt_repository fetching fails

    - by slik
    I am trying to fetch a specific repository to install a PHP version, but I keep getting 404 Not Found. Chef recipe code:

        apt_repository "dotdeb-php54" do
          uri          "http://archives.dotdeb.org"
          distribution "squeeze"
          components   ["php5/5.4.8"]
          key          "http://www.dotdeb.org/dotdeb.gpg"
        end

    It is trying to fetch http://archives.dotdeb.org/dists/squeeze/php5/5.4.8 but gets the following error:

        Err http://archives.dotdeb.org squeeze/php5/5.4.8 amd64 Packages
          404 Not Found
        Hit http://security.ubuntu.com precise-security/main Translation-en
        Hit http://security.ubuntu.com precise-security/multiverse Translation-en
        Hit http://security.ubuntu.com precise-security/restricted Translation-en
        Err http://archives.dotdeb.org squeeze/php5/5.4.8 i386 Packages
          404 Not Found
        Hit http://security.ubuntu.com precise-security/universe Translation-en
        Ign http://archives.dotdeb.org squeeze/php5/5.4.8 Translation-en
        STDERR: W: Failed to fetch http://archives.dotdeb.org/dists/squeeze/php5/5.4.8/binary-amd64/Packages  404 Not Found
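
    For context, apt builds the index URL as dists/<distribution>/<component>/binary-<arch>/Packages, and dists/squeeze/php5/5.4.8/ simply does not exist on that mirror, hence the 404s. Dotdeb published its PHP 5.4 packages under a dedicated suite rather than a version-named component, so something along these lines is closer to what apt expects; this is a sketch, and the exact suite name should be verified by browsing http://archives.dotdeb.org/dists/ first:

        # a sketch -- confirm the suite/component under dists/ on the mirror before using
        apt_repository "dotdeb-php54" do
          uri          "http://archives.dotdeb.org"
          distribution "squeeze-php54"
          components   ["all"]
          key          "http://www.dotdeb.org/dotdeb.gpg"
        end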

    Read the article

  • 404 when doing safe-upgrade in lucid 64 box?

    - by Millisami
    Why I see 404 when doing sudo aptitude safe-upgrade in my lucid 64 box? deploy@li167-251:~$ sudo aptitude safe-upgrade Reading package lists... Done Building dependency tree Reading state information... Done Reading extended state information Initializing package states... Done The following packages will be upgraded: apache2 apache2-mpm-prefork apache2-threaded-dev apache2-utils apache2.2-bin apache2.2-common apt apt-utils base-files binutils bzip2 dpkg dpkg-dev gzip ifupdown krb5-multidev language-pack-en language-pack-en-base language-selector-common libatk1.0-0 libatk1.0-dev libavahi-client3 libavahi-common-data libavahi-common3 libbz2-1.0 libc-bin libc-dev-bin libc6 libc6-dev libc6-i686 libcups2 libfreetype6 libfreetype6-dev libglib2.0-0 libglib2.0-dev libgssapi-krb5-2 libgssrpc4 libgtk2.0-0 libgtk2.0-common libgtk2.0-dev libk5crypto3 libkadm5clnt-mit7 libkadm5srv-mit7 libkdb5-4 libkrb5-3 libkrb5-dev libkrb5support0 libldap-2.4-2 libldap2-dev libmysqlclient-dev libmysqlclient16 libnotify-dev libnotify1 libpam-modules libpam-runtime libpam0g libparted0debian1 libpng12-0 libpng12-dev libpq-dev libpq5 libssl-dev libssl0.9.8 libtiff4 libudev0 libusb-0.1-4 linux-libc-dev mountall mysql-client mysql-client-5.1 mysql-client-core-5.1 mysql-common mysql-server mysql-server-5.1 mysql-server-core-5.1 openssh-client openssh-server openssl parted python-apt sudo tzdata udev upstart ureadahead wget xulrunner-1.9.2 xulrunner-1.9.2-dev The following packages are RECOMMENDED but will NOT be installed: colibri debhelper fakeroot hicolor-icon-theme libatk1.0-data libglib2.0-data libgtk2.0-bin libhtml-template-perl manpages-dev notification-daemon notify-osd ssl-cert xauth xfce4-notifyd 88 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded. Need to get 85.8MB of archives. After unpacking 1712kB will be used. Do you want to continue? [Y/n/?] y Writing extended state information... Done Get:1 http://security.ubuntu.com/ubuntu/ lucid-updates/main libpam-modules 1.1.1-2ubuntu5 [358kB] Get:2 http://security.ubuntu.com/ubuntu/ lucid-updates/main base-files 5.0.0ubuntu20.10.04.2 [70.2kB] Get:3 http://security.ubuntu.com/ubuntu/ lucid-updates/main gzip 1.3.12-9ubuntu1.1 [102kB] Err http://security.ubuntu.com/ubuntu/ lucid-updates/main libc-bin 2.11.1-0ubuntu7.2 404 Not Found [IP: 91.189.88.37 80] Err http://security.ubuntu.com/ubuntu/ lucid-updates/main libc6 2.11.1-0ubuntu7.2 404 Not Found [IP: 91.189.88.37 80] Err http://security.ubuntu.com/ubuntu/ lucid-updates/main libc6-i686 2.11.1-0ubuntu7.2 .........
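
    As an aside, 404s like these from an Ubuntu mirror usually mean the locally cached package index is stale: it still references package versions the mirror has since replaced with newer ones. Refreshing the lists immediately before upgrading normally clears it up; a minimal sketch:

        sudo aptitude update          # re-download the package indexes so they match the mirror
        sudo aptitude safe-upgrade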

    Read the article

  • Should I expect ICMP transit traffic to show up when using debug ip packet with a mask on a Cisco IOS router?

    - by David Bullock
    So I am trying to trace an ICMP conversation between 192.168.100.230/32 on an EZVPN interface (Virtual-Access 3) and 192.168.100.20 on BVI4.

        # sh ip access-lists 199
        10 permit icmp 192.168.100.0 0.0.0.255 host 192.168.100.20
        20 permit icmp host 192.168.100.20 192.168.100.0 0.0.0.255

        # sh debug
        Generic IP:
          IP packet debugging is on for access list 199

        # sh ip route | incl 192.168.100
             192.168.100.0/24 is variably subnetted, 2 subnets, 2 masks
        C       192.168.100.0/24 is directly connected, BVI4
        S       192.168.100.230/32 [1/0] via x.x.x.x, Virtual-Access3

        # sh log | inc Buff
        Buffer logging: level debugging, 2145 messages logged, xml disabled,
        Log Buffer (16384 bytes):

    OK, so from my EZVPN client with IP address 192.168.100.230, I ping 192.168.100.20. I know the packet reaches the router across the VPN tunnel, because:

        policy exists on zp vpn-to-in
        Zone-pair: vpn-to-in
        Service-policy inspect : acl-based-policy
          Class-map: desired-traffic (match-all)
            Match: access-group name my-acl
            Inspect
              Number of Half-open Sessions = 1
              Half-open Sessions
                Session 84DB9D60 (192.168.100.230:8)=>(192.168.100.20:0) icmp SIS_OPENING
                  Created 00:00:05, Last heard 00:00:00
                  ECHO request
                  Bytes sent (initiator:responder) [64:0]
          Class-map: class-default (match-any)
            Match: any
            Drop
              176 packets, 12961 bytes

    But I get no debug log, and the debugging ACL hasn't matched:

        # sh log | inc IP:
        #
        # sh ip access-lists 198
        Extended IP access list 198
            10 permit icmp 192.168.100.0 0.0.0.255 host 192.168.100.20
            20 permit icmp host 192.168.100.20 192.168.100.0 0.0.0.255

    Am I going crazy, or should I not expect to see this debug log? Thanks!
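
    For context on the question itself: debug ip packet only sees packets that are process-switched by the route processor. Transit traffic that is CEF- or fast-switched (the normal case for traffic flowing through the box) bypasses that code path, so seeing nothing is expected even though the zone-based firewall session table proves the pings are transiting. The usual, rather intrusive, workaround is to briefly disable the switching cache on the interfaces involved; a hedged sketch, where the interface names come from the output above and the Virtual-Template number is an assumption (EZVPN Virtual-Access interfaces clone from a template):

        ! CPU-intensive; use only briefly and remove afterwards
        interface BVI4
         no ip route-cache
        interface Virtual-Template1
         no ip route-cache
        !
        debug ip packet 199 detail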

    Read the article

  • How do I alias a hostname?

    - by Jonas Byström
    Is it possible to keep a network alias - without specifying the IP address in the hosts file? For instance, I have abcd.efgh.com but want abcd -> abcd.efgh.com so that ping and ssh work as they normally would. I want it to work with dynamic IP on abcd.efgh.com, that's why I don't want to state the IP address explicitly.
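
    Two common ways to get that behaviour without pinning an IP, depending on scope: a resolver search domain makes bare names like abcd expand to abcd.efgh.com for everything that uses the system resolver (ping, ssh, and so on), while an OpenSSH Host alias covers ssh only. A sketch of both, assuming a Linux/BSD-style resolver and OpenSSH with the usual file locations:

        # /etc/resolv.conf -- the resolver appends the search domain to bare names
        search efgh.com

        # ~/.ssh/config -- ssh-only alias; the name is still looked up in DNS each time
        Host abcd
            HostName abcd.efgh.com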

    Read the article

  • Question about exim4 config syntax

    - by PeterMmm
    I'm trying to send a notification to the sender of a message when the message is sent to exactly one address in the local domain ([email protected]).

    Q1: What would be the syntax for the condition (the one below doesn't work)?

        notify_reply:
          driver = accept
          domains = +local_domains
          senders = ! ^.*-request@.*:\
                    ! ^bounce-.*@.*:\
                    ! ^.*-bounce@.*:\
                    ! ^owner-.*@.*:\
                    ! ^postmaster@.*:\
                    ! ^webmaster@.*:\
                    ! ^listmaster@.*:\
                    ! ^mailer-daemon@.*:\
                    ! ^root@.*:\
                    ! ^noreply@.*
          condition = ${if eq {$received_for}{[email protected]}}
          no_expn
          transport = notify_transport
          unseen
          no_verify

    Q2: How do I write a multiline string in the config file for "text"?

        notify_transport:
          driver = autoreply
          from = info@mydomain.com
          to = $sender_address
          subject = Your mail for
          text = "Please resend your message to info@mydomain.com This is a temporary modification."
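
    A couple of hedged pointers, to be checked against the Exim documentation for the installed version: for Q1, matching the single local recipient is usually done with the router's own local_parts/domains preconditions, and "exactly one recipient on the message" can be tested with the $recipients_count variable; for Q2, \n inside a quoted string becomes a line break, and long physical lines can be continued with a trailing backslash the same way the senders list above already does.

        # Q1 sketch -- fire only for info@mydomain.com, and only when it is the sole recipient
        notify_reply:
          driver = accept
          domains = mydomain.com
          local_parts = info
          condition = ${if eq{$recipients_count}{1}}
          transport = notify_transport
          unseen
          no_verify

        # Q2 sketch -- embedded newline in the autoreply text
        text = "Please resend your message to info@mydomain.com.\nThis is a temporary modification."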

    Read the article

  • How do you configure ISC Bind to support GSS-TSIG Updates?

    - by netlinxman
    First, has anyone EVER configured ISC BIND 9.5.0 or greater with support for GSS-TSIG dynamic DNS updates AND gotten it to work? If so, what configuration was used to make that happen? I feel close to having this working. I see that the GSS cred passes without apparent error during the TKEY negotiation between an Active Directory DC and the BIND DNS server:

        client 192.168.0.30#52314: query gss cred: "DNS/dns1.example.com@EXAMPLE.COM", GSS_C_ACCEPT, 4294967256
        gss-api source name (accept) is DC1$@EXAMPLE.COM
        process_gsstkey(): dns_tsigerror_noerror
        client 192.168.0.30#52314: send

    But when the Update is sent, it is refused:

        client 192.168.0.30#58330: update
        client 192.168.0.30#58330: updating zone 'example.com/IN': update failed: rejected by secure update (REFUSED)
        client 192.168.0.30#58330: send

    Does anyone have this working in the real world?
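
    For what it's worth, "rejected by secure update (REFUSED)" after a clean TKEY negotiation usually means the GSS/Kerberos side is fine and the zone's update-policy simply does not grant that principal the right to change the records. A hedged named.conf sketch of the pieces involved; the principal, keytab path and grant line are examples, tkey-gssapi-credential/tkey-domain is the older syntax used around 9.5 while newer BIND replaces it with tkey-gssapi-keytab, and the exact grant identity/name fields should be checked against the ARM for your version:

        options {
            tkey-gssapi-credential "DNS/dns1.example.com@EXAMPLE.COM";
            tkey-domain "EXAMPLE.COM";
            // on newer BIND: tkey-gssapi-keytab "/etc/named.keytab";
        };

        zone "example.com" {
            type master;
            file "example.com.db";
            update-policy {
                grant EXAMPLE.COM ms-self * A AAAA;   // let domain members update their own records
            };
        };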

    Read the article

  • Why is my DB read-only when attached to SQL Express, but not with SQL Web?

    - by David Rubin
    I have an .mdf/.ldf pair, originally created in 2008 R2 Standard, and well under 10GB, with these ACLs:

        d:\db snapshot\DB_NAME.mdf
            SERVERNAME\SQLServerMSSQLUser$ACCOUNT$MSSQLSERVER:F
            OWNER RIGHTS:F
            BUILTIN\Administrators:F
        d:\db snapshot\DB_NAME_log.ldf
            SERVERNAME\SQLServerMSSQLUser$ACCOUNT$MSSQLSERVER:F
            OWNER RIGHTS:F
            BUILTIN\Administrators:F

    When I attach the database to an instance of SQL Express 2008 R2, it comes up as read-only. When exactly the same ACLs, user accounts and SQLCMD statements are used with SQL Web 2008 R2, it comes up writable. I looked at MSDN's comparison page but nothing jumped out at me. Why on earth is this happening? Thanks!

    UPDATE: I just noticed that the names of the attached databases are different. On SQL Express (read-only) it matches the filename (e.g. DB_NAME); on SQL Web (writable) it matches the CUSTOM_NAME that I gave it in the attach command:

        CREATE DATABASE [CUSTOM_NAME]
            ON (FILENAME = 'PATH_TO_MDF'), (FILENAME = 'PATH_TO_LDF')
            FOR ATTACH
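
    One difference worth ruling out, since everything else is identical: each SQL Server instance runs under its own service account and its own per-instance file-permission group (the SQLServerMSSQLUser$MACHINE$INSTANCE group in the ACLs above is instance-specific), and a database commonly comes up read-only after attach when the engine cannot get write access to the files. Once the Express instance's account genuinely has write access, the flag can be checked and cleared from T-SQL; a small sketch:

        -- which databases are currently flagged read-only?
        SELECT name, is_read_only FROM sys.databases;

        -- clear the flag once the service account can write to the .mdf/.ldf
        ALTER DATABASE [CUSTOM_NAME] SET READ_WRITE;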

    Read the article

  • Unmanaged Network Switch vs Managed Network Switch

    - by David
    Currently I have an unmanaged POE switch connected to a Linksys router. I am thinking of upgrading my POE switch to a gigabit POE switch, the only problem is that the switch that I want to get is a managed switch. So here's my question: with a managed switch, can I still connect all of my devices to it and have the devices request IP addresses from the DHCP server within the Linksys router or will the devices request IPs from the managed switch since I believe the switch has its own DHCP server as well?

    Read the article

  • mod rewrite help

    - by Benny B
    Ok, I don't know regex very well, so I used a generator to help me make a simple mod_rewrite rule that works. Here's my full URL:

        https://www.huttonchase.com/prodDetails.php?id_prd=683

    For testing, to make sure I CAN use this, I used:

        RewriteRule prodDetails/(.*)/$ /prodDetails.php?id_prd=$1

    So I can use the URL http://www.huttonchase.com/prodDetails/683/. If you click it, it works, but it completely messes up the relative paths. There are a few work-arounds, but I want something a little different:

        https://www.huttonchase.com/prod_683_stainless-steel-flask

    I want it to see that 'prod' tells it which rule it's matching, 683 is the product number that I'm looking up in the database, and I want it to just IGNORE the last part; it's there only for SEO and to make the link mean something to customers. I'm told that this should work, but it doesn't:

        RewriteRule ^prod_([^-]*)_([^-]*)$ /prodDetails.php?id_prd=$1 [L]

    Once I get the first one to work I'll write one for categories:

        https://www.huttonchase.com/cat_11_drinkware

    and database-driven text pages:

        https://www.huttonchase.com/page_44_terms-of-service

    BTW, I can flip around my use of dash and underscore if need be. Also, is it better to end the URLs with a slash or without? Thanks!
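
    For what it's worth, the character class is the likely culprit: in prod_683_stainless-steel-flask the trailing slug contains dashes, and [^-]* cannot match across a dash, so ^prod_([^-]*)_([^-]*)$ can never match the whole path (and if the rule lives in the main server config rather than .htaccess, the missing leading slash matters as well). A hedged variant that keys on the digits and ignores the slug entirely; the cat_ and page_ rules would follow the same shape with their own target scripts:

        # a sketch for .htaccess context (no leading slash in the pattern)
        RewriteRule ^prod_(\d+)_.*$ /prodDetails.php?id_prd=$1 [L]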

    Read the article

  • Is email forwarding to the sender's address usually blocked by mail servers / MTAs?

    - by codecowboy
    I've noticed that email forwarding to an address seems not to work if I send the email from the address to which the mail is being forwarded. This happens with both GMail and Fasthosts mail servers. E.g. I send an email to info@mail.com from myaddress@mail.com, info@mail.com is set to forward to myaddress@mail.com, and the email never arrives. I realise this seems logical, but it is a potential cause of confusion when testing email functionality in a web application (for me, anyway ;-). I would just like to know if this is standard behaviour for all MTA software, so I can avoid confusing myself.

    Read the article

  • Forms Authentication across Sub-Domains on local IIS

    - by Parminder
    I asked this question at SO: http://stackoverflow.com/questions/8278015/forms-nauthentication-across-sub-domains-on-local-iis. Now I'm asking it here. I know a cookie can be shared across multiple subdomains using this setting in Web.config:

        <forms name=".ASPXAUTH" loginUrl="Login/" protection="Validation"
               timeout="120" path="/" domain=".mydomain.com"/>

    But how do I replicate the same thing on my local machine? I am using Windows 7 and IIS 7 on my laptop, so I have the sites localhost.users/ for my actual site users.mysite.com, localhost.host/ for host.mysite.com, and similar.
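
    One way to replicate the production shape locally is to invent a parent domain in the hosts file and bind the local IIS sites to host headers under it, so that the forms cookie's domain attribute genuinely covers both sites; the .local names below are assumptions. For the ticket to validate on both sites they also need matching machineKey settings.

        # C:\Windows\System32\drivers\etc\hosts
        127.0.0.1    users.mysite.local
        127.0.0.1    host.mysite.local

    and in each site's Web.config:

        <forms name=".ASPXAUTH" loginUrl="Login/" protection="Validation"
               timeout="120" path="/" domain=".mysite.local" />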

    Read the article

  • Using IIS6 to run kill process. Executable hangs

    - by David
    I'm using the following code (and have tried many variations) in a web page that is supposed to kill a process on the server:

        Process scriptProc = new Process();
        SecureString password = new SecureString();
        password.AppendChar('p');
        password.AppendChar('s');
        password.AppendChar('s');
        password.AppendChar('w');
        password.AppendChar('d');
        scriptProc.StartInfo.UserName = "mylocaluser";
        scriptProc.StartInfo.Password = password;
        scriptProc.StartInfo.FileName = @"C:\WINDOWS\System32\WScript.exe";
        scriptProc.StartInfo.Arguments = @"c:\windows\system32\killMyApp.vbs";
        scriptProc.StartInfo.UseShellExecute = false;
        scriptProc.Start();
        scriptProc.WaitForExit();
        scriptProc.Close();

    The VBS file is supposed to kill a w3wp.exe process, but it never works. There are no errors in the application log. It works locally. I noticed WScript.exe is in Task Manager every time I run the page, and it never goes away. The WScript.exe process (and I tried others such as psexec.exe) is being run as a local user with admin rights (and I tried other types of users, including domain admins) when run from IIS, but it works when run from the command line on the server.
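
    If it helps, one way to take WScript.exe and the run-as plumbing out of the picture is to terminate the target process directly from managed code. A minimal sketch, not a drop-in replacement: it runs as the application pool identity, which must hold rights to terminate the target, and killing w3wp.exe from a page that itself runs inside w3wp.exe will also end the calling site if they share a pool.

        using System;
        using System.Diagnostics;

        public static class ProcessKiller
        {
            // terminate every process with the given name that the caller is allowed to touch
            public static void KillByName(string name)
            {
                foreach (Process p in Process.GetProcessesByName(name))
                {
                    try
                    {
                        p.Kill();              // needs PROCESS_TERMINATE rights on the target
                        p.WaitForExit(5000);
                    }
                    catch (Exception ex)
                    {
                        // access denied here points at identity/ACL problems rather than the code
                        Trace.WriteLine(ex.Message);
                    }
                }
            }
        }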

    Read the article

  • RewriteRule Works With "Match Everything" Pattern But Not Directory Pattern

    - by kgrote
    I'm trying to redirect newsletter URLs from my local server to an Amazon S3 bucket. So I want to redirect from:

        https://mysite.com/assets/img/newsletter/Jan12_Newsletter.html

    to:

        https://s3.amazonaws.com/mybucket/newsletters/legacy/Jan12_Newsletter.html

    Here's the first part of my rule:

        RewriteEngine On
        RewriteBase /
        # Is it in the newsletters directory
        RewriteCond %{REQUEST_URI} ^(/assets/img/newsletter/)(.+) [NC]
        # Is not a 2008-2011 newsletter
        RewriteCond %{REQUEST_URI} !(.+)(11|10|09|08)_Newsletter.html$ [NC]
        ## -> RewriteRule to S3 Here <- ##

    If I use this RewriteRule to point to the new subdirectory on S3 it will NOT redirect:

        RewriteRule ^(/assets/img/newsletter/)(.+) https://s3.amazonaws.com/mybucket/newsletters/legacy/$2 [R=301,L]

    However if I use a blanket expression to capture the entire file path it WILL redirect:

        RewriteRule ^(.*)$ https://s3.amazonaws.com/mybucket/newsletters/legacy/$1 [R=301,L]

    Why does it only work with a "match everything" expression but not a more specific expression?
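
    The RewriteBase line suggests this lives in a .htaccess file, and in per-directory context mod_rewrite strips the leading slash (and the directory prefix) from the path before matching the RewriteRule pattern, so a pattern anchored as ^(/assets/... can never match while ^(.*)$ always does. A hedged per-directory version without the leading slash:

        # .htaccess (per-directory) context: the matched path has no leading slash
        RewriteCond %{REQUEST_URI} !(11|10|09|08)_Newsletter\.html$ [NC]
        RewriteRule ^assets/img/newsletter/(.+)$ https://s3.amazonaws.com/mybucket/newsletters/legacy/$1 [R=301,L]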

    Read the article

  • Ubuntu 10.04 + Windows 2003: adding a route for GPO assignment

    - by David Carvalho
    I want the PCs that receive an IP from my Ubuntu DHCP3-server to be able to retrieve the GPOs that are on my Windows 2003 server. I'm using VirtualBox and 3 virtual machines:

        1 Windows 2003 server, 192.168.0.2, with 1 NIC (internal network)
        1 Ubuntu Server 10.04 LTS, 192.168.0.1, with 1 NIC (internal network) and 3 aliases: 192.168.21.0, 192.168.22.0, 192.168.100.0
        1 Windows XP machine with 3 NICs (internal network)
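
    For clients to locate and apply GPOs they have to resolve the AD domain's SRV records, so whatever the routing ends up looking like, the Ubuntu DHCP server should at least hand out the Windows 2003 machine as DNS (and the AD domain name). A hedged dhcpd.conf sketch built from the addresses above; the scope range and domain name are assumptions:

        # /etc/dhcp3/dhcpd.conf -- a sketch
        subnet 192.168.0.0 netmask 255.255.255.0 {
            range 192.168.0.100 192.168.0.200;
            option routers 192.168.0.1;                 # the Ubuntu box doing the routing
            option domain-name "example.local";         # the AD DNS domain
            option domain-name-servers 192.168.0.2;     # the Windows 2003 DC, so SRV lookups work
        }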

    Read the article

  • how do I set up a virtual host (it's not working, and I've done everything right)

    - by piratepartypumpkin
    My router redirects port 80 to port 8080. My router works fine and my domain name is routed properly. This is my virtual hosts file:

        NameVirtualHost *:80
        <VirtualHost *:80>
            DocumentRoot /home/admins/lampstack-5.3.16-0/apps/wordpress
            ServerName example.com
            ServerAlias www.example.com
        </VirtualHost>

    I can access my website by entering "mywebsite.com:8080", but I cannot access it by entering "mywebsite.com". For further information, this is part of my httpd.conf:

        Listen 8080
        Servername localhost:8080
        DocumentRoot "/home/admins/lampstack-5.3.16-0/apache2/htdocs
        <Directory />
            Options FollowSymLinks
            AllowOverride None
            Order deny, allow
            deny from all
        </Directory>
        <Directory "/home/admins/lampstack-5.3.16-0/apache2/htdocs">
            Options FollowSymLinks
            AllowOverride None
            Order allow, deny
            allow from all
        </Directory>
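
    One mismatch that stands out: Apache is listening on 8080 (Listen 8080), and the router forwards external port 80 to 8080, but the name-based vhost is declared for *:80, so requests that actually arrive on 8080 never match it and fall through to the default DocumentRoot. A hedged fix is to declare the vhost on the port Apache really listens on:

        NameVirtualHost *:8080
        <VirtualHost *:8080>
            DocumentRoot /home/admins/lampstack-5.3.16-0/apps/wordpress
            ServerName example.com
            ServerAlias www.example.com
        </VirtualHost>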

    Read the article

  • Is this distributed database server idea feasible?

    - by David
    I often use SQLite for creating simple programs in companies. The database is placed on a file server. This works fine as long as there are not more than about 50 users working against the database concurrently (depending on whether the work is mostly reads or writes). Beyond that, they notice a slowdown if there is a lot of concurrent writing, as lots of time is spent on locks, and there is no cache because there is no database server.

    The advantage of not needing a database server is that the time to set up something like a company wiki can be reduced from several months to just days. It often takes several months because some IT department needs to order the server, it needs to conform with the company policies and security rules, and it needs to be placed in the outsourced server hosting facility, which screws up and places it in the wrong location, etc.

    Therefore, I thought of an idea for a distributed database server. The process would be as follows: a user on a company computer edits something on a wiki page (which uses this database as its backend). To do this, he reads a file on the local hard disk stating the IP address of the last desktop computer to act as the database server. He then tries to contact this computer directly via TCP/IP. If it does not answer, he reads a file on the file server stating the IP address of the last desktop computer to act as the database server. If this server does not answer either, his own desktop computer becomes the database server and registers its IP address in the same file. The SQL update statement can then be executed, and other desktop computers can connect to his directly.

    The point of this architecture is that the higher the load, the better it will function, as each desktop computer will always know the IP address of the current database server. Also, using this setup, I believe that a database placed on a file server could serve hundreds of desktop computers instead of the current 50 or so. I also do not believe that the load on the single desktop computer which has become the database server will ever be noticeable, as there will be no hard disk operations on that desktop, only on the file server.

    Is this idea feasible? Does it already exist? What kind of database could support such an architecture?
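
    Since the discovery procedure is the heart of the idea, here is a rough sketch of it in C#. It collapses the local file and the file-server file into a single shared file for brevity, the share path and port are placeholders, and it deliberately ignores the race where two clients promote themselves at the same time, which is one of the things that makes the scheme hard to get right:

        using System;
        using System.IO;
        using System.Net.Sockets;

        public static class DbServerLocator
        {
            const string SharedFile = @"\\fileserver\share\current_db_server.txt"; // hypothetical
            const int Port = 5432;                                                  // hypothetical

            static bool Reachable(string ip)
            {
                try
                {
                    using (var client = new TcpClient())
                    {
                        // true only if the TCP connect completes within one second
                        return client.ConnectAsync(ip, Port).Wait(TimeSpan.FromSeconds(1));
                    }
                }
                catch (Exception) { return false; }
            }

            public static string FindOrBecomeServer(string myIp)
            {
                try
                {
                    string last = File.ReadAllText(SharedFile).Trim();
                    if (last.Length > 0 && Reachable(last))
                        return last;                    // use the existing ad-hoc server
                }
                catch (IOException) { }                 // no record yet, or file server hiccup
                File.WriteAllText(SharedFile, myIp);    // become the server and advertise ourselves
                return myIp;
            }
        }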

    Read the article

  • Why won't IE let users login to a website unless in In Private mode?

    - by Richard Fawcett
    I'm not entirely sure this belongs on SuperUser.com. I also considered ServerFault.com and StackOverflow.com, but on balance, I think it should belong here?

    We host a website which has the same code responding to multiple domain names. On 28th December (without any changes deployed to the website) a percentage of users suddenly could not log in; the blank login page was just rendered again even when the correct credentials were entered. The issue is still ongoing. After remote controlling an affected user's PC, we've found the following:

        The issue affects Internet Explorer 9.
        The user can log in from the same machine on Chrome.
        The user can log in from an InPrivate browser session using IE9.
        The user can log in if the website is added to the Trusted Sites security zone.
        The user can NOT log in from an IE session in safe mode (started with iexplore -extoff).
        Only one hostname that the website responds to prevents login; the same user account on the other hostname works fine (note that this is identical code and database running server side), even though that site is not in the Trusted Sites zone.

    Series of HTTP requests in the failure case:

        1. GET request to protected page, returns a 302 FOUND response to the login page.
        2. GET request to login page.
        3. POST to login page, containing credentials, returns redirect to protected page.
        4. GET request to protected page... for some reason auth fails and the browser is redirected to the login page, as in step 1.

    Other information: the operating system is Windows 7 Ultimate Edition, and the AV system is AVG Internet Security 2012. I can think of lots of things that could be going wrong, but in every case, one of the findings above is incompatible with the theory. Any ideas what is causing login to fail?

    Update 06-Jan-2012: Enhanced logging has shown that the .ASPXAUTH cookie is being set in step 3. Its expiry date is 28 days in the future, its path is /, the domain is mysite.com, and its value is an encrypted forms ticket, as expected. However, the cookie is not being received by the web server during step 4. Other cookies are presented to the server during step 4; it's just this one that is missing. I've seen that cookies are usually set with a domain starting with a period, but mine isn't. Should it be .mysite.com instead of mysite.com? However, if this was wrong, it would presumably affect all users?

    Read the article

  • Wordpress network admin pointing to root as opposed to subdirectory

    - by Ian
    I've installed WordPress on my nginx server in /blogs, and new networks will be in /blogs/blogname. All my main site links point to example.com/blogs, but when I go to network admin the links point to http://www.example.com/wp-admin/network/ instead of http://www.example.com/blogs/wp-admin/network/. Here's the multisite section in my config:

        define('MULTISITE', true);
        define('SUBDOMAIN_INSTALL', false);
        $base = '/blogs';
        define('DOMAIN_CURRENT_SITE', 'www.example.com');
        define('PATH_CURRENT_SITE', '/');
        define('SITE_ID_CURRENT_SITE', 1);
        define('BLOG_ID_CURRENT_SITE', 1);

    If I try changing PATH_CURRENT_SITE to /blogs, I get a db connection error. Thanks.
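
    For context, network admin links are built from DOMAIN_CURRENT_SITE plus the site path, and the path in wp-config.php normally has to agree with the path stored in the wp_site/wp_blogs tables; multisite paths also conventionally carry a trailing slash. A hedged sketch of what a /blogs-rooted network usually looks like (back up the database first, and since these defines do not touch credentials, the "db connection error" seen after editing wp-config.php is worth re-checking for a stray typo):

        /* wp-config.php -- a sketch; note the trailing slashes */
        define('MULTISITE', true);
        define('SUBDOMAIN_INSTALL', false);
        $base = '/blogs/';
        define('DOMAIN_CURRENT_SITE', 'www.example.com');
        define('PATH_CURRENT_SITE', '/blogs/');
        define('SITE_ID_CURRENT_SITE', 1);
        define('BLOG_ID_CURRENT_SITE', 1);

    and the matching rows in the database (table prefix assumed to be wp_):

        UPDATE wp_site  SET path = '/blogs/' WHERE id = 1;
        UPDATE wp_blogs SET path = '/blogs/' WHERE blog_id = 1;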

    Read the article
