Search Results


  • Configure Apache + Passenger to serve static files from a different directory

    - by Rory Fitzpatrick
    I'm trying to set up Apache and Passenger to serve a Rails app. However, I also need it to serve static files from a directory other than /public and give precedence to these static files over anything in the Rails app. The Rails app is in /home/user/apps/testapp and the static files in /home/user/public_html. For various reasons the static files cannot simply be moved to the Rails public folder. Also note that the root http://domain.com/ should be served by the index.html file in the public_html folder. Here is the config I'm using: <VirtualHost *:80> ServerName domain.com DocumentRoot /home/user/apps/testapp/public RewriteEngine On RewriteCond /home/user/public_html/%{REQUEST_FILENAME} -f RewriteCond /home/user/public_html/%{REQUEST_FILENAME} -d RewriteRule ^/(.*)$ /home/user/public_html/$1 [L] </VirtualHost> This serves the Rails application fine but gives 404 for any static content from public_html. I have also tried a configuration that uses DocumentRoot /home/user/public_html but this doesn't serve the Rails app at all, presumably because Passenger doesn't know to process the request. Interestingly, if I change the conditions to !-f and !-d and the rewrite rule to redirect to another domain, it works as expected (e.g. http://domain.com/doesnt_exist gets redirected to http://otherdomain.com/doesnt_exist). How can I configure Apache to serve static files like this, but allow all other requests to continue to Passenger?
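
    A possible direction (an untested sketch, not a confirmed fix): the two RewriteCond lines above are ANDed by default, so the path would have to be both a file and a directory at once; joining them with [OR] and rewriting to the filesystem path usually behaves as intended, provided Apache is allowed to read /home/user/public_html.

      <VirtualHost *:80>
          ServerName domain.com
          DocumentRoot /home/user/apps/testapp/public
          RewriteEngine On
          # serve the request from public_html when it exists there as a file or a directory
          RewriteCond /home/user/public_html%{REQUEST_URI} -f [OR]
          RewriteCond /home/user/public_html%{REQUEST_URI} -d
          RewriteRule ^ /home/user/public_html%{REQUEST_URI} [L]
          # anything else falls through to Passenger and the Rails app
      </VirtualHost>
      <Directory /home/user/public_html>
          # "Require all granted" on Apache 2.4; "Order allow,deny" + "Allow from all" on 2.2
          Require all granted
      </Directory>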

    Read the article

  • Setting Remote Desktop to allow IPv6 connections

    - by Garrett
    Setup: Basically I have 3 machines (2 virtual and 1 physical) that I would like to be able to RDP in to from outside my NAT (a router). The VMs are Windows 7 and Windows XP, both fully patched with Teredo installed and working, both running in VirtualBox (their host also has Teredo working, though I'm not sure if that matters). They both have bridged network adapters with promiscuous mode enabled. The physical machine is Windows 7 fully patched with an HFS server running on it and a dynamic DNS set up for my public IPv4 address and port forwarded. It also has Teredo installed and working. Symptoms: According to http://test-ipv6.com/ all 3 have public IPv6 addresses, and they can all connect to http://ipv6.google.com/. I can ping the XP VM from the host it's running on but I cannot ping it from any other machine. Also, I cannot ping either of the other machines from anywhere. I cannot connect to any of them over RDP from IPv6, however I can connect to all of them through IPv4. Any ideas what is going wrong?
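
    One thing worth ruling out first (a hedged suggestion rather than a confirmed fix): the built-in Remote Desktop firewall exception is often scoped to the local subnet, which excludes Teredo-sourced traffic. Commands along these lines, run in an elevated prompt on the Windows 7 machines (the rule name is arbitrary), show the Teredo state and open TCP 3389 to any source:

      netsh interface teredo show state
      netsh advfirewall firewall add rule name="RDP in from any address (incl. IPv6)" dir=in action=allow protocol=TCP localport=3389
      rem on the XP VM the older firewall context applies instead:
      netsh firewall set portopening protocol=TCP port=3389 name="Remote Desktop" mode=ENABLE scope=ALL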

    Read the article

  • Mac OS X in Virtualbox says "You need to restart your computer"

    - by humoeba
    I've been trying to figure out for the past week how to get Snow Leopard reliably running in a VM. Right now I am using VirtualBox, and it runs fine for a while, but every once in a while (happened 3 times in the last few hours) I get the "You need to restart your computer" message. Unfortunately, it hasn't even lasted long enough to finish installing the operating system yet. I first tried VMWare, which was a pain to set up. I got it running ok, operating system installed, with the guest tools. Every once in a while though, it just stops running. I click inside the VM, and there's no mouse. It doesn't respond to keyboard input either. I have to reset the VM to get a response. I'm wondering if this is the same error. This happens with both Workstation and Player. Here is the tutorial I used for VirtualBox: http://www.sysprobs.com/iboot-loader-virtualbox-install-snow-leopard Here's the tutorial I used for VMWare: http://bobhood.wordpress.com/2009/12/18/welcome-to-snow-leopard-mac-os-x-10-6-and-vmware-workstation-7/ I'm using an iso for Mac OS X 10.6.3. I have an HP Pavilion dm4 with an Intel Core i7 M640 running Windows 7; VT is turned on. Using VirtualBox 4.0.4 and VMWare Workstation 7.0.1 and VMWare Player 3.0.1 Does anyone know what might be causing this error or how I can fix it? Thanks.

    Read the article

  • Cisco ASA site-to-site vpn not initiating phase 1 (not sending udp 500 packets)

    - by Sean Steadman
    I am hoping someone here can help me with my problem. I am trying to set up an IPSEC site-to-site VPN between two Cisco ASA 5520s in GNS3 (both using 8.4.2). I have been unsuccessful in getting the tunnel up and it appears neither ASA is sending packets out in regard to phase 1 and phase 2 (tested by using Wireshark and seeing NO udp 500 packets). Doing show ipsec sa and such shows nothing. CALIFORNIA(config)# show ipsec sa There are no ipsec sas FLA-ASA# show ipsec sa There are no ipsec sas I will attach both configurations in two different pastebin files so as to keep this post a bit cleaner. Essentially the California side has 172.20.1.0/24 and the Florida side has 10.10.10.0/24. California ASA config: http://pastebin.com/v0pngYzF Florida ASA config: http://pastebin.com/E2geybta Please let me know if there is any other vital information that could help. I have gotten IPSEC tunnels to work using Openswan (Linux) and Cisco routers but cannot for the life of me get ASA IPSEC tunnels to work. The ASDM is out of the question; I only use the CLI. Thanks for any useful help!
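
    A few diagnostics usually narrow this down (a sketch; "inside" and the host addresses are assumptions, not taken from the pastebin configs): phase 1 only starts when traffic matches the crypto ACL, so packet-tracer quickly shows whether the tunnel is ever being triggered.

      CALIFORNIA# show run crypto map
      CALIFORNIA# show crypto ikev1 sa
      CALIFORNIA# debug crypto ikev1 127
      ! simulate LAN-to-LAN traffic and look for a VPN/ENCRYPT phase in the output
      CALIFORNIA# packet-tracer input inside tcp 172.20.1.10 12345 10.10.10.10 80 detailed

    If packet-tracer never reaches an encrypt phase, the crypto ACL, the NAT exemption, or the interface the crypto map is applied to is the usual culprit.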

    Read the article

  • Postfix : outgoing mail in TLS for a specific domain

    - by vercetty92
    I am trying to configure postfix to send mail in TLS (starttls in fact), but only for a specific destination. I tried with "smtp_tls_policy_maps". This is the only line in my main.cf file regarding TLS configuration, but it does not seem to be working. Here is my main.cf file: queue_directory = /opt/csw/var/spool/postfix command_directory = /opt/csw/sbin daemon_directory = /opt/csw/libexec/postfix html_directory = /opt/csw/share/doc/postfix/html manpage_directory = /opt/csw/share/man sample_directory = /opt/csw/share/doc/postfix/samples readme_directory = /opt/csw/share/doc/postfix/README_FILES mail_spool_directory = /var/spool/mail sendmail_path = /opt/csw/sbin/sendmail newaliases_path = /opt/csw/bin/newaliases mailq_path = /opt/csw/bin/mailq mail_owner = postfix setgid_group = postdrop mydomain = ullink.net myorigin = $myhostname mydestination = $myhostname, localhost.$mydomain, localhost masquerade_domains = vercetty92.net alias_maps = dbm:/etc/opt/csw/postfix/aliases alias_database = dbm:/etc/opt/csw/postfix/aliases transport_maps = dbm:/etc/opt/csw/postfix/transport smtp_tls_policy_maps = dbm:/etc/opt/csw/postfix/tls_policy inet_interfaces = all unknown_local_recipient_reject_code = 550 relayhost = smtpd_banner = $myhostname ESMTP $mail_name debug_peer_level = 2 debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin xxgdb $daemon_directory/$process_name $process_id & sleep 5 And here is my "tls_policy" file: gmail.com encrypt protocols=SSLv3:TLSv1 ciphers=high I also tried gmail.com encrypt My wish is to use TLS only for the gmail domain. With this configuration, I don't see any TLS line in the source of the mail. But if I tell postfix to use TLS if possible for all destinations with this line, it works: smtp_tls_security_level = may Because I can see this line in the source of my mail: (version=TLSv1/SSLv3 cipher=OTHER); But I don't want to try to use TLS for the other domains... only for Gmail... Am I missing something in my conf? (I also tried with "hash:/etc/opt/csw/postfix/tls_policy", and it's the same) Thanks a lot in advance
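
    One frequent gotcha worth checking (a hedged checklist, not a definitive diagnosis): the policy file is only consulted through its compiled map, so it must be rebuilt with postmap after every edit, and the lookup key has to match the next-hop destination Postfix really uses (the recipient domain here, since relayhost is empty).

      # rebuild the lookup table with the same type as in main.cf
      postmap dbm:/etc/opt/csw/postfix/tls_policy
      # confirm the parameter the running postfix actually has
      postconf smtp_tls_policy_maps
      # verify the key resolves to the expected policy
      postmap -q gmail.com dbm:/etc/opt/csw/postfix/tls_policy
      postfix reload

    If the lookup works but TLS is still not negotiated, the mail log usually carries a warning about the policy table, which is the next place to look.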

    Read the article

  • Anonymous Login attempts from IPs all over Asia, how do I stop them from being able to do this?

    - by Ryan
    We had a successful hack attempt from Russia and one of our servers was used as a staging ground for further attacks, actually somehow they managed to get access to a Windows account called 'services'. I took that server offline as it was our SMTP server and no longer need it (3rd party system in place now). Now some of our other servers are having these ANONYMOUS LOGIN attempts in the Event Viewer that have IP addresses coming from China, Romania, Italy (I guess there's some Europe in there too)... I don't know what these people want but they just keep hitting the server. How can I prevent this? I don't want our servers compromised again, last time our host took our entire hardware node off of the network because it was attacking other systems, causing our services to go down which is really bad. How can I prevent these strange IP addresses from trying to access my servers? They are Windows Server 2003 R2 Enterprise 'containers' (virtual machines) running on a Parallels Virtuozzo HW node, if that makes a difference. I can configure each machine individually as if it were it's own server of course... UPDATE: New login attempts still happening, now these ones are tracing back to Ukraine... WTF.. here is the Event: Successful Network Logon: User Name: Domain: Logon ID: (0x0,0xB4FEB30C) Logon Type: 3 Logon Process: NtLmSsp Authentication Package: NTLM Workstation Name: REANIMAT-328817 Logon GUID: - Caller User Name: - Caller Domain: - Caller Logon ID: - Caller Process ID: - Transited Services: - Source Network Address: 94.179.189.117 Source Port: 0 For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. Here is one from France I found too: Event Type: Success Audit Event Source: Security Event Category: Logon/Logoff Event ID: 540 Date: 1/20/2011 Time: 11:09:50 AM User: NT AUTHORITY\ANONYMOUS LOGON Computer: QA Description: Successful Network Logon: User Name: Domain: Logon ID: (0x0,0xB35D8539) Logon Type: 3 Logon Process: NtLmSsp Authentication Package: NTLM Workstation Name: COMPUTER Logon GUID: - Caller User Name: - Caller Domain: - Caller Logon ID: - Caller Process ID: - Transited Services: - Source Network Address: 82.238.39.154 Source Port: 0 For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
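
    As a stopgap while the hole is found (a hedged sketch, not a complete hardening guide): anonymous NTLM network logons normally arrive over exposed SMB/NetBIOS, so blocking TCP 139/445 (and anything else not needed publicly) from the internet at the Virtuozzo node or in each container's firewall stops most of the noise, and restricting null sessions limits what an anonymous logon can enumerate.

      rem tighten null-session access on Windows Server 2003 (typically takes effect after a restart)
      reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v RestrictAnonymous /t REG_DWORD /d 1 /f
      reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v RestrictAnonymousSAM /t REG_DWORD /d 1 /f
      rem then confirm which of the usual suspects are listening and reachable from outside
      netstat -ano | findstr ":139 :445 :3389"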

    Read the article

  • Changing the current URL but serving content from another (same domain) - ProxyPass?

    - by zigojacko
    I've been banging my head against the wall with this for months now so I hope someone on here will be able to finally advise what is needed for this. I have some URL's like this:- domain.com/category/subcat/filter/brand And I wish to rewrite the URL's to:- domain.com/category/brand-subcat Content loads fine at the first URL, I just want to show it at a different URL - is URL masking the correct term for this? I have a RewriteRule in .htaccess that should do this job as far as I believe:- RewriteRule ^([a-zA-Z]+)/([a-zA-Z]+)/filter/([a-zA-Z]+)$ $1/$3-$2 This isn't actually modifying the URL at all though on a Magento website (mod_rewrite is enabled and plenty of other rewrites are working from the same .htaccess). So firstly, I want to know is what I am trying to achieve definitely possible? If so, what is this process even called? Secondly, does this need to be handled using ProxyPass and then use a [P] flag with the rewrite rule? I assume the Apache server doesn't have mod_proxy enabled currently because when I add a [P] flag, the URL returns a 403 forbidden error with the full server path for the current URL. Please could anyone kindly advise what on earth I need to do to achieve this?
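
    For what it's worth (a hedged sketch): an internal RewriteRule never changes what the browser shows, it only changes which resource answers. Displaying the short URL therefore needs two rules: an external redirect from the long form to the short form, and an internal map from the short form back to the real path, guarded so the pair cannot loop. Assuming the segments contain no hyphens and the rules sit above Magento's own rewrites in .htaccess:

      RewriteEngine On
      # 1) redirect client requests for the long URL to the short one;
      #    %{THE_REQUEST} only ever matches the original request line, so rule 2 cannot re-trigger it
      RewriteCond %{THE_REQUEST} \s/([a-zA-Z]+)/([a-zA-Z]+)/filter/([a-zA-Z]+)[\s?]
      RewriteRule ^ /%1/%3-%2 [R=301,L]
      # 2) map the short URL back to the real path internally (no redirect)
      RewriteRule ^([a-zA-Z]+)/([a-zA-Z]+)-([a-zA-Z]+)$ $1/$3/filter/$2 [L]

    Note that Magento routes on the original REQUEST_URI, so if step 2 has no visible effect the mapping may need to be done with Magento's own URL rewrite management rather than in .htaccess.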

    Read the article

  • Why is it a bad idea to use a customer email as the from address

    - by Crab Bucket
    I've got an application that emails users once they have filled in a form. It uses no-reply@customerdomain.com as the from address. The customer wants it to use the email from the form as the from address, which could be anything. I have been told that this is a bad idea due to spoofing/blacklisting and spam. I feel really vague about the exact reason why this is a bad idea, particularly as I've got to try to counsel the client out of this. Can someone explain to me why this is a bad idea? Interestingly, the client has used a Gmail account as the from address as a demo, which not only works fine but has enabled the application to start sending emails (it wouldn't do it before with an email which was [email protected]). Erm - what is going on? I'm told one thing and the opposite works. Sorry - I know this is basic but I couldn't find anything in a Google search. Largely I think because I'm having trouble even framing the question. EDIT Thank you everyone - great answers. Interestingly the server sending the email and the mailbox that it is going to are both behind the same firewall so the client says they are unconcerned about spam. Oh well.
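
    The usual compromise (a sketch of the relevant headers only; the names are placeholders): keep From on a domain whose SPF/DKIM you control, so receiving servers can authenticate the mail, and carry the customer's address in Reply-To so answering still reaches them.

      From: Customer Service <no-reply@customerdomain.com>
      Reply-To: Jane Customer <whatever-they-typed@example.org>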

    Read the article

  • Lots of strange IP addresses in my Windows Firewall logs. Concern?

    - by gmoore
    Was trying to debug a Samba sharing issue with Mac OS X so I turned on logging for my Windows Firewall. I didn't expect a lot of connections but the thing filled up quickly. Here's a sample: 2009-12-21 08:49:32 OPEN-INBOUND TCP 192.168.0.4 192.168.0.3 56335 139 - - - - - - - - - 2009-12-21 08:49:33 OPEN-INBOUND TCP 192.168.0.4 192.168.0.3 56337 139 - - - - - - - - - 2009-12-21 08:50:02 OPEN UDP 192.168.0.3 68.87.73.242 1389 53 - - - - - - - - - 2009-12-21 08:50:02 CLOSE TCP 192.168.0.3 212.96.161.238 1391 80 - - - - - - - - - 2009-12-21 08:50:02 OPEN UDP 192.168.0.3 68.87.71.226 60290 53 - - - - - - - - - 2009-12-21 08:50:02 OPEN TCP 192.168.0.3 212.96.161.238 1391 80 - - - - - - - - - 2009-12-21 08:50:02 OPEN TCP 192.168.0.3 212.96.161.238 1393 80 - - - - - - - - - 2009-12-21 08:50:04 CLOSE TCP 192.168.0.3 212.96.161.238 1393 80 - - - - - - - - - 2009-12-21 08:50:41 CLOSE UDP 192.168.0.3 192.168.0.4 137 50300 - - - - - - - - - I can pick out the local IP addresses (192.168.0.3 is my Windows XP machine, 192.168.0.4 is Mac OS X) as I debug the Samba issue. But some of the others resolve to Comcast (my ISP) and others resolve to weird hosts like van-dns.com and navisite.net. It doesn't look like any connection sent/received any bytes. I used the reference here: http://technet.microsoft.com/en-us/library/cc758040%28WS.10%29.aspx. Is it a cause for concern?
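
    A quick way to see who the unfamiliar addresses belong to before worrying (a small sketch; nslookup ships with Windows, whois needs a separate client): the port 53 lines are DNS lookups from 192.168.0.3 to the ISP's resolvers and the port 80 lines are ordinary outbound web traffic.

      nslookup 68.87.73.242
      nslookup 212.96.161.238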

    Read the article

  • TCP dies on a Linux laptop

    - by Roman Cheplyaka
    Once in several days I have the following problem. My laptop (Debian GNU/Linux testing) suddenly becomes unable to work with TCP connections to the internet. The following things continue to work fine: UDP (DNS), ICMP (ping) — I get instant response TCP connections to other machines in the local network (e.g. I can ssh to a neighbour laptop) everything is ok for other machines in my LAN But when I try TCP connections from my laptop, they time out (no response to SYN packets). Here's a typical curl output: % curl -v google.com * About to connect() to google.com port 80 (#0) * Trying 173.194.39.105... * Connection timed out * Trying 173.194.39.110... * Connection timed out * Trying 173.194.39.97... * Connection timed out * Trying 173.194.39.102... * Timeout * Trying 173.194.39.98... * Timeout * Trying 173.194.39.96... * Timeout * Trying 173.194.39.103... * Timeout * Trying 173.194.39.99... * Timeout * Trying 173.194.39.101... * Timeout * Trying 173.194.39.104... * Timeout * Trying 173.194.39.100... * Timeout * Trying 2a00:1450:400d:803::1009... * Failed to connect to 2a00:1450:400d:803::1009: Network is unreachable * Success * couldn't connect to host * Closing connection #0 curl: (7) Failed to connect to 2a00:1450:400d:803::1009: Network is unreachable Restarting the connection and/or reloading the network card kernel module doesn't help. The only thing that helps is reboot. Clearly something is wrong with my system (everything else works fine), but I have no idea what exactly. I don't know how to reproduce this, but as I said, it happens every several days. My setup is a wireless router that is connected to the ISP via PPPoE. Any advice?
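
    Since only a reboot clears it, the useful move is to capture state the next time it happens, before rebooting (a hedged checklist; the interface name is an assumption and conntrack needs the conntrack-tools package).

      # are SYNs actually leaving the machine?
      sudo tcpdump -ni wlan0 'tcp[tcpflags] & tcp-syn != 0 and port 80'
      # socket summary and firewall counters
      ss -s
      sudo iptables -L -n -v
      # a full connection-tracking table silently drops new flows
      sudo conntrack -L | wc -l
      # kernel messages such as "nf_conntrack: table full, dropping packet"
      dmesg | tail -n 50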

    Read the article

  • Test script if host is back online

    - by brubelsabs
    E.g. system: Ubuntu/Debian. As many of you probably do this via ping in a terminal, I always forget about that terminal when switching to another task... So a notification popup would be useful. So can I do better than this?: while; do if ping -c 1 your.host.com; expr $? = 0; then notify-send "your.host.com back online"; sleep 30s; else sleep 30s; fi; done You will need zsh and libnotify to let the snippet work. As script: #!/usr/bin/env zsh while; do if ping -c 1 $1; expr $? = 0; then notify-send "$1 back online"; sleep 30s; else sleep 30s; fi; done
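
    A slightly tidier take on the same idea (a sketch in plain bash, so zsh is not required; it still assumes libnotify's notify-send is installed): poll until a single ping succeeds, notify once, and exit instead of looping forever.

      #!/usr/bin/env bash
      # usage: ./backonline.sh your.host.com
      host=${1:?usage: $0 hostname}
      until ping -c 1 -W 2 "$host" >/dev/null 2>&1; do
          sleep 30
      done
      notify-send "$host back online"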

    Read the article

  • How do you recreate the System Recovery environment in Windows 7?

    - by Howiecamp
    I'm running Windows 7 Home Premium RTM (64-bit) and I want to take advantage of the system recovery tools (eg the Command Prompt) without using the Windows 7 DVD. My understanding is that this environment (WinRE) should be installed to your HDD by default as part of the Windows 7 installation. However, when I hit F8 on boot and select "Repair", I get: Windows failed to start. A recent hardware or software change might be the cause. To fix the problem... Status: 0xc000000e Info: The boot selection failed because a required device is inaccessible. The "Info" line seems like the smoking gun. My next step was to boot from the Windows 7 DVD, and choose "Repair". It indicated my Recovery Environment wasn't on the Windows 7 boot menu (perfect) and offered to fix it. I said yes and rebooted, however same issue as above. In addition, when I booted in to Windows 7 and I looked at the boot menu options, the recovery/repair option was not there. Only my Windows installation. Finally, I ran the Disk Management tool (diskmgmt.msc) and took a look at the contents of my "System Reserved" partition (which was set to "Active" as normal). It's unclear to me what the contents should look like, however it is my understanding that the WinRE environment gets installed to this partition. (As part of the above troubleshooting I followed http://superuser.com/questions/25728/how-to-fix-windows-7-boot-process which lead to http://www.sevenforums.com/tutorials/668-system-recovery-options.html).
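
    Before rebuilding anything, it is worth asking the recovery agent itself what it thinks is configured (a hedged sketch; the winre.wim path shown is a common default, not something confirmed from this machine).

      reagentc /info
      rem if WinRE is reported as disabled, re-enabling it is sometimes all that is needed
      reagentc /enable
      rem if no image location is set, point it at a folder containing winre.wim first
      reagentc /setreimage /path C:\Recovery\WindowsRE
      reagentc /enable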

    Read the article

  • How to get HTTP request details in a tcpdump?

    - by tucson
    I am trying to get a tcpdump trace of some http requests. Here is what I got so far (I replaced the real IP addresses with REMOTE and LOCAL): C:\>Windump -na -i 3 ip host REMOTE and ip src LOCAL and tcp port 80 Windump: listening on \Device\NPF_{8056BE5E-BDBB-44E6-B492-9274B410AD66} 13:13:34.985460 IP LOCAL.4261 > REMOTE.80: . 1784894764:1784894765(1) ack 1268208398 win 65535 13:13:38.589175 IP LOCAL.4302 > REMOTE.80: F 3708464308:3708464308(0) ack 982485614 win 65535 13:13:38.589285 IP LOCAL.4303 > REMOTE.80: F 890175362:890175362(0) ack 2462862919 win 65535 13:13:38.589330 IP LOCAL.4304 > REMOTE.80: F 1838079178:1838079178(0) ack 156173959 win 65535 13:13:38.589374 IP LOCAL.4305 > REMOTE.80: F 3952718843:3952718843(0) ack 2209231545 win 65535 13:13:38.589413 IP LOCAL.4306 > REMOTE.80: F 446105750:446105750(0) ack 3141849979 win 65535 13:13:38.590265 IP LOCAL.4302 > REMOTE.80: . ack 2 win 65535 13:13:38.590403 IP LOCAL.4304 > REMOTE.80: . ack 2 win 65535 13:13:38.590429 IP LOCAL.4303 > REMOTE.80: . ack 2 win 65535 13:13:38.590484 IP LOCAL.4305 > REMOTE.80: . ack 2 win 65535 13:13:38.590514 IP LOCAL.4306 > REMOTE.80: . ack 2 win 65535 But I do not get the following level of details: Request URL:http://domain.com/index.php Request Method:POST Status Code:200 OK POST /index.php HTTP/1.1 Host: domain.com Connection: keep-alive Content-Length: 151 Cache-Control: max-age=0 etc How can I get this level of data?
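
    The capture above shows only TCP headers because of the default snap length and because the filter keeps just the outgoing half (ip src LOCAL). Grabbing whole packets and printing the payload, or writing a pcap for Wireshark (which reassembles HTTP for you), exposes the request line and headers. Roughly, assuming WinDump accepts the usual tcpdump flags:

      rem full packets (-s 0), ASCII payload (-A), both directions of the conversation
      Windump -na -i 3 -s 0 -A ip host REMOTE and tcp port 80
      rem or save everything and use Wireshark's HTTP dissector / "Follow TCP Stream"
      Windump -i 3 -s 0 -w http_capture.pcap ip host REMOTE and tcp port 80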

    Read the article

  • Why are my Windows 7 updates continuously failing?

    - by Chris C.
    I'm an advanced level user here with an odd issue. I have two Windows Updates that are failing to install, every single time. I'm getting a mysterious "Code 1" error on both updates, an error for which I'm having difficulty finding a solution. The updates in question are: Security Update for Microsoft Visual C++ 2008 Service Pack 1 Redistributable Package (KB2538243) System Update Readiness Tool for Windows 7 for x64-based Systems (KB947821) [May 2011] Because these updates are failing, the Shut Down button in my start menu always has the shield icon next to it, indicating that "new" updates will be installed on shut down. But, of course, they'll fail and when the PC is restarted, the shield icon is still there. When checking the update history and viewing the details of the failed updates, I get the following: Security Update for Microsoft Visual C++ 2008 Service Pack 1 Redistributable Package (KB2538243) Installation date: ?6/?29/?2011 3:00 AM Installation status: Failed Error details: Code 1 Update type: Important A security issue has been identified leading to MFC application vulnerability in DLL planting due to MFC not specifying the full path to system/localization DLLs. You can protect your computer by installing this update from Microsoft. After you install this item, you may have to restart your computer. More information: http://go.microsoft.com/fwlink/?LinkId=216803 and: System Update Readiness Tool for Windows 7 for x64-based Systems (KB947821) [May 2011] Installation date: ?6/?28/?2011 3:00 AM Installation status: Failed Error details: Code 1 Update type: Important This tool is being offered because an inconsistency was found in the Windows servicing store which may prevent the successful installation of future updates, service packs, and software. This tool checks your computer for such inconsistencies and tries to resolve issues if found. More information: http://support.microsoft.com/kb/947821 About My System I'm running Windows 7 Home Premium 64-bit. This is a custom PC build and the OS was installed fresh, not an upgrade from a previous version. I've been running this system for about four months. Windows Updates aside, the system is usually quite stable.
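
    Before chasing the individual KBs it is worth seeing what the servicing stack reports, since KB947821 exists precisely to repair it (a hedged sketch of the usual checks, run from an elevated prompt).

      sfc /scannow
      rem the readiness tool writes its findings here whenever it has managed to run
      type %windir%\Logs\CBS\CheckSUR.log
      rem search the Windows Update log for the two failing updates
      findstr /i "2538243 947821" %windir%\WindowsUpdate.log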

    Read the article

  • Iptables ignoring a rule in the config file

    - by Overdeath
    I see a lot of established connections to my Apache server from the IP 188.241.114.22, which eventually causes Apache to hang. After I restart the service everything works fine. I tried adding a rule in iptables -A INPUT -s 188.241.114.22 -j DROP but despite that I keep seeing connections from that IP. I'm using CentOS and I'm adding the rule like this: iptables -A INPUT -s 188.241.114.22 -j DROP Right after that I save it using: service iptables save Here is the output of iptables -L -v ` Chain INPUT (policy ACCEPT 120K packets, 16M bytes) pkts bytes target prot opt in out source destination 0 0 DROP all -- any any lg01.mia02.pccwbtn.net anywhere 0 0 DROP all -- any any c-98-210-5-174.hsd1.ca.comcast.net anywhere 0 0 DROP all -- any any c-98-201-5-174.hsd1.tx.comcast.net anywhere 0 0 DROP all -- any any lg01.mia02.pccwbtn.net anywhere 0 0 DROP all -- any any www.dabacus2.com anywhere 0 0 DROP all -- any any 116.255.163.100 anywhere 0 0 DROP all -- any any 94.23.119.11 anywhere 0 0 DROP all -- any any 164.bajanet.mx anywhere 0 0 DROP all -- any any 173-203-71-136.static.cloud-ips.com anywhere 0 0 DROP all -- any any v1.oxygen.ro anywhere 0 0 DROP all -- any any 74.122.177.12 anywhere 0 0 DROP all -- any any 58.83.227.150 anywhere 0 0 DROP all -- any any v1.oxygen.ro anywhere 0 0 DROP all -- any any v1.oxygen.ro anywhere Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 186K packets, 224M bytes) pkts bytes target prot opt in out source destination `
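
    Two things worth checking (a sketch): -A appends to the bottom of the chain, and iptables -L without -n resolves addresses to host names, which hides whether any of the lines above really is 188.241.114.22. Inserting at the top and listing numerically settles both.

      # put the rule first and confirm, numerically, that it is present and counting packets
      iptables -I INPUT 1 -s 188.241.114.22 -j DROP
      iptables -L INPUT -n -v --line-numbers
      service iptables save
      # see whether that address currently has open connections to Apache
      netstat -ntp | grep 188.241.114.22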

    Read the article

  • Exchange 2010 update timezone of all calendar items

    - by Andrew
    We are currently operating Exchange 2010 server with Outlook 2010 clients on a ship. We have just changed timezones for the first time in quite a while today. Is there any way to rebase all the calendars and/or update all the calendar items to the new timezone at the same time? I have looked at the following tools already. Microsoft Exchange Calendar Update Configuration Tool - http://www.microsoft.com/en-us/download/details.aspx?id=6266 (Doesn't support exchange 2010) Time Zone Data Update Tool for Microsoft Office Outlook - http://www.microsoft.com/en-us/download/details.aspx?id=17291 The Time Zone Data Update Tool for Microsoft Office Outlook does work for individual users, but has some serious downsides. Including each user needs to run it (approx 400 users), and also it only seems to work on the default account in Outlook 2010, a lot of our users have role accounts as well that we would need to run the tool on. The only way I can find to get this tool to run on the role accounts is to make the role account the default account in outlook, and that in itself is quiet an involved process especially if you have 2 or 3 role accounts. So is there a way to just change all calendar items on our Exchange server to a different timezone in one go? We are a little unique in terms of the whole organisation can change timezones over night, meeting rooms and all, but surely a product as advanced as Exchange 2010 allows us to do what we need.

    Read the article

  • Who should I run mysql as, on a personal computer?

    - by user664833
    I just installed mysql via homebrew (with brew install mysql, on Mac OS X Mountain Lion - recently installed from scratch). Following the installation, there is a "caveats" section with options around further necessary actions to take: ==> Caveats Set up databases to run AS YOUR USER ACCOUNT with: unset TMPDIR mysql_install_db --verbose --user=`whoami` --basedir="$(brew --prefix mysql)" --datadir=/usr/local/var/mysql --tmpdir=/tmp To set up base tables in another folder, or use a different user to run mysqld, view the help for mysqld_install_db: mysql_install_db --help and view the MySQL documentation: * http://dev.mysql.com/doc/refman/5.5/en/mysql-install-db.html * http://dev.mysql.com/doc/refman/5.5/en/default-privileges.html To run as, for instance, user "mysql", you may need to `sudo`: sudo mysql_install_db ...options... Start mysqld manually with: mysql.server start Note: if this fails, you probably forgot to run the first two steps up above A "/etc/my.cnf" from another install may interfere with a Homebrew-built server starting up correctly. To connect: mysql -uroot To launch on startup: * if this is your first install: mkdir -p ~/Library/LaunchAgents cp /usr/local/Cellar/mysql/5.5.27/homebrew.mxcl.mysql.plist ~/Library/LaunchAgents/ launchctl load -w ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist * if this is an upgrade and you already have the homebrew.mxcl.mysql.plist loaded: launchctl unload -w ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist cp /usr/local/Cellar/mysql/5.5.27/homebrew.mxcl.mysql.plist ~/Library/LaunchAgents/ launchctl load -w ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist You may also need to edit the plist to use the correct "UserName". On previous versions of Mac OS X I ran mysql as mysql user, but now I am confronted by the idea of running it as myself. I am the only one who uses this computer (which happens to be my laptop), and I do programming for work and for pleasure. What are the pros & cons, or best practices, around choosing whether to run mysql AS YOUR USER ACCOUNT or as mysql or something else still?

    Read the article

  • Would an array of SSD drives be able to successfully substitute for system memory?

    - by Florin Mircea
    I watched a few videos trying to answer this. This video (youtube.com/watch?v=eULFf6F5Ri8) shows a bunch of guys stacking 24 SSD's reaching a peak of around 2GBps r/w. That's under the limit of the worst DDR3 in this list (memorybenchmark.net/write_ddr3_amd.html) - that shows DDR3 memory performance varying from 2.78 to 6.55 Gb per second, but that video is over 3 years old. This video (youtube.com/watch?v=27GmBzQWwP0) shows a more optimistic situation, but for PCI-E SSD drives: 5 drives peaking at around 4Gb. And this other video shows that stacking up more than 3 SSD's doesn't realistically offer a substantial added performance. This and the fact that in all benchmarks the drives act quite poorly when dealing with small files (5k file read/write averaging from 10MB to around 30-40MBps) as opposed to how native memory handles such files, seems to indicate a definite NO to this question. Also, the write life cycle is indeed limited and the drives might wear out quickly, as kindly pointed out by paddy. However, I wanted to get more opinions on this. Would it be possible to at least obtain current memory performance with SSD's in RAID 0? And if so, in what circumstances? I am assuming using this configuration with a Windows OS that has a memory pagefile resident to that stack of SSD's, thus making it very fast to work with.

    Read the article

  • IP tables blocking access to most hosts but some accesses being logged

    - by epo
    What am I getting wrong? A while back I locked down my web hosting service while hardening it or at least trying to. Apache listens on port 80 only and I set up iptables using the following: IPS="list of IPs" iptables --new-chain webtest # Accept all established connections iptables -A INPUT --protocol tcp --dport 80 --jump webtest iptables -A INPUT --match state --state ESTABLISHED,RELATED --jump ACCEPT iptables -A webtest --match state --state ESTABLISHED,RELATED --jump ACCEPT for ip in $IPS; do iptables -A webtest --match state --state NEW --source $ip --jump ACCEPT done iptables -A webtest --jump DROP However looking at my apache logs I notice various log entries in access_log, e.g. 221.192.199.35 - - [16/May/2010:13:04:31 +0100] "GET http://www.wantsfly.com/prx2.php?hash=926DE27C156B40E55E4CFC8F005053E2D81E6D688AF0 HTTP/1.0" 404 206 "-" "Mozilla/ 4.0 (compatible; MSIE 6.0; Windows NT 5.0)" 201.228.144.124 - - [16/May/2010:11:54:16 +0100] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 226 "-" "-" 207.46.195.224 - - [16/May/2010:04:06:48 +0100] "GET /robots.txt HTTP/1.1" 200 311 "-" "msnbot/2.0b (+http://search.msn.com/msnbot.htm)" How are these slipping through? I don't mind the indexing bots (though I am a little surprised to see them get through). I suppose they must be getting through using the ESTABLISHED,RELATED rules. And no, I can't for the life of me remember why the first match state rule is there So 2 questions: is there a better way to set up iptables to restrict access to specified hosts? How exactly are these 3 examples slipping through?
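
    Two quick checks usually explain "impossible" hits like these (a sketch): the live counters show which rule is actually accepting the traffic, and ip6tables is a separate ruleset, so port 80 over IPv6 is untouched by the IPv4 rules above.

      iptables -L INPUT -n -v --line-numbers
      iptables -L webtest -n -v --line-numbers
      # IPv6 has its own table; with a public AAAA record this is a separate front door
      ip6tables -L -n -v
      # and confirm every address/port the web server is actually listening on
      netstat -ntlp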

    Read the article

  • Is there any way for ME to improve routing to an overseas server?

    - by Simon Hartcher
    I am trying to make a connection to a gaming server in Asia from Australia, but my ISP routes my connection through the US. Tracing route to worldoftanks-sea.com [116.51.25.54]over a maximum of 30 hops: 1 <1 ms <1 ms <1 ms 192.168.1.1 2 34 ms 42 ms 45 ms 10.20.21.123 3 40 ms 40 ms 43 ms 202.7.173.145 4 51 ms 42 ms 36 ms syd-sot-ken-crt1-ge-6-0-0.tpgi.com.au [202.7.171.121] 5 175 ms 200 ms 195 ms ge5-0-5d0.cir1.seattle7-wa.us.xo.net [216.156.100.37] 6 212 ms 228 ms 229 ms vb2002.rar3.sanjose-ca.us.xo.net [207.88.13.150] 7 205 ms 204 ms 206 ms 207.88.14.226.ptr.us.xo.net [207.88.14.226] 8 207 ms 215 ms 220 ms xe-0.equinix.snjsca04.us.bb.gin.ntt.net [206.223.116.12] 9 198 ms 201 ms 199 ms ae-7.r20.snjsca04.us.bb.gin.ntt.net [129.250.5.52] 10 396 ms 391 ms 395 ms as-6.r20.sngpsi02.sg.bb.gin.ntt.net [129.250.3.89] 11 383 ms 384 ms 383 ms ae-3.r02.sngpsi02.sg.bb.gin.ntt.net [129.250.4.178] 12 364 ms 381 ms 359 ms wotsg1-slave-54.worldoftanks.sg [116.51.25.54] Trace complete. Since I think it will be unlikely that my ISP will do anything, are there any ways to improve my routing to the server without them having to intervene? NB. The game runs predominately over UDP, so I believe most low ping services are out of the question, as they rely on TCP traffic.
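
    Client-side there is little leverage over the ISP's transit choice, but two measurements are worth making before giving up (a sketch; the Singapore relay host is purely hypothetical): an mtr report gives the ISP hard data about where the latency is added, and running the same test from, or tunnelled through, a VPS in Singapore shows whether a relay would actually take a shorter path. WinMTR provides a similar view on Windows.

      # a longer path/loss report to attach to a support ticket
      mtr --report --report-cycles 60 worldoftanks-sea.com
      # the same measurement from a hypothetical Singapore VPS, for comparison
      ssh user@sg-vps.example.com 'mtr --report --report-cycles 60 worldoftanks-sea.com'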

    Read the article

  • SPF for two different outgoing servers?

    - by Marcus
    I have run into a problem that I think someone should have a really clever answer for. Today we have our own mailserver that looks like "mail.domain.com" – which we use to send out mail to our customers (with a modified PHPMailer script). Usually around 5000 mails every day. Everything from customer support to invoices goes through there. The from-header is set to "customer-service@domain.com". We are now thinking of migrating to Google Apps for internal use (with 70+ users). However, we cannot use Gmail's SMTP for sending "bulk" mails (they have a limit of 500 outgoing mails per day) so we really want to keep using our current system for sending automated mail to our customers – and using Gmail's SMTP for our internal use. So, how do we set up our SPF records (Sender Policy Framework) for this? We do not want to get stuck in any filters for "spoofing" the sender from either type of account (the ones sent from our own server, and the ones sent through Gmail). In short: we want to be able to use the same e-mail address (for sending) on two different SMTP servers (and therefore two different IP addresses). Anyone with a good knowledge of SPF who knows how to go about this? Or if it is even possible? Anything else I should think of when switching to Google Apps?
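
    SPF handles exactly this case: one TXT record can authorise several sending sources, so listing the existing server alongside Google's published include keeps both paths legitimate for customer-service@domain.com. A sketch (the a: mechanism points at mail.domain.com's address records; switch to an ip4: literal if preferred):

      domain.com.   IN  TXT  "v=spf1 a:mail.domain.com include:_spf.google.com ~all"

    Once the change propagates, dig +short TXT domain.com (or any online SPF checker) should show the combined record; tightening ~all to -all is worth doing only after both paths are confirmed to pass.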

    Read the article

  • Separate external and intranet portals using the same functions .htaccess

    - by jezzipin
    We are currently struggling with setting up rules for a .htaccess file for a website built upon our company product. The product is built using PL/SQL and procedures can be accessed using URLs. We use this functionality to present different options to our users. These options can be injected into HTML pages using replacement tags. So, the tag [user_menu] is always replaced with: /wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2} for external sites and /intranet/wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2} for internal sites. The issue we are having is twofold. We need to write our .htaccess rules so that the user can access the functionality whether they are internal or external. So, the links should work as follows: http://www.example.com/wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2} or http://www.example.com/internal/wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2} This is the other problem. As you can see for the internal link above, the procedure needs to be prefixed with internal instead of intranet. We cannot change this in our standard tags as this will affect other sites, so we need to achieve this also using .htaccess. Could anyone assist with this issue? I apologise if this is brief or confusing but it's something I've never done before and have been given the task of doing. I also apologise for the lack of code above; however, I am a front-end developer and have been left to make these changes with no prior experience of .htaccess, so please bear with me.
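
    If the only mismatch is the /internal/ prefix the tag emits versus the /intranet/ path the product exposes, an internal rewrite may be all that is needed; a hedged sketch, assuming mod_rewrite is available and the rules live in the site's .htaccess (the browser keeps whichever URL the user requested, since nothing here redirects):

      RewriteEngine On
      # map the /internal/ prefix onto the real /intranet/ procedures; query strings pass through unchanged
      RewriteRule ^internal/(wd_portal_.*)$ /intranet/$1 [L]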

    Read the article

  • iTunes Title Bar is Equal to the Location of the Library file?

    - by Urda
    I have never been able to get a clear answer on why the latest edition of iTunes does this. I have my entire iTunes library located in C:\itunes\ and the library data files inside C:\itunes\!library_info for backup purposes. However when version 9 of iTunes came out it went from having iTunes as the title, to !library_info. Anyway to get around this without moving my data files away? Annoying "feature" if that is what it is. Again Apple support and forums were of no help to me. Anyone have insight on this? System Info: Windows Vista x86 Ultimate, latest updates. iTunes Version 9.0.3.15 Screenshot: http://farm5.static.flickr.com/4072/4414557544_d0b25eb64c_o.jpg Flickr Page: http://www.flickr.com/photos/urda/4414557544/ UPDATE: I put a bounty on this, and would be open to hacking the registry or doing a custom config. Please help me fix my title bar!!! Update2: I would not like to move my library files out of itunes\!library_info to avoid them inter-mingling with my music library.

    Read the article

  • Users are getting a temporary profile

    - by Serhiy
    A bit about the current setup: the AD servers are Windows 2008 R2 (all of them), and there are a couple of locations set up as Sites. Each location has DFS on its AD server. Roaming profiles are not used or configured. Users have their home folder configured as a mapped S: drive pointing to a DFS shared folder. For example, in the profile tab a user has: Home Folder - connect - S: to \\domain.com\dc\users\%username% We have also redirected the Desktop, Documents and Downloads folders to \\domain.com\dc\users. Everything was fine. Suddenly (today), users in most locations lost their local profile (both XP and W7 desktops) and got temporary profiles. Also, it looks like the local profile was created today (from the folder properties). I checked the events on a couple of machines and there are no errors related to profiles or the logon process. I do not see issues in the event logs on the servers either. Basically, I have run out of ideas about what is wrong and why the machines lost their local profiles. PS: Laptop users do not have their folders redirected, but lost their profiles as well.
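
    When Windows 7 falls back to a temporary profile it normally leaves the original profile's registry entry behind with a .bak suffix, so a quick check on an affected machine (a sketch for confirming the pattern, not a fix script) is:

      rem look for SID subkeys ending in .bak (an orphaned local profile)
      reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList"
      rem recent Application log entries from the User Profile Service (e.g. event 1511) record why the fallback happened
      wevtutil qe Application /c:20 /rd:true /f:text | findstr /i "profile"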

    Read the article

  • Can't access random web pages on my MacBook Pro 2012?

    - by Faruk Sahin
    Sometimes I can't access random web pages. The page simply doesn't load. If I wait for around a minute doing nothing, it will load. It happens very randomly and intermittently. Sometimes it starts when I try to access youtube.com or cnn.com. When it starts, it happens once a minute or once every 5 minutes for random web pages. But if I am downloading something, the download continues without any interruption. I am also able to ping the address I can't browse to. Then if I wait for around a minute, everything starts to work fine on the browser side as well. I have tried a lot of different browsers. I have tried changing my DNS servers to Google's public DNS servers. Using a cable instead of the wireless connection doesn't help either. No one else on the network has this problem, only me. What could be the problem?
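
    One way to see where the stall actually sits the next time it happens (a sketch using curl's built-in timing variables): if time_namelookup is the part that hangs the problem is DNS, if time_connect hangs it is the TCP path, and a slow time_starttransfer points at the far end.

      curl -o /dev/null -s -w 'dns:%{time_namelookup} connect:%{time_connect} ttfb:%{time_starttransfer} total:%{time_total}\n' http://www.cnn.com/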

    Read the article
