Search Results

Search found 25727 results on 1030 pages for 'solution'.

  • Why AltGr+p doesn't work (AutoHotkey)

    - by voodoomsr
    Hi guys, I've tried and tried to find the bug in this script, but I can't. Maybe some of you can give me a hint.

    Problem: when I press AltGr+p, the Delete key is supposed to be triggered. The weird thing is that after one successful delete, if I keep pressing AltGr+p a "p" appears and Delete isn't triggered anymore. In the meantime I tested another solution (move right, then delete with Backspace); that works, but when I have text selected this alternative is no good. Here is the code:

        #InstallKeybdHook

        ; frequently used characters
        RAlt & e::
        SendInput []{Left}
        Return
        RAlt & w::
        SendInput <>{Left}
        Return
        RAlt & d::
        SendInput (){Left}
        Return
        RAlt & s::
        SendRaw {}
        SendInput {Left}
        Return
        RAlt & x::
        SendInput ""{Left}
        Return
        RAlt & c::
        SendInput ''{Left}
        Return
        RAlt & f::
        SendInput *
        Return
        RAlt & r::
        SendRaw +
        Return
        RAlt & v::
        SendInput -
        Return

        ; start and end of line
        RAlt & a::
        SendInput {Home}
        Return
        RAlt & z::
        SendInput {End}
        Return

        ; movements while editing
        /*
        RAlt & p::
        SendInput {Right}{BackSpace}
        Return
        */
        <^>!p::
        Send {Del}
        Return
        RAlt & o::
        SendInput {Up}
        Return
        RAlt & l::
        SendInput {Down}
        Return
        RAlt & k::
        SendInput {Left}
        Return
        RAlt & ñ::
        SendInput {Right}
        Return
        RAlt & ,::
        SendInput {Enter}
        Return
        RAlt & i::
        SendInput {BackSpace}
        Return

        ;; clipx
        ^MButton::
        SendInput ^+{Insert}
        Return

        ^+k::^+Left
        +k::+Left
        ^k::Left
        +l::+Down
        ^+l::^+Down
        ^l::^Down
        +ñ::+Right
        ^+ñ::^+Right
        ^ñ::^Right
        +o::+Up
        ^+o::^+Up
        ^o::^Up
        +a::+Home
        ^+a::^+Home
        +z::+End
        ^+z::^+End
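
    One idea I haven't fully tested, so treat it as a sketch: AltGr is really LCtrl+RAlt, and a stuck modifier state after the first press could stop the hotkey matching. AutoHotkey's * prefix makes a hotkey fire even when extra modifiers are held:

        ; sketch, not a confirmed fix: * lets the hotkey fire with stray
        ; modifiers still held, which sometimes helps AltGr combinations
        *<^>!p::SendInput {Delete}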

  • Free antivirus solutions for Windows

    - by kristof
    What free antivirus solutions would you recommend? What are the limitations? What are the dangers of using free versions as opposed to paid solutions? For example, are they less reliable?

    As mentioned by Tony, most of the free solutions are limited to personal use, so the question will mainly focus on solutions for personal use.

    See if your antivirus of choice is already listed; chances are it is. If you spot an answer that mentions one you already use, vote it up if you think it's a good solution. If you know of a feature or drawback not listed, or can include experiences in dealing with it, please edit the answer accordingly. If you know of any that can also be used at work, please point this out.

    This covers all Windows platforms: XP, Vista and Windows 7. If you see an existing entry that needs an update, or want to add your testimonial, please do.

  • Making libmagic/file detect .docx files

    - by Jonatan Littke
    As seen elsewhere, docx, xlsx and pptx files are ZIPs. When uploading them to my web application, file (via libmagic and python-magic) detects them as ZIP. I store the contents of the file as a blob in the database, but naturally I don't want to trust the user about the file's type, so I would like to trust file instead and automatically generate a filename during download.

    I know one can modify /etc/magic, but the format (magic(5)) is way too complicated for me. I found a bug report on the issue at Debian bugs, but since it's from 2008 it doesn't seem likely to be fixed any time soon.

    I guess my only other alternative is to indeed trust the user (but still store the contents as a blob) and only check the file extension from the file name. This way I can disallow some extensions and allow others, and when users re-download their file, they get it the way they uploaded it. But this solution is insecure if the file is shared with others, since you can simply rename a file to get it past the upload check. Any ideas?

    Lastly, I found a list of magic numbers for docx etc., but I'm unable to convert these into the magic(5) format.
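
    To sketch what I mean by trusting the contents rather than the name (untested, and the helper name is mine): OOXML documents are ZIP containers with telltale member files, so Python's standard zipfile module can tell them apart from a plain ZIP without touching /etc/magic:

        # sketch: classify an uploaded blob by looking inside the archive
        import zipfile

        def ooxml_kind(path):
            """Return 'docx'/'xlsx'/'pptx', 'zip' for other archives, None otherwise."""
            if not zipfile.is_zipfile(path):
                return None
            with zipfile.ZipFile(path) as z:
                names = set(z.namelist())
            if 'word/document.xml' in names:
                return 'docx'
            if 'xl/workbook.xml' in names:
                return 'xlsx'
            if 'ppt/presentation.xml' in names:
                return 'pptx'
            return 'zip'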

  • Samsung laptop randomly shuts down

    - by Dmatig
    I've rewritten this question because it turned into an indecipherable mess. I have a Samsung R560 laptop that is overheating and consistently shutting itself down under load. Thank you quickcel for recommending SpeedFan to monitor my temps. Here they are, load vs. idle (SpeedFan screenshot missing here; ignore "Temp1" and "Temp2" - whatever sensors those are, they always read random values, and I'm pretty sure they're broken). The load temperature is after just 5 minutes of playing Fallout 3 - another 5 minutes and the GPU (9600M GS) consistently breaches the mid-90s, then the machine shuts down, so it's hard to get a good picture of it.

    I'm looking for some solution or way to decrease these temperatures, because they seem far too high even at idle. I've tried:

    - opening up the case and clearing out all dust with compressed air
    - updating the drivers for my graphics card
    - purchasing (and using) a notebook cooler

    I don't want to:

    - undervolt/underclock (defeats the point of having a more expensive card)
    - use lower power/performance settings (again, I might as well have bought something cheaper)

    Is there anything else I can try (software or inexpensive hardware) that can help me fix this? Has anybody had a Samsung laptop and knows if this can be sorted under my warranty, and the turnaround time of sending it off (UK)? It has always run hotter than it should, but now, at 6 months old, it's getting hot enough to power off.

  • Host Primary Domain from a subfolder

    - by TandemAdam
    I am having a problem making a subdirectory act as the public_html for my main domain, with a solution that also works for that domain's subdirectories.

    My hosting allows me to host multiple sites, which are all working great. I have set up a subfolder under my ~/public_html/ directory called /domains/, where I create a folder for each separate website, e.g.:

        public_html/
            domains/
                websiteone/
                websitetwo/
                websitethree/
                ...

    This keeps my sites nice and tidy. The only issue was getting my "main domain" to fit into this system. It seems my main domain is somehow tied to my account (or to Apache, or something), so I can't change the document root of this domain. I can define the document roots for any other domains ("Addon Domains") that I add in cPanel no problem, but the main domain is different.

    I was told to edit the .htaccess file to redirect the main domain to a subdirectory. This seemed to work great, and my site works fine on its home/index page. The problem I'm having is that if I try to navigate my browser to, say, the images folder of my main site, like this:

        www.yourmaindomain.com/images/

    then it seems to ignore the redirect and shows the entire server directory in the URL, like this:

        www.yourmaindomain.com/domains/yourmaindomain/images/

    It still actually shows the correct "Index of /images" page and lists all my images. Here is the .htaccess file that I am using:

        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^(www.)?yourmaindomain.com$
        RewriteCond %{REQUEST_URI} !^/domains/yourmaindomain/
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /domains/yourmaindomain/$1

        RewriteCond %{HTTP_HOST} ^(www.)?yourmaindomain.com$
        RewriteRule ^(/)?$ domains/yourmaindomain/index.html [L]

    Does this .htaccess file look correct? I just need to make my main domain behave like an addon domain, with its subdirectories adhering to the redirect rules.
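
    One variation I'm considering, as a sketch rather than a known fix: adding [L] so the first rule stops processing for that request, and escaping the dots in the host match (both changes are my guesses):

        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^(www\.)?yourmaindomain\.com$
        RewriteCond %{REQUEST_URI} !^/domains/yourmaindomain/
        RewriteRule ^(.*)$ /domains/yourmaindomain/$1 [L]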

  • OPATH syntax to force dynamic distribution group field as numerical comparison? (Exchange 2010)

    - by Matt
    I'm upgrading a (working) query-based group (Exchange 2003) to a new and 'improved' dynamic distribution group (2010). For better or worse, our company decided to store everyone's employee ID in the pager field, so it's easy to manipulate via ADUC. That employee number has significance: all employees are in a certain range, and all contractors are in a very different range.

    Basically, the new OPATH syntax appears to be doing a string compare on my pager field, even though it's a number. Let's say my employee ID is 3004; well, it's "less than" 4 from a string-compare point of view.

        Set-DynamicDistributionGroup -Identity "my-funky-new-group" -RecipientFilter "(pager -lt 4) -and (pager -like '*') -and (RecipientType -eq 'UserMailbox')"

    Shows up in EMC with this:

        ((((((Pager -lt '4') -and (Pager -ne $null))) -and (RecipientType -eq 'UserMailbox'))) -and (-not(Name -like 'SystemMailbox{*')) -and (-not(Name -like 'CAS_{*')) -and (-not(RecipientTypeDetailsValue -eq 'MailboxPlan')) -and (-not(RecipientTypeDetailsValue -eq 'DiscoveryMailbox')) -and (-not(RecipientTypeDetailsValue -eq 'ArbitrationMailbox')))

    This group should have a max of 3 members, right? Nope - I get a ton because of the string compare. I show up, and I'm in the 3000 range.

    Question: does anyone know a clever way to force this to be an integer check? The read-only LDAP filter on this group looks good, but of course it can't be edited. The LDAP representation (look ma, no quotes on the 4! Also interesting how it expresses the -lt by pairing (pager<=4) with (!(pager=4))):

        (&(pager<=4)(!(pager=4))(pager=*)(objectClass=user)(objectCategory=person)(mailNickname=*)(msExchHomeServerName=*)(!(name=SystemMailbox{*))(!(name=CAS_{*))(!(msExchRecipientTypeDetails=16777216))(!(msExchRecipientTypeDetails=536870912))(!(msExchRecipientTypeDetails=8388608)))

    If there is no solution, I suppose my recourse is either finding an unused field that actually will be treated as an integer, or, most likely, building this list with PowerShell every morning with my own automation - lame. I know of a few ways to fix this outside of the OPATH filter (designate "full-time" in another field, etc.), but would rather have Exchange do the lifting, since this is the environment at the moment. Any insight would be great - thanks! Matt
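
    If I do end up scripting it, a rough sketch of the morning job might look like this (group name and cutoff are placeholders, and I haven't run it):

        # sketch: [int] casting gives a true integer comparison, then the
        # result is pushed into an ordinary (static) distribution group
        $emps = Get-User -ResultSize Unlimited -RecipientTypeDetails UserMailbox |
            Where-Object { $_.Pager -match '^\d+$' -and [int]$_.Pager -lt 4000 }
        Update-DistributionGroupMember -Identity "Employees-Static" `
            -Members ($emps | ForEach-Object { $_.Identity }) -Confirm:$false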

  • Data Store/Volume disconnecting. How to resume copy of VMDK?

    - by Serge
    I'm having an issue with my ESXi 4.1 hosts losing the datastore with an FC SAN after a power outage. All 3 hosts disconnect, so it's definitely a SAN issue. I've tried to resolve it on the SAN side with the SAN software support and Adaptec hardware support; no luck there. So I'm stuck with a SAN that will randomly disconnect the volume.

    I need to get the virtual machines (VMDK files) off the datastore. The problem is I can only copy 5-20% before the datastore disconnects. I have slightly older backups that I can replicate the VMDK differences to. What has not worked so far:

    - Powering up the VMs: they boot for 5-15 minutes, then freeze.
    - vCenter migrate or clone of a VM: fails after a similar period of time.
    - vCenter copy/paste of a VMDK: I was able to get one 30 GB VMDK, and no luck after that.
    - VMware Data Recovery: fails at a low percentage and can't resume, so the next backup starts from the beginning.
    - Veeam Backup & Recovery: same as above, no resume function.

    If I can just find a backup solution that will resume from the failed spot, that would solve my issue. Anyone have any ideas I could try?

    EDIT 1: The SAN is Open-E DSS 6 running on a Supermicro 24-drive enclosure with a 4-port QLogic FC card and an Adaptec 52445 RAID card.
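
    One idea I haven't tried yet, assuming the datastore can be reached from a Linux box (e.g. over NFS, or with an rsync binary available to the host - rsync isn't part of stock ESXi 4.1): rsync can resume an interrupted copy and only moves changed blocks into an existing older copy:

        # sketch: refresh an older copy of the flat vmdk in place;
        # paths and host names are illustrative
        rsync --inplace --partial --progress \
            /vmfs/volumes/datastore1/myvm/myvm-flat.vmdk \
            backuphost:/backups/myvm/myvm-flat.vmdk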

  • .NET 2.0 Application now running slow on IIS 7.5

    - by Valien
    I recently moved (and am still testing) an application from a Windows 2003 Server (physical box) running IIS 6.x to a Windows 2008 R2 Standard (VM) IIS 7.5 server. The application is a .NET Framework 2.0 application and is running under a 2.0 app pool.

    The site works great except for one thing: it takes forever to get a request back. I've been tracking it with Chrome's Inspect Element: it queries the site and can take up to 45 seconds to answer. Once it does, the page(s) render instantly; it's that initial request that's killing it. I see no error logs or issues in the application, Windows Event Viewer, or even the IIS logs, so I'm not sure where to start looking next.

    One change is that the app previously sat behind a PIX firewall and is now behind a larger network environment in a DMZ (and I believe NetScaler is also being used to manage the network). I do not have rights to look at the network itself, but I can ask the data center folks to dig deeper. First, though, I want to make sure it's not my application or IIS causing the slowdown.

    In summary: the .NET 2.0 application works great in IIS 6.x; moved to an IIS 7.5 server, it is now slow on the initial request, but once it responds it renders pages instantly.

    Edit, for the solution: it turned out to be SOAP calls slowing the site down. In the new datacenter my application cannot make its SOAP calls, so they time out after 40-45 seconds. Now I'm trying to find out if I can install a proxy server to redirect these.

  • Handling the Outlook 2007 AutoArchive PST file

    - by Doug Luxem
    We encourage our users to enable AutoArchive in Outlook 2007 as a way to manage their mailbox sizes. However, we frequently run into problems with the archive.pst file that is generated. The two main problems are:

    1. The archive.pst file is located in the user's local profile directory and is never backed up. A dead hard drive or stolen laptop could mean months or years of missing email. All other personal data is stored on network shares, but we can't do that for Outlook PST files.
    2. Without some sort of manual intervention, the archive will grow to an enormous size. Although Outlook 2007 SP2 handles large files better than before, it still results in slow response times from Outlook and an increased likelihood of a corrupt PST file.

    To mitigate these problems personally, I move the archives to a C:\Outlook folder and manually back that up to a shared drive every month or so. Additionally, I rotate archive files every year so that I have one file per year (archive2008.pst, etc.). Obviously, asking our users to do the same wouldn't help much. We need some sort of automated solution to take care of points 1 and 2. I have to imagine this is a common problem for Exchange organizations, so what is the best method to handle it?
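
    One avenue we're looking at but haven't deployed (so treat this as a sketch): Outlook honours a ForcePSTPath policy value that controls where newly created PSTs, including archive.pst, land, which could at least move them somewhere a backup job can see. It does not relocate PSTs that already exist, and the target folder below is our choice:

        Windows Registry Editor Version 5.00

        ; sketch: 12.0 is Outlook 2007
        [HKEY_CURRENT_USER\Software\Policies\Microsoft\Office\12.0\Outlook]
        "ForcePSTPath"="C:\\Outlook"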

  • Solaris 10: How to image a machine?

    - by nonot1
    I've got a Solaris 10 workstation that I'd like to create a full image backup of. The machine has two drives: one UFS for the system root, and one ZFS for data storage. I intend to add a third HD to keep the backup images of both primary drives (including any ZFS snapshots). The purpose is not disaster recovery, but rather to allow me to easily blow away a series of application installation/configuration changes I intend to try.

    What's the best way to do this? I'm not too familiar with Solaris, but have some basic Linux knowledge. I looked at Clonezilla, but it does not support Solaris. I'm OK with just a dd | gzip > image style solution, but I'd need some way to first zero out the unused blocks on the primary drives to help gzip. The drives are much larger than my third drive, but hardly hold any real data.

    Update, to clarify: I specifically want to avoid using any file-system snapshot functionality, because part of the app configuration changes involve/depend slightly on existing and new snapshots. Ideally the full collection of snapshots should be part of the backup. Virtualization is not an option, because the goal is to do performance evaluation on a very specific hardware configuration. For the same reason, spurious "back up" snapshots could skew the performance data. Thank you.
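
    To illustrate the dd | gzip style I have in mind (a sketch only - device and path names are made up, and I haven't run this on Solaris 10):

        # fill free space with zeros so gzip can squeeze the unused blocks;
        # the first dd is expected to error out when the disk fills up
        dd if=/dev/zero of=/data/zerofile bs=1024k
        rm /data/zerofile

        # then image the raw slice onto the third drive
        dd if=/dev/rdsk/c0t1d0s0 bs=1024k | gzip -1 > /backup/data-slice.img.gz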

  • How Do I Stop NFS Clients from Using All of the NFS Server's Resources?

    - by Ken S.
    I have a v4 NFS server running on Ubuntu 12.04 LTS. It is the main repository for the web assets that four external nginx webservers mount and serve to site visitors. These client servers connect via a read-only mount. Each of these RO servers displays this when I check the mounts:

        10.0.0.90:/assets on /var/www/assets type nfs4 (ro,addr=10.0.0.90,clientaddr=0.0.0.0)

    The NFS master's /etc/exports file contains an entry like this for each server:

        /mnt/lvm-ext4 10.0.0.40(ro,fsid=0,insecure,no_subtree_check,async)

    The problem I'm seeing is that these clients eventually consume all the RAM on the NFS server and cause it to crash. With watch free -m I can see the used memory creep up until it's exhausted, and the free buffers/cache number creep down to near zero, before the server eventually locks up and requires a reboot.

    There is a memory leak somewhere causing this, and the optimal solution would be to find and fix it, but in the meantime I need a way for the NFS server to protect itself from connected clients using all its RAM. There must be some setting that limits the resources the clients can use, but I can't seem to find it. I've tried adjusting the values for rsize and wsize, but they don't seem to help or be related. Thanks for any tips.
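
    For now I've been experimenting with generic kernel memory knobs as a stopgap - a sketch, not the NFS-specific limit I'm after, and the values are arbitrary:

        # keep a larger emergency reserve, and reclaim dentry/inode
        # caches more aggressively
        sudo sysctl -w vm.min_free_kbytes=65536
        sudo sysctl -w vm.vfs_cache_pressure=200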

  • Postfix cannot deliver mail to Cyrus mailbox on Ubuntu 11.10 server

    - by user105804
    I have installed and configured Postfix and Cyrus IMAP server with web-cyradm according to this document: http://www.delouw.ch/linux/Postfix-Cyrus-Web-cyradm-HOWTO/html/index.html . I can access the web-cyradm interface, I can create new domains and new users, and I can log in via IMAP after creating a user account. However, Postfix fails to deliver mail to the Cyrus mailboxes; the mail log contains the errors shown below.

    Installing any IMAP server other than Cyrus is not an option, because it is needed by the web application. Please advise me how to make Postfix deliver email to Cyrus mailboxes. The solution need not involve web-cyradm, but there should be a web interface for managing mail domains and mailboxes that is as user-friendly as possible.

        Dec 30 22:46:17 acer-tower cyrus/lmtpunix[4865]: accepted connection
        Dec 30 22:46:17 acer-tower cyrus/lmtpunix[4865]: lmtp connection preauth'd as postman
        Dec 30 22:46:17 acer-tower postfix/cleanup[4868]: 065D5240035: message-id=<[email protected]>
        Dec 30 22:46:17 acer-tower cyrus/lmtpunix[4865]: verify_user(user.imap0001) failed: Mailbox does not exist
        Dec 30 22:46:17 acer-tower postfix/bounce[4867]: 6C6CA24185C: sender non-delivery notification: 065D5240035
        Dec 30 22:46:17 acer-tower postfix/qmgr[4833]: 065D5240035: from=<>, size=3372, nrcpt=1 (queue active)
        Dec 30 22:46:17 acer-tower postfix/qmgr[4833]: 6C6CA24185C: removed
        Dec 30 22:46:17 acer-tower postfix/lmtp[4866]: 53421240372: to=<[email protected]>, orig_to=<[email protected]>, relay=home.webshop-software.ch[/tmp/lmtp], delay=165, delays=165/0.02/0.17/0.09, dsn=5.1.1, status=bounced (host home.webshop-software.ch[/tmp/lmtp] said: 550-Mailbox unknown. Either there is no mailbox associated with this 550-name or you do not have authorization to see it. 550 5.1.1 User unknown (in reply to RCPT TO command))
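
    One thing I still plan to check, sketched from the cyradm man page (the admin account name is an assumption from my setup): whether the mailbox Cyrus is complaining about actually exists, and creating it by hand if not:

        cyradm --user cyrus localhost
        localhost> lm user.*          # list existing mailboxes
        localhost> cm user.imap0001   # create the missing one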

  • IIS serving pages extremely slowly

    - by mos
    TL;DR: IIS 7 on WS2008R2 serves pages really slowly; everyone assumes it's because it's IIS and we should have gone with an Apache solution on Linux. I have no idea where to start debugging the problem.

    I work in a nearly all-MS shop with a bunch of fellow programmers who think Linux is the One True Way. Management recently added a Windows machine with IIS to serve Target Process (a third-party agile system), but the site runs extremely slowly. Everyone, to a man, assumes it's because it's on IIS, and if only management would grow a brain and get some Linux servers in here, we could really start cleaning things up! ...Right. Everyone "knows" IIS isn't fit to serve .txt files.

    ...Well, as the only non-Microsoft hater in the bunch, I am apparently the only one who thinks maybe the Linux guy who hated being told to set up the IIS server may have screwed things up. I'd like to go fix it, but I don't have any clue as to where to start, as I am not a sysadmin. Help?

  • IIS6 Sending a 404 for ".exe" files.

    - by Tracker1
    Recently a bunch of files I had set up for download via IIS6's web server stopped working. They are a number of setup files ending in ".exe" and were working prior to a few months ago. I have the file permissions set properly, and even enabled browsing in IIS to determine that the paths are indeed correct.

    I'm not sure if it is related, but directories with a period in their names stopped working as well, e.g.:

        ~/download/ApplicationName/0.9/AppName-setup-0.9.123b2.exe

    When I rename the directory to, say, 0_9, browsing works, but the file itself still gets a 404 from IIS.

    For now, I've set up FileZilla FTP for anonymous access to these files, but would prefer to continue using IIS. I've considered writing an HTTP handler to serve the .exe files, but would really prefer a configuration solution. I just can't figure out why it isn't working, as all the settings are correct: the directory is set up for read access, "Everyone" has read permissions on the files themselves, and directory browsing (aside from the "0.9" to "0_9" rename) shows the files.
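
    A guess I'd like to rule out, assuming URLScan is installed on the box (I haven't confirmed that it is): its defaults both deny .exe requests and reject URLs with a dot inside a path segment, which would match both symptoms. The relevant urlscan.ini bits would be something like:

        [Options]
        ; default is 0, which rejects any URL with a dot in a directory name
        AllowDotInPath=1

        [DenyExtensions]
        ; .exe is in this list by default; removing it would allow the
        ; setup downloads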

  • Evernote from vim

    - by juanpablo
    I'm searching for a way to edit Evernote notes from vim. I began with this:

        #!/bin/bash
        evernoteDir="$HOME/Library/Application*Support/Evernote/data"
        dataDir=$(ls -trlh $evernoteDir | tail -n 1 | awk '{print $NF}')
        contentDir="$evernoteDir/$dataDir/content"
        file=$(ls -trlh $contentDir | tail -n 1 | awk '{print $NF}')
        vim -c 's/div>/div>\r/g' $contentDir/$file/content.html

    (https://gist.github.com/1256416)

    Or maybe I should create a vim plugin for this... Do you have any suggestions?

    EDIT: for simpler editing of an Evernote note in HTML format, I made this vim function:

        " Markup function {{{
        fun! MkdToHtml() "{{{
            " markdown to html
            silent! execute '%s/ $/<br\/>/g'
            silent! execute '%s/\*\*\(.*\)\*\*/<b>\1<\/b>/g'
            silent! execute '%s/\t*###\(.*\)/<H3>\1<\/H3>/g'
        endf "}}}
        command! -complete=command MkdToHtml call MkdToHtml()
        nn <silent> <leader>mm :MkdToHtml<CR>
        " }}}

    and a vim function to open the last note edited:

        fun! LastEvernote() "{{{
            " a better solution is with the evernote api
            let evernoteDir=expand("$HOME")."/Library/Application*Support/Evernote/data"
            let dataDir=system("ls -trlh ".evernoteDir."| tail -n 1| awk '{print $NF}'")
            let contentDir=evernoteDir."/".dataDir."/content"
            let contentDir=substitute(contentDir,"\n","",'g')
            let note=system("ls -trlh ".contentDir." | tail -n 1| awk '{print $NF}'")
            let note=substitute(note,"\n","",'g')
            sil! exec 'sp '.contentDir.'/'.note.'/content.html'
            sil! exec '1s/>/>\r/g'
            sil! exec '%s/<br.*\/>/<br\/>\r/g'
            sil! exec '%s/<\//\r<\//g'
            sil! exec 'g/^\s*$/d'
            normal gg
            sil! exec '1,4fo'
            sil! exec '$-1,$fo'
        endf

    (https://gist.github.com/1289727)

  • Problem in accessing Windows shared folder on Ubuntu using terminal

    - by vikramtheone
    Hi guys,

    Description: I have two systems, one running Windows (the host) and one running Ubuntu, both on a LAN. On the Windows host I develop software intended for the Linux system, and because the Linux system has little external storage, my idea is to make the project folder on the host a shared folder with full access and reach it from Ubuntu over the network.

    To achieve this I installed Samba on Ubuntu; when I go to Places -> Network I can see the shared project folder and simply mount it. A link appears on the desktop, and using Nautilus I can open it and access the contents of the shared folder.

    Problem: even though I mount the shared project folder, it doesn't appear in /media or /mnt, so I don't know what path to use to access it from the terminal. For comparison: when I mount my USB stick, as expected, a link for the device appears on the desktop and I also see a folder under /media. So, similarly, I expected a mounted network share to appear under /mnt.

    Can anyone suggest what I should do? There are many posts around, but no solid solution for this problem. Help!!! :)

    Vikram
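
    What I've pieced together so far, but haven't verified on this machine: shares mounted from Nautilus go through gvfs and land under ~/.gvfs rather than /mnt or /media, and the alternative is to mount the share explicitly with cifs (the share and user names below are made up):

        # where Nautilus/gvfs mounts usually appear
        ls ~/.gvfs/

        # or mount the Windows share directly at a path of your choosing
        sudo mkdir -p /mnt/project
        sudo mount -t cifs //WINDOWSHOST/ProjectShare /mnt/project \
            -o username=winuser,uid=$(id -u),gid=$(id -g)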

  • What is the ideal way to set up multiple FTP enabled web accounts on Fedora?

    - by Nicholas Flynt
    I'm setting up a test server for use as a web development platform, and I'd like to mimic as closely as I can a typical shared hosting setup. That is, I'd like my server to have multiple FTP-enabled user accounts, each of which links to a directory containing the webroot of a site, and I'd like Apache to be able to easily see and manipulate these files.

    I'll admit I'm not as familiar with Fedora as I'd like: I run Ubuntu on my home box, and SELinux is giving me some grief. My initial plan was to have each user FTP into their home directory and put the web directory there as well, but SELinux throws a hissy fit when Apache tries to access anything outside of its web directory, so that plan was a no-go. Would it be wise to continue down this route, and perhaps mount web directories into user home folders so that FTP could still reach them, even though Apache sees them in /var/www like it expects? Would it make more sense to set up custom FTP accounts and use a single FTP user on the server box? What's the general course of action on something like this?

    I'm using vsftpd right now to host the web directories, which is why I like the home-directory approach (it's simple and secure), but of course there's bound to be a better way to go about it. Thanks. (I'll leave other things, like restricted DB access, to another post. Right now I'm interested in just getting FTP and Apache to play nice in a multi-user environment.)

    PS: For the record, an issue I ran into when doing all of this was that if Apache isn't running as the same user the FTP account saves files as, there are permission errors when FTP creates files, requiring the remote user to chmod them to fix it. A logical fix would be to run Apache in a special group, put all web users in this group, and have FTP default to giving this group read/write access to everything, like Apache would expect - but I never could figure out how to accomplish this; a rough sketch of what I mean follows. Bonus points and cake if you know a solution.
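
    Here's my rough sketch of that group idea, in case someone can confirm or correct it (names are placeholders, and I haven't worked out the SELinux side):

        # shared group for apache and the FTP users
        sudo groupadd webdev
        sudo usermod -aG webdev apache
        sudo usermod -aG webdev alice

        # per-site webroot owned by the user, group-shared with apache;
        # the setgid bit (2) makes new files inherit the webdev group
        sudo mkdir -p /var/www/sites/alice
        sudo chown alice:webdev /var/www/sites/alice
        sudo chmod 2775 /var/www/sites/alice

        # in /etc/vsftpd/vsftpd.conf, make uploads group-writable:
        #   local_umask=002

        # and label the tree so SELinux lets httpd read it
        sudo chcon -R -t httpd_sys_content_t /var/www/sites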

  • mysql.sock problem on Mac OS X, all Zend products

    - by Michael Stelly
    Hi folks. I posted this on the Zend forum, but I'm hoping I can get a speedier reply here. I've tried every solution provided on that forum with no luck. When I restart MySQL, everything appears OK:

        sudo /usr/local/zend/bin/zendctl.sh restart
        Password:
        /usr/local/zend/bin/apachectl stop [OK]
        /usr/local/zend/bin/apachectl start [OK]
        Stopping Zend Server GUI [Lighttpd] [OK]
        spawn-fcgi: child spawned successfully: PID: 7943
        Starting Zend Server GUI [Lighttpd] [OK]
        Stopping Java bridge [OK]
        Starting Java bridge [OK]
        Shutting down MySQL . SUCCESS!
        Starting MySQL . SUCCESS!

    Pinging localhost is also OK and resolves DNS to an IP:

        ping localhost
        PING localhost (127.0.0.1): 56 data bytes
        64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.048 ms
        64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms
        64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.066 ms
        64 bytes from 127.0.0.1: icmp_seq=3 ttl=64 time=0.076 ms
        64 bytes from 127.0.0.1: icmp_seq=4 ttl=64 time=0.064 ms

    But when I attempt to access the local URL for my app, I get the dreaded:

        Message: SQLSTATE[HY000] [2002] Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)

    This is a show-stopper for me. I appreciate any assistance. Thank you.
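
    One workaround I'm about to try, on the assumption that Zend Server's bundled MySQL creates its socket under its own tree rather than /tmp (the path below is a guess; mysqladmin variables | grep socket should show the real one):

        # point the expected location at wherever the socket actually lives
        sudo ln -s /usr/local/zend/mysql/tmp/mysql.sock /tmp/mysql.sock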

  • Daisy-chainable DisplayPort Monitors

    - by thepurplepixel
    I am the proud new owner of a MacBook Pro with a mini-DisplayPort. My desk setup used to allow me to position the screen of my old MacBook beside an external monitor, essentially allowing dual-head. It was also advantageous that my old MBP and my external monitor had the same resolution, 1440x900.

    Now I'm searching for a set of two monitors that I can use a HengeDock with. Unfortunately, the MacBook Pro suffers from having only one mini-DisplayPort. Looking at the spec for DisplayPort 1.2 (which the MBP supports), daisy chaining is supported. What I'm looking for is a monitor with two DisplayPorts, so I can daisy chain two monitors off the single mini-DisplayPort.

    What I don't want is a USB-based video solution. I need full acceleration on both monitors; an external video card won't cut it. I hope I don't have to wait a few years for these monitors to come out.

    TL;DR: I need two monitors with two DisplayPorts each that I can daisy-chain. Thanks!

  • Apache Server Redirect Subdomain to Port

    - by Matt Clark
    I am trying to set up my server with a Minecraft server on a non-standard port, with a subdomain redirect that sends Minecraft clients to the correct port, or shows a web page to browsers, i.e.:

        Minecraft:    minecraft.example.com:25565 -> example.com:25465
        Web browser:  minecraft.example.com:80    -> displays an HTML page

    I am attempting to do this with the following VirtualHosts in Apache:

        Listen 25565

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName minecraft.example.com
            DocumentRoot /var/www/example.com/minecraft
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/example.com/minecraft/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

        <VirtualHost *:25565>
            ServerAdmin [email protected]
            ServerName minecraft.example.com
            ProxyPass / http://localhost:25465 retry=1 acquire=3000 timeout=6$
            ProxyPassReverse / http://localhost:25465
        </VirtualHost>

    Running this configuration, when I browse to minecraft.example.com I can see the files in the /var/www/example.com/minecraft/ folder, but if I try to connect in Minecraft I get an exception, and in the browser I get a page with the following information:

        minecraft.example.com:25565 -> Proxy Error
        The proxy server received an invalid response from an upstream server.
        The proxy server could not handle the request GET /.
        Reason: Error reading from remote server

    Could anybody share some insight on what I may be doing wrong and what the best possible solution would be to fix this? Thanks.
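
    One thing I've read but haven't tried yet: mod_proxy's HTTP proxying can't carry the Minecraft protocol, since it isn't HTTP, and recent Minecraft clients can instead be pointed at a non-standard port with a DNS SRV record - no Apache involved (zone-file sketch; the TTL, priority and weight values are arbitrary):

        _minecraft._tcp.minecraft.example.com. 86400 IN SRV 0 5 25465 example.com.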

  • SVN, Samba and Symbolic Links. How to get them all to play together?

    - by Camsoft
    I've got a website project under version control that relies on files from an unversioned directory on the same server, via symbolic links. I'm currently storing the symbolic links in the repository. The idea is that if someone checks out a working copy onto the same server, they can edit and test the working copy of the project before committing it back to the repository. When they check out their working copy, it sets up the symlinks so that the entire site works during testing.

    The users who work on the project are Windows users, so I've set up Samba shares on the server and mapped them to network drives in Windows. People can edit their working copies directly on the server via the network shares, test them in the web browser, and then commit their changes back to the repository via TortoiseSVN.

    The problem: Samba resolves the symlinks as expected, but when a user tries to commit their changes back to the repository, TortoiseSVN thinks the linked files are part of the project and tries to commit the target files to the repository, not the symlinks themselves. I tried turning off symlink support in Samba, which means the linked files cannot be resolved; I don't really want people to have access to the linked files, nor do I want to import them into the repository. The problem with this is that I get:

        Can't stat '\\webserver\projects\working\project\symlinked_file.php'. Access is denied

    Apart from the symlink problem, everything else works 100% perfectly. Users can either check out website projects to their own machines and work on them (but not test), or check them out to their space on the dev web server, work on them, and fully test. So I don't want to change the workflow; I just need a solution to the symbolic link issue. Many thanks.

    Originally posted on Stack Overflow: http://stackoverflow.com/questions/2400917/svn-samba-and-symbolic-links-how-to-get-them-all-to-play-together

  • HP Pavilion DV6500 recovery disk failure

    - by Scott W
    I recently attempted to re-install Windows Vista on an HP Pavilion DV6500 using the factory recovery DVDs, but encountered a strange problem. When the recovery disk attempted to reformat the hard disk, it failed at 22%. The error message provided was not very informative, just the error code "0x400110020000 1005". A Google search turned up some people with a similar problem who asserted that HP has been known to ship corrupted recovery DVDs.

    The recovery disk did manage to reformat the recovery partition before failing, though, so recovering from the partition is no longer an option. It would be possible to reinstall from an off-the-shelf retail copy of Vista and then pull the drivers from HP's website, but I don't have access to a copy of Vista, and it would really be outrageous to have to purchase a new OS when I have a perfectly valid license already. I thought about biting the bullet and upgrading to Windows 7, but my understanding is that without Vista installed I'd be unable to use the upgrade version, and would be forced to purchase the more expensive non-upgrade retail copy (!).

    Can anyone suggest a possible solution to this catch-22? I've run out of ideas.

  • How to publish internal data to the internet - as simple as possible

    - by mlarsen
    I asked this at Stack Overflow, but I would like your opinion too, as it has as much to do with administration as it does with coding.

    We have a .NET two-tier application where a desktop program talks to a database. We support MS SQL Server 2000, 2005 and 2008, and Oracle 9, 10 and 11. The application is sold, not as shrink-wrap, but pretty close. It is quite important for us that installation and configuration are as easy as possible, as installation instructions are usually supplied in written form to the customer's internal IT department. Our application is usually not seen as mission-critical by the IT department, so we need to keep their work down to a minimum.

    Now we are starting to get wishes for a web application built on top of the same data. The web application will be hosted by us and delivered as a SaaS application. The challenge is how to move data back and forth between the web application and the customer's internal database. As I see it, we have some requirements (a sketch of the kind of agent I have in mind follows the list):

    - We must be ready to handle the situation where the customer's database is not accessible from the DMZ. I guess the easiest solution is for all communication to be initiated from inside the customer's LAN.
    - As little firewall configuration as possible. Ideally we can run without any special configuration, as long as outgoing traffic from the customer's LAN is not blocked. If we need something changed in the firewall, we must be able to document that the change is secure.
    - It doesn't have to be real time. Moving data in batches every ten minutes or so is OK.
    - Data moves both ways, but not in the same tables, so we don't have to support merges.
    - It would be nice if we don't have to roll our own framework completely.

    Looking forward to hearing your suggestions.
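
    To make the first two points concrete, a throwaway sketch of the kind of sync agent I have in mind - all traffic is initiated outbound over HTTPS from inside the LAN; the URL and payload are made up, and this is not production code:

        // C# (.NET 2.0): a minimal outbound-only batch pusher
        using System;
        using System.Net;
        using System.Threading;

        class SyncAgent
        {
            static void Main()
            {
                while (true)
                {
                    // placeholder: query rows changed since the last sync
                    string batch = ReadChangedRowsAsXml();
                    if (batch.Length > 0)
                    {
                        using (WebClient client = new WebClient())
                        {
                            client.Headers[HttpRequestHeader.ContentType] = "text/xml";
                            client.UploadString("https://saas.example.com/api/sync", batch);
                        }
                    }
                    Thread.Sleep(TimeSpan.FromMinutes(10)); // batch interval per the requirement
                }
            }

            static string ReadChangedRowsAsXml()
            {
                return ""; // placeholder for the local database read
            }
        }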

  • Windows 7 BSOD when changing power plan

    - by dd5
    I have a strange problem. When I want to change the power plan on my laptop from High performance to Balanced, Windows freezes and I get a BSOD. The power plan settings are all defaults. Laptop specs:

    - Intel Core i3 330M/350M
    - Intel HM55 Express chipset
    - 8 GB DDR3 1066 MHz SDRAM
    - ATI Mobility Radeon HD5730 with 1 GB DDR3 VRAM
    - Intel SSD 330, 128 GB
    - Windows 7 Home Premium

    I've searched the internet but couldn't find a similar issue. The BSODs first started when I installed this SSD, stopped when I updated the chipset controller driver, then started again yesterday when I wanted to change the power plan settings. Minidump file here. Any help with this weird issue is appreciated, thanks.

    Edit: I've run the Memory Diagnostic tool and the Intel SSD diagnostics, and updated the firmware to 3.2.1. None of these steps showed errors, but I still got a BSOD when changing the power plan. After analyzing the dump file via osronline.com, here are the first few lines:

        CRITICAL_OBJECT_TERMINATION (f4)
        A process or thread crucial to system operation has unexpectedly exited or been
        terminated. Several processes and threads are necessary for the operation of the
        system; when they are terminated (for any reason), the system can no longer function.
        Arguments:
        Arg1: 0000000000000003, Process
        Arg2: fffffa8008661b30, Terminating object
        Arg3: fffffa8008661e10, Process image file name
        Arg4: fffff800033de270, Explanatory message (ascii)

    Solution (provided by Vinayak): after installing Intel Rapid Storage Technology from MajorGeeks, I haven't experienced a BSOD since. Thank you :)

  • Dell Driver Support for Latitude E6320 Windows 7 Enterprise

    - by IamPolaris
    I recently did a reinstall of Windows 7 Enterprise on a Dell Latitude E6320, which is a 64-bit system. After the install process and the typical Windows Update steps, I looked at my Device Manager and found devices with missing drivers (screenshot of the Device Manager list missing here).

    After going to the Dell Support site, looking at the files, and doing some sleuthing, I found the following support document: http://downloads.dell.com/utility/Latitude%20E-Family%20%20Mobile%20Precision%20Re-Image%20How-To%20Guide%20-%20A03%20Rev%203%200.pdf

    This document hints in Appendix C that the Broadcom USH is the ControlPoint Security device and the unknown device is the micro free-fall sensor. The network controller is my wireless card, as I cannot connect wirelessly, and the final missing driver I am not sure about.

    Attempting to install the ControlPoint Security exe from the support page does not work: after downloading, I am told I am attempting to install a 32-bit driver on a 64-bit machine, even though I selected the Win7 64-bit option on the support page. Beyond that, some of the drivers (which are confusingly described, making it hard to tell what they do) and the system utilities that are supposed to make this process simpler will either (a) not run because they are 32-bit exes, or (b) fail because the support page cannot find the file being downloaded.

    Is there anything I can do to get, at the very least, my wireless running - but ideally all of my drivers? A solution which assumes Dell is completely incompetent would be ideal. :P Some forums have said I should download the chipset driver; others say to get the system utility file (DSS_UTIL_WIN_R282536.EXE). I have had no luck as of yet...
