Search Results


  • Prevent SSL certificate being returned for a specific domain

    - by jezmck
    Apologies for a long question: we've taken on a new client whose web hosting was previously on their in-house server, which still hosts their Exchange/Outlook email. We now host their domain (and many others) on our server, and they're complaining that they're getting errors in Outlook. I don't understand the Autodiscover mechanism at the root of the problem, but I believe I just need to stop the SSL certificate on our server from being returned when it is requested for a particular domain. As their mail provider put it: "Yes it is, the issue lies with {newclient}.com being pointed to your server IP, and that server has port 443 open with an SSL certificate associated to it. So when Outlook/ActiveSync use Autodiscover to find the mailbox settings, they find your SSL certificate (because 443 is open) and flag it as an error. The solution is to close 443 so it's not discovered; Autodiscover will then proceed to mail.{newclient}.com via the MX/service records and discover the correct SSL certificate." I'm new here and there was no hand-over, so I don't know whether other currently hosted sites need to accept SSL connections, though I suspect some do, or may in future. This is a live server, so I can't risk trying lots of options in case I take the server offline! I feel like I should be adding something like the following to vhosts.conf:

        <VirtualHost *:443>
            ServerName {newclient}.com
            ServerAlias www.{newclient}.com
            SSLEngine Off
            SSLCertificateFile {NONE}
            SSLCertificateKeyFile {NONE}
        </VirtualHost>

    Apologies for the fact that I don't know enough about this subject to ask the question more clearly!
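
    A note on the snippet above: with name-based virtual hosts sharing one IP address, Apache chooses the certificate before it knows the requested ServerName, so "SSLEngine Off" in a single vhost generally cannot suppress the certificate for just one domain. A minimal sketch of the "close 443" approach instead, assuming {newclient}.com can be moved onto its own IP address (203.0.113.5 below is a placeholder):

        # Reject HTTPS only on the IP that {newclient}.com resolves to, so
        # Autodiscover's probe of https://{newclient}.com fails fast and falls
        # through to mail.{newclient}.com via the MX/SRV records.
        iptables -A INPUT -d 203.0.113.5 -p tcp --dport 443 -j REJECT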

    Read the article

  • Configure IIS to pass-through CGI output without any conditioning

    - by Daniel Watrous
    I'm building a web service on Windows 2008 R2 with IIS 7.5 and Python 2.5. I have the handler mappings and everything else set up just fine, except that IIS is modifying what it gets back from the CGI script before sending it along to the client. Here's an example. I wrote the following CGI script:

        # hello.py
        print "Status: 400 Bad Request"
        print "Content-Type: text/html"
        print
        print "Error Message"

    According to the HTTP spec this should be fine, and a status of 400 should allow for a description of the error in the body of the response. When the server response actually comes back to me, I get the following:

        Status: 400 Bad Request
        Date: Fri, 11 Feb 2011 17:58:30 GMT
        X-Powered-By: ASP.NET
        Connection: close
        Content-Length: 11
        Server: Microsoft-IIS/7.5
        Content-Type: text/html

        Bad Request

    I've seen on this forum and others how I can change or eliminate the X-Powered-By header, but I would like IIS to leave the response alone altogether. I'm not sure why it takes my response, deletes "Error Message" from the body, replaces it with "Bad Request", and then adds all that other junk in. Is there some way to tell IIS to just send the response along without making any changes at all?
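
    A hedged sketch of the usual fix: IIS substitutes its own body for error-status responses unless the httpErrors section says otherwise, so a web.config at the site root along these lines should let the CGI body through untouched:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <!-- pass the CGI script's own error body through instead of
                 replacing it with the IIS default error page -->
            <httpErrors existingResponse="PassThrough" />
          </system.webServer>
        </configuration>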

    Read the article

  • Ubuntu 10.04 RGBA (transparent theme) bug

    - by Nrew
    I tried to follow this tutorial from ghacks.net, but I ended up with a bug. Every time I try to change the desktop background or the theme, it opens up lots of folders and then closes them again, so I cannot do anything when I try to change the background or the theme back to the default. Here is the tutorial. And I can't even shut down my machine now; please help. EDIT: I tried executing these commands in the terminal:

        sudo add-apt-repository ppa:erik-b-andersen/rgba-gtk
        sudo apt-get update && sudo apt-get upgrade
        sudo apt-get install gnome-color-chooser gtk2-module-rgba
        sudo apt-get install murrine-themes

    It installed GNOME Color Chooser. It works, and the windows became transparent, but I can't change back to the default theme and I can't set a background. Here's the screenshot when I try to change something in the preferences. It's just about acceptable if I can't change the theme, but it won't let me change the background either. Or maybe the background is changed but it's transparent just like the other windows; you can see that in the taskbar (or whatever it's called in Ubuntu). What can I do to at least make the background visible?
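
    A hedged sketch of one way back, assuming the breakage came from the PPA's packages: the ppa-purge tool downgrades everything installed from a given PPA to the stock Ubuntu versions and disables the PPA.

        # install the purge tool, then revert the rgba-gtk PPA's packages
        sudo apt-get install ppa-purge
        sudo ppa-purge ppa:erik-b-andersen/rgba-gtk
        # optionally remove the tools the tutorial added
        sudo apt-get remove gnome-color-chooser gtk2-module-rgba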

    Read the article

  • Can I make a TCP/IP session last less than 60 seconds?

    - by par
    Our server is overloaded with TCP/IP sessions; we have 1200-1500 of them, most of them hanging in the TIME_WAIT state. It turns out that a connection in TIME_WAIT occupies a socket until the 60-second timeout has elapsed. The problem is that the server becomes unresponsive and many clients are not getting served. I made a simple test: download an XML file from the server with Internet Explorer 8.0. The download finishes in a fraction of a second, but then I see that the TCP/IP connection hangs in TIME_WAIT for 60 seconds. Is there any way to get rid of the TIME_WAIT delay, or to shorten it, so the socket is freed for new connections? I understand why a TCP/IP connection enters TIME_WAIT, but I don't understand why Internet Explorer does not close the connection after the XML file download is over. The details: our server runs a web service written in Perl (mod_perl). The service provides weather data to clients. The client is a Flash application (actually a Flash ActiveX control embedded in a Windows application). OS: Ubuntu. The Apache KeepAlive option is set to 0.
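
    A hedged sketch of the usual Linux knobs, assuming the goal is to tame the TIME_WAIT buildup rather than eliminate it (the state exists for protocol correctness); note that tcp_tw_reuse only affects outgoing connections, and that re-enabling HTTP keep-alive in Apache reduces the connection churn in the first place:

        # allow sockets in TIME_WAIT to be reused for new outbound connections
        sudo sysctl -w net.ipv4.tcp_tw_reuse=1
        # cap how many sockets the kernel will hold in TIME_WAIT at once
        sudo sysctl -w net.ipv4.tcp_max_tw_buckets=180000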

    Read the article

  • Enrich a dataset of POIs with OpenStreetMap

    - by zero
    Update: thanks to hints from users (e.g. Oliver Salzburg and slhck) I became aware of gis.stackexchange.com, so I have moved the topic there myself. Please can you, or somebody with the permission, close this question, since we do not need the topic on two sites? Thanks for your work, and keep up the service here: Stack sites rock. I have a list of POIs, some with a full description and some with only a few data entries, like the following:

        6.9441000 50.9242000 [50677] (Ital) Casa di Biase [Köln]
        6.9373600 50.9291800 [50674] (Ital) Al Setaccio [Köln]

    However, I need the full dataset. Can I get this somewhere? If I have all the position data, is it possible to find the rest, i.e. (a) the name of the street and (b) the name of the town? So, for example, the data should finally look like this:

        10.5346100 52.1613600 [38300] (Chin) Wanbao Kommissstr.9 [Wolfenbüttel]
        13.2832500 52.4422600 [14167] (Ital) LaPergola Unter den Eichen 84d [Berlin]
        13.3177700 52.5062900 [10625] (Chin) Good Friends Kantstr.30 [Berlin]

    Can I do this with OpenStreetMap? Should I parse OpenStreetMap data? Or use OpenBabel?
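
    A hedged sketch of filling in street and town for one coordinate pair with OpenStreetMap's Nominatim reverse-geocoding service (for thousands of POIs, mind the public server's usage policy and throttle the requests, or run a local Nominatim instance):

        # reverse_geocode.py - minimal sketch against the public Nominatim API
        import json
        import urllib2  # Python 2, matching the era of this question

        lat, lon = 50.9242000, 6.9441000
        url = ("https://nominatim.openstreetmap.org/reverse"
               "?format=json&lat=%s&lon=%s&addressdetails=1" % (lat, lon))
        req = urllib2.Request(url, headers={"User-Agent": "poi-enricher-sketch"})
        data = json.load(urllib2.urlopen(req))
        addr = data.get("address", {})
        print addr.get("road"), addr.get("city") or addr.get("town")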

    Read the article

  • Kernel Memory Leak in Ubuntu 9.10?

    - by kayahr
    After some days of work (using suspend-to-RAM during the night) I notice I lose more and more available memory. Even when I close all applications the situation doesn't improve. I even went down to the command line and closed ALL running processes except the init process and the bash I'm working in. I unmounted all the RAM disks Ubuntu uses, and I even unloaded all modules that could be unloaded. But still "free" tells me that 1 GB of RAM is used (without buffers/cache). In "top" there is no visible process occupying all this memory. The only way to free the memory is to restart the machine. How can I find out where I'm losing all this memory? Is there a known suspect that can cause a problem like this? I'm using Ubuntu 9.10 64-bit on a Dell Latitude E6500 (4 GB RAM) with the latest closed-source NVIDIA driver, and GNOME with Compiz. The applications I use most of the time are Firefox and Eclipse. Any hints on how I can find the problem? I'm not a kernel hacker, so if the solution is patching the kernel or something like that then I might be out of the game...
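
    A hedged diagnostic sketch: memory that "free" reports as used while no process owns it is often kernel slab allocations, which /proc/meminfo and slabtop break down; a slab cache that grows after each suspend/resume cycle would point at a leaking driver (the closed-source graphics module being a classic suspect):

        # how much memory the kernel's slab caches hold in total
        grep -i slab /proc/meminfo
        # one-shot list of the biggest slab consumers, sorted by cache size
        sudo slabtop -o -s c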

    Read the article

  • .htaccess redirect to error page if port is not 80

    - by Momo
    I'm running a portable server from a USB stick. The thing is, I also have WAMP installed on my local machine, and its Apache gets started on Windows startup (for some reason I don't recall now, and it can't be changed). I want to prepare my portable server for situations like this, so killing httpd.exe in the process list before starting my portable server is not an option. Because of the already-active httpd.exe, my portable server's WordPress site can only be accessed through localhost:81. This is a problem, as the WP site is very dependent on its URL and I don't want to store the URL-with-port in the WP database. Here is what I want to do through .htaccess: on any path except the error.php file, check whether the port is 80; if it is not port 80, redirect to /error.php?code=port. Is it possible for this to take priority over WP's redirection and URL handling? In error.php I provide info on how to manually close httpd.exe and such, so my family and friends can access the portable site. It's sort of like a gallery and calendar application for events and other such stuff... Please help; I can't figure it out at all. I know others may not have Apache already running, but I want to prepare for such a situation. I tried something like the following, but it doesn't work:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        <If "%{SERVER_PORT} = 80">
            RewriteEngine On
            RewriteBase /
            RewriteRule ^index\.php$ - [L]
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule . /index.php [L]
        </If>
        <Else>
            RewriteEngine On
            RewriteRule ^(error.php)($|/) - [L]
            RewriteRule ^(.*)$ /error.php?code=port [L]
        </Else>
        </IfModule>
        # END WordPress

    By the way, the portable server (Server2Go) automatically generates vhosts based on the hostname set in its config file, and changes ports if the port (e.g. 80) is already open.
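
    A hedged sketch without <If>/<Else>: those blocks require Apache 2.4, and on older versions they are a syntax error, which may be exactly why the attempt above fails. Plain mod_rewrite conditions can test the port; placed above the WordPress rules, they run first:

        RewriteEngine On
        # not on port 80 and not already on the error page -> redirect
        RewriteCond %{SERVER_PORT} !^80$
        RewriteCond %{REQUEST_URI} !^/error\.php
        RewriteRule ^ /error.php?code=port [L]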

    Read the article

  • BizTalk/MQ - DCOM was unable to communicate with the computer xxx using any of the configured protocols

    - by NealWalters
    I've read the other questions on this same error, but don't see a close match to this scenario. We got 18 of these last night, between 12:17:13 and 12:39:37, on Windows 2008 R2. It caused us to lose connectivity between BizTalk 2010 and the WebSphere MQ machine. All our BizTalk machines (Prod, QA, Train, etc.) got the messages at roughly the same time and in the same quantity (about 18 of them). Computer xxx is the WebSphere MQ machine. What could cause this in the middle of the night? The servers have been configured and running in Prod for a couple of years. There is no Windows Firewall running, and the servers are practically on the same rack. Could a runaway or 100%-utilized CPU on the WebSphere MQ machine cause this issue? What else could cause it? BizTalk did NOT auto-recover from this situation. The above was followed by thousands of this message in the BizTalk event logs:

        The adapter "MQSeries" raised an error message. Details "Error encountered on
        Queue.Get Queue name = MyQManager/MyQueueName Reason code = 2354.".

    We restarted the BizTalk host instances, and it did not come back right away. It seemed we had to stop the host instances for about two minutes, then start them.

    Read the article

  • Adobe e-book reader ("Digital Editions") not downloading on Mac OS X

    - by doug
    I recently bought a couple of books from ebooks.com, which I thought would be ordinary PDF files. After paying for them and downloading them, you learn that while they are PDF files, they come with a lot of DRM baggage. The most conspicuous restriction is that you can only view these files using an Adobe ebook reader called Adobe Digital Editions. (Note: this is not the ordinary Adobe Acrobat Reader, or anything close; it's a dedicated app for reading DRM-laden files.) Fine, I'll know better next time. Still, I paid for these books and there's only one way I can actually read them, which happens to be an app that I seem to be unable to download. Here's the error message I get: "Couldn't write the application to the hard disk. Please verify the hard disk is available and try again." I've tried several different browsers. My rig is an MBP running OS X 10.6.2. I've also checked the Adobe boards and this doesn't appear to be a known issue, nor could I find anything on their discussion forums. And just to be sure, I've checked my hard disk: no problems, plenty of space, and I have no problem downloading other apps, nor have I ever had one.

    Read the article

  • Errors generating gpg keypair

    - by Evan Lynch
    I posted this originally on Stack Overflow but was told it was off-topic and this would be the better place, so I am reposting it here and deleting my original topic. I have a rather old PGP key, but I long ago lost the private key for it, so I'm trying to generate a new key with GPG on Windows 7. While it technically generates the key, GPA crashes every time I generate the keypair. I've tried this four times now, and I just downloaded what appears to be the latest version of Gpg4win and am still getting the problem. A comment on my original post informed me that "GPA crashes" is not a very good description of the problem, but unfortunately I can't do much better than that: all it tells me is "gpa.exe has crashed and will close now", and I don't get an error dump or anything. Is there anything I can do to fix this, or is this just a bug in the latest version of Gpg4win? Here are the specs of what I'm using: GPA 0.9.4, GnuPG 2.0.22. My operating system is Windows 7 64-bit, and I have 5 GB of RAM. Also, I was told to try generating the keypair on the command line, but I can't find any documentation on how to do this in Windows 7. If anyone could link me to current documentation for that, it would be a good workaround for this problem.
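
    A hedged sketch of the command-line route, assuming Gpg4win has put gpg2.exe on the PATH (run from cmd.exe):

        rem interactive key generation: prompts for key type, size, expiry, name and email
        gpg2 --gen-key
        rem confirm the new key pair exists
        gpg2 --list-secret-keys
        rem export the public key for sharing (the address is a placeholder)
        gpg2 --armor --export you@example.com > mykey.asc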

    Read the article

  • How to correctly configure DNS for Icelandic domains and Plesk

    - by Leonard Challis
    I have a domain registered with ISNIC (domain.is). They only let you set nameservers that pass their requirements, and I've been told it's this requirement I need to satisfy: "Nameserver must be consistently registered in DNS, i.e. its own A resource record must be available and a corresponding PTR resource record as well." I allocated two new IP addresses from my server host and set their PTR records to ns0.domain.is and ns1.domain.is. After that I created two A records for the domain in Plesk, again ns0.domain.is and ns1.domain.is, with their respective IPs. Next I went to the ISNIC page to register my nameservers, along with the IP addresses I'd allocated, and this worked perfectly for both, without error. So the final job was to set the nameservers for the domain via ISNIC's control panel; however, when I try, I get this error:

        Test results for "NS0.DOMAIN.IS":
        The nameserver ns1.vps123.vpsprovider.com is not consistently registered in DNS
          (ns1.vps123.vpsprovider.com => 1123.123.123.123 => vps123.vpsprovider.com)
        The nameserver ns0.vps123.vpsprovider.com is not consistently registered in DNS
          (ns0.vps123.vpsprovider.com => 1123.123.123.123 => vps123.vpsprovider.com)
        The nameserver ns0.DOMAIN.IS is missing from the NS record set for DOMAIN.IS

        Test results for "NS1.DOMAIN.IS":
        The nameserver ns1.DOMAIN.IS is missing from the NS record set for DOMAIN.IS
        The nameserver ns0.DOMAIN.IS is missing from the NS record set for DOMAIN.IS

    This is really at the limits of my DNS knowledge, I'm afraid. It feels like I'm close but missing a vital part: linking the nameservers in Plesk, or something?
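
    A hedged sketch of what the test output suggests is missing: the DOMAIN.IS zone itself must carry NS records naming ns0/ns1 (in Plesk, that means replacing the default NS entries for the domain), and the PTR records for the two IPs must point back at ns0/ns1.DOMAIN.IS rather than at the vpsprovider.com names shown in the output. Roughly these records, with placeholder glue addresses:

        DOMAIN.IS.      IN NS   ns0.DOMAIN.IS.
        DOMAIN.IS.      IN NS   ns1.DOMAIN.IS.
        ns0.DOMAIN.IS.  IN A    203.0.113.10   ; placeholder
        ns1.DOMAIN.IS.  IN A    203.0.113.11   ; placeholder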

    Read the article

  • What is the sysadmin's dream network printer? 6-8k pg/mo. Xerox, OkiData, Lexmark and HP are all fail

    - by Jacob
    How do I find out which printer brand and/or type doesn't suck? This information is hard to find, and manufacturers' websites won't reveal any issues with certain printers. After 10 years of dealing with network shared printers, I can't say that I have been impressed with any of the printer brands I've seen. Brother's little laser MFPs have been close to ideal for low volume, but that is it, period. OkiData, Lexmark, HP, Xerox solid ink printers: they all sucked in one way or another. Currently I'm looking to replace a Xerox ColorQube 8570 because it fails to print on a regular basis; sometimes it doesn't even boot VxWorks fully, it just hangs at 2% or whatever. I've used Xerox 8860MFPs and they sucked just as badly. I won't talk about inkjets here; that's most likely not what I'm looking for. We currently spend about $4k on paper and ink per year for this printer, at up to 6-8k pages per month, letter size, mostly black and white, low color usage. I want the printer to feed paper correctly, not to crash and burn when a PDF isn't to its taste (my favorite Xerox problem here), and to come with decent drivers for Windows and OS X. Print quality is not of the utmost importance, but paper does get sent to customers.

    Read the article

  • Apache reverse proxy with VirtualHost not serving a page

    - by Mr Aleph
    I have an Apache reverse proxy set up to pass requests to a Tomcat application. The config is similar to:

        <VirtualHost 100.100.100.100:80>
            ProxyPass        /AppName/App http://1.1.1.1/AppName/App
            ProxyPassReverse /AppName/App http://1.1.1.1/AppName/App
        </VirtualHost>

    I also have a page called summary.html that exists on 1.1.1.1 as http://1.1.1.1/AppName/summary.html. When I browse directly to it I have no problem viewing it, but if I try to get there via the reverse proxy I get a blank page. Wireshark shows me a 503, but this one is coming from the Apache reverse proxy (IP 100.100.100.100) and not from Tomcat (IP 1.1.1.1). Should I add http://1.1.1.1/AppName/ to the config? How? I tried it, but I got a blank page, and this time the browser's URL bar showed the internal IP of the Tomcat server, so, no go. Help is appreciated. Thanks. EDIT: This is the dump from Wireshark:

        GET /AppName/ HTTP/1.1
        Host: 100.100.100.100
        User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/534.52.7 (KHTML, like Gecko) Version/5.1.2 Safari/534.52.7
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Cache-Control: max-age=0
        Accept-Language: en-us
        Accept-Encoding: gzip, deflate
        Connection: keep-alive

        HTTP/1.1 404 Not Found
        Date: Tue, 30 Jan 2012 09:08:51 GMT
        Server: Apache
        Content-Length: 1
        Connection: close
        Content-Type: text/html; charset=iso-8859-1
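
    A hedged sketch of the likely fix: proxy the whole application path rather than the single /AppName/App endpoint, so summary.html and everything else under /AppName/ is covered, keeping the trailing slashes consistent on both sides:

        <VirtualHost 100.100.100.100:80>
            ProxyPass        /AppName/ http://1.1.1.1/AppName/
            ProxyPassReverse /AppName/ http://1.1.1.1/AppName/
        </VirtualHost>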

    Read the article

  • NFS confusion - writing many small files

    - by Antonis Christofides
    I have a Debian squeeze amd64 machine which is at the same time an NFSv4 server and client (it mounts itself through NFSv4). The local directory that leads directly to disk is /nfs4exports/mydir, whereas /nfs4mounts/mydir is the same thing mounted through NFS, using the machine's external IP address. Here is the line from fstab:

        176.9.116.102:/mydir /nfs4mounts/mydir nfs4 soft 0 0

    I have an application that writes many small files. If I write directly to /nfs4exports/mydir, it writes thousands of files per second; but if I write to /nfs4mounts/mydir, it writes about 4 files per second. I can greatly increase the speed if I add async to /etc/exports. (Writing a single large file to the NFS directory goes at more than 100 MB/s.) I am confused by the description of async in NFS. If my application accesses the local directory, system calls like write and close return even if caches have not been flushed to permanent storage. Apparently this is not true with NFS sync behaviour. However, with NFS async behaviour, even calls like fsync are ignored. Isn't it possible to work like local files do, i.e. generally work asynchronously, but honour fsync and O_SYNC?
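
    For reference, a hedged sketch of the /etc/exports line with async; the trade-off is exactly the fsync-ignoring behaviour described above, since a server crash can silently lose data the client believes is committed:

        # async acknowledges writes before they reach stable storage
        /nfs4exports/mydir 176.9.116.102(rw,async,no_subtree_check)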

    Read the article

  • NFS mount with NFSv3

    - by rahrahruby
    I am running CentOS 6.4, kernel 2.6.32-358.23.2.el6.x86_64 #1 SMP, and have the following NFS packages:

        nfs-utils-lib-1.1.5-6.el6.x86_64
        nfs4-acl-tools-0.3.3-6.el6.x86_64
        nfs-utils-1.2.3-36.el6.x86_64

    I am trying to mount an NFS volume with NFSv3. I have the following line in my fstab:

        172.16.11.87:/volume1/web /home/nas nfsver=3 rsize=8192,wsize=8192,timeo=14,intr(no_root_squach)

    When I run nfsstat it still shows the client as NFSv4:

        Server rpc stats:
        calls      badcalls   badauth    badclnt    xdrcall
        0          0          0          0          0

        Client rpc stats:
        calls      retrans    authrefrsh
        1988817    6          1988818

        Client nfs v4:
        null         read         write        commit       open         open_conf
        0         0% 36943     1% 21606     1% 401       0% 392369   19% 375986   18%
        open_noat    open_dgrd    close        setattr      fsinfo       renew
        0         0% 0         0% 387945   19% 22904     1% 3         0% 2914      0%
        setclntid    confirm      lock         lockt        locku        access
        1         0% 1         0% 0         0% 0         0% 0         0% 97856     4%
        getattr      lookup       lookup_root  remove       rename       link
        613996   30% 29888     1% 1         0% 1248      0% 253       0% 414       0%
        symlink      create       pathconf     statfs       readlink     readdir
        26        0% 226       0% 2         0% 3         0% 0         0% 3825      0%
        server_caps  delegreturn  getacl       setacl       fs_locations rel_lkowner
        5         0% 0         0% 0         0% 0         0% 0         0% 0         0%
        exchange_id  create_ses   destroy_ses  sequence     get_lease_t  reclaim_comp
        0         0% 0         0% 0         0% 0         0% 0         0% 0         0%
        layoutget    layoutcommit layoutreturn getdevlist   getdevinfo   ds_write
        0         0% 0         0% 0         0% 0         0% 0         0% 0         0%
        ds_commit
        0         0%
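
    A hedged sketch of a corrected fstab entry: the filesystem-type field ("nfs") appears to be missing, the v3 option is spelled "vers=3" rather than "nfsver=3", and no_root_squash is a server-side /etc/exports option, not a mount option:

        # /etc/fstab - request NFSv3 explicitly
        172.16.11.87:/volume1/web  /home/nas  nfs  vers=3,rsize=8192,wsize=8192,timeo=14,intr  0 0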

    Read the article

  • Remote Desktop Zooming

    - by codeulike
    Using Remote Desktop from a device with a hi-res screen (say, a Surface Pro) is decidedly tricky, as everything displays at 1:1 scale and so looks tiny. If the machine you are remoting into runs Server 2008 R2 or later, you can change the dpi zooming setting (see here). But for older hosts, that doesn't work. Using normal Remote Desktop, you can connect with a lower resolution, say 1280x768, and turn on smart sizing. However, smart sizing can scale down (to display a huge desktop in a small area) but does not seem to scale up (to display a small desktop in a big area). Using the Windows 8 Remote Desktop app, you can zoom, but you cannot set the default resolution of the host. What I want is a lower resolution on the host, scaled up to fit my screen. So both of those are close to what I want, but don't quite work. So the question is: does the Remote Desktop app allow the screen resolution to be set somehow? Is there some other Remote Desktop client that handles zooming better?
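
    For reference, a hedged sketch of the .rdp file settings behind the "lower resolution plus smart sizing" attempt described above (and indeed, smart sizing only shrinks an oversized desktop; it does not enlarge a small one):

        screen mode id:i:1
        desktopwidth:i:1280
        desktopheight:i:768
        smart sizing:i:1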

    Read the article

  • How can I get Windows 8 to automatically disable touch when I am using my Wacom pen and turn it back on when I am not?

    - by Robert
    I have an HP convertible tablet computer which I just upgraded to Windows 8. The problem (which existed under Windows 7 as well) is that this tablet has both a capacitive multi-touch screen AND a Wacom-type tablet built into the screen, which works via electromagnetic resonance with the provided stylus. My use case: most of the time I am happy using my fingers and the touch interface for navigation and whatnot. However, when I want to get down to serious note-taking/drawing, I want to use the Wacom functionality. The problem is that any comfortable writing position has me resting my arm/hand on the screen, which activates the touch technology (despite supposed palm-detection algorithms) and completely screws up my input paradigm. My ideal solution: ideally, since the Wacom technology senses when the pen is close to the screen, I would love to have touch automatically disabled whenever the Wacom pen is detected, and turned back on when it is out of range. This would let me seamlessly switch between the two input methods, and since I NEVER want to use both at once, it would work perfectly for me. An acceptable alternative: as a next-best option, it would be great to be able to turn off the touch functionality (leaving the Wacom in place) whenever I enter specific apps (e.g. OneNote, Photoshop, GIMP, Pencil, etc.) and then have it turn back on when I leave that app. A worst case that at least lets me use my PC: if I could create a shortcut (tile or otherwise) that flips touch on and off without going all the way through the nested computer settings, that would be better than nothing (see the sketch below). Thanks in advance for the help with one or more of the above.
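
    A hedged sketch of the worst-case toggle, assuming the devcon.exe utility from the Windows Driver Kit and that you have looked up the touch digitizer's hardware ID in Device Manager (the ID below is a placeholder):

        rem touch-off.bat - disable the capacitive digitizer
        devcon disable "HID\PLACEHOLDER_TOUCH_ID"
        rem touch-on.bat - re-enable it
        devcon enable "HID\PLACEHOLDER_TOUCH_ID"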

    Read the article

  • How to implement a virtual server running Ubuntu inside a Windows file server?

    - by user541445
    I work in a company that has some limitations regarding its budget. They need a client/server application. I can code the application; I've made mini tests with rudimentary applications that work. The thing is that they only have file servers, and the application they need must handle concurrency, so the database must live on their local network (a file server is the only choice). So far I have explored almost every option available, starting with desktop databases: Access (we have a license, but concurrency is just not effective; besides, it's Windows software, yuck); SQLite (nice, but since the information they manage is large, I performed some concurrency tests doing INSERTs and SELECTs at the same time, and it fails; somehow it just stops inserting); OpenOffice Base (I dissected a Base install only to find that it was a file-mode HSQLDB; I tried it, and it's not concurrent at all); etc. (name an open-source desktop database manager, and yes, I've tried it). Then server databases: call me stubborn, but the documentation of some server databases says they will work in a file mode, and I've tried a lot of products: Postgres, MySQL, Firebird, H2, Derby, Oracle Express, IBM DB2 Express, etc. So I really need a hand here. I've been doing this install/delete/depression routine with a lot of databases for three weeks, until I came up with a crazy idea and came here to express it. My question comes down to this: will installing a lightweight virtualization product like Virtual PC, and then installing a server OS like Ubuntu Server inside it, work? Will it run 24/7, or only while the Virtual PC software is open? Will this work on a file server? Any suggestion, answer, critique of the place I work, or crazy new concurrent database that will work on a file server is most welcome.
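
    A hedged sketch of the 24/7 part, assuming VirtualBox rather than Virtual PC: its headless mode starts a VM with no interactive window to close, and the command can be wired into a startup script or service wrapper on the file server:

        rem start the VM without any GUI window ("ubuntu-server" is a placeholder VM name)
        VBoxManage startvm "ubuntu-server" --type headless
        rem confirm it is running
        VBoxManage list runningvms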

    Read the article

  • Do eSATA HDD docking stations have a capacity limit?

    - by Michael Kjörling
    I'm looking at perhaps buying an eSATA docking station to be able to easily plug in and unplug hard disk drives, particularly (but not only) for backup purposes. Note: this is not a hardware shopping recommendation question; please don't vote to close it as such. Looking at different models, I find, for example, this page detailing the Deltaco SI-7908SUS, which specifically states "storage capacity: 1.5 TB" as well as "pictured hard disk not included, only for illustration". A customer review specifically mentions that it does not work with 3 TB drives, although it does not go into any detail such as OS, drive model, etc. From a brief glance, the vendor's web site does not appear to say either way. Then there is the quite similar Deltaco SI-7908B3, which boasts on the box "all 2.5" and 3.5" HDD/SSD compatible". My question is: why would what basically amounts to a SATA/eSATA adapter have any say in what storage capacities are supported? Does it? Assuming the OS supports the full capacity of the drive, why should introducing another (not even a different, really) connector change anything? Bonus question: might it make a difference if the docking station exposes multiple interfaces (as the SI-7908SUS does, with USB 2.0 and eSATA)? (I still think it shouldn't, but it'd be nice to have it confirmed.)

    Read the article

  • Monitor displays at 1024x768; scrolls screen for higher resolutions

    - by Matt
    I have a dual-monitor setup. Normally, both displays run at 1680x1050, and they have been set up this way for about a year. I'm using Windows XP Professional x64 SP2. Today, out of nowhere, one of the monitors kicked back to a lower resolution. I was not playing with any configuration at the time; in fact, all I had done was close a window (maybe a browser). The thing is, the resolution is still partially preserved, in that the screen scrolls when you move the mouse. So it's like looking through a 1024x768 window into a 1680x1050 world. The monitor itself does not appear to be damaged, because I also have it connected to my netbook (via KVM) and higher resolutions work fine. I tried uninstalling and reinstalling the drivers, to no avail. System Restore doesn't help either. I'm unsure of the exact ATI card I'm using; Device Manager lists it as "Radeon X300/X550/X1050". There is no Catalyst Control Center software installed. I tried to install it, but there doesn't seem to be a way to install it by itself: it forces you to install another driver, which breaks both of my displays, forcing me to go into safe mode and run System Restore again. Any ideas? Thanks

    Read the article

  • How can I export a folder of Firefox bookmarks into a text file?

    - by Memor-X
    I want to clean up my bookmark folders by getting rid of duplicate links. I have created a program which imports two text files, each containing URLs like this:

    File 1:

        http://www.google/com
        http://anime.stackexchange.com/
        https://www.fanfiction.net/guidelines/
        https://www.fanfiction.net/anime/Magical-Girl-Lyrical-Nanoha/?&srt=1&g1=2&lan=1&r=103&s=2

    File 2:

        http://scifi.stackexchange.com/
        http://scifi.stackexchange.com/questions/56142/why-didnt-dumbledore-just-hunt-voldemort-down
        http://anime.stackexchange.com/
        http://scifi.stackexchange.com/questions/5650/how-can-the-doctor-be-poisoned

    The program compares the two lists and creates a single master list with duplicate URLs removed. Now, I have a few backup bookmark folders in Firefox: on occasion I'll bookmark all tabs into a new folder named with the date of the backup, before I close off tabs or reset my PC. Each folder can have between 1000 and 2000 bookmarks, and sometimes there are a bunch of pages which keep getting bookmarked; e.g., I have ~50 pages on the Magical Girl Lyrical Nanoha wiki covering different spells, characters and terminology which I commonly look back on. I'd like to know how I can export a bookmark folder so I have a list of URLs similar to what I use in my program.
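
    A hedged sketch of one route, assuming you export everything via the Bookmarks Library's "Export Bookmarks to HTML..." command: each folder appears as an <H3> heading and each bookmark as an HREF attribute in the exported file, so a small script can pull out one folder's URLs (the folder name below is a placeholder):

        # export_folder.py - minimal sketch (Python 2)
        import re

        html = open("bookmarks.html").read()
        # grab everything from our folder's heading up to the next folder heading
        m = re.search(r'<H3[^>]*>Backup 2014-01-01</H3>(.*?)(?:<H3|\Z)', html, re.S)
        if m:
            for url in re.findall(r'HREF="([^"]+)"', m.group(1)):
                print url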

    Read the article

  • Protect all XML-RPC calls with HTTP basic auth but one

    - by bodom_lx
    I set up a Django project for smartphones serving XML-RPC methods over HTTPS and using basic auth. All XML-RPC methods require a username and password. I would like to implement an XML-RPC method that provides registration to the system. Obviously, this method should not require a username and password. The following is the Apache conf section responsible for basic auth:

        <Location /RPC2>
            AuthType Basic
            AuthName "Login Required"
            Require valid-user
            AuthBasicProvider wsgi
            WSGIAuthUserScript /path/to/auth.wsgi
        </Location>

    This is my auth.wsgi:

        import os
        import sys
        sys.stdout = sys.stderr
        sys.path.append('/path/to/project')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'project.settings'

        from django.contrib.auth.models import User
        from django import db

        def check_password(environ, user, password):
            """
            Authenticates apache/mod_wsgi against Django's auth database.
            """
            db.reset_queries()
            kwargs = {'username': user, 'is_active': True}
            try:
                # checks that the username is valid
                try:
                    user = User.objects.get(**kwargs)
                except User.DoesNotExist:
                    return None
                # verifies that the password is valid for the user
                if user.check_password(password):
                    return True
                else:
                    return False
            finally:
                db.connection.close()

    There are two dirty ways to achieve my aim in the current situation: (1) have a dummy username/password to be used when trying to register with the system; (2) have a separate Django/XML-RPC application on another URL (i.e. /register) that is not protected by basic auth. Both are very ugly, and I would also like to define a standard protocol to be used for services like mine (it's an open Dynamic Ridesharing Architecture). Is there a way to unprotect a single XML-RPC call (i.e. a defined POST request) even if all XML-RPC calls over /RPC2 are protected?
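
    One constraint worth noting: with XML-RPC the method name travels inside the POST body, and Apache's basic auth runs before the body is inspected, so exempting a single method on /RPC2 is awkward. A hedged sketch of the second option instead, assuming Apache 2.4 and that Django routes /register to a dispatcher exposing only the registration method:

        <Location /register>
            # no AuthType/Require valid-user here: anonymous access is allowed
            Require all granted
        </Location>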

    Read the article

  • Auto-rotate rotated images with mogrify

    - by Frank Presencia Fandos
    Some of my images were taken rotated but keep this orientation data. The problem is that, when using mogrify to convert them from JPG to PNG, that data seems to disappear. To show the problem, I think it's best to show the script and a screenshot. Script with the code: put it in a text file, give it execute permission, double-click it, run it (from a terminal if you wish) and wait a while. All the JPGs in that folder will be converted to PNG.

        #!/bin/bash
        echo "Converting JPG to png. Please don't close this window."
        mogrify -alpha on -format png *.JPG
        mogrify -alpha on -format png *.jpg

    It works great and adds an alpha channel. This is personally useful when I edit the images later, so I don't have to add the channel individually. Now the screenshot that illustrates the problem: as you can see, the original (JPG) previews are right, the converted previews are wrong, the Shotwell rendering is right, and the GIMP edit is wrong; GIMP didn't even say the image was rotated, as it usually does with other images. How can I edit my script to preserve the orientation?
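
    A hedged sketch of the likely fix: ImageMagick's -auto-orient option rotates the pixels according to the EXIF orientation tag during conversion, which matters here because PNG carries no EXIF tag for viewers to honour afterwards:

        #!/bin/bash
        echo "Converting JPG to png. Please don't close this window."
        mogrify -auto-orient -alpha on -format png *.JPG
        mogrify -auto-orient -alpha on -format png *.jpg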

    Read the article

  • Problems installing MySQL and phpMyAdmin to localhost

    - by Joel
    Hi guys, I know there have been many similar questions, but as far as I can tell, most of the other people have gotten further than I have... I'm trying to get a WAMP setup happening. I've got PHP and Apache running and talking to each other. PHP is in C:\PHP, Apache is in its default Program Files folder, and MySQL is in its default install location. I have localhost set up at D:\public_html\ and I'm able to navigate to localhost and see HTML and PHP files. But I have a simple MySQL test file:

        <?php
        // hostname or ip of server (for local testing, localhost should work)
        $dbServer='localhost';
        // username and password to log onto db server
        $dbUser='root';
        $dbPass='';
        // name of database
        $dbName='test';

        $link = mysql_connect("$dbServer", "$dbUser", "$dbPass") or die("Could not connect");
        print "Connected successfully<br>";
        mysql_select_db("$dbName") or die("Could not select database");
        print "Database selected successfully<br>";
        // close connection
        mysql_close($link);
        ?>

    When I try to open this, I get "Could not connect". Now, I haven't even created a database yet, because I can't log into MySQL with phpMyAdmin, so I think I've done something wrong in my MySQL install; the two aren't talking to each other. I guess my main question is: how do I first create a database in MySQL, to be sure I have even installed it correctly?
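
    A hedged sketch of a first sanity check that bypasses PHP entirely, using the mysql command-line client that ships with MySQL (run it from MySQL's bin directory if it is not on your PATH; press Enter at the password prompt if root has no password yet):

        C:\> mysql -u root -p
        mysql> CREATE DATABASE test;
        mysql> SHOW DATABASES;
        mysql> EXIT;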

    Read the article

  • Certificate enrollment request chain not trusted

    - by makerofthings7
    I am working through an MSFT lab for DirectAccess and need to create a web certificate. The instructions ask me to do the following:

    1. On EDGE1, click Start, type mmc, and then press ENTER. Click Yes at the User Account Control prompt.
    2. Click File, and then click Add/Remove Snap-ins.
    3. Click Certificates, click Add, click Computer account, click Next, select Local computer, click Finish, and then click OK.
    4. In the console tree of the Certificates snap-in, open Certificates (Local Computer)\Personal\Certificates.
    5. Right-click Certificates, point to All Tasks, and then click Request New Certificate.
    6. Click Next twice.
    7. On the Request Certificates page, click Web Server, and then click More information is required to enroll for this certificate.
    8. On the Subject tab of the Certificate Properties dialog box, in Subject name, for Type, select Common Name.
    9. In Value, type edge1.contoso.com, and then click Add. Click OK, click Enroll, and then click Finish.
    10. In the details pane of the Certificates snap-in, verify that a new certificate with the name edge1.contoso.com was enrolled with Intended Purposes of Server Authentication.
    11. Right-click the certificate, and then click Properties. In Friendly Name, type IP-HTTPS Certificate, and then click OK.
    12. Close the console window. If you are prompted to save settings, click No.

    In production, our company has overridden the Web Server template, and it doesn't seem to be issuing certificates with the full CA chain. When I look at the issued certificate's properties, both tiers of the two-tier CA hierarchy are missing. How can I fix this? I'm not sure where to look outside the GUI.
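
    A hedged diagnostic sketch: export the issued certificate from the MMC to a file, then let certutil show exactly where chain building fails, including whether the intermediate and root CA certificates can be fetched via the AIA URLs embedded in the certificate:

        certutil -verify -urlfetch edge1.cer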

    Read the article
