Search Results

Search found 11042 results on 442 pages for 'side'.

  • Getting 2560x1600 out of an ATI Radeon HD 4670 on Windows 7

    - by Alexey
    Greetings, I've got a Dell Studio XPS 1640 laptop (with an ATI Radeon HD 4670 graphics card) running Windows 7, and just bought a Dell 3007HPC 30-inch monitor for it. I'm trying to figure out how to get the full 2560x1600 experience out of this setup. Here's what I've done so far: Plugged in using an HDMI cable and an HDMI-to-DVI converter on the monitor side. Opened up Screen Resolution; the maximum supported setting is 1920x1080. Tried that (several times) - sometimes it doesn't work at all (blank screen); other times, it only shows the first 1280x800 pixels on the bigger screen. Tried using the Catalyst Control Center - played with various settings there, couldn't get the screen to show anything interesting. Tried using PowerStrip to set a custom resolution; again, no luck. Spoke to a Dell Preferred Customer Support guy for about an hour before giving up. He remote-accessed my computer, and told me that (1) the maximum supported resolution for the XPS 1640 is 1920x1080, and (2) 'it seems to be working from where he sees it, must be a connection issue'. None of this has helped. Does anybody have ideas? Should I be using a different cable setup? Am I using PowerStrip wrong?
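    For reference, a back-of-the-envelope bandwidth check (assuming standard CVT reduced-blanking timings; these numbers are not from the post above) shows why a single-link HDMI-to-DVI adapter is unlikely to ever reach 2560x1600:

      2560 x 1600 @ 60 Hz with reduced blanking needs roughly 2720 x 1646 total pixels per frame
      required pixel clock ~= 2720 * 1646 * 60 ~= 268.6 MHz
      single-link DVI (which is what an HDMI-to-DVI adapter provides) is capped at a 165 MHz TMDS clock,
      enough for about 1920x1200 @ 60 Hz but no more

    So a 30-inch 2560x1600 panel would normally need dual-link DVI or DisplayPort rather than an HDMI-to-single-link-DVI path.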

    Read the article

  • New hard drive for backup? [closed]

    - by glaeven
    I have come to realize that I need another external drive to use with my MacBook Pro. I currently have a 1TB WD MyBook Essential that I have been using for about a year and a half. I have it currently partitioned into two drives, one for backup (I named it Leonov) and one for movies, TV shows and other large files I don't need very often (I call that side Discovery One). I use Time Machine for backups since it is completely automated and I can restore from it without much trouble (I have had to at least three times now). As of now, Leonov is full enough that every backup deletes an old one, and Discovery One is approaching its limits. I would like to get a new drive and move one of the sides to it. What are some reliable, external (~1TB) drives for under or around $100? Would it be easier to move the movies (et al.) or the backups to the new drive? I should also say that all of my important documents (for school and the like, just not my music) are also synced to Dropbox as another form of backup and access.

    Read the article

  • Selective Pointer device remapping in linux

    - by user6368
    I just got an HP 2710p (HP tablet, with digitizer), and I've played around with Linux for a while now, and thought I would go ahead and install it. Everything works fine, excepting normal tablet functions, which is to be expected. I'm working on the screen rotation, and there are on-screen keyboards, etc., but I'm having issues with the stylus. I can tap and left-click with the stylus as normal, but the side button (which in Windows functions as a right mouse button) appears as a 'button 2' to xev (a middle/scroll-wheel button). I can switch 'button 2' and 'button 3' universally using xmodmap, but I'd like to do so exclusively for the stylus so I don't screw up regular pointing devices. Altering xorg.conf (which is surprisingly bare) with the recommended sections (adding sections for each of the stylus buttons) does nothing. I'm running CrunchBang, which is an Ubuntu/Debian variant with Openbox as the window manager. Thanks. Also, as a separate note, does anybody know how to detect when I rotate and/or latch the lid shut? I was thinking maybe I could run a script to switch the buttons when I close it, but I can't find any information.
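    For the per-device remapping, a minimal sketch with xinput (the stylus name below is a guess and would need to be replaced with whatever xinput list actually reports):

      # Find the exact name or id of the stylus device
      xinput list
      # Swap buttons 2 and 3 for the stylus only, leaving mice and the touchpad untouched
      xinput set-button-map "Wacom ISDv4 Tablet stylus" 1 3 2

    This only lasts for the current X session, so it would typically go in a startup script.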

    Read the article

  • Update git on mac

    - by Meltemi
    I can't remember how I installed git a while back... but now it's living in /usr/bin/git and needs to be updated. I don't care how (pre-compiled or build my own), but what I don't want is another version existing somewhere else. I vaguely remember curl(ing) down the source and compiling it, but I'm not positive. Anyway, what's the easiest way to keep Git up-to-date under Mac OS X? Side question: I'm not that familiar with git. Once it's installed, is it ENTIRELY contained within its directory? So, in my case, everything about git on my machine (excluding the actual code repositories of course) is in /usr/bin/git/? If so, then can I just move git around with a simple mv -R /usr/bin/git /opt/git? Then update my $PATH and everything should work as before? If so, then I suppose I could just install again by any method and to any directory... and then move the new one into /usr/bin, replacing the old version?!? Or is this bad?
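    As a quick check (a sketch, nothing specific to any one install method): /usr/bin/git is normally a single binary rather than a directory, and the helper commands live under a separate exec path, which is why moving just that one file usually isn't the whole story:

      # Which git is first in $PATH, and its version
      which git
      git --version
      # Where the git-core helper commands are installed
      git --exec-path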

    Read the article

  • Alternatives to ntbackup

    - by Chris J
    Is anyone aware of any good (for a given value of "good") alternatives to ntbackup? There are a few quirks with ntbackup that I still find odd, and occasionally (and for no obvious reason) ntbackup just doesn't back up - usually complaining that the wrong tape is inserted, even though it's been told to just use the tape that's in the drive regardless. I've experimented with cygwin's tar and cpio and Win32 ports of these utilities, but I've not had any luck getting them to see the tape device (so if someone does use these utilities to write to a tape device, I'd also be interested to know how). Essentially all I'm looking for is a reliable program that I can tell to back up a list of locations to the tape, and not to care about anything like formatting tapes, or sticking volume labels on them (and conversely then makes it fairly straightforward to restore from as well). On the flip side, I don't want something that will attempt to manage our tapes. Don't get me wrong here -- ntbackup can do the job, but its quirks are making me look at possible alternatives. Any suggestions? If it's open source, even better.

    Read the article

  • How to get gigabit network speeds on Windows XP?

    - by JB
    We've just installed gigabit switches at work, and things on the Linux side are going well. Our Linux boxes, which use an Intel Corporation 82566DM-2 Gigabit NIC (according to lspci), consistently get over 900 Mbits/sec: iperf -c ipserver ------------------------------------------------------------ Client connecting to ipserver, TCP port 5001 TCP window size: 16.0 KByte (default) ------------------------------------------------------------ [ 3] local 192.168.40.9 port 39823 connected with 192.168.1.115 port 5001 [ ID] Interval Transfer Bandwidth [ 3] 0.0-10.0 sec 1.08 GBytes 929 Mbits/sec We have a bunch of Windows XP 64-bit machines that use Broadcom NetXtreme 57xx cards. I spent around a day trying to get equivalent speeds on them, but couldn't get above 200 Mbits/sec. I noticed the Windows iperf tests said that the TCP window size was 8 KB by default (as opposed to 16 KB on Linux), so I modified my test to reflect that. Still no love. I went to Broadcom's site, downloaded the latest drivers for the card and installed them. Still no love. However, finally, I tried a 64 KB window size with the new drivers, and finally an improvement! $ iperf -c ipserver -w64k ------------------------------------------------------------ Client connecting to ipserver, TCP port 5001 TCP window size: 64.0 KByte ------------------------------------------------------------ [ 3] local 192.168.40.214 port 1848 connected with 192.168.1.115 port 5001 [ ID] Interval Transfer Bandwidth [ 3] 0.0-10.0 sec 933 MBytes 782 Mbits/sec Much better, but still not really taking advantage of the full capabilities of the network. If the Linux box can reach 950 Mbits/sec consistently, this box should be able to as well. Also, if you're wondering about the medium, this is over the same cable... I'm switching back and forth. Any suggestions or ideas would be really welcome. Thanks!
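    A few follow-up iperf runs that might narrow it down (a sketch, assuming iperf 2.x on both ends; the window also has to be raised on the server side, or the client's -w is effectively capped):

      # Server side: listen with a larger TCP window
      iperf -s -w 256k
      # Client side: single stream with a larger window
      iperf -c ipserver -w 256k
      # Client side: four parallel streams, to see whether one TCP connection is the bottleneck
      iperf -c ipserver -w 256k -P 4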

    Read the article

  • Debugging "clogged" TCP connections

    - by Nikratio
    I'm having trouble with an internet connection that seems to randomly "freeze" arbitrary TCP connections. The connections stay established, but no data is coming through. When this happens, netstat still shows the connection status as ESTABLISHED on both the local computer: Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name Timer tcp 0 53 192.168.0.10:41129 173.255.235.238:143 ESTABLISHED 8219/gnutls-cli on (79.31/13/0) ...and the remote server: Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name Timer tcp 0 0 173.255.235.238:143 68.5.174.98:41129 ESTABLISHED 5303/imapd off (0.00/0/0) However, it seems that no data at all is transferred. If I run strace on the local and remote process, both just show a repeating sequence of select calls (with different fds of course), e.g. select(6, [0 5], NULL, NULL, {0, 50000}) = 0 (Timeout) select(6, [0 5], NULL, NULL, {0, 50000}) = 0 (Timeout) select(6, [0 5], NULL, NULL, {0, 50000}) = 0 (Timeout) The internet connection overall does not seem affected; I can still establish new connections to the same service on the same server without any problems. However, the affected local applications seem to be unaware of the problem and just hang. When I look at a packet capture of this connection on the client side, the last thing that happens is that the client transmits some data, then nothing happens for about 1100 seconds, and then several TCP retransmission requests go out, with intervals increasing from 4 seconds to 130 seconds. No activity is captured after that. After about 10 minutes, the connection on the remote end disappears from netstat (I wasn't able to catch any intermediate state), but it still stays ESTABLISHED on the local end. Finally, after some more minutes, the local application aborts with a timeout and disappears from the local netstat output as well. Does anyone have a suggestion of how I could debug this further to find out where the problem lies and how to fix it? Additionally, and/or as a temporary workaround: is there some way to globally reduce the timeout on the client and/or server to reduce the time before the local application aborts?
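    On the Linux side, the knobs that control how long a stalled connection lingers are the TCP retransmission and keepalive sysctls (a sketch, not a verified fix; keepalive only applies to sockets that set SO_KEEPALIVE):

      # Current settings
      sysctl net.ipv4.tcp_retries2
      sysctl net.ipv4.tcp_keepalive_time net.ipv4.tcp_keepalive_intvl net.ipv4.tcp_keepalive_probes
      # Give up on unacknowledged data sooner (the default of 15 retries works out to roughly 15-30 minutes)
      sudo sysctl -w net.ipv4.tcp_retries2=8
      # Start probing idle-but-established connections after 2 minutes instead of 2 hours
      sudo sysctl -w net.ipv4.tcp_keepalive_time=120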

    Read the article

  • IIS Manager - Connect to Another Server (Win7 to Win2008 server)

    - by Matt
    I am running Windows 7 Ultimate. If I open up IIS Manager, I see a list of "connections" on the left-hand side. In previous versions, I would be able to select an option to "connect to another server" or "connect to another machine", but there is no such option visible anywhere here. The only thing in the list is my local machine. Even in the address bar, if I manually type in the server location (\\servername, I even tried just servername), nothing happens (it just reverts back to my current local computer). The documentation at http://technet.microsoft.com/en-us/library/cc732466%28WS.10%29.aspx seems to imply the very same steps... but there is just no button or menu option anywhere to do this. Am I missing something? I'm not even seeing a grayed-out menu option. EDIT: Under the "File" menu, I see 2 options: Save Connections (grayed out) and Exit. Under the "Connections" pane, I see 1 button, grayed out. When I hover the mouse, it simply says "Up"; it appears to be usable if I browse into an element in my local computer's IIS settings. If I right-click inside the pane itself, I see: Refresh, Add Website (to the current host), Start, Stop, Rename, Switch to Content View. UPDATE: I downloaded and installed the Remote Server Administration Tools from http://www.microsoft.com/downloads/details.aspx?FamilyID=7D2F6AD7-656B-4313-A005-4E344E43997D&displaylang=en, and I enabled everything listed under "Remote Server Administration Tools" under "Turn Windows Features On or Off". Still nothing.

    Read the article

  • Where do I define a group policy that will set a user's desktop background color to green the first time they log in?

    - by Tyler
    Servers: W2k8 R2 x64. Desktops: Win7 Pro x64. Our current group policy uses a custom ADM file to define certain properties of the desktop (background image (centered), background color is green (00 74 00)). This policy works for us, but the downside is that policies defined in our custom ADM are only applied after a GPUpdate /Force is applied. We would like these desktop theme settings to be applied the first time the user logs onto the computer. I've been working on a new policy that forces the computer to wait for the network when the user logs on, to handle folder redirection. The reason for writing the new policy was to resolve the issue that a user needs to run GPUpdate /Force the first time they log in, so it doesn't make sense for me to implement the new policy if there is still something that requires GPUpdate /Force to get the user into the state that we want. I've moved the setting for the background image out into Admin Templates > Desktop > Desktop > "Desktop Wallpaper", so this is now being set properly when the user first logs in. Now I'm left with a black background until I force a group policy update. I have tried to play around with setting a default "Theme" and had limited success; this was not reliable enough to call a solution. I suppose I could set the background color with a script? Any thoughts? It feels like I'm missing something obvious, or that this should be much easier than it is.
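    A logon script could set the color directly; the registry value below is the standard per-user location, and "0 116 0" is just the post's 00 74 00 green converted to decimal (this is a sketch - the change normally only shows up at the next logon or desktop refresh):

      reg add "HKCU\Control Panel\Colors" /v Background /t REG_SZ /d "0 116 0" /f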

    Read the article

  • Ubuntu 11.10 ATI Drivers vesa park

    - by Matthias
    This is probably not an issue; from all I can tell, it seems my hardware and drivers are properly installed. However, when I go to System Settings - System Info - Graphics, I get Driver: VESA:PARK, Experience: Standard. My graphics card is an ATI Mobility Radeon HD 5470 512MB. I am pretty sure it's not a same-die GPU, since there is a fan exhaust at the side of my laptop which I presume is the exhaust for the GPU... I have no clue whatsoever what this means. I installed the ATI drivers first using the 'additional drivers' method. However, I also decided to look up a manual installation via the terminal, since I've had problems before with Ubuntu and ATI cards. I used wget and something along the lines of sh and dpkg -i; I can't recall exactly, I took them from another Stack Overflow answer. Anyway, it seems everything is installed properly since it shows up with these commands: sudo lshw -C video and fglrxinfo. However, the first command seems to detect hardware, not the driver per se, although the driver is probably needed to detect the hardware anyway, which would indicate it's properly installed. I am still not sure about that VESA:PARK thing though. I'd like to know what it means. Also, if someone happens to know a good way of testing whether the GPU is connected/being used... some sort of benchmark maybe... I'd like to hear it. P.S. I can find my way around in Ubuntu, but I would probably still be considered a rookie by more experienced users.
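    A couple of quick checks (a sketch, not from the original post) that show which kernel driver is actually bound to the card and which OpenGL renderer is in use, plus a crude smoke test:

      # The "Kernel driver in use" line shows radeon vs fglrx vs vesa
      lspci -k | grep -A 3 VGA
      # Renderer string; "llvmpipe" or "software rasterizer" here means no hardware acceleration
      glxinfo | grep -i "vendor\|renderer"
      # Not a real benchmark, but it quickly shows whether 3D acceleration works at all
      glxgears

    glxinfo and glxgears come from the mesa-utils package if they aren't already installed.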

    Read the article

  • Widespread misinterpretation of DNS rules in resolving wildcards

    - by Dominic Sayers
    [EDITED to add: This problem has gone away on its own. I believe Cloudflare's name resolution may have been to blame. See my own answer below] Here is a snippet of my zone file *.example.com. 300 IN CNAME proxy.herokuapp.com. foo.example.com. 300 IN A 111.111.111.111 If I dig @8.8.8.8 foo.example.com I get the answer I expect: ;; ANSWER SECTION: foo.example.com. 30 IN A 111.111.111.111 The same is true of all other public DNS servers I've tried. However, when I try to set up a check with Pingdom to a URL on foo.example.com it instead sends the traffic to my Heroku app referenced by the *.example.com RR. The same is true of checks set up on New Relic, Errplane and traffic generated by the Heroku app itself. So on the one side, all public DNS servers interpret the zone file one way. Yet four service providers all interpret it a different way, one that differs to the standard suggested by RFC 4592. My question is: are these reputable, mature service providers all wrong? Or is it little me?
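    For what it's worth, RFC 4592 says a wildcard only synthesizes answers for names that do not already exist in the zone, so foo.example.com (which has its own A record) should never match *.example.com. A quick way to compare how any given resolver behaves (the names are the placeholders from the zone snippet above):

      # Name with an explicit RR: should return 111.111.111.111, never the wildcard CNAME
      dig @8.8.8.8 foo.example.com A +short
      # Name covered only by the wildcard: should follow the CNAME to proxy.herokuapp.com
      dig @8.8.8.8 bar.example.com A +short
      # Repeat both queries against whatever resolver the monitoring service uses, if it can be identified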

    Read the article

  • Very uneven CPU utilization with SQL Server 2012 on 2 processor computer with 16 cores / processor

    - by cooplarsh
    After installing SQL Server Enterprise 2012 with the Server + CAL license model on a computer with 2 processors, each with 16 cores (and no hyperthreading involved), and putting the server under extremely heavy load, the 16 cores on the first processor were very underutilized, the first 4 cores on the 2nd CPU were heavily utilized, and the last 12 cores were not used at all (because of the 20-core limit for this SQL Server version). Total CPU utilization was displaying as around 25%. Unfortunately, the server suffered from extremely poor performance, even though if the tasks had been evenly distributed across the 20 cores it wouldn't have been anywhere near as bad. The Windows Server was running on a VMware virtual image under ESX Server, but all of the CPU was allocated to the Windows server. We tried changing affinity settings (e.g., allocating most cores to CPU and the others to I/O), but that didn't help solve the performance problems. Upgrading the product edition to SQL Server Enterprise Core 2012 not only allowed SQL Server to utilize the 12 previously unused cores on the 2nd processor, but it also resulted in a much more even distribution of tasks across all of the processors. To get through the backlog of requests, CPU utilization jumped to around 90%, and then came down to around 33% once it was caught up; performance improved dramatically once we failed over to the newly updated version, and the performance issues went away. I was wondering if anyone knows what might cause SQL Server to unevenly distribute the load, relying almost exclusively on the first 4 cores of the 2nd processor (which had 12 cores idle), and allocate only a few tasks to each of the 16 cores on the first processor. Also, is there any way we could have more evenly distributed the load across the 20 cores that were being used, without the product edition upgrade? The flip side of that question is: what did the product upgrade do that caused SQL Server to start evenly distributing the load across all of the cores that it recognized? Thanks for any insight and/or links that might help me better understand how to make sense of what was happening.
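    One way to see which schedulers the engine actually considered usable under each license (a sketch based on the documented sys.dm_os_schedulers DMV; cores excluded by the edition's core limit typically show up as VISIBLE OFFLINE):

      SELECT scheduler_id, cpu_id, status, is_online, current_tasks_count
      FROM sys.dm_os_schedulers
      WHERE status IN ('VISIBLE ONLINE', 'VISIBLE OFFLINE');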

    Read the article

  • printing parallel over ethernet cable

    - by Crudler
    I have a bit of an interesting challenge :) I have a machine with a parallel printer output, and I want it to be able to print instead to a printer in a different room; I know that parallel isn't great over long distances. I found this: http://www.amazon.com/over-Cat5-Extension-Cable-Adapter/dp/B002WJ9S6Y%3FSubscriptionId%3DAKIAINHICTCYYZGJWT4Q%26tag%3Dusbprintercables.net-20%26linkCode%3Dxm2%26camp%3D2025%26creative%3D165953%26creativeASIN%3DB002WJ9S6Y which will let me connect over Cat5, but it's USB to Cat5. My machine can only output on parallel (it's not a computer), so what I was thinking of getting is a parallel (F) to USB adapter and a USB to parallel (M) adapter for either side, i.e. machine - parallel - USB - Cat5 - USB - parallel - printer. Just seems a bit messy :) Suggestions? Another thing I would like to try is to get rid of the old-school parallel printer and instead use a network-based multifunction. Would this be possible? I.e. machine - parallel - USB - Cat5 - Ethernet print server - network printer. This might be rougher because the machine cannot "know" that we are using a network printer; it can ONLY print to LPT1. Thanks!

    Read the article

  • Is there a way to tag MP3s the way you can with photos in Live Photo Gallery or Picasa?

    - by bangoker
    I really do love the way you can tag pictures in Windows Live Photo Gallery. I find it incredibly useful to be able to tag the same picture with the tag "Cancun" and have it be automatically included when I look for the tags "Beach", "Trips" and "Fun Pic", since it's a child tag of the former and also has a tag for the latter. I also like that I can search by rating in the pics. On the other hand, tagging MP3s has always been a pain in the ass, because I just find it too hard to classify music! Is it pop? Techno? Techno-pop? Rock? Indie-rock? Indie-post-classic-pop-techno? I hate how ID3 has just one tag for genre, so I've tried tricks like putting all the genres I think it fits into that field, and then having lists in Winamp (which I don't use anymore) that filter on words in the ID3 tags (i.e., Tag: Genre, Contains: Rock = rock list). But then, what about moods? I want to be able to tag my songs by genre and by mood, you know: happy, concentrate/work, party, romantic (wink wink), etc. Is there any way to do this, preferably in a way in which the tags could carry over to other music players? Thanks
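    ID3v2 does allow free-form "user text" (TXXX) frames, which some players and taggers can search on even though most ignore them; a minimal sketch with the Python mutagen library (the file name and tag names here are made up):

      from mutagen.id3 import ID3, TXXX

      tags = ID3("song.mp3")
      # Add custom frames for mood and a free-form grouping, alongside the normal genre
      tags.add(TXXX(encoding=3, desc="MOOD", text=["happy", "party"]))
      tags.add(TXXX(encoding=3, desc="GROUPING", text=["indie-rock", "work"]))
      tags.save()

    How well these carry over depends entirely on the player; the standard genre (TCON) frame is still the only one everything understands.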

    Read the article

  • Excel 2010 data validation warning (compatibility mode)

    - by Madmanguruman
    We have some legacy worksheets that were created in Excel 2003, which are used by LabVIEW-based test automation software. The current LabVIEW software can only handle the legacy .xls format, so we're forced to keep these worksheets as-is for the time being. We've migrated to Office 2010 and when working with these worksheets, I see this warning: "The following features in this workbook are not supported by earlier versions of Excel. These features may be lost or degraded when you save this workbook in the currently selected file format. Click Continue to save the workbook anyway. To keep all of your features, click Cancel and then save the file in one of the new file formats." "Significant loss of functionality" "One or more cells in this workbook contain data validation rules which refer to values on other worksheets. These data validation rules will not be saved." When I click 'Find', some cells that do indeed have validation rules are highlighted, but those rules are all on the same worksheet! We're using simple list-based validation, with some cells off to the side containing the valid values (for example, cell B4 has a List with Source "=$D$4:$E$4") This makes no sense to me whatsoever. One, the workbook was created in Excel 2003, so obviously we couldn't implement a feature that doesn't exist. Secondly, the modifications we're making don't involve changing the validation rules at all. Thirdly, the complaint that Excel is making is incorrect! All of the rules are on the same worksheet as the target. As if the story wasn't bizarre enough: I went ahead and saved the worksheet with Excel 2010. I then went to an old computer back in the lab and opened the document with Excel 2003. Guess what - the validations were untouched! My questions are: is this a legitimate bug in Excel 2010, or is this some exotic error in the legacy .xls worksheet that is confusing the heck out of Excel 2010? Has anyone else observed this issue working in compatibility mode?

    Read the article

  • Why do most songs in my media collection play twice? - Corrupt media?

    - by Dean
    Problem: Whether I'm playing the media with Rhythmbox on Ubuntu, Winamp on Windows, or my Nokia N95's media player, most of my audio files (OK, maybe only 40%) play twice. Info: I have a 500GB external 2.5" WD HDD, with a 150GB primary FAT32 partition labeled MUSIC. Inside this, I have about 500 folders containing about 10,000 MP3/WMA/M4A/WAV files. I manage the drive using Ubuntu 9.10, and frequently copy data to/from it using rsync, or on Windows, TotalCopy. The visual output is different in each media player, but each behaves as if the one MP3 file has the same song on it twice, and as soon as it ends it begins again. Winamp shows that the song goes for twice as long as it should. The N95's media player shows the progress bar off the right-hand side of the screen when it begins playing (then jumps back to the left, then continues along...). Rhythmbox doesn't show me how long the song is, nor does the progress bar move along the screen. Plea: It seems to me that somewhere along the line my collection has become corrupt... but where? And how? And please, someone tell me I can fix it!! TIA, Dean.
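    One possible cause (an assumption, not a diagnosis) is that each affected file contains a second MPEG stream or junk appended after the last frame, which some players keep decoding; a stream checker such as mp3val can report and optionally repair that:

      # Report structural problems (extra MPEG streams, garbage, truncated frames) without touching the file
      mp3val "song.mp3"
      # Attempt a repair (mp3val keeps a .bak copy of the original by default)
      mp3val -f "song.mp3"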

    Read the article

  • Macs don't connect to wifi access point but PCs will

    - by Josh
    So, as a side project I'm going to try and figure out why the wifi APs in my building exhibit the following behavior: - They typically allow all types of computers to connect without issues - Sometimes Apples can't get an IP address but will still connect to the AP's signal - Less often, PCs can't connect to the wifi (same as above - yes signal, no IP addy) - Don't let Raiders fans on no matter the time of day! My first thought was that the DHCP leases were all taken up when the Apples would try to connect, and it was just their unlucky timing, but I would then try to log on with a PC that had a new, unleased MAC address and it would work... Could this be something to do with interoperability between an apple wifi card, and the APs? Different parts of the DHCP lease being taken up first? The fact that the Seattle Mariners might actually be good this year?? If this hasn't used up everyone's patience (with my crappy sports jokes), something else I could use some help with: - We don't have the model or type of AP - This is because there is no documentation available for them, and they literally look like small white boxes with no writing on them. Also, the company that installed them is out of business, so the situation might be that no docs will ever be on the way. -- Do you guys have any ideas on how to figure out what we have? Thanks as always for all the help, and I'm looking forward to the day when I know enough to start contributing back to the site, Josh
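    One low-tech way to identify the mystery APs is from their BSSIDs (the radios' MAC addresses), since the first three octets are the manufacturer's OUI; a sketch from any Windows 7 client in range:

      rem List visible SSIDs with their BSSIDs, channel, radio type and signal strength
      netsh wlan show networks mode=bssid
      rem Look up the first three octets of each BSSID in the IEEE OUI registry to get the vendor name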

    Read the article

  • Geographically distributed file system with preferred locality

    - by dpb
    Hi All -- I'm building an application that needs to distribute a standard file server across a few sites over a WAN. Basically, each site needs to write a lot of miscellaneous files of varying size (some in the 100s of MB range, but most small), and the application is written such that collisions aren't a problem. I'd like to have a system set up that meets the following qualifications: Each site can store files in a shared "namespace"; that is, all the files would show up in the same filesystem. Each site would not send data over the WAN unless necessary, i.e., there would be local storage on each side of the WAN that would be "merged" into the same logical filesystem. Linux & free ($$$) is a must. Basically, something like a central NFS share would meet most of the requirements; however, it would not allow the locally written data to stay local. All data from remote sides of the WAN would be copied locally all the time. I have looked into Lustre, and have run some successful tests with it; however, it appears to distribute files fairly uniformly across the distributed storage. I have dug through the documentation and have not found anything that will automatically "prefer" local storage over remote storage. Even something that went with the lowest-latency storage would be fine. It would work most of the time, which would meet this application's requirements. Any ideas?

    Read the article

  • Fix for php 5.3.9 libxsl security "bug" fix

    - by Question Mark
    Just this morning I updated my Debian server to PHP 5.3.9; the changelog (last item in the list) has a fix for this bug, and now when running any hosted site using XSL transforms I get: Warning: XSLTProcessor::transformToXml(): Can't set libxslt security properties, not doing transformation for security reasons. I'm not using any <sax:output> tags in my XSLT at all. Does anybody have any information on this? Current chatter about it is thin, so I'm a little lost. Using the suggestion about switching ini settings on and off on either side of ->transformToXml() - ini_set("xsl.security_prefs", XSL_SECPREFS_NONE) or $xsl->setSecurityPreferences(XSL_SECPREFS_NONE) - brings me back to the same error. Many thanks. Progress: Upgrading libxml and recompiling libxslt against the new version was a good suggestion, though it has not fixed the issue. Compiling the latest PHP 5.3 snapshot does not fix the issue. Solution: I'm unsure what actually solved this; very sorry for anyone else having the same problem. Firstly I upgraded libxml, then applied a few patches, then went into the PHP source for the XSL parser and added some debugging and a few tweaks; after a few compiles, getting the configure args right, the error went away and wasn't reproducible. I would definitely recommend upgrading libxml as Petr suggested below and then grabbing the latest snapshot from php.net.
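    For reference, a minimal sketch of how the security preference is meant to be relaxed (constant and method names as documented for later PHP releases; on 5.3.9 only the ini setting exists, since XSLTProcessor::setSecurityPrefs() arrived in 5.4). Note that the documented constant is XSL_SECPREF_NONE, without the trailing S used in the snippet above, so that call may have been passing an undefined constant:

      <?php
      // Sketch only: $xslDoc and $xmlDoc are assumed to be DOMDocument objects already loaded
      ini_set('xsl.security_prefs', XSL_SECPREF_NONE);

      $proc = new XSLTProcessor();
      $proc->importStylesheet($xslDoc);
      $out = $proc->transformToXml($xmlDoc);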

    Read the article

  • Ubuntu server or Debian server (to run C++ apps developed on Ubuntu)

    - by skyeagle
    I have written a number of C++ server-side daemons for my website, using my Ubuntu 9.10 dev machine. The C++ apps I mentioned above are "GUI-less" daemons (and libraries used by the daemons). I am now about to host my website and need to decide whether to go with Debian server or Ubuntu server. In a nutshell, here is the situation: I developed on Ubuntu desktop because I preferred the friendlier GUI. I would like to deploy on Debian server because of the (perceived?) robustness of Debian server over Ubuntu server (I may be totally wrong here - and in fact, this is really what this question is all about). If Debian server is indeed more robust than Ubuntu server, then I have no choice but to go with Debian server - BUT, will my Ubuntu-developed C++ apps run on the server? (Or do I need to recompile them on the server? I'd HATE to have to do this, because I want to keep the server machine clean and light - no GUI, no dev tools, etc.) This last question is really about binary compatibility between Ubuntu and Debian. I want the server to be robust, secure and stable, and simply act as a server (i.e. LAMP and very little else - no GUI, etc.). Given that requirement, and the fact that I need to run my C++ apps (developed on Ubuntu 9.10), I need advice on which OS to choose for the server. Ideally, any advice will be backed with a reason. I am particularly interested in hearing from people who have been in an identical situation, or done something similar.
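    Binary compatibility here mostly comes down to the shared-library versions the daemons link against (glibc, libstdc++, and friends); a quick check on the dev box shows exactly what a binary requires (the daemon name below is a placeholder):

      # Shared libraries the daemon needs; anything reported "not found" on the server is a problem
      ldd ./mydaemon
      # Minimum glibc / libstdc++ symbol versions the binary was built against
      objdump -T ./mydaemon | grep -o 'GLIBC_[0-9.]*' | sort -u
      objdump -T ./mydaemon | grep -o 'GLIBCXX_[0-9.]*' | sort -u

    If the server's libc6 and libstdc++6 are at least those versions, the binaries will generally load; otherwise a rebuild (or static linking) is needed.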

    Read the article

  • apache2 mysql authentication module and SHA1 encryption

    - by Luca Rossi
    I found myself in a setup where I need to enable some authentication method using MySQL. I already have a user scheme. That user scheme is working like a charm with MD5 passwords and CRYPT, but when I turn to SHA1sum it says: [Fri Oct 26 00:03:20 2012] [error] Unsupported encryption type: Sha1sum No useful debug information in the log files. This is my setup and some info (Debian 6; Apache and SSL installed packages): root@sistemichiocciola:/etc/apache2/mods-available# dpkg --list | grep apache ii apache2 2.2.16-6+squeeze8 Apache HTTP Server metapackage ii apache2-mpm-prefork 2.2.16-6+squeeze8 Apache HTTP Server - traditional non-threaded model ii apache2-utils 2.2.16-6+squeeze8 utility programs for webservers ii apache2.2-bin 2.2.16-6+squeeze8 Apache HTTP Server common binary files ii apache2.2-common 2.2.16-6+squeeze8 Apache HTTP Server common files ii libapache2-mod-auth-mysql 4.3.9-13+b1 Apache 2 module for MySQL authentication ii libapache2-mod-php5 5.3.3-7+squeeze14 server-side, HTML-embedded scripting language (Apache 2 module) root@sistemichiocciola:/etc/apache2/sites-enabled# dpkg --list | grep ssl ii libssl-dev 0.9.8o-4squeeze13 SSL development libraries, header files and documentation ii libssl0.9.8 0.9.8o-4squeeze13 SSL shared libraries ii openssl 0.9.8o-4squeeze13 Secure Socket Layer (SSL) binary and related cryptographic tools ii openssl-blacklist 0.5-2 list of blacklisted OpenSSL RSA keys ii ssl-cert 1.0.28 simple debconf wrapper for OpenSSL my vhost setup: AuthMySQL On Auth_MySQL_Host localhost Auth_MySQL_User XXX Auth_MySQL_Password YYY Auth_MySQL_DB users AuthName "Sistemi Chiocciola Sezione Informatica" AuthType Basic # require valid-user require group informatica Auth_MySQL_Encryption_Types Crypt Sha1sum AuthBasicAuthoritative Off AuthUserFile /dev/null Auth_MySQL_Password_Table users Auth_MYSQL_username_field email Auth_MYSQL_password_field password AuthMySQL_Empty_Passwords Off AuthMySQL_Group_Table http_groups Auth_MySQL_Group_Field user_group Have I missed a package/configuration or something?
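    Whatever the correct directive token turns out to be, a SHA1-style scheme normally expects the stored value to be the 40-character hex digest that MySQL's own SHA1() produces; a quick sanity check against the table and columns named in the config above (the password value is made up):

      -- What a SHA1 hash should look like: 40 hex characters
      SELECT SHA1('secret');
      -- Store a test password for one user
      UPDATE users SET password = SHA1('secret') WHERE email = 'user@example.com';

    If the stored values don't look like that, the module would still reject logins even once the encryption type is accepted.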

    Read the article

  • Poor SSL performance with vsftpd

    - by petrus
    I'm trying to tweak vsftpd to achieve maximum performance for my usage: I have only one or two clients that connect to the server. File size is between ~15MB and 1GB. A typical transfer batch represents between 1 and 2GB of data. For testing purposes, I'm using a tmpfs on both sides (thus eliminating any disk bottleneck) with a single 1GB file. When SSL is disabled, performance is good, with a transfer rate at ~120MB/s (reaching the limits of gigabit networking). With SSL enabled only for control traffic (and not data traffic), performance drops to about 112MB/s, which is still within acceptable limits. However, when SSL is enabled for data flows, the transfer speed drops dramatically: 6.7MB/s using 3DES & SHA (ssl_ciphers=DES-CBC3-SHA in vsftpd.conf), 16MB/s using DES & SHA (ssl_ciphers=DES-CBC-SHA). I didn't test other ciphers, but from what I can see from the CPU usage during the transfer, it seems that vsftpd is only using a single CPU/core per client. While this may be fine for large FTP sites with hundreds of clients, I'd like to avoid this behavior and use more resources on the server. On a side note, if you have any ideas regarding other OpenSSL ciphers...
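    Since DES and 3DES are among the slowest ciphers OpenSSL offers, one obvious experiment (a sketch, not a tested config) is to allow an AES suite for the data connection and re-run the same tmpfs transfer:

      # vsftpd.conf
      ssl_enable=YES
      ssl_ciphers=AES128-SHA

    Also, vsftpd handles each session in its own process, so a single transfer will always be limited to one core; splitting the batch across two or three simultaneous connections is the usual way to put more cores to work.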

    Read the article

  • How do I increase the buffer size for domain sockets in OS X 10.6

    - by Chas. Owens
    In Linux I have no problem dumping tons of data into a domain socket, but the same code on OS X 10.6.2 blows up after about 65 records. The socket reader code looks like #!/usr/bin/perl use strict; use warnings; use IO::Socket; unlink "foo"; my $sock = IO::Socket::UNIX->new ( Local => 'foo', Type => SOCK_DGRAM, Timeout => 600, ) or die "Could not create socket: $!\n"; while (<$sock>) { chomp; print "[$_]\n"; } And the client code looks like #!/usr/bin/perl use strict; use warnings; use IO::Socket; my $sock = IO::Socket::UNIX->new ( Peer => 'foo', Type => SOCK_DGRAM, Timeout => 600, ) or die "Could not create socket: $!\n"; for my $i (1 .. 1_000_000) { print $sock "$i\n" or die $!; } close $sock; The error message I get is "No buffer space available at write.pl line 15." It seems fairly obvious that there is a difference in the buffer size between Linux and OS X, but I don't know how to set it on OS X (or what the possible negative side effects might be).
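    On OS X the limits for Unix-domain datagram sockets are sysctls (the names below exist on 10.6-era Darwin as far as I can tell; treat this as a sketch to verify rather than a confirmed fix). "No buffer space available" (ENOBUFS) on a SOCK_DGRAM writer usually means the reader's receive buffer is full, since datagram sends don't block waiting for space:

      # Current limits for Unix-domain datagram sockets, plus the overall socket buffer cap
      sysctl net.local.dgram.maxdgram net.local.dgram.recvspace
      sysctl kern.ipc.maxsockbuf
      # Example: raise the datagram receive buffer (value is illustrative)
      sudo sysctl -w net.local.dgram.recvspace=65536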

    Read the article

  • Convert a cassette tape recording to digital format

    - by Electric Automation
    Has anyone been successful with transferring audio cassette tape recordings to a digital format? I would like to preserve old cassette tape recordings of my grandparents in some digital format: MP3, WAV, etc. The quality of the tapes is mediocre. I think I can handle the quality restoration, but getting the audio from tape to digital is my question. Below is a list of the hardware that I can work with: Cassette deck: I have a Technics stereo cassette deck, model RS-B12. It has separate left and right IN and OUT RCA-type jacks on the back. On the front it has a headphone phono jack, plus left and right mic input phono jacks. On the computer side: I have a Windows Vista PC with no additional software other than what came with the machine from Costco - no sound editing software that I can see. There is no sound card on the PC. On the front panel there is a mini-phono mic input jack, and there are several different types of in/out mini-phono jacks on the back, in addition to USB and FireWire. I also have access to a new (2009) iMac with a mini-phono input jack for a powered mic or other audio source, and GarageBand, which came with the computer, in addition to USB and FireWire. What are my options for getting these cassette recordings into a digital format? What's the best format? What sort of wires would I need, and will I want to use the USB or FireWire, or can I simply use the audio inputs on the PC (or Mac) to receive the audio stream?

    Read the article

  • Identify differences between MP3 files

    - by Thingomy
    I have 2 old, similar directory trees with MP3 files in them. I am happily using tools like diff and rsync to identify and merge the files that are only present on one side, or are identical; I'm left with a bunch of files that are bitwise different. On running diff over a pair of actually different files (with the -a flag to force text analysis) it produces incomprehensible gibberish. I have listened to files from both sides, and they both seem to play fine (but at nearly 10 minutes per song, when listening to them twice each, I haven't done many). I suspect the differences are due to some player in the past "enhancing" my collection by messing about with ID3 tags, but I can't be certain. Even if I identify differences in ID3 tags, I would like to confirm that no cosmic ray or file copy error has damaged any of the files. One method that occurs to me is finding the byte locations of the differences, and ignoring all changes in the first ~10kB of each file, but I don't know how to do this. I have on the order of a hundred or so files that differ across the directory tree. I found "How to compare mp3, flac audio data in a file, ignoring header data (ID3 tag) etc.?" -- but I can't run AllDup since I'm on Linux and it's Windows-only, and from the sounds of it, it would only partially solve my issues anyway.
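    Rather than guessing at byte offsets for the tags, a cleaner check is to hash only the decoded audio, which ignores ID3v1/ID3v2 data entirely; a sketch using ffmpeg (compare outputs from the same ffmpeg build, since different decoders can produce different PCM):

      # Prints an MD5 of the decoded audio stream only
      ffmpeg -v quiet -i old/song.mp3 -f md5 -
      ffmpeg -v quiet -i new/song.mp3 -f md5 -

    Identical hashes mean the two files differ only in tags or other non-audio data; differing hashes point at real audio damage.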

    Read the article
