Search Results

Search found 21142 results on 846 pages for 'bit manipulation'.

Page 620/846

  • "Unable to initialize module" fileinfo php-pecl-Fileinfo.x86_64

    - by Myers Network
    I have a brand new server that I am trying to get set up. It is a 64-bit machine on which I cannot get "fileinfo" or "memcache" installed. I have uninstalled and reinstalled them using yum and pecl with no luck. Yum installs fine with no error, but I then get an error when running PHP. As far as I can tell, pecl is only building 32-bit modules; it does not put anything in the lib64 directory. Here is my output from php -v:

      PHP Warning:  PHP Startup: fileinfo: Unable to initialize module
      Module compiled with module API=20050922, debug=0, thread-safety=0
      PHP compiled with module API=20060613, debug=0, thread-safety=0
      These options need to match in Unknown on line 0
      PHP Warning:  PHP Startup: memcache: Unable to initialize module
      Module compiled with module API=20050922, debug=0, thread-safety=0
      PHP compiled with module API=20060613, debug=0, thread-safety=0
      These options need to match in Unknown on line 0
      PHP 5.2.14 (cli) (built: Aug 12 2010 16:03:48)
      Copyright (c) 1997-2010 The PHP Group
      Zend Engine v2.2.0, Copyright (c) 1998-2010 Zend Technologies

    Here is some other system info in case you need it. uname:

      Linux server.actham.us 2.6.18-194.26.1.el5 #1 SMP Tue Nov 9 12:54:20 EST 2010 x86_64 x86_64 x86_64 GNU/Linux

    php -m (preceded by the same fileinfo and memcache startup warnings as above):

      [PHP Modules]
      bz2 calendar ctype curl date dbase dom exif filter ftp gd gettext gmp hash iconv imap json ldap libxml mbstring mcrypt mysql mysqli openssl pcntl pcre PDO pdo_mysql pdo_sqlite readline Reflection session shmop SimpleXML sockets SPL standard tokenizer wddx xml xmlreader xmlrpc xmlwriter xsl zip zlib
      [Zend Modules]

    Any help would be greatly appreciated, thanks.
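
    The API numbers in the warnings (module API=20050922 vs. PHP API=20060613) say the extensions were built against an older PHP than the one now running, so rebuilding them against the current PHP is the usual cure. A minimal check-and-rebuild sketch, assuming the pecl and phpize on the PATH belong to the 64-bit PHP 5.2.14 that is actually running (paths and package states here are assumptions, not taken from the post):

      # Confirm the module API number the running PHP expects
      php -i | grep 'PHP API'     # should report 20060613
      phpize --version            # the API numbers printed here must match

      # Rebuild the extensions against that PHP
      pecl uninstall fileinfo
      pecl uninstall memcache
      pecl install fileinfo
      pecl install memcache

      # Confirm where the freshly built .so files were placed
      php -i | grep extension_dir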

    Read the article

  • Logitech webcam device only recognised by one software, without drivers

    - by Ben Franchuk
    A couple of weeks ago I purchased a Logitech webcam at a garage sale; it did not come with any driver DVDs or anything like that. I plugged it in, turned on my computer, and continued work as usual. I did not get any drivers for the device at the time, and still have not. Recently, though, I started up an audio application named Cubase, only to find that it was picking up audio from... something. I checked my sound card, and everything else plugged into my computer, but couldn't find where in the world this audio was being picked up from. There were no microphones listed in the device managers, and no "unknown devices" or whatever. Everything seemed as it always was. Running out of ideas, I blew an air horn directly into the general area of the webcam, located directly in front of me. Sure enough, the audio peaked, indicating that the microphone was definitely in the webcam and that Cubase was somehow picking up its audio, even without drivers. The software lists the device as a "Universal USB Microphone". Adobe Audition, Soundbooth, and other audio applications cannot find the device. Why is it that this one program (Cubase) can use the device without a driver, while every other piece of software on the computer can't? Not even the operating system recognizes it. Windows 7 Professional x64.

    Read the article

  • Bash mine script, please

    - by HomelyPoet
    The script, in and of itself, is fairly self-explanatory. Use if You so desire; any and all criticism wouldst be appreciated, as wouldst any suggestions for improvement. First iteration was writ upon OS X 10.5.8 Leopard; current iteration was run upon OS X 10.6.4 Snow Leopard with Safari 5.0.2 (6533.18.5). Also, any illumination as to why the first line ' if [ -f ] ' works, but ' if [ -f ~/Library/Safari/LocalStorage/*.localstorage ] ' generates an error? [yes, I am a bit of a Noob] Code:

      #! /bin/bash
      # SafariClear0.0.6
      if [ -f ]
      then
          cat /dev/null > ~/Library/Safari/LocalStorage/*.localstorage
          rm -f ~/Library/Safari/LocalStorage/*.localstorage
      fi
      if [ -f ~/Library/Safari/LocalStorage/*.localstorage ]
      then
          echo "Oy vey!"
      fi
      cd ~/Library/Safari/
      cat /dev/null > WebpageIcons.db
      cat /dev/null > TopSites.plist
      cat /dev/null > LocationPermissions.plist
      cat /dev/null > LastSession.plist
      cat /dev/null > History.plist
      echo "Clear"
      exit
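
    On the side question: with a single argument, the test builtin only checks that the argument is a non-empty string, so ' [ -f ] ' succeeds because the literal string "-f" is non-empty. The unquoted glob, on the other hand, expands to every matching file name, and as soon as more than one file matches, ' [ -f ... ] ' receives more operands than it accepts and errors out. A minimal sketch of one safer pattern, assuming the same Safari paths (not verified on every OS X release):

      # Test and clear each matching file one at a time
      for f in ~/Library/Safari/LocalStorage/*.localstorage
      do
          if [ -f "$f" ]
          then
              cat /dev/null > "$f"
              rm -f "$f"
          fi
      done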

    Read the article

  • Convert DVD Movie to MPEG and view on PS3 via Windows Media Server 12

    - by Vidar
    I think the Apollo missions to the moon were easier than this! I have tried dozens of DVD ripping programs and media servers and have had limited success in converting all my DVDs into a file format that can be viewed on the PS3. I have also been on dozens of forums and it's all getting a bit confusing: some advice is out of date, some software is no longer updated, updates have been applied to the PS3 operating system and to Windows, and so on. There has to be a way to get all this knowledge and information in one up-to-date place so people can do the same thing as me. Can anyone give me some definitive software and/or advice to do the following? I have over 200 DVDs and I want to convert these to VOB files (renamed to MPEG so WMS can stream them), store them on a hard disk, and view them via Windows Media Server 12 (Windows 7). I will then be able to watch them via my PS3 in the lounge and never have to get out another DVD case again. I don't want to encode to any other format like MP4 with H.264, because I would lose some of the original quality, so MPEG-2 is fine for me. Note: I have been using DVD Shrink, but it sometimes gives odd results. The main problem is that once the DVD has been ripped, WMS shows the wrong playing length for the film; if I use VLC Media Player, however, it plays through the whole film fine. This is obviously no good when it comes to streaming on the PS3.

    Read the article

  • How to speed up a HP M9517C

    - by Jen
    I bought a system with 8GB RAM, a 1TB HD, a quad-core AMD Phenom 9550, an Nvidia GeForce 9300GE, and 64-bit Windows Vista. I bought it primarily because it was cheap and came with a 25.5 inch screen. Problem: it's slow - if you can believe it. My Dell 1525 laptop is faster and more stable! I tried installing and dual-booting Linux Mint and ran into video and audio trouble. I need fast and stable, and I'm going for awesome. Does anyone have suggestions for making this thing smoking hot? Vista is fine, but it slows over time - I suspect viruses/spyware/etc. I need to use Photoshop, Fireworks, Dreamweaver, and Illustrator; I've tried the alternatives and I just don't like them. When you've got deadlines looming you want to work with what you know. I also use Skype (I had audio problems with it in Linux), GoToMeeting, and GoToWebinar. I don't need MS Office. I've tried VMware and VirtualBox, and again I keep getting audio/video problems. I'd love someone's input on THEIR setup and how they got there. I'm sure I need to upgrade my video card, but what should I go to?

    Read the article

  • ProCurve network expansion

    - by Blue Warrior NFB
    I've hit a bit of a wall with our network scale-out. As it stands right now, we have five ProCurve 2910al switches connected as above, but with 10GbE connections (two CX4, two fiber). This fully populates the central switch above; there will be no more 10GbE connections from that device. This group of switches is not stacked (no stack directive). Sometime in the next two or three months I'll need to add a sixth switch, and I'm not sure how deep a hole I'm in. Ideally I'd replace the core switch with something more capable that has more 10GbE ports; however, that's a major outage and requires special scheduling. The two edge switches connected via fiber have dual-port 10GbE cards in them, so I could physically put another switch on the far end of one of those. I don't know how good or bad an idea that would be, though. Is that too many segments between end-points? Some excerpts from the running configuration:

      ; J9147A Configuration Editor; Created on release #W.14.49
      hostname "REDACTED-SW01"
      time timezone 120
      module 1 type J9147A
      module 2 type J9008A
      module 3 type J9149A
      no stack
      trunk B1 Trk3 Trunk
      trunk B2 Trk4 Trunk
      trunk A1 Trk11 Trunk
      trunk A2 Trk12 Trunk
      vlan 15
         name "VM-MGMT"
         untagged Trk2,Trk5,Trk7
         ip helper-address 10.1.10.4
         ip address 10.1.11.1 255.255.255.0
         tagged 37-40,Trk3-Trk4,Trk11-Trk12
         jumbo
         ip proxy-arp
         exit

    Read the article

  • Apache2 MPM-prefork MPM-worker multiple instances on same Ubuntu host machine possible?

    - by user60985
    I have a live Apache2/MPM-worker instance running Django. I also want to run an Apache2/MPM-prefork instance on the same host machine to serve some Drupal 6 applications and make use of the wide selection of PHP modules that run under the prefork model. I plan to have my MPM-worker instance reverse proxy to the Apache2 prefork instance for URLs starting with myhost.com/drupal6/. It seems theoretically doable: the second, prefork instance would be configured to listen on an internal port, say 127.0.0.1:8080, and my current worker instance would ProxyPass and ProxyPassReverse to it for the 'drupal6' URLs. However, how do I compile or install the prefork build so that its executable has a different name than /usr/sbin/apache2, for example /usr/sbin/apache2p; so that apache2ctl has a different name, say apache2pctl; so that apache2pctl invokes /usr/sbin/apache2p instead of /usr/sbin/apache2; and so on down the line (e.g. /etc/apache2p), letting me start and restart the two instances independently? As I understand it, no single apache2 executable can be compiled with both the prefork and worker MPMs, so it seems I need two separate builds, one per MPM, and then I need to invoke and control them by separate names. I looked at the configure options for apache2 and I am a bit queasy about compiling a second, prefork version because I am not sure I can set all the options so that none of my current apache2 files is overwritten. Is there a way? Is there a standard solution for installing and controlling prefork and worker apache2 executables separately on the same machine without them stepping on each other during installation or operation?
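
    One sketch of how this could look (paths, the port, and the source-build approach are assumptions, not a tested recipe): build a second httpd from source with the prefork MPM under its own prefix, so that its binary, apachectl, config tree, and logs all live apart from the packaged worker instance.

      # From an Apache 2.2 source tree; the prefix is arbitrary and illustrative
      ./configure --prefix=/opt/apache2-prefork --with-mpm=prefork
      make && sudo make install

      # Its own config lives in /opt/apache2-prefork/conf/httpd.conf,
      # set there to Listen 127.0.0.1:8080
      sudo /opt/apache2-prefork/bin/apachectl start
      sudo /opt/apache2-prefork/bin/apachectl stop

    The packaged /usr/sbin/apache2 (worker) then keeps its normal name and init script, and simply reverse-proxies the /drupal6/ URLs to 127.0.0.1:8080.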

    Read the article

  • Changing Domain Name DNS to Redirect web traffic to one server, and leave mail to original server

    - by David S
    Hi there. OK, I'm quite the idiot with DNS, apart from the basics. I have a domain name hosted with a domain registrar, which seems to give full DNS control (i.e. the ability to view/edit A records, mail, etc.). We have recently set up a server at Rackspace which hosts the new website. The original/existing server (where the old website and the mail still are) is on another shared hosting company's server. I went to the domain registrar and checked out the DNS management (click here to view the DNS screenshot). So obviously the A record is pointing to the actual server where the website/mail is, I figure, and the CNAME is pointing (an alias?) to the website URL. So my question is this: if I want the web traffic portion to go to the Rackspace/new server, but keep the mail going to where it is now, what do I have to change? Also, should I even change this info at the domain registrar? The Rackspace server account has full DNS, which seems to suggest I can point to their nameservers and then redirect the MX (mail) traffic to where the mail server is. Sorry if that was a bit confusing - obviously in need of DNS training ;) Any help very appreciated. David.

    Read the article

  • Limiting interface bandwidth with tc under Linux

    - by Matt
    I have a Linux router which has a 10GbE interface on the outside and bonded gigabit ethernet interfaces on the inside. We currently have budget for 2Gbit/s. If we exceed that rate by more than a 5% average for a month, then we'll be charged for the whole 10Gbit/s capacity - quite a step up in dollar terms. So I want to limit this to 2Gbit/s on the 10GbE interface. A TBF filter might be ideal, but this comment is of concern:

      On all platforms except for Alpha, it is able to shape up to 1mbit/s of normal traffic with ideal minimal burstiness, sending out data exactly at the configured rates.

    Should I be using TBF or some other filter to apply this rate to the interface, and how would I do it? I don't understand the example given in the Traffic Control HOWTO, in particular "Example 9. Creating a 256kbit/s TBF":

      tc qdisc add dev eth0 handle 1:0 root dsmark indices 1 default_index 0
      tc qdisc add dev eth0 handle 2:0 parent 1:0 tbf burst 20480 limit 20480 mtu 1514 rate 32000bps

    How is the 256kbit/s rate calculated? In this example, 32000bps = 32k bytes per second, since tc uses bps to mean bytes per second. (This is not a mistake - I tested it, and it gave a rate close to 256k but not exactly that.) I guess burst and limit come into play, but how would you go about choosing sensible numbers to reach the desired rate?
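
    For reference on the arithmetic: 32,000 bytes per second times 8 bits per byte is 256 kbit/s, which is where the example's figure comes from. A minimal sketch of a 2Gbit/s TBF on the 10GbE interface (the interface name, burst size, and latency are illustrative assumptions that would need tuning, not tested values):

      # eth1 assumed to be the outside 10GbE interface
      tc qdisc add dev eth1 root tbf rate 2gbit burst 2mb latency 50ms

      # inspect the qdisc and its byte/packet counters afterwards
      tc -s qdisc show dev eth1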

    Read the article

  • Dell Studio 17 - turning off suddenly

    - by studiohack
    I have a Dell Studio 17 laptop, a refurbished model almost 2 years old. It is currently running 32-bit Windows 7 Home Premium; it originally shipped with Vista and was upgraded to 7 via a clean install. A while back, while still running Vista, a problem developed where it would suddenly just turn off - no warnings, messages, anything. It was as if I had taken the battery out and then unplugged it from the wall. Just like that. Over the several months (or more) that this has been happening, I've observed a few things. First, it only seems to happen when I'm doing memory-intensive things, such as watching an online video full screen or running many applications in the background. Second, I can tell when it is about to "flip", as I've termed it, when the fan starts running - the computer gets really hot in places. Anyway, I'm pretty sure this is a hardware problem, because it still exists even after the Vista-to-7 upgrade. Is that true? Hardware vs. software? Is there anything I can do to fix this? Is it just a specific component, or what? What do you recommend? Thanks!

    Read the article

  • Passenger not booting Rails App

    - by firecall
    I'm at the end of my ability, so it's time to ask for help. My hosting company is moving me to a new server. I've got my own VPS; it's a fresh CentOS 5 install with Plesk 9.5.2. Essentially, Passenger just doesn't seem to be booting the Rails app - it's as if it doesn't see that there is a Rails app to be booted. I've got Rails 3.0 installed with Ruby 1.9.2 built from source, and I can run bundle install and that works. I currently have Passenger 3 RC1 installed, as per here, but have tried v2 as well. My conf/vhost.conf file looks like this:

      DocumentRoot /var/www/vhosts/foosite.com.au/httpdocs/public/
      RackEnv development
      #Options Indexes

    I've got an /etc/httpd/conf.d/passenger.conf file which looks like this:

      LoadModule passenger_module /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.0.pre4/ext/apache2/mod_passenger.so
      PassengerRoot /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.0.pre4
      PassengerRuby /usr/local/bin/ruby
      PassengerLogLevel 2

    All I get is a 403 Forbidden, or the directory listing if I enable Indexes. I don't know what else to do! Yikes. There's nothing in the Apache error log that I can see. The new server admin isn't much help; I think he's a bit junior, and he says he doesn't know about Rails... sigh :/ I'm a programmer, and server admin isn't my bag :(

    Read the article

  • Ubuntu 11.10 VirtualBox Unity 3D not working

    - by naveen
    After struggling for four hours, I still cannot get the Unity 3D desktop (Gnome 3) to work in my VirtualBox guest - I have been pouring through Internet and forum posts, but to no avail. Here's what I've done so far:

      VirtualBox 4.1.4r74921 on Windows 7
      Installed Ubuntu Desktop 11.10 (32-bit)
      Enabled 3D acceleration
      Allocated 1.5GB of RAM
      Allocated 50MB video memory (hope this is not the culprit)
      Installed Guest Additions 4.1.4
      Ran apt-get update and apt-get upgrade

    After booting back into Ubuntu, it still falls back to Unity 2D. Shared folders, mouse integration and so on all work, so the Guest Additions are properly installed. I ran the support test and below is the output:

      /usr/lib/nux/unity_support_test -p
      OpenGL vendor string:   Mesa Project
      OpenGL renderer string: Software Rasterizer
      OpenGL version string:  2.1 Mesa 7.11

      Not software rendered:    no
      Not blacklisted:          yes
      GLX fbconfig:             yes
      GLX texture from pixmap:  no
      GL npot or rect textures: yes
      GL vertex program:        yes
      GL fragment program:      yes
      GL vertex buffer object:  yes
      GL framebuffer object:    yes
      GL version is 1.4+:       yes

      Unity 3D supported:       no

    I am trying to find out what the "no" means but cannot find any good answers. The host is an Intel Core i5 processor with 4GB of RAM and an NVIDIA GeForce 8400GS display adapter. Is anyone else facing the same problem? If so, can you point me to a solution or any reference where I can find one?
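
    On the video memory point: the "OpenGL renderer string: Software Rasterizer" and "Not software rendered: no" lines show the guest is falling back to Mesa's software rasterizer rather than the VirtualBox 3D driver, and a too-small VRAM allocation is one plausible contributor. A sketch of raising it from the host side (the VM name "Ubuntu" is a placeholder; the VM must be powered off first):

      # Run on the Windows host, from the VirtualBox install directory if VBoxManage is not on the PATH
      VBoxManage modifyvm "Ubuntu" --vram 128
      VBoxManage modifyvm "Ubuntu" --accelerate3d on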

    Read the article

  • Is there a way to extract a "private certificate key" from Chrome and import it into Firefox?

    - by user58871
    This is a classic Catch-22 situation. I was using online banking the other day under Chrome. I had to order a digital certificate so that I could extend my privileges. The stupid thing is that when I got approved and opened the certificate installation menu, I saw only versions for IE/Firefox available. What the heck, I said, and chose FF - the result I got was Error 202 - ERR:CERT:INVALID. I opened FF, went to the same page, and tried to install the thing from there, but got a message basically saying that I must have been given a private key which FF obviously cannot find. I read a bit, and it turns out that I really must have been given such a key, but only in the browser I ordered the cert with, i.e. Chrome. The worst thing is that if I deactivate my order and reissue a new cert, this time from FF, I MUST go to a bank office (!!! WTF), but I am currently studying abroad, so I can't just go back. Is there a way that I could extract that key from Chrome's profile and import it into FF under Windows? I would be glad to know.

    Read the article

  • How do I switch java versions to an earlier version in Fedora 17?

    - by JHutson456
    I just installed Fedora 17. I'm setting up the Android build environment and need Java. I downloaded and installed jdk-6u32-linux-amd64.rpm, ran java -version, and it spat out the correct version. Well, a day or two later I tried my first compile in Fedora 17 and it complained about Java and failed. I ran java -version again and, lo and behold, it spits out:

      $ java -version
      java version "1.7.0_03-icedtea"
      OpenJDK Runtime Environment (fedora-2.1.fc17.7-x86_64)
      OpenJDK 64-Bit Server VM (build 22.0-b10, mixed mode)

    I'm stumped. I mean, I've run the update/upgrade commands since I installed, but I didn't think that updated full version revisions... So I ran alternatives --config java, and that only offered the Java 1.7 version. While digging around more, I discovered that the recommended version of Java for the build environment is jdk-6u27-linux-x64-rpm.bin, so I downloaded that from the Oracle download site. When I ran sudo sh jdk-6u27-linux-x64-rpm.bin it returned:

      Unpacking...
      Checksumming...
      Extracting...
      UnZipSFX 5.50 of 17 February 2002, by Info-ZIP ([email protected]).
        inflating: jdk-6u27-linux-amd64.rpm
        inflating: sun-javadb-common-10.6.2-1.1.i386.rpm
        inflating: sun-javadb-core-10.6.2-1.1.i386.rpm
        inflating: sun-javadb-client-10.6.2-1.1.i386.rpm
        inflating: sun-javadb-demo-10.6.2-1.1.i386.rpm
        inflating: sun-javadb-docs-10.6.2-1.1.i386.rpm
        inflating: sun-javadb-javadoc-10.6.2-1.1.i386.rpm
      Preparing...                ########################################### [100%]
              package jdk-2000:1.6.0_32-fcs.x86_64 (which is newer than jdk-2000:1.6.0_27-fcs.x86_64) is already installed
      Done.

    So now I'm confused. I ran alternatives --config java again, but it still only returns 1.7, so I don't know what to do. I want to end up with 6u27 as the installed and functional version of the JDK. Thank you.
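
    One thing the output above shows is that the 1.6.0_32 JDK RPM is already installed; the alternatives system just has no entry for it, which is why only OpenJDK 1.7 is offered. A sketch of registering it by hand (the /usr/java/jdk1.6.0_32 path is an assumption - check what actually exists under /usr/java):

      # Register the Oracle JDK with the alternatives system
      sudo alternatives --install /usr/bin/java java /usr/java/jdk1.6.0_32/bin/java 20000
      sudo alternatives --install /usr/bin/javac javac /usr/java/jdk1.6.0_32/bin/javac 20000

      # Both OpenJDK 1.7 and the Oracle 1.6 JDK should now be listed here
      sudo alternatives --config java
      java -version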

    Read the article

  • Connect bluetooth headphones both to PC and phone at the same time

    - by Sergiy Byelozyorov
    I have recently bought a Philips SHB6110. An extract from page 13 of the manual:

      Therefore you can connect your Bluetooth stereo headset with a Bluetooth stereo enabled phone to both listen to music and lead calls, or with a Bluetooth phone that does not support Bluetooth stereo (A2DP) to lead calls and at the same time to a Bluetooth audio device (Bluetooth enabled MP3 player, Bluetooth audio adapter etc.) to listen to music. Make sure to pair the phone first with your Bluetooth headset, then turn both the phone and headset off to then pair the Bluetooth audio device. With the SwitchStream feature you can listen to music and monitor your calls at the same time. Even while listening to music, you will hear a ring tone when receiving a call and can switch to the call simply by tapping the button.

    The manual, however, doesn't specify how to connect to both devices at the same time. I use a Toshiba Satellite Pro P300-1CG laptop with a Belkin Mini Bluetooth Adapter and a Nokia N95 phone. The operating system is Windows 7 64-bit and I have Skype installed. Both the phone and the computer can be used for listening to music and talking on the phone (on the PC via Skype). The best solution would be if I could connect to the PC and the phone at the same time and monitor both mobile and Skype calls while listening to music from Winamp. If that is not possible, then I would like at least to be able to listen to music from the PC while monitoring calls from the mobile. So, please tell me: how do I connect both the PC and the phone to the headphones?

    Read the article

  • Problem configuring virtual host.

    - by Zeeshan Rang
    I am trying to configure an Apache virtual host for my computer, but I am running into problems doing so. I have made the required changes in C:\WINDOWS\system32\drivers\etc\hosts and then in C:\xampp\apache\conf\extra\httpd-vhosts.conf. I added the following lines to httpd-vhosts.conf:

      ########################Virtual Hosts Config below##################
      NameVirtualHost 127.0.0.1

      <VirtualHost localhost>
          ServerName localhost
          DocumentRoot "C:\xampp\htdocs"
          DirectoryIndex index.php index.html
          <Directory "C:\xampp\htdocs">
              AllowOverride All
          </Directory>
      </VirtualHost>

      <VirtualHost virtual.cloudse7en.com>
          ServerName virtual.cloudse7en.com
          DocumentRoot "C:\development\virtual.cloudse7en.com\httpdocs"
          DirectoryIndex index.php index.html
          <Directory "C:\development\virtual.cloudse7en.com\httpdocs">
              Options Indexes FollowSymLinks Includes ExecCGI
              AllowOverride All
              Order allow,deny
              Allow from all
          </Directory>
      </VirtualHost>

      <VirtualHost virtual.app.cloudse7en.com>
          ServerName virtual.app.cloudse7en.com
          DocumentRoot "C:\development\virtual.app.cloudse7en.com\httpdocs"
          DirectoryIndex index.php index.html
          <Directory "C:\development\virtual.app.cloudse7en.com\httpdocs">
              Options Indexes FollowSymLinks Includes ExecCGI
              AllowOverride All
              Order allow,deny
              Allow from all
          </Directory>
      </VirtualHost>
      ########################################################################

    I started XAMPP and tried http://localhost in a browser. This works and opens http://localhost/xampp/, but when I try http://virtual.app.cloudse7en.com it again opens http://virtual.app.cloudse7en.com/xampp/. I do not understand the reason. Also, I am on 64-bit Windows Vista - do I need to make some other changes too? Regards, Zee

    Read the article

  • OS X mouse pointer speed varies with different mouse

    - by Stan
    OS X Snow Leopard. It seems that different mice on OS X can have different pointer and scrolling speeds. For example, when using my basic Logitech laser mouse, the pointer speed is normal, but when using an MX Performance or MX Anywhere it's very slow, and I have to raise the pointer speed in the mouse configuration to the maximum. Even at the maximum it's still a bit slow. Basically, plug and play for mice on OS X feels terrible to me - I have to re-adapt every single time, which is not the case on Windows. The scrolling speed also varies with each mouse, and is usually very slow - typically one line at a time. If I adjust it in the mouse configuration, it then scrolls too many lines. I have the official Logitech mouse driver (LCC) installed, but tuning it either in LCC or in the mouse configuration doesn't make things better. Has anyone had a similar issue? How can I resolve it? Please advise, thanks.

    Read the article

  • Openfire on EC2 with Jingle

    - by Bjorn Roche
    I would like to run Openfire (or another XMPP server) on EC2. At the moment this is just for testing, so easy setup and configuration are important, as is low cost. At some point, however, if things go well, it will be important to scale this. Ideally it would be nice not to have to switch software when the scaling happens, but if a switch needs to happen later it certainly can. My requirements are: basic XMPP services, including MUC and pubsub. Logins controlled from an external API: preferably, when a user attempts to connect, the XMPP server checks with the API to see if their username and password are correct, but I can also have the API keep the XMPP server up to date on new users, deleted users, password changes and so on. I see Openfire has a "user service" API - not ideal, but it looks workable. Jingle, including relay and STUN. It's not at all clear to me whether the Jingle Nodes plugin takes care of this. I'm a bit confused about what's required to set this up, and I'd rather know in advance than be confused along the way :) - e.g. it seems like STUN servers require more than one IP address. Can Openfire do all this for me, including STUN and media relay, on a single machine? Is this hard to configure on EC2 with Openfire? What are the basic steps? Would this be easier with something else, say Tigase? What about the database - should I use Amazon's database service, or run a DB on the same machine? Would the server be compatible with a service like http://www.siteuptime.com/? Thanks!

    Read the article

  • I keep losing wireless connection

    - by posfan12
    I have a WRT54GL v1.1 wireless router and a WUSB54G v4 wireless adapter, both made by Linksys. The router is in the living room by the TV and my computer is in the bedroom. My ISP is Brighthouse. System specs:

      Operating System: Microsoft Windows 7 Home Premium 64-bit SP1
      CPU: Intel Core 2 Duo E6600 @ 2.40GHz, 36 °C (Conroe, 65nm)
      RAM: 3.00GB Single-Channel DDR2 @ 333MHz (5-4-4-14)
      Motherboard: eMachines EMCP73VT-PM (CPU 1), 26 °C
      Graphics: ASUS VS247 (1920x1080@60Hz), 767MB GeForce GTX 460 (nVidia), 43 °C
      Hard Drives: 466GB Seagate ST350041 8AS SCSI Disk Device (SATA), 35 °C
      Optical Drives: HL-DT-ST DVDRAM GH41N SCSI CdRom Device
      Audio: High Definition Audio Device

    The problem is that my Internet connection will work fine for 15 minutes or so, and then the data just stops flowing. Windows says I am still connected, and the systray icon still shows five bars, but Comodo Firewall stops showing up and down traffic, and another of my systray applications complains about a lack of connection. What I usually do is either disconnect from the network manually, or unplug and re-plug the USB adapter, at which point the connection works properly for another 15 minutes. I've tried unplugging my router for 30 seconds and letting it reboot. I've also looked for a newer driver for my adapter, but I seem to have the latest version, 3.1.3.0. This is a recent problem that started about a week ago; for the previous several months things were working just fine. I haven't made any changes to my system that I am aware of. The only thing I did was open my case to blow the dust out of it, then put everything back together. How do I fix this issue?

    Read the article

  • Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host

    - by Paul J. Warner
    I am having an issue with a program where, after 6 minutes plus or minus 5 seconds, we get the exception named in the title. More information from the exception stack traces is below. This all happens pretty religiously: 6 minutes go by, and bam, the following three exceptions. We have the application installed in two other environments and it is working fine there. I am hoping to find some server settings, either in IIS 6 or Windows Server 2003, that may be causing this issue to occur. I have reviewed some of the similar questions and don't see very many answers; I am hoping the information I have provided may help a little bit.

      208741,Exception,,,,2011-06-21 00:30:14.193,SERVERNAME,2624,1,CLIENTNAME,The underlying connection was closed: An unexpected error occurred on a receive.
         at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request)
         at System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request)
         at Microsoft.Web.Services3.WebServicesClientProtocol.GetResponse(WebRequest request, IAsyncResult result)
         at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
         at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
         at System.Net.FixedSizeReader.ReadPacket(Byte[] buffer, Int32 offset, Int32 count)
         at System.Net.Security._SslStream.StartFrameHeader(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
         at System.Net.Security._SslStream.StartReading(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
         at System.Net.Security._SslStream.ProcessRead(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
         at System.Net.TlsStream.Read(Byte[] buffer, Int32 offset, Int32 size)
         at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size)
         at System.Net.Connection.SyncRead(HttpWebRequest request, Boolean userRetrievedStream, Boolean probeRead),2004437127,114,1

      208742,Exception,,,,2011-06-21 00:30:14.227,SERVERNAME,2624,1,CLIENTNAME,Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
         at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
         at System.Net.FixedSizeReader.ReadPacket(Byte[] buffer, Int32 offset, Int32 count)
         at System.Net.Security._SslStream.StartFrameHeader(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
         at System.Net.Security._SslStream.StartReading(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
         at System.Net.Security._SslStream.ProcessRead(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
         at System.Net.TlsStream.Read(Byte[] buffer, Int32 offset, Int32 size)
         at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size)
         at System.Net.Connection.SyncRead(HttpWebRequest request, Boolean userRetrievedStream, Boolean probeRead),2004437127,114,1

      208743,Exception,,,,2011-06-21 00:30:14.287,SERVERNAME,2624,1,CLIENTNAME,An existing connection was forcibly closed by the remote host
         at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size),-691097507,62,1

    Read the article

  • Error configuring kerberos5 using MacPorts

    - by ario
    While trying to install libmemcached via MacPorts, I hit the following issue:

      libmemcached @0.40 +universal
      --->  Computing dependencies for libmemcached
      --->  Dependencies to be installed: cyrus-sasl2 kerberos5
      --->  Configuring kerberos5
      Error: org.macports.configure for port kerberos5 returned: configure failure: command execution failed
      Error: Failed to install kerberos5

    It tells me to look in the log for details. Here's the last bit of the log file:

      :info:configure checking for setupterm in -lcurses... no
      :info:configure checking for setupterm in -lncurses... no
      :info:configure checking for tgetent... no
      :info:configure configure: error: Could not find tgetent; are you missing a curses/ncurses library?
      :info:configure configure: error: /bin/sh './configure' failed for appl/telnet
      :info:configure Command failed: cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_net_kerberos5/kerberos5/work/krb5-1.7.2/src" && ./configure --prefix=/opt/local --disable-dependency-tracking --mandir=/opt/local/share/man
      :info:configure Exit code: 1
      :error:configure org.macports.configure for port kerberos5 returned: configure failure: command execution failed
      :debug:configure Error code: NONE
      :debug:configure Backtrace: configure failure: command execution failed
          while executing
      "$procedure $targetname"
      :info:configure Warning: targets not executed for kerberos5: org.macports.activate org.macports.configure org.macports.build org.macports.destroot org.macports.install
      :error:configure Failed to install kerberos5
      :debug:configure Registry error: kerberos5 not registered as installed & active.
          invoked from within
      "registry_active ${subport}"
          invoked from within
      "$workername eval registry_active \${subport}"
      :notice:configure Please see the log file for port kerberos5 for details: /opt/local/var/macports/logs/_opt_local_var_macports_sources_rsync.macports.org_release_ports_net_kerberos5/kerberos5/main.log

    It seems to say it's missing ncurses. Looks like it's there though, since if I run port installed I see these:

      ncurses @5.7_0
      ncurses @5.9_1 (active)
      ncursesw @5.7_0

    Any ideas on how to get around this error?

    Read the article

  • Scheduled task does not run unattended on a Windows 2003 server on VMware, runs fine otherwise

    - by lnm
    A scheduled task does not run on a Windows 2003 server on VMware; the same setup runs fine on a standalone server. The test below explains the problem. We really need to run a more complex bat file, but this shows the issue. I have a bat file that copies a file from server A to server B. I use full path names, no drive mapping. It runs fine on server B from a command prompt. I created a task that runs this bat file under a domain ID with password that is part of the administrators group on both servers. The task runs fine from the Scheduled Tasks screen, and as a scheduled task as long as somebody is logged into the server. If nobody is logged in, the task does not run. There is no error message in the Task Scheduler log, just an entry that the task started, but no entry for a finish or an error code. To add insult to injury, if the task copies a file in the opposite direction, from server B to server A, it runs fine as a scheduled unattended task. If I copy a file from server B to server B, the task also runs fine unattended. I recreated exactly the same setup on a standalone server - no issues at all. I checked the obvious things: the task has "run only if logged in" unchecked, the domain ID has the "log on as a batch job" privilege and logon rights, and the Task Scheduler service runs as Local System with automatic start. Any suggestions?

    Read the article

  • Strange corruption saving from Textpad 5 within Windows 7-64 VirtualBox VM to shared folder with Mac host

    - by joelarson
    I have a fairly new Windows 7 64-bit install running in VirtualBox on a MacBook Pro. I'm using TextPad 5 within that environment to edit source files that live on a shared folder on the Mac host. When I save some of these source files, the saved file ends up with some amount of the end of the file repeated one or more times. For example, a file that ends with:

      ...
      return ttp;
      };

    would, once saved, open up with:

      ...
      return ttp;
      };
      };

    It is definitely a problem with how the file gets written, as opposed to how it's read, because I see this no matter what app I open the file with (Notepad and Word in Windows 7, TextWrangler back on the Mac). I've tried saving as ANSI and UTF-8, and with or without 'Write Unicode and UTF-8 BOM' checked in the TextPad preferences. It doesn't happen with all files, though I can't see any pattern in which files do or don't have the problem. It doesn't happen with files written to the Windows 7 C:\ drive, and so far it doesn't happen with other applications saving files - only TextPad. Any ideas? My versions:

      TextPad 5.4.2
      Windows 7 Professional 64-bit, fully up to date
      VirtualBox 4.0.8 r71778
      OS X 10.6.7

    Read the article

  • btrfs: can I create a btrfs file system with data as JBOD and metadata mirrored?

    - by Yogi
    I am trying to build a home server that will be my NAS/media server as well as the XBMC front end. I am planning on using Ubuntu with btrfs for the NAS part of it. The current setup consists of a 1TB HDD for the OS etc. and two 2TB HDDs for data. I plan to have the 2TB HDDs set up as a JBOD-style btrfs filesystem to which I can add drives as needed later, basically growing the filesystem online. The way I set up the filesystem for testing was to have only one of the HDDs connected while installing the OS, with btrfs on it mounted as /data, and then to add the second HDD to the filesystem later. When the second disk was added, btrfs made the data RAID 0, with metadata as RAID 1. However, this presents a problem: if either disk fails, I lose all my data (mostly media). Also, most of the time the server will be running without doing any disk access, i.e. the HDDs can be spun down; with the current RAID 0 setup, any access request spins up both disks, whereas with a JBOD layout only the disk that holds the file needs to spin up, which should hopefully reduce wear on each disk. So, is there a way in which I can have btrfs set up such that metadata is mirrored but data stays in a JBOD formation? Another question: I understand that a full drive failure in JBOD loses the data on that drive, but with metadata mirrored across all drives, will that help the filesystem correct errors that might creep in (e.g. bit rot), and is btrfs capable of doing this?
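
    For what it's worth, btrfs calls the non-striped, JBOD-style data layout "single", and the metadata profile can be chosen independently of it, so the combination being asked about can be expressed at mkfs time. A minimal sketch (device names are placeholders for the two 2TB drives):

      # Data concatenated across drives, metadata mirrored on both
      mkfs.btrfs -d single -m raid1 /dev/sdb /dev/sdc
      mount /dev/sdb /data

      # Later, grow the filesystem online with another drive and rebalance
      btrfs device add /dev/sdd /data
      btrfs filesystem balance /data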

    Read the article

  • How to write re-usable puppet definitions?

    - by Oliver Probst
    I'd like to write a Puppet manifest to install and configure an application on target servers. Parts of this manifest shall be re-usable, so I used define for the re-usable functionality. Doing so, I always run into the problem that there are parts of the definition which are not re-usable. A simple example is a bunch of configuration files to be created. These files must be placed in the same directory, and this directory must be created only once. Example:

      nodes.pp

        node 'myNode.in.a.domain' {
          mymodule::addconfig { 'configfile1.xml':
            param => 'somevalue',
          }
          mymodule::addconfig { 'configfile2.xml':
            param => 'someothervalue',
          }
        }

      mymodule.pp

        define mymodule::addconfig ($param) {
          $config_dir = "/the/directory/"

          # ensure that the directory exists:
          file { $config_dir:
            ensure => directory,
          }

          # create the configuration file:
          file { $name:
            path    => "${config_dir}/${name}",
            content => template('a_template.erb'),
            require => File[$config_dir],
          }
        }

    This example will fail, because the resource file { $config_dir: ... } is now declared twice. As far as I understand, it is necessary to extract these parts into a class. Then it looks like this:

      nodes.pp

        node 'myNode.in.a.domain' {
          class { 'mymodule::createConfigurationDirectory': }

          mymodule::addconfig { 'configfile1.xml':
            param   => 'somevalue',
            require => Class['mymodule::createConfigurationDirectory'],
          }
          mymodule::addconfig { 'configfile2.xml':
            param   => 'someothervalue',
            require => Class['mymodule::createConfigurationDirectory'],
          }
        }

    But this makes my interface hard to use: every user of my module has to know that there is an additional class which is also required. For this simple use case the additional class might be acceptable, but with growing module complexity (lots of definitions) I'm a bit afraid of confusing the module's users. So I'd like to know whether there is a better way to handle these dependencies. Ideally, classes like createConfigurationDirectory would be hidden from users of the module's API. Or are there other best practices or patterns for handling such dependencies?

    Read the article
