Search Results

Search found 21142 results on 846 pages for 'bit manipulation'.

Page 615/846 | < Previous Page | 611 612 613 614 615 616 617 618 619 620 621 622  | Next Page >

  • How can we configure the Bitnami Joomla stack to open a socket on startup?

    - by bobo
    I have deployed the Bitnami Ubuntu Joomla! 3.1.5-2 (64-bit) stack on Amazon Cloud: http://bitnami.com/stack/joomla/cloud/amazon By default, the stack is configured to run PHP using PHP-FPM. I have no problem getting Joomla and phpMyAdmin running as virtual hosts on Apache. But now, I would like to add another virtual host. The problem I am having is that I have no idea how to get the system to create a socket on startup in the following folder: bitnami@ip-172-31-15-99:/opt/bitnami/php/var/run$ ls -al total 12 drwxr-xr-x 2 root root 4096 Nov 3 20:43 . drwxr-xr-x 4 root root 4096 Oct 9 15:39 .. srw-rw-rw- 1 root root 0 Nov 3 20:43 joomla.sock -rw-r--r-- 1 root root 4 Nov 3 20:43 php5-fpm.pid srw-rw-rw- 1 root root 0 Nov 3 20:43 phpmyadmin.sock srw-rw-rw- 1 root root 0 Nov 3 20:43 www.sock bitnami@ip-172-31-15-99:/opt/bitnami/php/var/run$ I have the following /opt/bitnami/apps/mywebsite/conf/php-fpm/pool.conf file: [mywebsite] listen=/opt/bitnami/php/var/run/mywebsite.sock include=/opt/bitnami/php/etc/common-dynamic.conf include=/opt/bitnami/apps/mywebsite/conf/php-fpm/php-settings.conf pm=dynamic As can be seen, listen points to mywebsite.sock, which does not currently exist. As an experiment, I removed the .sock files in the /opt/bitnami/php/var/run folder, and they came back on reboot. So how can we configure it to open a socket for mywebsite on startup?
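
    A minimal sketch of one way to get that socket created at boot, assuming the Bitnami layout shown above and assuming per-application pools are pulled in from the main php-fpm.conf (check how the existing joomla and phpmyadmin pools are included and mirror that):

        # make the main PHP-FPM configuration load the new pool (path is an assumption; adjust to your stack)
        sudo sh -c 'echo "include=/opt/bitnami/apps/mywebsite/conf/php-fpm/pool.conf" >> /opt/bitnami/php/etc/php-fpm.conf'
        sudo /opt/bitnami/ctlscript.sh restart php-fpm
        ls -l /opt/bitnami/php/var/run/mywebsite.sock   # the socket should now appear on every (re)boot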

    Read the article

  • Webmin / Virtualmin running PHP as www-data is locked out of viewing .htaccess and writing

    - by Kirill
    I've asked this on the Virtualmin forums, but haven't had any help from there. Recently, "something" happened and it seems that the Apache service has gone a bit weird. What it does: it runs all Apache traffic as www-data and sometimes spawns the php5-cgi process as www-data. This is a problem because all the domain users own their directories, and the default permissions don't let www-data write to these folders (file uploads are dead) or read .htaccess (permalinks are broken in WordPress). I've googled this for about a week straight now, tried pretty much everything I could find, and achieved nothing. The only thing that I think might actually be the cause of all this is this page: http:// - i.imgur.com/NYW3x.png (got shut down by the spam filter) So I figured if I set it to "default", this might magically start working again, but all it does is "crash" Apache (all websites time out). I figure it's something to do with the "mpm" module or something, but I can't find anything relevant in the settings to modify for it to work. Can someone please point me in the right direction? System info: Webmin version 1.580 Kernel and CPU Linux 2.6.35.4-rscloud on x86_64 Virtualmin version 3.90.gpl GPL Ubuntu 10.04 LTS (Lucid) A couple of screenshots of top: http://i.imgur.com/U2DTK.png http://i.imgur.com/sNPKs.png
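
    A small diagnostic sketch, assuming the usual Debian/Ubuntu paths that Virtualmin configures (the paths are assumptions; the point is to confirm which user PHP actually runs as and whether suexec/mod_fcgid are still in play):

        ps aux | egrep '[p]hp5-cgi|[a]pache2' | awk '{print $1, $11}'   # which user each process runs as
        apache2ctl -M | egrep 'suexec|fcgid|fastcgi|mpm'                # modules actually loaded
        /usr/lib/apache2/suexec -V 2>&1 | grep AP_DOC_ROOT              # suexec only acts under this document root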

    Read the article

  • Overheating computer

    - by Samurai Waffle
    My computer overheats somewhat frequently, usually during intense use. And by intense use I mean browsing the internet while downloading, or gaming. It even overheats on extremely old games, though, such as Master of Orion 2, which was developed for Windows 95. My computer has a Pentium 4 GHz processor, 2 GB of RAM, and is running Windows XP. One of the symptoms it has after overheating is that it'll turn on immediately afterwards, but won't show any video on my monitor. I usually have to wait at least 5 minutes (mostly at least 10) before I can get it to turn on and show video on my monitor. I also usually have to wiggle around the graphics card a little bit, which is the ASUS A9550 Series with 256 MB. I'm not sure exactly what is causing the computer to overheat. At first I thought it was the video card, but after I noticed it was doing it while playing Master of Orion 2, I'm not so sure, because that game can't be making the video card work all that hard. So how exactly can I pinpoint the problem? Thanks for any help provided. Edit: Okay, I downloaded the programs that you specified, and will start benchmarking my system to try and pinpoint what's overheating. What is the temperature range for when it's getting too hot? Also, I have an abundant amount of software experience with computers, but unfortunately not too much hardware experience.

    Read the article

  • Gmail: security warning icon

    - by Notetaker
    Hello, I just enabled some Gmail Labs programs in my Gmail account, and then I noticed the orange triangle icon with an exclamation mark in it at the end of the address bar of my Google Chrome browser. Clicking on it brought forth a "Security Information" dialog box, with the following messages: "--mail.google.com The identity of this website has been verified by Thawte SGC CA. --Your connection to mail.google.com is encrypted with 128-bit encryption. However, this page includes other resources which are not secure. These resources can be viewed by others while in transit, and can be modified by an attacker to change the look or behavior of the page." I then logged into two of my other Gmail accounts, one of which has no Gmail Labs programs enabled, and the other with 1 program enabled quite some time ago, both with the same result as above (i.e., with the appearance of the orange triangle warning sign in the address bar). I don't remember seeing the orange triangle before, but I'm not sure if it has ever appeared or not. I have "Always use https" enabled for my Gmail accounts. My questions are: Is there a way to identify and remove these un-secure "resources"? (Could enabling Gmail Labs programs have brought these on?) Meanwhile, are my Gmail accounts compromised and unsafe to use? If so, what should I be doing about that now? After this problem is solved, would I need to reset the password to my Gmail accounts, and/or take any other measures to restore their security? Many thanks for answering my questions!

    Read the article

  • How can I install Satchmo?

    - by Jonathan Hayward
    I am trying to install Satchmo 0.9 on an Ubuntu 9.10 guest off of the instructions at http://bitbucket.org/chris1610/satchmo/downloads/Satchmo.pdf. I run into difficulties at 2.1.2: pip install -r http://bitbucket.org/chris1610/satchmo/raw/tip/scripts/requirements.txt pip install -e hg+http://bitbucket.org/chris1610/satchmo/@v0.9#egg=satchmo The first command fails because of a compile error while it's trying to build PIL. So I ran "aptitude install python-imaging", made a local copy of the first line's requirements.txt, and removed the line that's unsuccessfully trying to build PIL. The first line completes without error, as does the second. The next step tells me to change directory to the /path/to/new/store, and run: python clonesatchmo.py A little bit of trouble here; I am told that clonesatchmo.py will be in /bin by now, and it isn't there, but I put some Satchmo stuff under /usr/local, create a symlink in /bin, and run: python /bin/clonesatchmo.py This gives: jonathan@ubuntu:~/store$ python /bin/clonesatchmo.py Creating the Satchmo Application Traceback (most recent call last): File "/bin/clonesatchmo.py", line 108, in <module> create_satchmo_site(opts.site_name) File "/bin/clonesatchmo.py", line 47, in create_satchmo_site import satchmo_skeleton ImportError: No module named satchmo_skeleton A find after apparently checking out the repository reveals that there is no file with a name like satchmo*skeleton* on my system. I thought that bash might be prone to take part of the second pip invocation's URL as the beginning of a comment; I tried both: pip install -e hg+http://bitbucket.org/chris1610/satchmo/@v0.9\#egg=satchmo pip install -e hg+http://bitbucket.org/chris1610/satchmo/@v0.9#egg=satchmo Neither way of doing it seems to take care of the import error mentioned above. How can I get a Satchmo installation under Ubuntu, or at least enough of a Satchmo installation that I am able to start with a skeleton of a store and then flesh it out the way I want? Thanks, Jonathan
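
    A small sketch of the sequence that usually avoids both the PIL compile and the shell-comment worry, assuming the local requirements.txt copy described above (quoting the URL only matters if a shell ever treats # as a comment start; pip itself does not care):

        sudo aptitude install python-imaging mercurial       # PIL and hg from packages, nothing compiled
        pip install -r requirements.txt                      # the local copy with the PIL line removed
        pip install -e "hg+http://bitbucket.org/chris1610/satchmo/@v0.9#egg=satchmo"
        python -c "import satchmo_skeleton"                  # if this still fails, the checkout itself is incomplete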

    Read the article

  • Reverse Proxies and AJAX

    - by osij2is
    A client of ours is using IBM/Tivoli WebSEAL, a reverse-proxy server, for some of their internal users. Our web application (ASP.NET 2.0) is a fairly straightforward web/database application. Currently, our client's users that are going through the WebSEAL proxy are having problems with a .NET 3rd party control. Users who are not going through the proxy have no issues. The 3rd party control is nothing more than an AJAX dynamic tree that on each click requests all the nodes for each leaf. Now our clients claim that once users click on a node in the control, the control itself freezes in such a way that they don't see anything populate. Users see the "Loading..." message appear but no new activity afterwards. They have to leave the page and go back to the original page in order to view the new nodes. I've never worked with a reverse proxy before, so I have googled quite a bit on the subject and even found an article on SF. IBM/Tivoli has mentioned this issue before, but this is about all they mention at all. While the IBM doc is very helpful, all of our AJAX is from the 3rd party control. I've tried troubleshooting using Firebug, but since I'm not behind the reverse proxy, I'm unable to truly replicate the problem. My question is: does anyone have experience with reverse proxies and issues with AJAX sites? How can I go about proving what the exact issue is? Currently we're negotiating remote access, so assume for the greater part that I will have access to a machine that's using the WebSEAL proxy. P.S. I realize this question might teeter on the StackOverflow/ServerFault jurisdictional debate, but I'm trying to investigate from the systems perspective. I have no experience with reverse proxies (and I'm unclear on the benefits) and little with forward proxies.
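
    A hedged way to prove where the AJAX call dies once that remote access exists: capture the same tree-control request through the proxy and directly, then compare the two responses (the endpoint path below is hypothetical; substitute whatever URL Firebug shows for the node click):

        curl -v "https://webseal.example.com/junction/app/TreeService.axd?node=42" -o proxied.out
        curl -v "https://app-internal.example.com/app/TreeService.axd?node=42" -o direct.out
        diff proxied.out direct.out   # look for rewritten URLs, stripped headers, or a truncated body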

    Read the article

  • installing SVN - CentOS - cannot find -lexpat

    - by furnace
    Hey guys, I'm trying to install SVN on CentOS 5. Unfortunately a simple yum install isn't going to work (afaik) because I'm using the DirectAdmin control panel. When it comes to running 'make' I get this error: /usr/bin/ld: cannot find -lexpat I'm new to installing things without yum (!) so am a bit lost. Do you have any advice on how to get past this hurdle? Just to give a little more context to the error; /apache -I/usr/include/apache -I/etc/svn-install/subversion-1.6.2/sqlite-amalgamation -o subversion/svn/util.o -c subversion/svn/util.c cd subversion/svn && /bin/sh /etc/svn-install/subversion-1.6.2/libtool --tag=CC --silent --mode=link gcc -g -O2 -g -O2 -pthread -rpath /usr/lib -o svn add-cmd.o blame-cmd.o cat-cmd.o changelist-cmd.o checkout-cmd.o cleanup-cmd.o commit-cmd.o conflict-callbacks.o copy-cmd.o delete-cmd.o diff-cmd.o export-cmd.o help-cmd.o import-cmd.o info-cmd.o list-cmd.o lock-cmd.o log-cmd.o main.o merge-cmd.o mergeinfo-cmd.o mkdir-cmd.o move-cmd.o notify.o propdel-cmd.o propedit-cmd.o propget-cmd.o proplist-cmd.o props.o propset-cmd.o resolve-cmd.o resolved-cmd.o revert-cmd.o status-cmd.o status.o switch-cmd.o tree-conflicts.o unlock-cmd.o update-cmd.o util.o ../../subversion/libsvn_client/libsvn_client-1.la ../../subversion/libsvn_wc/libsvn_wc-1.la ../../subversion/libsvn_ra/libsvn_ra-1.la ../../subversion/libsvn_delta/libsvn_delta-1.la ../../subversion/libsvn_diff/libsvn_diff-1.la ../../subversion/libsvn_subr/libsvn_subr-1.la /etc/httpd/lib/libaprutil-1.la -lexpat /etc/httpd/lib/libapr-1.la -luuid -lrt -lcrypt -lpthread -ldl /usr/bin/ld: cannot find -lexpat collect2: ld returned 1 exit status make: *** [subversion/svn/svn] Error 1 Thanks!
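
    A sketch of the usual fix, assuming the error simply means the expat development library is missing (package name as on stock CentOS 5; installing it with yum should not touch the DirectAdmin-built Apache or PHP, but verify that assumption on your box):

        yum install -y expat-devel          # provides /usr/lib/libexpat.so so -lexpat can resolve
        ls -l /usr/lib/libexpat.so*         # confirm the library is now present
        make clean && ./configure && make   # rebuild Subversion from the top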

    Read the article

  • X11 performance problem after upgrading from Centos3 to Centos5 with an ATI Rage XL

    - by Marcelo Santos
    After upgrading a computer from Centos3 to Centos5 an application that does a lot of scrolling took a very high performance hit. top tells me that X is using a lot of CPU and that was not happening before. The machine has an ATI Rage XL with 8MB and X is using the ati driver as there is no proprietary ATI driver for this board on linux. The xorg.conf: Section "Device" Identifier "Videocard0" Driver "ati" EndSection Section "Screen" Identifier "Screen0" Device "Videocard0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 Modes "1024x768" "800x600" "640x480" EndSubSection EndSection Section "DRI" Group 0 Mode 0666 EndSection A similar machine that still has Centos3 installed is able to start DRI on the X server while this one is not, this is the Xorg.0.log for the Centos5 machine: drmOpenDevice: node name is /dev/dri/card0 drmOpenDevice: open result is -1, (No such device or address) drmOpenDevice: open result is -1, (No such device or address) drmOpenDevice: Open failed drmOpenDevice: node name is /dev/dri/card0 drmOpenDevice: open result is -1, (No such device or address) drmOpenDevice: open result is -1, (No such device or address) drmOpenDevice: Open failed [drm] failed to load kernel module "mach64" (II) ATI(0): [drm] drmOpen failed (EE) ATI(0): [dri] DRIScreenInit Failed (II) ATI(0): Largest offscreen areas (with overlaps): (II) ATI(0): 1024 x 1279 rectangle at 0,768 (II) ATI(0): 768 x 1280 rectangle at 0,768 (II) ATI(0): Using XFree86 Acceleration Architecture (XAA) Screen to screen bit blits Solid filled rectangles 8x8 mono pattern filled rectangles Indirect CPU to Screen color expansion Solid Lines Offscreen Pixmaps Setting up tile and stipple cache: 32 128x128 slots 10 256x256 slots (==) ATI(0): Backing store disabled (==) ATI(0): Silken mouse enabled (II) ATI(0): Direct rendering disabled (==) RandR enabled I also tried using EXA instead of XAA and setting: Option "AccelMethod" "XAA" Option "XAANoOffscreenPixmaps" "true" uname -a Linux sir5.erg.inpe.br 2.6.18-128.7.1.el5 #1 SMP Mon Aug 24 08:20:55 EDT 2009 i686 i686 i386 GNU/Linux rpm -qa | grep xorg-x11-server xorg-x11-server-utils-7.1-4.fc6 xorg-x11-server-sdk-1.1.1-48.52.el5 xorg-x11-server-Xvfb-1.1.1-48.52.el5 xorg-x11-server-Xnest-1.1.1-48.52.el5 xorg-x11-server-Xorg-1.1.1-48.52.el5 The drmOpenDevice error continues when using the suggested Option "AIGLX" "true".
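
    A quick check sketch based on the failed-to-load "mach64" kernel module line above: DRI for the Rage XL needs a mach64 DRM module, which stock CentOS 5 kernels generally do not ship (the module path assumes that kernel generation):

        modprobe mach64; echo $?                              # non-zero means there is no module to load
        ls /lib/modules/$(uname -r)/kernel/drivers/char/drm/  # which DRM modules this kernel actually has
        grep -i drm /var/log/Xorg.0.log | tail                # what X itself reported on the last start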

    Read the article

  • Preseeding Ubuntu partman recipe using LVM and RAID

    - by Swav
    I'm trying to preseed Ubuntu 12.04 server installation and created a recipe that would create RAID 1 on 2 drives and then partition that using LVM. Unfortunately partman complains when creating LVM volumes saying there no partitions in recipe that could be used with LVM (in console it complains about unusable recipe). The layout I'm after is RAID 1 on sdb and sdc (installing from USB stick so it takes sda) and then use LVM to create boot, root and swap. The odd thing is that if I change the mount point of boot_lv to home the recipe works fine (apart from mounting in wrong place), but when mounting at /boot it fails I know I could use separate /boot primary partition, but can anybody tell me why it fails. Recipe and relevant options below. ## Partitioning using RAID d-i partman-auto/disk string /dev/sdb /dev/sdc d-i partman-auto/method string raid d-i partman-lvm/device_remove_lvm boolean true d-i partman-md/device_remove_md boolean true #d-i partman-lvm/confirm boolean true d-i partman-auto-lvm/new_vg_name string main_vg d-i partman-auto/expert_recipe string \ multiraid :: \ 100 512 -1 raid \ $lvmignore{ } \ $primary{ } \ method{ raid } \ . \ 256 512 256 ext3 \ $defaultignore{ } \ $lvmok{ } \ method{ format } \ format{ } \ use_filesystem{ } \ filesystem{ ext3 } \ mountpoint{ /boot } \ lv_name{ boot_lv } \ . \ 2000 5000 -1 ext4 \ $defaultignore{ } \ $lvmok{ } \ method{ format } \ format{ } \ use_filesystem{ } \ filesystem{ ext4 } \ mountpoint{ / } \ lv_name{ root_lv } \ . \ 64 512 300% linux-swap \ $defaultignore{ } \ $lvmok{ } \ method{ swap } \ format{ } \ lv_name{ swap_lv } \ . d-i partman-auto-raid/recipe string \ 1 2 0 lvm - \ /dev/sdb1#/dev/sdc1 \ . d-i mdadm/boot_degraded boolean true #d-i partman-md/confirm boolean true #d-i partman-partitioning/confirm_write_new_label boolean true #d-i partman/choose_partition select Finish partitioning and write changes to disk #d-i partman/confirm boolean true #d-i partman-md/confirm_nooverwrite boolean true #d-i partman/confirm_nooverwrite boolean true EDIT: After a bit of googling I found below snippet of code from partman-auto-lvm, but I still don't understand why would they prevent that setup if it's possible to do manually and booting from boot partition on LVM is possible. # Make sure a boot partition isn't marked as lvmok if echo "$scheme" | grep lvmok | grep -q "[[:space:]]/boot[[:space:]]"; then bail_out unusable_recipe fi

    Read the article

  • Can compressing Program Files save space *and* give a significant boost to SSD performance?

    - by Christopher Galpin
    Considering solid-state disk space is still an expensive resource, compressing large folders has appeal. Thanks to VirtualStore, could Program Files be a case where it might even improve performance? Discovery In particular I have been reading: SSD and NTFS Compression Speed Increase? Does NTFS compression slow SSD/flash performance? Will somebody benchmark whole disk compression (HD,SSD) please? (may have to scroll up) The first link is particularly dreamy, but maybe has its head a little too far in the clouds. The third link has this sexy semi-log graph (logarithmic scale!). Quote (with notes): Using highly compressible data (IOmeter), you get at most a 30x performance increase [for reads], and at least a 49x performance DECREASE [for writes]. Assuming I interpreted and clarified that sentence correctly, this single user's benchmark has me incredibly interested. Although write performance tanks wretchedly, read performance still soars. It gave me an idea. Idea: VirtualStore It so happens that thanks to sanity-saving security features introduced in Windows Vista, write access to certain folders such as Program Files is virtualized for non-administrator processes. Which means, in normal (non-elevated) usage, a program or game's attempt to write data to its install location in Program Files (which is perhaps a poor location) is redirected to %UserProfile%\AppData\Local\VirtualStore, somewhere entirely different. Thus, to my understanding, writes to Program Files should primarily only occur when installing an application. This makes compressing it not only a huge source of space gain, but also a potential candidate for performance gain. Testing The beginning of this post has me a bit timid: it suggests benchmarking NTFS compression on a whole drive is difficult because turning it off "doesn't decompress the objects". However, it seems to me the compact command is perfectly capable of doing so for both drives and individual folders. Could it be only marking them for decompression the next time the OS reads from them? I need to find the answer before I begin my own testing.
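
    A hedged sketch of how the compact command would be used here, from an elevated prompt (/i skips files that are locked or otherwise error out, and the first command only reports state without changing anything):

        rem query-only pass: report which files under Program Files are already compressed
        compact /s:"C:\Program Files"
        rem compress the whole tree, ignoring per-file errors, quietly
        compact /c /s:"C:\Program Files" /i /q
        rem and the reverse, to test the "doesn't decompress the objects" claim directly
        compact /u /s:"C:\Program Files" /i /q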

    Read the article

  • ADSIEdit Cleanup After Exchange 2003 Crash During Transition To Exchange 2010

    - by ThaKidd
    Hello all. I would value some input from a few experts. I have almost completed the transition from Exchange 2003 Standard to Exchange 2010 Standard. Everything went smoothly until I tried to uninstall Exchange 2003. At that point the server bit the dust and died completely. I now have NO access to the old Exchange System Management MMC, as I am running Windows Server 2008 R2 and Windows 7 only. I can only fix this with ADSIEdit, EMShell, and EMConsole. I have used the 2010 shell to move/remove/verify that all mailboxes, public folders and the OAB are hosted on Exchange 2010. I also verified that the routing connector has been deleted. The only two things that were not done were to remove the Recipient Update Service and actually perform the removal of the 2003 software. I have spent a lot of time going through ADSIEdit and have located the old Administrative Group and the Exchange 2003 server listed under it. I also located the Recipient Update Service, which includes two entries: Enterprise and my domain name. I have read that it is an unwise idea to remove the old administrative group, so I won't bother messing with that. I am repeatedly getting warnings in the Application Log from MSExchangeTransport: EventID 5006 (Cannot find route to Mailbox Server OLDSERVER) and 5020 (The topology doesn't contain a route to Exchange 2000 Server or Exchange Server 2003). So my questions are: To clean out AD of the old Exchange 2003 info, can I delete the server name folder (Configuration - Services - Microsoft Exchange - ExchOrg - Administrative Groups - First Administrative Group - Servers - Old Server) and also safely delete the Recipient Update Service (Enterprise) and Recipient Update Service (DOMAIN) containers? Are there any additional items I need to address to ensure the AD is clean? Thanks in advance for your help!
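
    Before deleting anything in ADSIEdit, a hedged precaution sketch: export the Exchange portion of the configuration partition so the objects can be re-imported if the cleanup goes wrong (the DN is a placeholder; substitute your forest's naming context):

        rem run from an elevated prompt on a machine with the AD DS tools installed
        ldifde -f exchange-config-backup.ldf -d "CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=example,DC=com"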

    Read the article

  • Fast User Switching still disabled after disabling Cisco AnyConnect VPN's "Start Before Login" feature

    - by mindless.panda
    I am running Windows 7 64-bit Ultimate and using Cisco AnyConnect VPN 2.5.3041. As expected, Fast User Switching got disabled as soon as I installed the VPN software. This FAQ from Cisco references how to enable Fast User Switching when their VPN product is installed: A. Microsoft automatically disables Fast User Switching in Windows XP when a GINA.dll is specified in the registry. The Cisco VPN Client installs the CSgina.dll to implement the "Start Before Login" feature. If you need Fast User Switching, then disable the "Start Before Login" feature. Registered users can get more information in Cisco Bug ID CSCdu24073 (registered customers only) in Bug Toolkit. My problem is that I have disabled this on the client, but Fast User Switching is still greyed out. This article mentions a registry edit; however, the key they mention, GinaDLL, does not exist at the Winlogon registry point. Update: This article from Cisco covering AnyConnect specifically gives a one-liner: AnyConnect is not compatible with fast user switching. The only problem is that I had found a workaround before the last reformat/reinstall, but I can't remember exactly what I did previously.
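
    A hedged check sketch: on Windows 7 the switch is usually hidden by policy rather than by a GINA entry, so it is worth seeing whether something (possibly the VPN installer, though that is an assumption) set HideFastUserSwitching. From an elevated prompt, then log off and back on:

        reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v HideFastUserSwitching
        rem clear the policy if it comes back as 0x1
        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v HideFastUserSwitching /t REG_DWORD /d 0 /f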

    Read the article

  • Experiences with BIRD for BGP?

    - by Shtééf
    We're currently using Quagga with Debian Linux to run a full-table BGP router. The set-up has been dead simple up to now, but we've come to a point where I have to reconfigure the router quite a bit, and want to tighten things up. I've never really understood Quagga, and always found its documentation to be lacking. It appears to be mimicking Cisco, of which I have only a basic understanding. BIRD has caught my eye recently. The couple of articles / presentations I found promote it as lightweight and more responsive under stress compared to Quagga. And it actually seems to have very decent documentation. So I'd like to know: Who's running BIRD right now, and in what kind of set-up? How is it stability-wise? I've read about it running at a couple of sites in production. Let's say I don't care at all for a Cisco feel to the configuration. How are configuration, maintenance, monitoring, etc. of BIRD in general? And any other notable experiences you may have with it.
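
    For the monitoring part of the question, a small sketch of day-to-day BIRD operation through its control CLI (commands as in the BIRD 1.x branch; what they print depends on your bird.conf):

        birdc show protocols      # one line per BGP session, with its current state
        birdc show route count    # how many routes (e.g. a full table) BIRD is holding
        birdc configure           # reload bird.conf without dropping established sessions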

    Read the article

  • Hardware reserved memory issue

    - by Robert Koritnik
    I've seen lots of folks having problems with the hardware reserved memory issue in Windows 7/Server 2008 R2. I have it myself, but not as huge as others have. Problem description When you install Windows 7 (or its bigger brother Windows Server 2008 R2) your memory may not be fully utilised. If you look at Task Manager > Performance Tab > Resource Monitor > Memory Tab and scroll to the bottom of the list, you will see a graphical representation of your memory. Some of it may be hardware reserved. Previous Windows versions didn't have this problem. The system was able to utilise all available memory. Question Is there any solution to lower/remove hardware reserved memory? Sidenote I tried installing 32 and 64 bit versions but to no avail. I also tried both Windows 7 and Server 2008 R2, but always get the same amount reserved by hardware. On previous Windows versions I had more memory available, because I'm simultaneously running 2 VMs on the host (so three machines all together). And my memory peaks much higher now than it did on older versions.
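
    A hedged check sketch (elevated prompt): leftover boot-loader values can also reserve memory on 7/2008 R2, and they are easy to rule out before blaming the BIOS memory map (the values are normally absent, in which case the deletevalue calls just report an error):

        bcdedit /enum | findstr /i "truncatememory removememory"
        bcdedit /deletevalue truncatememory
        bcdedit /deletevalue removememory
        rem also confirm msconfig > Boot > Advanced options > "Maximum memory" is unchecked, then reboot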

    Read the article

  • Destination host unreachable - Windows Server 2008

    - by Doug
    Hi there, I'm working with a Windows 2008 domain controller which is having issues connecting to internet resources. A small bit of background: this is a 2008 domain controller that has been added into an existing Win 2k domain, with a goal of replacing the older computers. Both of the older controllers can still access internet resources, and so can all the clients. When I ping Google.ca from the new server, it does resolve to an IP address, but then says "Reply from 192.168.123.20: Destination host unreachable." I'm really at a loss now. I've checked and rechecked my IP configuration: the default gateway is my router, the primary DNS server is my DC, and the secondary DNS is also my router. The DNS server on the domain has a forwarder added for the router as well. Everything on my local network works just fine; all my internal resources can be resolved. For the time being, I've stopped the Firewall service. I'm not 100% used to Server 2008 yet, but it might be a case of just missing something simple. Thanks for your time.
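
    Since the name resolves and the failure comes back from a LAN address, a hedged routing-side check sketch (run it on the new DC and on a working client and compare; the gateway address is a placeholder for your router's IP):

        rem confirm the default route really points at the router
        route print 0.0.0.0
        rem check gateway reachability (substitute your router's address)
        ping 192.168.123.254
        rem see which hop answers with "destination host unreachable"
        tracert -d google.ca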

    Read the article

  • Why can't I reconnect to my Philips SHB9000 bluetooth headset?

    - by Thomas Eyde
    This is so annoying. When I connected my Philips SHB9000 bluetooth to my Windows 7 (64 bit) for the very first time, it worked well. I had to manually change the default playback device, but otherwise it worked. Then, when I start up my computer from standby, it's nearly random when I can reconnect or not. My last option is to remove the bluetooth device and reconnect it. But now, even that doesn't work. The sad thing is, this used to work better on Windows 7 beta. Windows Update has a new driver which fails to install. Searching for this driver yields nothing. I thought all vendors had an official site for their drivers? Well, Philips seems to have none. If there is no answer to this problem, my advice is to NOT buy this headset. It's good looking, the sound is nice, but what need do we have from that if we can't use the bloody device?

    Read the article

  • What is Apache Synapse?

    - by Aren B
    My website keeps getting hit by odd requests with the following user-agent string: Mozilla/4.0 (compatible; Synapse) Using our friendly tool Google, I was able to determine this is the hallmark calling-card of our friendly neighborhood Apache Synapse, a 'Lightweight ESB (Enterprise Service Bus)'. Now, based on the information I was able to gather, I still have no clue what this tool is used for. All I can tell is that it has something to do with web services, and supports a variety of protocols. The Info page only leads me to conclude it has something to do with proxies and web services. The problem I've run into is that while normally I wouldn't care, we're getting hit quite a bit by Russian IPs (not that Russians are bad, but our site is pretty regionally specific), and when they do they're shoving weird (not XSS/malicious, at least not yet) values into our query string parameters. Things like &PageNum=-1 or &Brand=25/5/2010 9:04:52 PM. Before I go ahead and block these IPs/user agent from our site, I'd like some help understanding just what is going on. Any help would be greatly appreciated :)

    Read the article

  • mercurial hgwebdir error with basicauth in apache2

    - by Dio
    Hello, I'm having kind of a strange error that I'm trying to track down. I was trying to setup mercurial on my home server this weekend. I seem to have it running up to the point where I'm trying to get repositories published correctly. I'm running Ubuntu 10.04 LTS Mercurial Distributed SCM (version 1.4.3) I followed the hgwebdir guide: http://mercurial.selenic.com/wiki/HgWebDirStepByStep and everything seems to work great, I can pull and push my local repositories. Then I tried to add basic auth changing ScriptAliasMatch ^/hg(.*) /var/hg/hgwebdir.cgi$1 <Directory "/var/hg"> Options ExecCGI FollowSymLinks AllowOverride None </Directory> to ScriptAliasMatch ^/hg(.*) /var/hg/hgwebdir.cgi$1 <Directory "/var/hg"> Options ExecCGI FollowSymLinks AllowOverride None AuthType Basic AuthName hgwebdir AuthUserFile /usr/local/etc/httpd/users Require valid-user </Directory> This works exactly as I'd expect it to when I navigate to the directory via my web browser, but when I hg push get a long section repeating of File "/usr/lib/python2.6/urllib2.py", line 369, in _call_chain result = func(*args) File "/usr/lib/python2.6/urllib2.py", line 855, in http_error_401 url, req, headers) File "/usr/lib/python2.6/urllib2.py", line 833, in http_error_auth_reqed return self.retry_http_basic_auth(host, req, realm) File "/usr/lib/python2.6/urllib2.py", line 843, in retry_http_basic_auth return self.parent.open(req, timeout=req.timeout) followed by File "/usr/lib/pymodules/python2.6/mercurial/keepalive.py", line 249, in do_open self._start_transaction(h, req) File "/usr/lib/pymodules/python2.6/mercurial/url.py", line 419, in _start_transaction return keepalive.HTTPHandler._start_transaction(self, h, req) File "/usr/lib/pymodules/python2.6/mercurial/keepalive.py", line 342, in _start_transaction h.endheaders() File "/usr/lib/python2.6/httplib.py", line 904, in endheaders self._send_output() File "/usr/lib/python2.6/httplib.py", line 776, in _send_output self.send(msg) File "/usr/lib/pymodules/python2.6/mercurial/url.py", line 247, in _sendfile connection.send(self, data) File "/usr/lib/pymodules/python2.6/mercurial/keepalive.py", line 519, in safesend self.connect() File "/usr/lib/pymodules/python2.6/mercurial/url.py", line 273, in connect keepalive.HTTPConnection.connect(self) RuntimeError: maximum recursion depth exceeded while calling a Python object I'm a bit at a loss on this one. I'm really not sure why adding the authorization seems to work fine via my web browser but throw these errors from hg. Any help would be greatly appreciated.
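
    A hedged workaround sketch: the recursion is the client retrying the 401 challenge forever, and giving Mercurial the credentials up front usually sidesteps it (the [auth] section exists in hg 1.3+, so 1.4.3 qualifies; the section name, URL, and credentials below are placeholders for this hgwebdir setup):

        # add to ~/.hgrc on the client
        [auth]
        home.prefix = https://myserver/hg
        home.username = dio
        home.password = secret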

    Read the article

  • 3d Studio Max and 2+CPUs - Core limit ?

    - by FreekOne
    Hi guys, I am scouting for parts to put in a new machine, and in the process, while looking at different benchmarks, I stumbled upon this benchmark and it got me a bit worried. Quote from it: Noticeably absent from this review is an old-time favorite, 3ds Max. I did attempt to run our custom 3ds Max benchmark on both the 2009 and 2010 versions of the software, but the application would simply not load on the Westmere box with hyper-threading enabled. Evidently Autodesk didn't plan far enough ahead to write their software for more than 16 threads. Once there is an update that addresses this issue, I will happily add 3ds Max back into the benchmarking mix. Since I was looking at dual hexa-core Xeons (X5650), that would put my future machine at 24 logical cores, which (duh) is well over 16 cores, and since I'm mostly building this for 3ds Max work, you can see how this would seriously spoil my plans. I tried looking for additional information on this potential issue, but the above article seems to be the only one that mentions it. Could anyone who has access to a 16+ core machine or in-depth knowledge about 3ds Max please confirm this? Any help would be much appreciated!
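
    If the report turns out to be accurate, a hedged workaround sketch is to cap the number of logical processors Windows exposes rather than abandon the dual-socket build (elevated prompt, reboot afterwards; undo with bcdedit /deletevalue numproc). Whether 3ds Max 2009/2010 then loads is an assumption to verify, not a guarantee:

        rem limit the OS to 16 logical CPUs so 3ds Max never sees more than 16 threads
        bcdedit /set numproc 16
        rem alternatively, disabling Hyper-Threading in the BIOS drops 2x X5650 to 12 logical cores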

    Read the article

  • Recovering a VHD after resizing it using VBoxManage

    - by tjrobinson
    I am using VirtualBox 4.1.18 and had a virtual machine running Windows 8 RC with a single VHD, which was initially sized at 25GB (too small!). After installing the OS and some applications I ran out of disk space so shut down the guest and then used this command to resize the VHD to 80GB: C:\Program Files\Oracle\VirtualBox> .\VBoxManage.exe modifyhd "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vhd" --resize 81920 0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100% C:\Program Files\Oracle\VirtualBox> .\VBoxManage.exe showhdinfo "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vhd" UUID: 03fb26e7-d8bb-49b5-8cc2-1dc350750e64 Accessible: yes Logical size: 81920 MBytes Current size on disk: 24954 MBytes Type: normal (base) Storage format: VHD Format variant: dynamic default In use by VMs: Windows 8 RC (UUID: a6e6aa57-2d3a-421b-8042-7aae566e3e0b) Location: D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vhd So far so good. However, when I started the guest up again I got the dreaded: Fatal: No bootable medium found! system halted If I boot into GParted it shows a single 80GB drive as "unallocated". The option to scan for and attempt to repair a filesystem doesn't find anything. I also tried cloning the VHD into a VDI file, just in case that magically fixed it: C:\Program Files\Oracle\VirtualBox> .\VBoxManage.exe clonehd "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vhd" "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vdi" --format VDI 0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100% Clone hard disk created in format 'VDI'. UUID: baf0c2c4-362f-4f6c-846a-37bb1ffc027b C:\Program Files\Oracle\VirtualBox> .\VBoxManage.exe showhdinfo "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vdi" UUID: baf0c2c4-362f-4f6c-846a-37bb1ffc027b Accessible: yes Logical size: 81920 MBytes Current size on disk: 24798 MBytes Type: normal (base) Storage format: VDI Format variant: dynamic default In use by VMs: Windows 8 RC (UUID: a6e6aa57-2d3a-421b-8042-7aae566e3e0b) Location: D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vdi Is there anything else I could try to recover the drive? No, I don't have a backup :( My host OS is Windows 7 64-bit.
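
    One hedged recovery sketch, working only on the cloned VDI and never on the original VHD: attach the clone to a second VM as a data disk and let TestDisk hunt for the lost NTFS partition (the VM and controller names are placeholders):

        rem on the Windows host: attach the cloned disk to a rescue VM's second SATA port
        VBoxManage storageattach "RescueVM" --storagectl "SATA" --port 1 --device 0 --type hdd --medium "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vdi"
        rem then, inside the rescue guest booted from a Linux live ISO:
        rem   sudo testdisk /dev/sdb   (Analyse, Quick Search, and write the recovered partition table if found)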

    Read the article

  • Drive XML returning Windows Volume Shadow Service Error

    - by Ssvarc
    I'm trying to image a SATA laptop hard drive that is attached to my computer via a USB adapter, using DriveImage XML. I'm running Win7 Ultimate 64-bit. DriveImage XML is returning: Could not initialize Windows Volume Shadow Service (VSS). ERROR C:\Program Files (x86)\Runtime Software\DriveImage XML\vss64.exe failed to start. ERROR TIMEOUT Make sure VSSVC.EXE is running in your task manager. Click Help for more information. VSSVC.EXE is running in Task Manager, as is VSS64.exe. Looking at the FAQ on the Runtime webpage, this turned up: Please verify in Settings-Control Panel-Administrative Tools-Services that the following services are enabled: MS Software Shadow Copy Provider Volume Shadow Copy Also make sure you are able to stop and start these services. Possible reasons for VSS failures: For VSS to work, at least one volume in your computer must be NTFS. If you use only FAT drives, VSS will not function. The required NTFS volume does not need to be identical with the volume you want to image. You should make sure that VSSVC.EXE is running in your task manager. If the problems persist, registering "oleaut.dll" and "oleaut32.dll" using "regsvr32" might help. Both of those services are running and can be started and stopped without issue. Using "regsvr32" to register "oleaut32.dll" returns successful, but "oleaut.dll" returns: The module "oleaut.dll" failed to load. Make sure the binary is stored at the specified path or debug it to check for problems with the binary or dependent .DLL files. The specified module could not be found. Some other information that might be relevant: Browsing to the drive is successful, but accessing certain folders returns an "access" error. Windows runs a permissions adder that adds the current user profile to the NTFS permissions. Could this be the cause of the issue? DriveImage XML is running as Administrator. Thoughts?
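
    A hedged first-pass diagnostic sketch, from an elevated prompt, to see whether VSS itself is healthy before digging further into DriveImage XML:

        rem every writer should report a state of "Stable" with no last error
        vssadmin list writers
        vssadmin list providers
        vssadmin list shadowstorage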

    Read the article

  • Issue configuring Oracle database for SSL

    - by Santhosha Kaldambe
    Hello, I want to set up Oracle for SSL communication. I am not using SSL authentication for the database user. As a first requirement, I generated a self-signed certificate using OpenSSL and added the certificate to a wallet. The wallet location is specified in the server configuration. I created the listener and it starts; however, it does not provide any services. The default listener (non-SSL) is working fine. When I execute LSNRCTL.EXE status SSLLISTENER it gives the output below. STATUS of the LISTENER Alias SSLLISTENER Version TNSLSNR for 32-bit Windows: Version 11.1.0.6.0 - Production Start Date 14-NOV-2009 01:47:08 Uptime 16 days 22 hr. 14 min. 3 sec Trace Level off Security ON: Local OS Authentication SNMP OFF Listener Parameter File C:\app\Administrator\product\11.1.0\db_1\network\admin\listener.ora Listener Log File c:\app\administrator\diag\tnslsnr\\ssllistener\alert\log.xml Listening Endpoints Summary... (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=)(PORT=2484))) The listener supports no services The command completed successfully Here is the exact content of the various files after configuration. 1) File Name: tnsnames.ora ORCL = (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = )(PORT = 1521)) ) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = orcl) ) ) 2) File Name: sqlnet.ora SSL_VERSION = 0 NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT) sqlnet.authentication_services= (NONE) tcp.validnode_checking = no tcp.invited_nodes=(PS0803.oraebs.com,PS2948,PS5098) SSL_CLIENT_AUTHENTICATION = FALSE WALLET_LOCATION = (SOURCE = (METHOD = FILE) (METHOD_DATA = (DIRECTORY = C:\app\Administrator\admin\orcl\Server_Wallet) ) ) 3) File Name: listener.ora SSL_CLIENT_AUTHENTICATION = FALSE WALLET_LOCATION = (SOURCE = (METHOD = FILE) (METHOD_DATA = (DIRECTORY = C:\app\Administrator\admin\orcl\Server_Wallet) ) ) LISTENER = (DESCRIPTION_LIST = (DESCRIPTION = (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521)) ) (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = )(PORT = 1521)) ) ) SSLLISTENER = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCPS)(HOST = )(PORT = 2484)) ) Thanks Santhosh
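
    The "supports no services" line usually just means nothing has registered with the SSL listener. A hedged sketch of a static registration in listener.ora (ORACLE_HOME and SID taken from the configuration above; setting LOCAL_LISTENER so the database registers dynamically is the alternative):

        SID_LIST_SSLLISTENER =
          (SID_LIST =
            (SID_DESC =
              (GLOBAL_DBNAME = orcl)
              (ORACLE_HOME = C:\app\Administrator\product\11.1.0\db_1)
              (SID_NAME = orcl)
            )
          )
        # then reload it: lsnrctl reload SSLLISTENER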

    Read the article

  • Debian and Multipath IO problem

    - by tearman
    Basically the situation is, I have a box running Debian, the box internally has an Intel SCSI RAID controller which is controlling 2 hard drives in RAID1 mode which is where the OS is installed. Further, I have a QLogic fiber channel adapter that connects the unit to a Fiber Channel SAN. My process of installation is I'll install Debian to the local drives, and leave the QLogic firmware out of it for the time being. Then once I get the unit online, I'll install the firmware drivers. This flops my internal drives from /dev/sda to /dev/sdc, which is a bit annoying, but recoverable. Probably should address these by UUID anyways. Once I get back online, I have to install multipath-tools (the framework is a multipath framework). However, once I reboot the machine again, it fails on boot after discovering multipath targets, saying my local drives are busy and cannot be mounted to /root. Any help in what may be the problem here? Or at least how to disable multipath until after the unit boots and then ignores the internal drives?
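
    A hedged sketch of the usual way to keep multipath's hands off the internal RAID1 disks: blacklist them by WWID in /etc/multipath.conf and rebuild the initramfs so early boot honours it (the WWID shown is a placeholder; read the real one first, and note the scsi_id path differs between Debian releases):

        /lib/udev/scsi_id -g -u /dev/sda        # print the WWID of the internal disk (older releases: /sbin/scsi_id)
        # /etc/multipath.conf
        blacklist {
            wwid "3600508b1001c1234567890abcdef0000"
        }
        # then: update-initramfs -u && reboot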

    Read the article

  • MS SQL - Problem running SQL Server Agent Job via service account credentials

    - by molecule
    There are 5 steps in this job. The first step runs a package from the SSIS Package Store; the second to fifth are file system steps. We configured all steps to use Windows Authentication. Under Run As, we specified a user account which was created under Security > Credentials and SQL Server Agent > Proxies > SSIS Package Execution. The job runs without any problems with this user account. We then proceeded to configure the job to use a service account instead. The service account was specified under Security > Credentials and SQL Server Agent > Proxies > SSIS Package Execution. The job fails with this error. Executed as user: domain\serviceaccount. ....00 for 32-bit Copyright (C) Microsoft Corp 1984-2005. All rights reserved. Started: 3:37:57 PM Error: 2010-03-09 15:37:57.95 Code: 0xC0016016 Source: Description: Failed to decrypt protected XML node "DTS:Password" with error 0x8009000B "Key not valid for use in specified state.". You may not be authorized to access this information. This error occurs when there is a cryptographic error. Verify that the correct key is available. End Error Error: 2010-03-09 15:38:01.19 Code: 0xC0047062 Source: Get CONT_VIEW_LADDER in latest 45days OracleFMDatabase [1] Description: System.Data.OracleClient.OracleException: ORA-01005: null password given; logon denied at System.Data.OracleClient.OracleException.Check(OciErrorHandle errorHandle, Int32 rc) at System.Data.OracleClient.OracleInternalConnection.OpenOnLocalTransaction(String userName, String password, String serverName, Boo... The package execution fa... The step failed. Based on some research, I then went into MS Visual Studio and opened the project. I changed the package protection level property from "EncryptSensitiveWithUserKey" to "DontSaveSensitive", but I still get the above error. I am new to this so any help will be very much appreciated. Thanks in advance
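
    A hedged sketch of one way forward: with DontSaveSensitive the package stores no Oracle password at all, so the job step (or a dtexec command line) has to supply it at run time. The property path and package location below are assumptions based on the connection name in the error; EncryptSensitiveWithPassword plus dtexec's /Decrypt option is the other common route:

        rem supply the connection password at execution time instead of storing it in the package
        dtexec /DTS "\MSDB\LoadLadderPackage" /SET "\Package.Connections[OracleFMDatabase].Properties[Password]";"MyOraclePwd"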

    Read the article
