Search Results

Search found 4514 results on 181 pages for 'totally newbie'.


  • Squid proxy on CentOS often disconnects with error: tunnelConnectTimeout(): tunnelState->servers is NULL

    - by Ela
    I am having frequent internet disconnections with the Squid proxy service. My server config: OS CentOS release 6.3 (Final), CPU Intel(R) Core(TM) i7-2600 @ 3.40GHz (cpu MHz: 1600.000). My local systems' IP range is 192.168.2.x and the server IP is 192.168.2.11. The server is also configured with LAMP for development and Samba SMB file sharing; no SVN currently. So I see the most likely culprit as the Squid proxy, since that is where connections stop, and I am sure of it because when I restart the server the internet starts working again, so something is wrong with the Squid service only. This server is connected to 14 other local Windows machines and basically serves as a central development node. I am able to resolve the problem by fully restarting the server, or sometimes by restarting the Squid proxy, which is totally killing our development. I have attached my cache log file here for the error info. Sample error log:
        2013/07/01 13:25:38| tunnelConnectTimeout(): tunnelState->servers is NULL
        2013/07/01 13:25:41| tunnelConnectTimeout(): tunnelState->servers is NULL
        2013/07/01 13:25:41| tunnelConnectTimeout(): tunnelState->servers is NULL
        2013/07/01 13:25:50| clientProcessRequest: Invalid Request
        2013/07/01 13:26:05| tunnelConnectTimeout(): tunnelState->servers is NULL
    Some help would make our lives easier. Thanks in advance.
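
    The tunnelConnectTimeout() / "tunnelState->servers is NULL" message generally means Squid timed out while setting up a CONNECT (HTTPS) tunnel because it could not resolve or reach the destination, which points at DNS or upstream connectivity rather than the cache itself. A hedged starting point, assuming a stock /etc/squid/squid.conf on CentOS 6 (the resolver addresses and timeout below are illustrative, not recommendations):

        # /etc/squid/squid.conf -- illustrative values, adjust to your network
        dns_nameservers 192.168.2.1 8.8.8.8     # pin Squid to resolvers known to work
        connect_timeout 30 seconds              # fail fast instead of piling up stalled tunnels

        # validate the config and apply it without a full restart
        squid -k parse && squid -k reconfigure

    Watching the log while the problem happens (tail -f /var/log/squid/cache.log) usually narrows it down to DNS versus the upstream link.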
  • BIOS corrupted? How to proceed? Acer Travelmate 290

    - by dtlussier
    I have an older Acer TravelMate 290 (manufactured in 2002 or 2003), which I recently tried to upgrade to Ubuntu 9.10. After the upgrade it appeared I had a problem with my X server configuration: on the first reboot post-install I heard the Ubuntu startup sound but got a black screen. I thought I would reboot again and drop into text mode to troubleshoot the X configuration problem. However, when I tried rebooting, something went wrong, and since then, when I start the machine I get absolutely nothing except the first hardware check (i.e. the HDD light flashes, the CD/DVD drive spins, etc.). Other than that the screen remains totally black and there is no HDD or processor activity at all. I have tried restarting it a number of times holding down all kinds of key combinations to try to coax it into the BIOS (if possible), with no luck. I have also tried putting in both a live Linux disc and a Windows install disc, without any luck; with a disc in the drive it will spin for a few seconds and then stop. All this has led me to suspect that the BIOS is somehow corrupt (not sure about the right terminology). I have tried putting a new BIOS image and installer program downloaded from Acer on a USB key to see if it will run when I start the machine, but no luck. I'm not sure whether this method of interacting with/updating/flashing the BIOS works outside of Windows/DOS, as both OSs are mentioned rather ambiguously in the documentation. I have also taken the laptop case apart and inspected the various cards and cannot find any obviously burned-out components. I'm not sure how to proceed at this point, in terms of which components to try or how to load a new BIOS image onto the board. Any advice here would be great, especially from those with experience with this particular line of laptops. Thanks!

  • Share point ACL on OS X Lion Server - POSIX group always takes precedence over the ACL

    - by Ben
    I am trying to configure a share point on a Lion Server machine. The directory was created by the local server admin (serveradmin) and has rwxr-x--- on it; the serveradmin user belongs to the local staff group, so the POSIX permissions are:
        serveradmin     read/write
        staff (group)   read
        Others          none
    We have an OD group for all the employees (Workers). Using the Server tool we have given Full Control on the share point:
        Workers         Full Control (ACL)
        serveradmin     read/write
        staff (group)   read
        Others          none
    We assumed Workers could then do what they want on the share, but that doesn't seem to be the case. It appears the POSIX permissions override the ACL permissions for Workers. If I change the staff permission to read/write, then Workers can create a file or folder in the share point. I would think the ACL should take precedence, but it doesn't; POSIX always wins, rendering the ACL useless. Furthermore, if I leave the read/write permission for staff and take the Write permission away from the Workers group, the POSIX group still wins. Essentially the Workers ACL does absolutely nothing. There are reports of similar problems in this Apple forum thread: https://discussions.apple.com/thread/3722901. The directory-nesting fix suggested there doesn't work for us. Has anyone had similar issues and knows how to fix this? Edit: in Workgroup Manager the employee users have staff as their primary group and are given the additional OD group Workers. Changing their primary group doesn't help; it only shifts the problem onto Others taking over rights (logically). Edit 2: OK, this is interesting: adding OD users (rather than the group) to the share's ACL works totally fine.
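
    For comparison, it may be worth inspecting what is actually on disk and re-applying an inheritable allow entry for the group from the command line. A hedged sketch (the share path is a placeholder and the permission list is illustrative, not necessarily what Server.app itself writes):

        # show POSIX bits plus ACL entries for the share point itself
        ls -led "/Shared Items/MyShare"

        # re-apply an inheritable read/write ACE for the OD group (path and permissions illustrative)
        sudo chmod -R +a "group:Workers allow list,add_file,search,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,file_inherit,directory_inherit" "/Shared Items/MyShare"

    If entries added this way behave correctly while the ones written by the Server tool do not, that points at how the tool orders or inherits the ACEs rather than at POSIX precedence itself.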

  • Windows 8 freezes after every other reboot on Lenovo W520 after about 10 seconds

    - by John Nevermore
    I have a Lenovo W520 laptop with an i7-2760QM, an Intel 520 SSD and an Nvidia Quadro 1000M. When I boot the PC with discrete graphics selected in the BIOS, the computer totally freezes and the only thing left to do is reboot. This only happens with Nvidia drivers for Windows 8 x64 installed (I've tried about four different drivers from Nvidia's site). When I boot with integrated graphics selected in the BIOS, there is a momentary hiccup after about 10 seconds (instead of a freeze) and then everything works fine. When I boot with discrete graphics on and no Nvidia drivers installed, the same thing happens as described above for integrated graphics. I've tried 1) bcdedit /set disabledynamictick yes and 2) disabling VT-x in the BIOS (I would seriously prefer not to disable it, since I use VMs almost every day), but no dice. The only thing that worked was enabling the Hyper-V feature. I was then able to boot properly with discrete graphics and the Nvidia drivers installed, but since I use VMware for VM development this was no solution (VMware complained about not being able to launch because Hyper-V was installed). I followed the instructions in this tutorial to be able to run VMware; then the computer just booted into a black screen past the Windows logo. How can I boot Windows 8 x64 without freezing, with the Quadro 1000M enabled, the Nvidia drivers installed and the Hyper-V feature preferably disabled?
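
    One middle ground that may be worth trying: keep the Hyper-V feature installed (since that is what made the machine boot) but stop the hypervisor itself from loading, which is the part VMware objects to. A hedged sketch using standard bcdedit switches from an elevated prompt, untested against this particular machine:

        rem stop the Hyper-V hypervisor from loading at boot (VMware can then run)
        bcdedit /set hypervisorlaunchtype off

        rem to restore the previous behaviour later
        bcdedit /set hypervisorlaunchtype auto

        rem reboot after either change for it to take effect
        shutdown /r /t 0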

  • Java max heap size: how much is too much?

    - by brad
    I'm having issues with a JRuby (Rails) app running in Tomcat. Occasionally page requests can take up to a minute to return (even though the Rails logs show the request was processed in seconds, so it's obviously a Tomcat issue). I'm wondering what settings are optimal for the Java heap size. I know there's no definitive answer, but I thought maybe someone could comment on my setup. I'm on a small EC2 instance which has 1.7 GB of RAM. I have the following JAVA_OPTS: -Xmx1536m -Xms256m -XX:MaxPermSize=256m -XX:+CMSClassUnloadingEnabled. My first thought is that Xmx is too high: if I only have 1.7 GB and I allocate 1.5 GB to Java, I feel like I'll get a lot of paging. Typically my Java process shows (in top) 1.1 GB resident memory and 2 GB virtual. I also read somewhere that setting Xms and Xmx to the same size will help, as it eliminates time spent on memory allocation. I'm not a Java person, but I've been tasked with figuring out this problem and I'm trying to find out where to start. Any tips are greatly appreciated! Update: I've started analyzing the garbage collection logs using -XX:+PrintGCDetails. When I notice these occasional long load times, the GC logs go nuts; during the last one (which took 25 s to complete) I had GC log lines such as:
        1720.267: [GC 1720.267: [DefNew: 27712K->16K(31104K), 0.0068020 secs] 281792K->254096K(444112K), 0.0069440 secs]
        1720.294: [GC 1720.294: [DefNew: 27728K->0K(31104K), 0.0343340 secs] 281808K->254080K(444112K), 0.0344910 secs]
    about 300 of them on a single request! Now, I don't totally understand why it keeps collecting the young generation from ~28 MB down to 0 over and over.
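
    For a 1.7 GB instance that also has to hold the OS, Tomcat's native overhead and the JRuby runtime, a smaller fixed-size heap plus GC logging is a common starting point. A sketch only; the numbers below are assumptions to be tuned against the app's real working set, not a recommendation:

        # illustrative JAVA_OPTS for a 1.7 GB EC2 instance (values are guesses to tune, not a prescription)
        JAVA_OPTS="-Xms768m -Xmx768m \
          -XX:MaxPermSize=192m \
          -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled \
          -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
          -Xloggc:/var/log/tomcat/gc.log"
        export JAVA_OPTS

    Setting -Xms equal to -Xmx avoids resize pauses, and keeping the total comfortably below physical RAM avoids the paging that tends to show up as occasional multi-second pauses; the GC log will show whether the slow requests line up with collections at all.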

  • What to do when launchpad is down?

    - by Jon
    As I am writing this (Friday, November 8, 2013 at 9:59:18 PM EST), Launchpad is down; apparently there is a power failure (https://twitter.com/launchpadstatus/status/398980619880775680). I tried running sudo apt-get update on my Ubuntu install, but I simply get stuck on this:
        Ign http://ppa.launchpad.net precise InRelease
        100% [Waiting for headers]
    Being an Ubuntu newbie, I tried to point my sources.list file at a different source. I backed up the original sources.list and then deleted the entire file to start afresh. I then added the following lines to it:
        deb http://mirror.anl.gov/pub/ubuntu/ precise main
        deb-src http://mirror.anl.gov/pub/ubuntu/ precise main
    I figured that since I have a different mirror, there would be no problem updating. I was wrong; I get stuck at the same place. I have several questions:
    1. Why do I need to hit Launchpad at all? I do not reference it in my sources.list file. Is this something where the mirror redirects me to Launchpad?
    2. Is there a good article out there I can read on how exactly this whole apt-get update process works, which will help me understand why it is hitting Launchpad?
    3. Is there any way to get my Ubuntu to update while Launchpad is down?
    4. Isn't there any redundancy for the Launchpad servers?
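
    PPAs added with add-apt-repository live in separate files under /etc/apt/sources.list.d/, which is why replacing /etc/apt/sources.list alone still leaves apt trying to reach ppa.launchpad.net. A hedged way to confirm this and temporarily disable those entries while Launchpad is down (the file name in the sed line is only an example):

        # find every apt source that still points at Launchpad
        grep -rn launchpad /etc/apt/sources.list /etc/apt/sources.list.d/

        # temporarily disable a PPA by commenting it out (file name is an example)
        sudo sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/some-ppa-precise.list

        # then update again; the remaining mirrors should refresh normally
        sudo apt-get update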

  • How to run a WebPy server on port 8080 using the DDNS of a D-Link router, and access the site from the internet?

    - by nuke1010
    I have two major issues with setting up a web server using my D-Link DIR-600L router. Issue 1: I run a WebPy server on port 8080, but the DDNS service providers (like dlinkddns.com or dyndns.org) only allow port 80. I can run the server on port 80 with the sudo command, but my server becomes vulnerable if I give it root access. So I tried port forwarding on the router and the server, but no use; I don't know if I did it correctly. Issue 2: Even when the server runs on port 80, I can access my site only from my local machines using the registered domain name (say, nikz.dyndns.org). No one on the internet can load the site even when it's totally up. As I observed in the server log, requests from other clients never reach my server. I need to run this server on port 8080 and I need to access the site from the internet. How can I do it? Any ideas?
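
    One common way to keep the application on an unprivileged port while still answering on port 80 is a local NAT redirect, so nothing has to run as root. A hedged sketch, assuming the server's LAN interface is eth0 (adjust it, and make sure the router forwards external port 80 to the server):

        # redirect incoming port 80 to the web.py process listening on 8080
        sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080

        # save the rule so it can be restored after a reboot (Debian/Ubuntu style)
        sudo sh -c 'iptables-save > /etc/iptables.rules'

    If outside clients still never show up in the access log even with the router forwarding port 80, it is worth checking whether the ISP blocks inbound port 80 or whether the router simply lacks NAT loopback for tests run from inside the LAN.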

  • Fedora, Apache/nginx and Pylons

    - by microchasm
    I'm trying to wrap my head around Pylons and how it works. So far it's been confusing. I'm using EC2 with Fedora 8. Everything is working so far (i.e. I have Pylons/Python et al. installed, and after creating a test app and running paster serve I can access the default page via my domain name). As the Pylons docs explain, and as I understand it, the built-in paster serve server is not suited to a production environment. What I am not clear on, then, is what to do next. It seems like nginx is a good option, but I am more familiar with Apache (like .0002%). I plan on having virtual hosts (which nginx says it can accommodate). However, I am totally unclear on how the big picture is supposed to work:
    1. In order to serve an app, does paster serve need to be running? Does nginx/Apache then basically just act as a proxy that shuttles connections to the paster server? How do I start it so it doesn't terminate after I close the SSH connection?
    2. If running multiple apps, what do I set as the host/port in development.ini to differentiate the apps? Or, if this is not the right way, how do I differentiate between apps?
    3. I am more familiar with MySQL, but willing to consider PostgreSQL if it's a better fit. Is it?
    4. Is virtualenv a prerequisite to running multiple apps on the same machine?
    Thanks in advance for any tips.
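
    On the big-picture question: in the common setup, paster serve does keep running (bound to a local port) and the front-end web server proxies to it. A hedged sketch; the port, paths and ini file name are assumptions, one local port per app:

        # run the Pylons app in the background so it survives the SSH session ending
        paster serve --daemon --pid-file=/var/run/myapp.pid production.ini

        # /etc/nginx/conf.d/myapp.conf -- minimal reverse proxy for one virtual host
        server {
            listen 80;
            server_name app1.example.com;
            location / {
                proxy_pass http://127.0.0.1:5000;   # matches the host/port in that app's ini file
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }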

  • Deploying concrete5 on nginx

    - by Nithin
    I have a concrete5 site that works out of the box on an Apache server; however, I am having a lot of trouble running it on nginx. The following is the nginx configuration I am using:
        server {
            root /home/test/public;
            index index.php;
            access_log /home/test/logs/access.log;
            error_log /home/test/logs/error.log;
            location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to index.html
                try_files $uri $uri/ index.php;
                # Uncomment to enable naxsi on this location
                # include /etc/nginx/naxsi.rules
            }
            # pass the PHP scripts to FastCGI server listening on unix socket
            #
            location ~ \.php($|/) {
                fastcgi_pass unix:/tmp/phpfpm.sock;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_path_info;
                include fastcgi_params;
            }
            location ~ /\.ht {
                deny all;
            }
        }
    I am able to get the homepage, but I am having problems with the inner pages, which display an "Access Denied" error. Possibly the rewrite is not working; in effect I think it is querying and trying to execute PHP files directly instead of going through the concrete5 dispatcher. I am totally lost here. Thank you for your help, in advance.
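
    One thing that stands out in the config above is the bare index.php at the end of try_files: without a leading slash, nginx does not treat it as the site-root fallback URI, so requests for inner pages never reach the concrete5 dispatcher. A hedged adjustment using standard nginx syntax (whether concrete5 also needs pretty URLs enabled in its own settings is a separate question):

        location / {
            # send anything that is not a real file or directory to the dispatcher,
            # preserving the original path and query string
            try_files $uri $uri/ /index.php$is_args$args;
        }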

  • How do I get "Back to My Mac" (using MobileMe) from Windows?

    - by benzado
    I have a MobileMe subscription and a Mac at home with "Back to My Mac" enabled. When I'm away from home, this service lets me use another Mac to connect to my Mac back home and access file sharing, screen sharing, etc. As far as I know, the service doesn't use any proprietary protocols, so in theory I should also be able to get "Back to My Mac" from a Windows PC. This Macworld article explains how it works. Basically, it uses Wide-Area Bonjour to give your Mac a domain name like hostname.username.members.mac.com. Remote computers can find your Mac using that address, then connect to it using a private VPN. The "Wide-Area Bonjour" part seems to make it a little more complicated than a simple regular domain name, though. Note that I'm not interested in using the methods described by Lifehacker, which don't use the MobileMe service at all. I don't want to use a totally different dynamic DNS service; I'd like to use the one I'm already paying for, or at least find out why that's not possible from Windows. Also, my primary problem is finding a network route back to my Mac... once I've got that, I know how to enable services so that Windows can talk to it. UPDATE: Based on some additional research, it appears that Apple is only assigning IPv6 addresses to the hostname.username.members.mac.com names, so any solution will require enabling IPv6 support on Windows, if possible.

  • Relax Linux - it's just me! (filesystem permissions)

    - by Xeoncross
    One of my favorite things about Linux is also the most annoying: file system permissions. On production machines and web servers I love how everything is so secure and locked down, but on development machines it really slows me down. I'll give one example out of the many I discover weekly. Like most people, I dual-boot Ubuntu and Windows so I can continue using the Adobe CS4 suite. I often design web themes and other things while I'm still in Windows. Later I'll boot into Ubuntu to take the themes and write the backend PHP for them. After mounting the Windows C: drive partition I can copy the template files over so I can begin editing them. However, thanks to Linux's desire to protect me, I find that after copying the files I end up with a totally locked set of files where even I don't have read/write permissions. So after careful consideration of the tremendous risks the HTML files pose to me, I chmod them so that I and Apache can begin using them. Now granted, the chmod process isn't that hard, but after you chmod enough files per day you get sick of doing it. I'm constantly creating, fetching, editing, and removing files from my user, git repos, PHP, or other random processes. This is a personal development machine, after all; everything changes on a day-by-day basis. So my question is, how can I get Linux to relax about what I'm doing with my HTML/JS/PHP/TXT/SQL/etc. files so that I can work faster without constantly stopping to chmod things? I pinky-promise I won't hack into my own account with an HTML file. ;)
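
    Since the locked-down copies come straight off the NTFS partition, one low-effort approach is to mount that partition with explicit ownership and a permissive umask, so files arrive already readable and writable, and to put the web root in a shared group so Apache never needs per-file chmods. A sketch under the assumption that the Windows partition is /dev/sda1 and the desktop user's uid/gid is 1000 (check with id):

        # /etc/fstab -- mount the Windows partition owned by your user, group-writable
        /dev/sda1  /mnt/windows  ntfs-3g  uid=1000,gid=1000,umask=002  0  0

        # let your user and Apache share the web root without per-file chmod
        sudo adduser $USER www-data
        sudo chgrp -R www-data /var/www
        sudo chmod -R g+rwX /var/www
        sudo find /var/www -type d -exec chmod g+s {} \;   # new files inherit the group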

  • Upgrading openSUSE 11.1 with Plesk Panel 9.3 to PHP 5.3

    - by Jonathan Hogervorst
    Hello, I'm running a VPS with openSUSE 11.1 (i586). Parallels Plesk Panel 9.3.0 is installed on the VPS. The current PHP version is PHP 5.2.11. I want to upgrade PHP to PHP 5.3, but I can't find good instructions on how to do this. If I check for updates in Zypper, it says this is the latest release. There isn't an update in Plesk Updates either, via either the web-based interface or the command-line interface. On Software.openSUSE.org I can find packages for PHP 5.3.1 in both the server:php/server_apache_openSUSE_11.1 repo and the server:php/openSUSE_11.1 repo (I can't post the links because I'm a newbie here), but if I add one of those to Zypper I still don't see an update. Is there somebody here who knows how to do this? And is it completely safe to update that way? I don't want to end up with a broken VPS... Thanks! Jonathan
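
    In case it helps to have the steps in one place, adding the server:php repository and pulling the newer PHP would look roughly like this; the repository URL is inferred from the question, and since Plesk itself depends on the PHP packages, a backup or VPS snapshot before changing them is prudent:

        # add the server:php repo for openSUSE 11.1 and refresh (URL assumed from the question)
        sudo zypper ar -f http://download.opensuse.org/repositories/server:/php/openSUSE_11.1/ server-php
        sudo zypper refresh

        # see which PHP packages zypper can now offer, then update
        zypper search php5
        sudo zypper install php5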

  • Virtual Wifi Issue Windows 7

    - by Matt
    Lately I've been trying to use my laptop as a wireless router in my room. I have it connected to my school's network through Ethernet, and I want to set up WiFi so that I can use it on my Android phone and iPod Touch. In the past I used Connectify, but I started having an issue where my phone would find the network, connect, attempt to get an IP, and then suddenly the network would disappear. Then it would pop up again, and the same process would repeat over and over. I decided to uninstall Connectify completely, but after that, neither Virtual Router Manager nor the command prompt could create a viable network either. My phone, and now even my iPod, encounter the same problem; neither can successfully connect. So evidently there is something wrong with the laptop's virtual WiFi feature, and I have no idea what that could be. I've tried enabling certain services that virtual WiFi supposedly relies on, but some of them don't start, namely Remote Access Connection Manager. But I have also read that these enable on their own and that it's fine if they are not normally enabled. Furthermore, I even uninstalled and reinstalled the drivers for my wireless card. Any ideas as to why my virtual WiFi won't function? Anything? I really would love to get this working...
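
    Since Connectify, Virtual Router Manager and the manual approach all sit on top of the same Windows 7 hosted network feature, it can help to drive it directly from an elevated command prompt and see exactly where it fails. These are the standard netsh commands; the SSID and key are placeholders:

        rem define the hosted network (SSID/key are examples)
        netsh wlan set hostednetwork mode=allow ssid=TestNet key=ChangeMe123

        rem start it and check its state; errors here point at the driver/service layer
        netsh wlan start hostednetwork
        netsh wlan show hostednetwork

        rem confirm the wireless driver reports hosted network support at all
        netsh wlan show drivers

    If the last command reports "Hosted network supported: No", the reinstalled wireless driver is the likely culprit rather than the helper applications.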

  • Why am I seeing MailSlot Browse messages on unrouted ports of my Linux box?

    - by nmichaels
    I have a Linux box (Debian squeeze) with several NICs. The ones of interest are:
        eth3 - my main link to the network (DHCP on 10.20.30.0/24)
        eth0 - the first connection to my test network (static: 192.168.1.2)
        eth4 - the second connection to my test network (static: 192.168.1.1)
    My routing table looks like this:
        $ sudo route
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        10.20.30.0      *               255.255.255.0   U     0      0        0 eth3
        default         10.20.30.254    0.0.0.0         UG    0      0        0 eth3
    I have the two test-net ports connected to each other with a crossover cable and an instance of Wireshark running on each port. Every once in a while, I'll see a packet like the following show up. Who could be doing this, and how do I convince them to stop? I do have Samba running on the machine (for a CIFS mount) but don't see why it would be sending packets out unrouted ports. I had a Windows VM running in VMware Client and thought that might be causing it, but it still happens without it. What I want is totally silent interfaces so I can run some tests with Scapy over them.
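
    MailSlot Browse datagrams are NetBIOS browser (host announcement) traffic, and on this box the most likely sender is Samba's nmbd, which by default announces itself on every interface it finds. A hedged smb.conf tweak to pin it to the main NIC only (interface names taken from the question; the restart command assumes the Debian squeeze init script):

        # /etc/samba/smb.conf, in the [global] section
        interfaces = lo eth3
        bind interfaces only = yes

        # then restart smbd/nmbd so the binding change takes effect
        sudo /etc/init.d/samba restart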

  • Apache virtualhost - only apply script if file does not exist in document root

    - by Brett Thomas
    Sorry for the newbie Apache question. I'm wondering if it's possible to set up the following non-conventional Apache virtual host (for a Django app):
    -- If a file exists in the DocumentRoot (/var/www), it will be shown. So if /var/www/foo.html exists, it can be seen at www.example.com/foo.html.
    -- If the file does not exist, the request is served via the virtual host. I'm using mod_wsgi with a WSGIScriptAlias directive that points to a Django app. So if there is no /var/www/bar.html, www.example.com/bar.html will be passed to the Django app, which may or may not return a 404 error.
    One option is to create an Alias for each individual file/directory, but people want to be able to post a file without adding an alias, and we want to keep the above URL structure for legacy reasons. The simplified virtual host is:
        <VirtualHost *:80>
            ServerName www.example.com
            DocumentRoot /var/www
            WSGIScriptAlias / /path/to/django.wsgi
            <Directory /path/to/app>
                Order allow,deny
                Allow from all
            </Directory>
            Alias /hi.html /var/www/hi.html
        </VirtualHost>
    The goal is to have www.example.com/hi.html work as above, without the Alias line.
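
    One pattern from the mod_wsgi integration guides that matches this goal is to mount the WSGI application at a hidden path and let mod_rewrite send only requests whose target file does not exist on disk to it. This is a sketch only: the hidden /django.wsgi mount point is a placeholder, it assumes mod_rewrite is enabled, and it has not been tested against this exact setup:

        <VirtualHost *:80>
            ServerName www.example.com
            DocumentRoot /var/www

            RewriteEngine On
            # if the requested path maps to a real file under /var/www, serve it directly
            RewriteCond %{REQUEST_FILENAME} !-f
            # otherwise hand the original URL to the Django app mounted at the hidden path
            RewriteRule ^(.*)$ /django.wsgi$1 [QSA,PT,L]

            WSGIScriptAlias /django.wsgi /path/to/django.wsgi
            <Directory /path/to/app>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>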

  • How to set up Zabbix to monitor SQL Server Failover Active-Passive Cluster?

    - by Sebastian Zaklada
    It should be simple, so it is most likely my approach that is totally off, and someone will hopefully prod me in the right direction. We have a Zabbix 2.0.3 server instance set up monitoring a bunch of different servers, but now we need to set it up to monitor, and notify us of any alerts regarding, a SQL Server 2008 R2 failover active-passive cluster. Essentially, this is a two-server cluster where only one of the nodes can be "active" at a given time, serving all SQL Server related requests, while the other server just "sleeps" and, from the point of view of anyone logged on to that server, has all of the SQL Server related services in a stopped state. We have tried setting up Zabbix agents on both servers, using the SQL Server 2005 templates (we could not find any 2008-specific ones, and the 2005 ones always seemed to work just fine for monitoring 2008 R2 instances) and configuring the Zabbix server for both machines, but we end up with constant alerts for whichever server is currently the passive one in the cluster. We have been able to look up various methods of actually monitoring the failover, but we have not been able to find any guidance on how to tell Zabbix that, in this particular case, only one of the servers in the group is expected to be online, while the other can be disregarded and should not raise any alerts. I hope I made myself clear. Thanks for any guidance. I am out of ideas.

  • Limiting access in Silverlight/PivotViewer

    - by sparaflAsh
    I'm going to deploy a PivotViewer application. As some of you might know, this Silverlight application loads a .cxml index file for a group of images. My need is to make the .cxml file and the image files inaccessible to the user. Right now, when I don't have that requirement, I usually code like this in C#, with the file hosted in the document root: _cxml = new CxmlCollectionSource(new Uri("http://www.myurl.it/Collection.cxml", UriKind.Absolute)); This means that my .cxml, and therefore the images, are available over HTTP to everyone who knows the URI. I'm a newbie to server configuration, so any help/hint would be deeply appreciated. Someone suggested I take the files out of the document root, but it seems like I can't fetch them from Silverlight if they are not a URL; at least I didn't manage to understand how. Someone else suggested I play with the web.config file to hide the URLs, but I don't really know where to start. My question is: what's the best practice for hiding my stuff? Obviously I can edit the question if you need more details.

  • Apache/PHP begins to deny SQL requests after about 2,000 queries

    - by Daniel Stern
    We have a web page on our server that we use to run administrative scripts. For example, we might run the script unenrolStudents(), which runs 5,000 SQL SET commands one after another and sets 5,000 student entries in an SQL database to unenrolled. However, we are finding that after running a few thousand queries (it is not totally consistent) we are "locked out" by our server. Symptoms of the lockout:
    - unable to connect to the server with WinSCP
    - opening PuTTY with that connection shows a blank screen (no login/password prompt)
    - clearing cookies/cache in Chrome does NOT fix the lockout
    - other computers in the office ALSO become locked out
    - the lockout can be triggered by a high frequency of requests (10,000 in 1 second) or by fewer over time (10,000 in 500 seconds will still cause a lockout even though the frequency is much lower)
    We believe this is a security feature of our own Apache. I know we are using Suhosin, but I didn't configure it, so I don't know. How can I disable this locking behaviour so that I can confidently run all my SQL requests and have them go through? Has anyone else dealt with this and found workarounds? Thanks, DS
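
    Because the lockout also takes down SSH and WinSCP for every machine in the office, this looks more like a host- or network-level rate limit (fail2ban, iptables connection limits, or a hosting provider's firewall) than anything inside Apache or Suhosin. A few hedged checks to narrow it down (the "ssh" jail name in the last command is only an example):

        # look for connection-rate rules in the firewall
        sudo iptables -L -n -v | grep -Ei 'limit|recent|connlimit'

        # see whether fail2ban is running and which jails have banned the office IP
        sudo fail2ban-client status
        sudo fail2ban-client status ssh

        # during a lockout, check whether the kernel is logging dropped connections
        dmesg | tail -n 50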

  • permanent NAS-mount in Ubuntu - wrong fs type, bad option, bad superblock

    - by Emil
    My network drive shows up in the file browser, just like my external USB hard drive. Moving, running and editing files works. Hovering over it shows smb://lacie-2big/nasdisk. BUT, when I want to save a file, the drive doesn't come up as an option; all I can see are my other places, including my USB hard drive. I am a complete newbie, but I am GUESSING that it has something to do with the mount not being a "real" mount but just a shortcut to the SMB location. So I followed the tutorial at https://wiki.ubuntu.com/MountWindowsSharesPermanently on how to mount a network drive permanently. I edited my fstab to:
        //LaCie-2big/nasdisk /media/nasmount cifs guest,uid=1000,iocharset=utf8,codepage=unicode,unicode 0 0
    and running sudo mount -a gave me the following error:
        mount: wrong fs type, bad option, bad superblock on //LaCie-2big/nasdisk,
        missing codepage or helper program, or other error
        (for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program)
        In some cases useful info is found in syslog - try dmesg | tail or so
    Now that's a very helpful error message, BUT, before I go any further, I'd be really thankful if someone could tell me whether I'm even in the right ballpark, or whether my actual need (to be able to download files, i.e. torrents, directly to the drive) is already possible as things are. Question: how do I fix "wrong fs type, bad option, bad superblock on //LaCie-2big/nasdisk, missing codepage or helper program" when running mount -a?
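
    Two things commonly produce exactly this error: the mount.cifs helper not being installed, and options that mount.cifs does not understand (codepage=unicode and the trailing unicode are not valid cifs options). A hedged fix that keeps the rest of the line from the question:

        # install the cifs mount helper (the package is called smbfs on older Ubuntu releases)
        sudo apt-get install cifs-utils

        # /etc/fstab -- same share, with only options mount.cifs understands
        //LaCie-2big/nasdisk  /media/nasmount  cifs  guest,uid=1000,iocharset=utf8  0  0

        # retest and check the kernel log if it still fails
        sudo mount -a
        dmesg | tail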

  • Why are SMART error rates going down?

    - by Jeff Shattock
    I have a hard drive that's part of a Linux software RAID 5 array. SMART reported that its multi_zone_error_rate was 0, then 1, then 3, so I figured I had better start backing up more frequently and prepare to replace the drive. Now, today, the multi_zone_error_rate of that very same drive is back down to 1. It seems that two errors un-happened while I wasn't looking. I've also seen similar behaviour by inspecting the syslog on the server:
        Jun 7 21:01:17 FS1 smartd[25593]: Device: /dev/sdc, SMART Usage Attribute: 7 Seek_Error_Rate changed from 200 to 100
        Jun 7 21:01:17 FS1 smartd[25593]: Device: /dev/sde, SMART Usage Attribute: 7 Seek_Error_Rate changed from 200 to 100
        Jun 7 21:01:18 FS1 smartd[25593]: Device: /dev/sdg, SMART Usage Attribute: 7 Seek_Error_Rate changed from 200 to 100
        Jun 8 02:31:18 FS1 smartd[25593]: Device: /dev/sdg, SMART Usage Attribute: 7 Seek_Error_Rate changed from 100 to 200
        Jun 8 03:01:17 FS1 smartd[25593]: Device: /dev/sdc, SMART Usage Attribute: 7 Seek_Error_Rate changed from 100 to 200
        Jun 8 03:01:17 FS1 smartd[25593]: Device: /dev/sde, SMART Usage Attribute: 7 Seek_Error_Rate changed from 100 to 200
    These are raw values, not the human-useful values that smartctl -a produces, but the behaviour is similar: error rates changing, then undoing the change. None of these is the drive that had the multi_zone weirdness. I haven't seen any problems with the RAID; its most recent scrub (less than 24 hours ago) came back totally clean. The only thing I can think of is that the SMART reporting circuitry on the drive isn't working properly all the time. The cables are in tight on the drive and board. What's going on here?
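
    It may help to separate the normalized VALUE column (which is what smartd is reporting here, and which the drive firmware recalculates constantly) from the RAW_VALUE counters, which should only ever grow. A quick hedged check per drive, using the device names from the log:

        # show all SMART attributes; compare VALUE/WORST (normalized) with RAW_VALUE
        sudo smartctl -A /dev/sdc

        # watch just the attributes in question over time
        sudo smartctl -A /dev/sdc | grep -E 'Seek_Error_Rate|Multi_Zone_Error_Rate'

    As long as RAW_VALUE stays put, a normalized value bouncing between numbers like 100 and 200 usually reflects the vendor's recalculation rather than new physical errors.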

  • What could be causing SVCHost to leak handles?

    - by Goz
    I have a problem that has been causing me all sorts of grief recently. SVCHost appears to be leaking resources all over the shop. This is the svchost.exe instance run with the arguments "-k netsvcs". At the moment it is sitting at around 5,700 handles in use; before I rebooted the machine it was sitting at around 33,000 handles! This high number has been causing me large problems, as my software then fails to obtain the handles it needs (the software tries to create around 2,000 handles). I'm totally at a loss as to what is going wrong. If anyone could help me stop this happening it would be much appreciated. I'm running XP with SP3. Edit: I tracked this problem down to the WMI system. I'm not sure why or how the problem was occurring. Basically I used "sc change" to move it into its own process and suddenly everything seems to be fine. I'm not entirely sure what is going on...
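
    For anyone hitting the same thing, the usual way to split the WMI service out of the shared netsvcs host (presumably what "sc change" above refers to) is sc config, run from an elevated prompt. A hedged sketch:

        rem run the WMI service (winmgmt) in its own svchost process
        sc config winmgmt type= own

        rem restart it (or reboot) so the change takes effect
        net stop winmgmt
        net start winmgmt

        rem afterwards, confirm which process now hosts it and watch its handle count
        tasklist /svc | findstr /i winmgmt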

  • Loading dependencies for custom puppet functions

    - by Ben Smith
    I have written a custom Puppet function, which is working fine, that depends on the cloudservers gem (a Rackspace client library). This is fine if I have pre-installed the gem on a server before running Puppet, but it totally breaks if I have not installed the gem, as the function seems to be run during the "compilation" sweep, well before my package definition is realised. Here's what my .pp looks like, with get_hosts being the function that requires the cloudservers gem:
        package { "rubygems":
            ensure   => installed,
            provider => "gem";
        }
        package { "cloudservers":
            ensure   => installed,
            provider => "gem",
            require  => Package["rubygems"];
        }
        class hosts::us {
            $hosts = get_hosts("us")
            hostentry { $hosts: }
        }
        define hostentry() {
            $parts   = split($name, ',')
            $address = $parts[0]
            $ip      = $parts[1]
            $aliases = $parts[2]
            host { $address:
                ip           => $ip,
                host_aliases => $aliases
            }
        }
    Is there a way to stop the function being run so early, or at least to make its run depend on the library being installed? Alternatively, is there a way I can add dependencies somewhere in the functions folder that will be available to the function?
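
    Because parser functions run while the catalog is being compiled, the cloudservers gem has to exist before Puppet ever evaluates the manifest, so the package resources above are realised too late. One hedged workaround is simply to bootstrap the gem outside Puppet on whichever node compiles the catalog (the puppet master, or each node when running masterless with puppet apply):

        # one-time bootstrap before the first puppet run (gem name taken from the question)
        sudo gem install cloudservers

        # or bake it into the provisioning step that installs puppet itself
        sudo gem install cloudservers && sudo puppet agent --test

    Another option sometimes used is to have the function rescue a LoadError and return an empty list until the gem appears, but that trades the hard failure for a silently incomplete catalog.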

  • Firefox 3.6 and above: always show one tab even when all tabs are closed, like Firefox 3.0

    - by Jayapal Chandran
    I had got very used to Firefox 3.0. In that version, to free memory, I could close all tabs and the main Firefox window still would not close. But in Firefox 3.6 and later, if I close all tabs, Firefox exits completely, which was not the case with 3.0. How do I stop Firefox from closing the main process even when I close all tabs? Also, the autocomplete dropdown in the address bar of Firefox 3.6 and later is a dark blue colour, which annoys me greatly; with my environment and the monitor glare it is making me angry, so how can the colour be changed to be like Firefox 3.0? Black and white are a neutral, good combination, and since I have been working in Firefox 3.0 (and earlier versions) for a long time, this new colour and the other uncomfortable changes are making me sick. To check CSS3 I need to use Firefox 3.5 or greater. Besides, I like Firefox because it implements the W3C's recommendations, so I can learn and test new recommendations from the W3C.

  • How to configure router to give XBox top priority / most bandwidth?

    - by MrSparky
    Hi all - newbie here, so go easy... and apologies in advance if I break the community etiquette/rules! Here's what I'm trying to do: I have a D-Link DIR-655 Extreme N wireless router and an Xbox 360 with the old-style wireless connection thing (USB attachment). I want to configure my router/network to give my Xbox as much bandwidth as it wants, whenever it wants. I have tried giving the Xbox a unique IP (within my network) and then tweaking the router to treat that IP as a top-priority application (using the router's QoS features). The problem is that whenever I turn off the Xbox, I can't connect to the network the next time I start it up. It seems the only reliable setting on the Xbox is to use "Automatic" for the IP settings within the Network Configuration area. Supposedly the D-Link ships with default settings that attempt to recognize a game console and give it top priority, but I've not seen good results (lots of stuttering/lag when someone else jumps online, etc.). Any suggestions? Thanks again!

  • Web-based file search on the LAN?

    - by Magnetic_dud
    I would like to search files on my LAN easily (over 500k files on SMB shares; it would take ages any other way). I just need a quick search on file names; I don't care about content indexing at all, as most of my files are in a proprietary format and the file name is descriptive enough. But date-range filters are a must for me. I just need a quick search like voidtools' Everything can do, but across the network. The files are on a WHS box (lol, "Videos" and "Music" share names are not appropriate for a company, but a license for that Win2003-based OS is cheaper than an XP Home one!). I tried:
    - LanSearch Pro: not good for me, as I need a fast index
    - Network Search Engine: it would be perfect, but does not offer a date-range filter
    - Microsoft Search Server 2008 Express, but it is horrible! First, it does NOT index file names, and then my Core 2 Duo is not powerful enough to run it smoothly.
    - Google Desktop with a proxy on localhost to make it run on the LAN, but I don't like the hacked-together result.
    - The preinstalled Windows Search 4.0, but it is terrible at ranking the relevance of results - uninstalled.
    - Docco... what's that?
    I am considering trying:
    - IBM OmniFind
    - DocFetcher (can it work as a client? haven't investigated yet)
    - Strigi (it looks like it can work as a client, right?)
    Any ideas/suggestions?
