Search Results

Search found 1872 results on 75 pages for 'tom geee'.

  • Can't install MailParse on cpanel server

    - by Tom
    Hi, I've got a Linux VPS running CentOS 5.5 (cPanel/WHM). I installed MailParse via the Module Installers section in WHM, and it did install it; the end of the setup log:

        running: make INSTALL_ROOT="/root/tmp/pear-build-root/install-mailparse-2.1.5" install
        Installing shared extensions: /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib/php/extensions/no-debug-non-zts-20090626/
        running: find "/root/tmp/pear-build-root/install-mailparse-2.1.5" | xargs ls -dils
        508718 4 drwxr-xr-x 3 root root 4096 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5
        508745 4 drwxr-xr-x 3 root root 4096 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr
        508746 4 drwxr-xr-x 3 root root 4096 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib
        508747 4 drwxr-xr-x 3 root root 4096 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib/php
        508748 4 drwxr-xr-x 3 root root 4096 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib/php/extensions
        508749 4 drwxr-xr-x 2 root root 4096 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib/php/extensions/no-debug-non-zts-20090626
        508744 196 -rwxr-xr-x 1 root root 193502 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib/php/extensions/no-debug-non-zts-20090626/mailparse.so
        Build process completed successfully
        Installing '/usr/lib/php/extensions/no-debug-non-zts-20090626/mailparse.so'
        install ok: channel://pecl.php.net/mailparse-2.1.5
        Extension mailparse enabled in php.ini
        The mailparse.so object is not in /usr/local/lib/php/extensions/no-debug-non-zts-20090626

    Now, when I try to use mailparse functions from PHP, I get the following error:

        PHP Warning: PHP Startup: Unable to load dynamic library '/usr/local/lib/php/extensions/no-debug-non-zts-20090626/mailparse.so' - /usr/local/lib/php/extensions/no-debug-non-zts-20090626/mailparse.so: cannot open shared object file: No such file or directory in Unknown on line 0

    What should I do?
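
    For reference, the log seems to say the build installed mailparse.so under /usr/lib/php/extensions/... while PHP is trying to load it from /usr/local/lib/php/extensions/..., so this is roughly how I'd confirm the mismatch (a sketch; the paths are the ones from my log above):

        # Which extension directory is the running PHP actually using?
        php -i | grep extension_dir
        # Does mailparse.so exist in the directory the build used, and in the one PHP wants?
        ls -l /usr/lib/php/extensions/no-debug-non-zts-20090626/mailparse.so
        ls -l /usr/local/lib/php/extensions/no-debug-non-zts-20090626/mailparse.so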

  • Are folders and filenames starting with "icon" illegal in SMB?

    - by dash-tom-bang
    Are five-letter filenames starting with "icon" illegal in SMB? I just got a Drobo FS, in part to back up the computers in my house, and it does not accept folders named 'icons', 'iconv', or indeed any of the other icon-plus-one-letter names I tried. I got errors when creating these folders, although now I don't remember the exact error. Drobo support has confirmed that they "veto" files and folders named like this, due to them being illegal in the SMB spec. My Google skills so far have not been sufficient to turn up any information on this, however, so I wonder if anyone knows what's up? Sadly, I can create these files and folders from my Mac, which I guess connects using AFP? But then I can't see them on my Windows machines. That is of little help when it is the Windows machines I want to back up, and they are the ones with folders named like this. Thanks.

  • Detecting/Reactivating serial port that becomes inactive on Ubuntu Linux 10.10

    - by Tom
    I am using a USB-to-serial port to communicate with some old equipment (using my own code built on the Boost.Asio library - I think my code is fine because it works almost all of the time). Every so often (maybe once every few days) the communication with my device stops with no error at all - the device just does not respond. I then restart my computer and everything is fine again. Does anyone know where I can start to analyse this problem? My serial port loads up fine (as /dev/ttyUSB0) and the Boost library does not throw an error; the device just does not respond. Restarting the device makes no difference - only restarting my PC does. I have also tried unplugging and replugging the USB connector. Does anyone know what gets cleared in the reboot (with respect to the serial device), or what I can probe when the problem happens again, rather than just restarting and hoping?
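
    In case it helps, this is roughly the snapshot I plan to take the next time it hangs, before rebooting (a sketch, assuming the adapter is still enumerated as /dev/ttyUSB0):

        dmesg | tail -n 50        # recent kernel messages: USB disconnects, driver errors
        ls -l /dev/ttyUSB0        # does the device node still exist?
        lsof /dev/ttyUSB0         # is another process holding the port open?
        stty -F /dev/ttyUSB0 -a   # current line settings (baud rate, flow control)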

  • Is it possible to add files to the "Wordpress Media Library" using the command line?

    - by Tom
    WordPress has its own "Media Library", which is used when you upload images and other media for use in blog posts and pages. The advantage of the media library is that it automatically produces thumbnails of the images, and the web interface gives you extra info such as who uploaded the image, which articles use it, and so on. My question is: does anyone have any tips on interacting with the media library via the command line instead of through the WordPress web interface? For example, any ideas on how to add an image to the media library from the command line? If I copy files into the media library directory (usually .../wp-content/uploads/YYYY/MM/) from the command line, they do not show up in the WordPress dashboard - I guess because there needs to be an associated database entry for the media to be registered with WordPress.
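
    One thing I'm considering (an assumption on my part - I haven't tried it on this install) is WP-CLI, which advertises a media import command that is supposed to copy the file into wp-content/uploads/YYYY/MM/, create the attachment entry in the database and generate the thumbnails in one go:

        # Hypothetical usage, run from the WordPress root with wp-cli installed:
        wp media import /path/to/image.jpg --title="Example image"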

  • Installing List Component on SharePoint Server

    - by Tom
    I added the SharePoint site to the 'Document Management' section in CRM with the List Components checked, and it added it with no problem. Also, when I navigate to the 'Documents' section under an account, it shows up with the List Component formatting. However, if I click on 'New' or 'Actions' I get the following error message:

        An error has occurred in the script on this page.
        Error: Access is denied
        URL: https://*serveraddress*/crmgrid/scripts/crmmenu.htc
        Do you want to continue running scripts on this page?

    I have run the PowerShell script which added the MIME type for the .htc extension to IIS. Does anyone know what might be wrong?

  • Is there any way to make Cherokee server portable?

    - by Tom
    I develop on different machines. I use MAMP: I have it installed in my Dropbox folder and have created symbolic links to the Applications folder. That way, if I work one day on my desktop and make changes to, let's say, a database schema, and the next day I work from my laptop, I won't have to do any DB migration; the same applies to all the Apache virtual hosts I have set up using MAMP. Everything is portable. I recently started using the Cherokee server and I like it a lot. I would like to replace MAMP with Cherokee, but first I need to be able to make it portable. I don't want to have to configure multiple virtual hosts, settings, etc., on multiple machines. Is there any way I can set up Cherokee to be as portable as MAMP? What if I want to run Cherokee from a thumbdrive?
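
    What I'm picturing is the same symlink trick, just pointed at Cherokee's configuration instead of MAMP's (a sketch only - I haven't tested it, and the config location is an assumption that may differ per install):

        # Keep Cherokee's config in Dropbox and symlink it into place on each machine
        # (assumes the config normally lives under /etc/cherokee; adjust to taste).
        sudo mv /etc/cherokee ~/Dropbox/cherokee
        sudo ln -s ~/Dropbox/cherokee /etc/cherokee
        # Alternatively, check cherokee --help for a flag to point the server at an
        # explicit config file kept inside Dropbox or on the thumbdrive.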

  • Windows Malicious Software Removal Tool log says it can't do all required actions. Should I be concerned?

    - by Tom
    Here's what the log file c:/Windows/debug/mrt.log on my Windows 7 install says:

        WARNING: Security policy doesn't allow for all actions MSRT may require.
        ->Scan ERROR: resource process://pid:6080 (code 0x00000005 (5))
        ->Scan ERROR: resource process://pid:5300 (code 0x00000057 (87))
        ->Scan ERROR: resource process://pid:3512 (code 0x00000057 (87))

    I use the default setup; I didn't change anything. This is the first time I checked the log file, and this warning has been in there from the start. Can I do something about it? Or should I not be concerned, because it can do everything necessary anyway? Do you have this warning in your log file?

  • Navigation keys on numeric keypad randomly stop working

    - by Tom Hughes
    Shortly after a restart, the arrow and navigation (Home, End...) keys on my numeric keypad will randomly stop working, and -- regardless of the state of the NumLock -- return only numbers. I notice this the most in browser applications (like this edit box) but the same effect is true on the command line and in desktop applications like Word. I swapped keyboards and now use a Microsoft keyboard (both are USB keyboards) but the same behavior persists. I also tried a clean boot to clean out startup programs but this made no difference. The separate arrow keys and navigation keys between the QWERTY keys and the numeric keypad work fine, but my strong preference (dating back to DOS and MS Flight Simulator) is to use the navigation keys in the numeric keypad.

  • Exchange 2003 Offline Defrag

    - by Tom
    Looking to do my first offline defrag this weekend on Exchange 2003. Our Exchange DB is on the E: drive, and \\server1 is the temporary location where there is sufficient space. The plan is to dismount the store and change to c:\program files\exchsrvr\bin. Does this look like the correct command to run?

        eseutil /d "e:\exchdata\priv1.edb" /t "\\server1\exchtemp\tempdfg.edb"

    Is there anything I should be aware of, such as backups running at the same time?

  • Custom animation problem in Powerpoint 2003

    - by Tom Gullen
    I have an animation of a grid that scrolls to the left. It's a Gantt chart, and it works fine. I then have several elements I want to drop on the grid; I give them all the same movement animation and this works fine - everything scrolls at the same speed and time. However, I also want to add entrance animations for the elements on the grid. I can set it up so that the scrolling starts once all the entrance animations are complete, but I want each element to enter while the grid is scrolling. How do you give an element an entrance animation and keep its position updated? The entrance animation and the movement animation don't seem to work together as expected when run simultaneously. Cheers!

  • eAccelerator Issue - Cache Directory Empty.

    - by Tom
    Hi all, hoping someone can give me a hand with this. I've recently installed eAccelerator 0.9.6.1 on a CentOS LAMP server. I had it working fine, using /tmp/accelerator as the cache directory. php.ini setup:

        zend_extension="/usr/local/lib/php/extensions/no-debug-non-zts-20060613/eaccelerator.so"
        eaccelerator.shm_size="200"
        eaccelerator.cache_dir="/var/cache/eaccelerator"
        eaccelerator.enable="1"
        eaccelerator.optimizer="1"
        eaccelerator.check_mtime="1"
        eaccelerator.debug="0"
        eaccelerator.filter=""
        eaccelerator.shm_max="0"
        eaccelerator.shm_ttl="3600"
        eaccelerator.shm_prune_period="180"
        eaccelerator.shm_only="1"
        eaccelerator.compress="1"
        eaccelerator.compress_level="9"

    php -v output:

        PHP 5.2.12 (cli) (built: Feb 3 2010 00:34:28)
        Copyright (c) 1997-2009 The PHP Group
        Zend Engine v2.2.0, Copyright (c) 1998-2009 Zend Technologies
            with eAccelerator v0.9.6.1, Copyright (c) 2004-2010 eAccelerator, by eAccelerator
            with the ionCube PHP Loader v3.3.20, Copyright (c) 2002-2010, by ionCube Ltd.

    I had to remove the cache directory as I was testing something. I remade it, re-set the permissions, and found that eAccelerator was no longer creating cache files within the folder. I thought it might be down to ownership rights on the folder, so I chown'd it to apache.apache, and this made no difference. I recreated the directory in /var/cache instead and edited php.ini to point to the new cache dir location, chmod'd, chown'd etc., and still eAccelerator is not creating any of the cache files in the directory (it is just empty). Could someone suggest what I might be doing incorrectly here? I've read through numerous pages to try and troubleshoot the issue, to no avail. Any help appreciated.
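
    For what it's worth, this is the sanity check I'm planning next (a sketch; the paths are the ones from my php.ini above, and the CLI can load a different php.ini than Apache, so a phpinfo() page is worth comparing too):

        php -i | grep -i eaccelerator    # confirm which eAccelerator settings PHP actually loaded
        ls -ld /var/cache/eaccelerator   # ownership and permissions on the cache directory
        ps aux | grep httpd | head       # which user the web server is actually running as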

  • I am looking for a tool to measure or detect "unresponsiveness" of a desktop PC

    - by Tom H
    I have a client that provides some server systems to a hospital, and a support ticket was raised that the desktop application was hanging waiting for the server. We did some extensive testing and it's pretty clear that the server is responsive and the network is fine, and that the problem is on the client end (no requests are received during the hang, etc.). We take a look at the desktop machines and they should be fine, so we raise tickets with the software vendor, who says it must be the hardware; the hardware company says it is the software; and so on. Anyway, talking to the nurses, they say that these machines often "hang" for 30 seconds at a time, sometimes during important moments when they need to get data for a patient who is unwell, such as charts and status. So I want to put a client on these machines that would detect arbitrary "unresponsiveness" of the keyboard/mouse and log it for later analysis. Obviously I am wary of suggesting an application that takes resources and makes the problem even worse, so I would be interested to see any tools that would detect these scenarios (is it correct to say that the keyboard interrupts are being discarded?) by looking for the OS discarding the interrupts, or whatever is appropriate here. So go on then, Server Fault, here is your chance to save a life.... ;-)

    Edit: I am starting to think that some of the tools associated with real-time systems might be appropriate, at least as a diagnostic.

  • Powerpoint 2003, change picture

    - by Tom Gullen
    I have a picture in PowerPoint 2003; how do I change the picture without having to delete it and re-add it? I need to keep all the animations, and it's going to take about 5 hours to re-add them, but only about 20 minutes if I can just change the pictures. Or, if there is any way to copy a custom animation set to another picture, that would also be ace.

  • DNS issue on Fedora 12? wget wordpress.org fails where wget www.google.com works

    - by Tom Auger
    I'm administering a Fedora 12 box, but am quite new to networking specifics. Recently one of our WordPress apps hosted on our server has stopped being able to perform its auto-update or auto-download of plugins. Investigating further, I have tried the following:

        $ wget wordpress.org
        --2010-12-17 11:26:50--  http://wordpress.org/
        Resolving wordpress.org... failed: Temporary failure in name resolution.
        wget: unable to resolve host address 'wordpress.org'

    Whereas:

        $ wget www.google.com
        --2010-12-17 11:27:26--  http://www.google.com/
        Resolving www.google.com... 74.125.226.82, 74.125.226.84, 74.125.226.80, ...
        Connecting to www.google.com|74.125.226.82|:80... connected.
        HTTP request sent, awaiting response... 302 Found
        Location: http://www.google.ca/ [following]
        --2010-12-17 11:27:26--  http://www.google.ca/
        Resolving www.google.ca... 173.194.32.104
        Connecting to www.google.ca|173.194.32.104|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: unspecified [text/html]
        Saving to: 'index.html.4'
        [ <=> ] 9,079  --.-K/s  in 0.02s
        2010-12-17 11:27:26 (462 KB/s) - 'index.html.4'

    Interestingly:

        $ ping wordpress.org
        PING wordpress.org (72.233.56.138) 56(84) bytes of data.
        64 bytes from wordpress.org (72.233.56.138): icmp_seq=1 ttl=50 time=81.5 ms
        64 bytes from wordpress.org (72.233.56.138): icmp_seq=2 ttl=50 time=67.3 ms
        ^C
        --- wordpress.org ping statistics ---
        2 packets transmitted, 2 received, 0% packet loss, time 1783ms
        rtt min/avg/max/mdev = 67.361/74.448/81.536/7.092 ms

    and:

        $ nslookup wordpress.org
        Server:   192.168.2.1
        Address:  192.168.2.1#53

        Non-authoritative answer:
        Name:    wordpress.org
        Address: 72.233.56.138
        Name:    wordpress.org
        Address: 72.233.56.139

    nscd has been stopped and flushed. iptables appear to be clean. At this point I have exhausted my limited abilities to diagnose the issue. Can anyone suggest a resolution path?
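
    In case it helps narrow things down: nslookup talks to the DNS server directly, while wget (like the WordPress PHP code) goes through the system resolver and nsswitch, so here is roughly what I intend to compare next (a sketch):

        cat /etc/resolv.conf                 # which nameservers the system resolver uses
        grep hosts /etc/nsswitch.conf        # lookup order (files, dns, ...)
        grep wordpress /etc/hosts            # a stale /etc/hosts entry would override DNS
        getent hosts wordpress.org           # resolves the same way glibc/wget does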

  • Basic IIS7 permissions question

    - by Tom Gullen
    We have a website with a file: www.example.com/apis/httpapi.asp. This file is used by the site internally to make requests joining two systems on the website together (one is Classic ASP, the other ASP.NET). However, we do not want the public to be able to access the file. In IIS 7.5, is there a setting I can use to make this file internal-only? I've tried rewriting the URL for it, but the rewrite is also applied internally, so the scripts stop working because they fetch the rewritten URL. Thanks for any help!

  • Ubuntu 11.04 and OpenLDAP - where is the config?

    - by Tom SKelley
    I've been asked to set up a multi-master LDAP environment on Ubuntu 11.04, instead of a single master server. I cloned the master server and recreated it into two VMs. I am trying to follow the instructions in the OpenLDAP documentation here: http://www.openldap.org/doc/admin24/replication.html and it talks about modifying the cn=config tree within LDAP. The subdirectory tree appears to be there at /etc/ldap/slapd.d/, and a slapcat -b cn=config drops out a load of config information. When I try to connect using a browser and the admin bind credentials:

        ldapsearch -D '<adminDN>' -w <password> -b 'cn=config'

    I get:

        # extended LDIF
        #
        # LDAPv3
        # base <> (default) with scope subtree
        # filter: (objectclass=*)
        # requesting: ALL
        #

        # search result
        search: 2
        result: 32 No such object

    I don't see the config context when I connect via an LDAP browser either. I'm sure I'm missing something, but I can't see what it is!
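
    One thing I still need to try (an assumption on my part, based on how Debian/Ubuntu packages slapd): the cn=config database usually has its own access rules and is typically read as root over the local ldapi socket with the SASL EXTERNAL mechanism, rather than with the directory's admin DN:

        # Read the cn=config tree via ldapi:///; this bypasses the normal admin DN,
        # which often has no access to the config database at all.
        sudo ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config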

  • Why doesn't Linode ever have to shut down for updates?

    - by Tom Marthenal
    I've been using Linode for over a year now, and, unlike some lesser-known VPS hosts I've used, I've never been required to shut down my VPS by Linode. The only restarts have been ones I've initiated. How do they go for years on end without requiring restarts and with no downtime? Isn't downtime inevitable when upgrading some parts of the host system? Do they simply perform as few updates as possible? This isn't meant to be a Linode-specific question; I am only using them as an example because I have experience with them.

  • Can a pool of memcache daemons be used to share sessions more efficiently?

    - by Tom
    We are moving from a one-webserver setup to a two-webserver setup, and I need to start sharing PHP sessions between the two load-balanced machines. We already have memcached installed (and started), so I was pleasantly surprised that I could accomplish session sharing between the new servers by changing only three lines in the php.ini file (session.save_handler and session.save_path).

    I replaced:

        session.save_handler = files

    with:

        session.save_handler = memcache

    Then on the master webserver I set session.save_path to point to localhost:

        session.save_path="tcp://localhost:11211"

    and on the slave webserver I set session.save_path to point to the master:

        session.save_path="tcp://192.168.0.1:11211"

    Job done. I tested it and it works. But... obviously, using memcache means the sessions are in RAM and will be lost if a machine is rebooted or the memcache daemon crashes. I'm a little concerned by this, but I am a bit more worried about the network traffic between the two webservers (especially as we scale up), because whenever someone is load balanced to the slave webserver their session will be fetched across the network from the master webserver. I was wondering if I could define two save_paths so the machines look in their own session storage before using the network. For example:

        Master: session.save_path="tcp://localhost:11211, tcp://192.168.0.2:11211"
        Slave:  session.save_path="tcp://localhost:11211, tcp://192.168.0.1:11211"

    Would this successfully share sessions across the servers AND help performance, i.e. save network traffic 50% of the time? Or is this technique only for failover (e.g. when one memcache daemon is unreachable)?

    Note: I'm not really asking specifically about memcache replication - more about whether the PHP memcache client can peek inside each memcache daemon in a pool, return a session if it finds one, and only create a new session if it doesn't find one in any of the stores. As I'm writing this I'm thinking I'm asking a bit much from PHP, lol...

    Assume: no sticky sessions, round-robin load balancing, LAMP servers.
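
    For context, this is how I've been checking where the sessions actually end up (a sketch; memcached answers plain text-protocol commands on port 11211, so running it on both webservers shows whether sessions are written locally or remotely):

        # Item and hit/miss counters on the local and the remote memcached instance.
        printf 'stats\r\nquit\r\n' | nc localhost 11211   | grep -E 'curr_items|get_hits|get_misses'
        printf 'stats\r\nquit\r\n' | nc 192.168.0.1 11211 | grep -E 'curr_items|get_hits|get_misses'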

  • Decrypting AES files in an Apache module?

    - by Tom H
    I have a client with a security-policy compliance requirement to encrypt certain files on disk. The obvious way to do this is with device-mapper and an AES crypto module. However, the current system is set up to generate individual files that are encrypted. What are my options for decrypting files on the fly in Apache? I see that mod_ssl and mod_session_crypto do encryption/decryption or something similar, but not exactly what I am after. I could imagine that a PerlSetOutputFilter would work with a suitable Perl script configured, and I also see mod_ext_filter, so I could just fork a Unix command and decrypt the file, but they both feel like a hack. I am kind of surprised that there is no mod_crypto available... or am I missing something obvious here? Presumably, resource-wise, the Perl filter is the way to go?
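
    To make the mod_ext_filter idea concrete, the forked command would be something along these lines (a sketch only - the file name, key handling, cipher and mode here are assumptions, not a recommendation):

        # Decrypt one AES-encrypted file to stdout using a key file; an external
        # filter would run a command like this once per request, which is where
        # the resource concern comes from.
        openssl enc -d -aes-256-cbc -in /var/data/report.pdf.enc -pass file:/etc/keys/aes.key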

  • Are there any open source reseller packages?

    - by Tom Wright
    My department has just been given the right/responsibility to manage our own VPS, the idea being that there will be less bureaucracy for the many small web projects we run. Since each project will be managed by a different team, I was planning on approaching it as a shared-hosting model. Are there any free pieces of software that would help automate the provisioning of resources each time a team requests a new project? Most of the projects have identical requirements - basically LAMP - so it is these resources that I would want provisioned (and de-provisioned, if that is a word) automatically. Ideally, there would also be a way to hook it into our LDAP authentication backend, though I could probably make this sort of modification myself if necessary. Since we won't be charging our "clients", we won't need the ability to generate invoices, handle payments, etc.

    EDIT: Sample workflow (roughly sketched in the commands below):
    1. Login authenticated against LDAP
    2. Username checked against admin group (not on central LDAP)
    3. Click 'new project' and enter project name
    4. User created on VPS with project name as username
    5. Apache virtual host created and subdomain (using project name) allocated
    6. FTP & MySQL users created
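
    Per project, the tool would essentially need to do something like this on the VPS (a sketch with made-up names - "demo" and example.com are placeholders - assuming a Debian/Ubuntu-style Apache layout and MySQL; a real tool would need proper password handling and error checking):

        #!/bin/sh
        PROJECT=demo
        PASS=$(openssl rand -base64 12)      # throwaway password for the example

        useradd -m "$PROJECT"                # step 4: system/FTP user named after the project
        mysql -e "CREATE DATABASE \`$PROJECT\`;
                  CREATE USER '$PROJECT'@'localhost' IDENTIFIED BY '$PASS';
                  GRANT ALL ON \`$PROJECT\`.* TO '$PROJECT'@'localhost';"   # step 6: MySQL user

        # step 5: minimal virtual host on a project subdomain
        printf '%s\n' \
          '<VirtualHost *:80>' \
          "    ServerName $PROJECT.example.com" \
          "    DocumentRoot /home/$PROJECT/public_html" \
          '</VirtualHost>' > /etc/apache2/sites-available/$PROJECT.conf
        a2ensite $PROJECT.conf && service apache2 reload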
