Search Results

Search found 41582 results on 1664 pages for 'fault tolerance'.


  • inetmgr crashes after adding the IIS 6 Metabase Compatibility role

    - by Josef
    I added the IIS 6 Metabase Compatibility role to my Server 2008 machine but can no longer launch inetmgr: IISMANAGER_CRASH IIS Manager terminated unexpectedly. Exception: System.NullReferenceException: Object reference not set to an instance of an object. at Microsoft.Web.Management.Host.Shell.ShellApplication.Initialize(Boolean localDevelopmentMode, Boolean resetPreferences) at Microsoft.Web.Management.Host.Shell.ShellApplication.Execute(Boolean localDevelopmentMode, Boolean resetPreferences, Boolean resetPreferencesNoLaunch) Process: InetMgr. Any ideas? In the meantime I uninstalled that role, but I still don't have a working inetmgr (the MMC snap-in doesn't work either).
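
    If the console itself went missing after uninstalling the role, one thing worth trying (a hedged sketch, not a confirmed fix; the feature names below are the ones Server 2008's ServerManagerCmd uses and should be verified with -query first) is re-installing the IIS Management Console feature and only then re-adding the metabase compatibility layer:

        REM Hedged sketch for Server 2008; verify feature names with -query before installing.
        servermanagercmd -query | findstr /i "Web-Mgmt Web-Metabase"
        servermanagercmd -install Web-Mgmt-Console
        REM Re-add IIS 6 Metabase Compatibility afterwards, if it is still required:
        servermanagercmd -install Web-Metabase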

    Read the article

  • Restrict VPN user to Remote Desktop only with Sonicwall

    - by Matt
    Basically I want him to only be able to log onto the VPN in order to use Remote Desktop to use HIS machine. Not surf the internet or do anything like that, but just use the programs on his machine that he doesn't have at home. We use a Sonicwall NSA 220 with their regular VPN client. I can create a user for him, but when I create an access rule it applies to all VPN users. How can I make something like that only apply to ONE user?

    Read the article

  • Can I use the character ñ in a sub-domain?

    - by nute
    We are launching our website in Spanish and are probably going to call it espanol.mydomain.com Since the real spelling is español, ideally we would allow people to type español.mydomain.com. Is that something that is possible today? Can we use this character in domains and sub-domains?
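
    Non-ASCII labels are possible through internationalized domain names (IDN): DNS itself only carries the ASCII "Punycode" form (an xn-- label), and browsers translate between that and the Unicode spelling. A small sketch of checking the encoded form, assuming the idn utility from GNU Libidn is installed:

        # Hedged sketch: convert the Unicode label to the ASCII form that actually goes in DNS.
        idn --idna-to-ascii español.mydomain.com
        # expected output (the record you would create): xn--espaol-zwa.mydomain.com

    Whether your registrar or DNS host accepts the xn-- label is worth confirming with them, but for a subdomain of a domain you already control it is normally just another A/CNAME record in the zone.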

    Read the article

  • .NET 3.5 SP1 installer hangs at around 80%

    - by kuoson
    Hi, I was trying to install .NET 3.5 SP1 on one of our production machines running Windows 2003 SP2. However, the installer just hangs at around 80% with the message "Setup is loading installation components. This may take a minute or two." In fact it has been like that for 30 minutes already, using 0% CPU. I disabled the anti-virus software and then applied the Windows Installer 4.5 update, but no luck; the installer still hangs. Has anyone had this problem before?

    Read the article

  • CHMOD to prevent deletion of a file or directory

    - by Sohnee
    I have some hosting on a Linux server and I have a few folders that I don't ever want to delete. There are sub folders within these that I do want to delete. How do I set the CHMOD permissions on the folders I don't want to delete? Of course, when I say "I don't ever want to delete" - what I mean is that the end customer shouldn't delete them by accident, via FTP or in a PHP script etc. As an example of directory structure... MainFolder/SubFolder MainFolder/Another I don't want "MainFolder" to be accidentally deleted, but I'm happy for "SubFolder" and "Another" to be removed!
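
    For what it's worth, deletion rights in Unix come from write permission on the parent directory (plus the sticky bit), not from the permissions on the item being deleted. A hedged sketch, assuming the customer's FTP/PHP user is not the owner of the directories involved:

        # Protect MainFolder: remove write permission on its parent for group/other.
        chmod 755 /path/to/parent
        # Keep SubFolder and Another deletable: leave MainFolder group-writable.
        chmod 775 /path/to/parent/MainFolder
        # Alternative: the sticky bit lets users create entries inside MainFolder,
        # but only an entry's owner (or root) may delete or rename them.
        chmod +t /path/to/parent/MainFolder

    If the customer's account actually owns these directories, permissions alone won't stop them, since the owner can always chmod the directories back.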

    Read the article

  • Remote RAID Control ESXi Dell PowerEdge 2950 OpenManage

    - by yoyomommy
    I was wondering how one can add a drive into an existing RAID array while ESXi is still running. I have read that you are able to use Dell OpenManage to do this. I have installed OMSA 7.0 on the VMWare ESXi host (5.0 and fully updated) and I've installed OpenManage Essentials on a Windows Server 2008 R2 guest. The issue that I'm having is that OpenManage is unable to see my RAID controller. I have seen videos and photos as parts of guides on how to do this online, so I would assume that the functionality exists and I just have it set up wrong.

    Read the article

  • Setting up Windows 2008 with VPN and NAT

    - by Benson
    I have a Windows 2008 box set up with VPN, and that works quite well. NPS is used to validate the VPN clients, who are able to access the private address of the server once connected. I can't for the life of me get NAT working for the VPN clients, though. I've added NAT as a routing protocol and set the interface on the VPN address pool as private and the other as public, but it still won't NAT connections when I add a route through the VPN server's IP on the client side (route add SomeInternetIp IpOfPrivateInterfaceOnServer). I know I can reach the server's private interface (which happens to be 10.2.2.1) with the Remote Desktop client, so I can't think of any issues with the VPN.
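
    For reference, the fully spelled-out client-side route the question abbreviates would look roughly like this on Windows (the destination address is a placeholder; 10.2.2.1 is the server's private interface from the question). The NAT half still has to be right in RRAS, with the VPN-facing interface marked private and the internet-facing one public:

        REM Hedged sketch, placeholder destination address:
        route add 203.0.113.50 mask 255.255.255.255 10.2.2.1
        route print | findstr 203.0.113.50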

    Read the article

  • mod_rewrite: display subdomain.domain.com but serve domain.com/subdomain/ for SSL

    - by Jeff H.
    I have a website secured by a standard SSL certificate, securing a few different shops under different subdirectories. Ex. domain.com/shop1/ The shops are also accessible via a subdomain e.g. shop1.domain.com. What I'm trying to accomplish: display shop1.domain.com to the user, while keeping all of the actual server calls as domain.com/shop1, so that the secure pages will continue to work properly. (Not sure if I'm using the proper language, exactly, I hope my point is clear.) To be clear: my SSL is working fine, and I don't need help with that, and I don't need or want to purchase a UCC cert. It can't be that difficult for anyone with experience with Apache. (I've spent 3 hours trying to learn about mod_rewrite. It's just not clicking.) I'm on a GoDaddy secure shared server, so please keep in mind that I'm not able to reset the server or anything.
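
    One common pattern for this (a hedged sketch, not tested on GoDaddy's shared hosting specifically) is an internal rewrite in the docroot's .htaccess keyed on the Host header, so a request for shop1.domain.com/anything is served from /shop1/anything without the address bar changing. Note that for https requests the certificate still has to match whatever hostname the browser shows:

        # Hedged .htaccess sketch, assuming the subdomain points at the same document root.
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^shop1\.domain\.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/shop1/
        RewriteRule ^(.*)$ /shop1/$1 [L]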

    Read the article

  • Enabling gzip with .htaccess: why is it hit or miss?

    - by adam-asdf
    I have shared hosting through Justhost. I use the HTML5 Boilerplate .htaccess (have tried other methods from here and there without luck) the compression part is as follows: <IfModule mod_deflate.c> # Force deflate for mangled headers developer.yahoo.com/blogs/ydn/posts/2010/12/pushing-beyond-gzipping/ <IfModule mod_setenvif.c> <IfModule mod_headers.c> SetEnvIfNoCase ^(Accept-EncodXng|X-cept-Encoding|X{15}|~{15}|-{15})$ ^((gzip|deflate)\s*,?\s*)+|[X~-]{4,13}$ HAVE_Accept-Encoding RequestHeader append Accept-Encoding "gzip,deflate" env=HAVE_Accept-Encoding </IfModule> </IfModule> # Compress all output labeled with one of the following MIME-types <IfModule mod_filter.c> AddOutputFilterByType DEFLATE application/atom+xml \ application/javascript \ application/json \ application/rss+xml \ application/vnd.ms-fontobject \ application/x-font-ttf \ application/xhtml+xml \ application/xml \ font/opentype \ image/svg+xml \ image/x-icon \ text/css \ text/html \ text/plain \ text/x-component \ text/xml </IfModule> </IfModule> However, it isn't working—at least I don't think—My home page (html) isn't compressing, the CSS and some of the JS aren't gzipped. It is failing on HTML, CSS and JS. However, some things are (or were, who knows what it will look like when you check) gzipped. My domain is http://adaminfinitum.com/ What is weird is that the (Google) PageSpeed browser extension for Firefox (whatever the current version is [Nov. 2012]) gives me a 95% speed rating (and no warnings about compression), yet YSlow and Chrome developer tools both flag me about gzip, as does a tool I found on here while researching this. To reduce cookies I set up a subdomain on my site and I thought maybe that was it so I added an .htaccess there also, but no luck. To reduce http requests I embedded some of webfonts and images in CSS (HTML5 BP stipulates not to compress images, and apparently '.woff' files are already compressed) so I thought maybe that was it and I spent all day separating and asynchronously loading those portions (via Modernizr.load) but that hasn't helped either...if anything it made it worse due to increasing http requests (I realize speed scores of async resources may be misleading). Researching this, it seems to be a fairly common issue but I haven't found an explanation/solution. I don't think it is a MIME-type issue, I have quadruple checked (and thrice edited) my .htaccess files. My hosting company said they run Apache 2.2.22 and I have looked at everything I can find. What gives?
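
    Whatever the .htaccess says, it helps to test compression the same way the complaining tools do. A quick hedged check from the command line (the CSS path below is a placeholder):

        # Ask for gzip explicitly and inspect the response headers.
        # No "Content-Encoding: gzip" here means mod_deflate is not being applied to that MIME type.
        curl -s -o /dev/null -D - -H "Accept-Encoding: gzip,deflate" http://adaminfinitum.com/ | grep -i "content-encoding"
        curl -s -o /dev/null -D - -H "Accept-Encoding: gzip,deflate" http://adaminfinitum.com/css/style.css | grep -i "content-encoding"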

    Read the article

  • Graphite SQLite3 DatabaseError: attempt to write a readonly database

    - by Anadi Misra
    Running graphite under apache httpd, with slqite database, I have the correct folder permissions [root@liaan55 httpd]# ls -ltr /var/lib | grep graphite drwxr-xr-x. 2 apache apache 4096 Aug 23 19:36 graphite-web and [root@liaan55 httpd]# ls -ltr /var/lib/graphite-web/ total 68 -rw-r--r--. 1 apache apache 65536 Aug 23 19:46 graphite.db syncdb also seems to have gone fine [root@liaan55 httpd]# sudo -su apache bash-4.1$ whoami apache bash-4.1$ python /usr/lib/python2.6/site-packages/graphite/manage.py syncdb /usr/lib/python2.6/site-packages/graphite/settings.py:231: UserWarning: SECRET_KEY is set to an unsafe default. This should be set in local_settings.py for better security warn('SECRET_KEY is set to an unsafe default. This should be set in local_settings.py for better security') /usr/lib/python2.6/site-packages/django/conf/__init__.py:75: DeprecationWarning: The ADMIN_MEDIA_PREFIX setting has been removed; use STATIC_URL instead. "use STATIC_URL instead.", DeprecationWarning) /usr/lib/python2.6/site-packages/django/core/cache/__init__.py:82: DeprecationWarning: settings.CACHE_* is deprecated; use settings.CACHES instead. DeprecationWarning Creating tables ... Creating table account_profile Creating table account_variable Creating table account_view Creating table account_window Creating table account_mygraph Creating table dashboard_dashboard_owners Creating table dashboard_dashboard Creating table events_event Creating table auth_permission Creating table auth_group_permissions Creating table auth_group Creating table auth_user_user_permissions Creating table auth_user_groups Creating table auth_user Creating table django_session Creating table django_admin_log Creating table django_content_type Creating table tagging_tag Creating table tagging_taggeditem You just installed Django's auth system, which means you don't have any superusers defined. Would you like to create one now? (yes/no): yes Username (leave blank to use 'apache'): root E-mail address: [email protected] Password: Password (again): Superuser created successfully. Installing custom SQL ... Installing indexes ... Installed 0 object(s) from 0 fixture(s) bash-4.1$ exit and the local-settings.py file is as follows STORAGE_DIR = '/var/lib/graphite-web' INDEX_FILE = '/var/lib/graphite-web/index' DATABASES = { 'default': { 'NAME': '/var/lib/graphite-web/graphite.db', 'ENGINE': 'django.db.backends.sqlite3', 'USER': '', 'PASSWORD': '', 'HOST': '', 'PORT': '' } } I still get this error [Sat Aug 23 19:47:17 2014] [error] [client 10.42.33.238] File "/usr/lib/python2.6/site-packages/django/db/backends/sqlite3/base.py", line 344, in execute [Sat Aug 23 19:47:17 2014] [error] [client 10.42.33.238] return Database.Cursor.execute(self, query, params) [Sat Aug 23 19:47:17 2014] [error] [client 10.42.33.238] DatabaseError: attempt to write a readonly database not sure what is missing in this configuration
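
    With SQLite, "attempt to write a readonly database" usually means the web server user cannot write either the .db file or the directory holding it (SQLite needs to create a journal file next to the database); on Fedora, SELinux is another common blocker even when the Unix permissions look right. A hedged diagnostic sketch, not a confirmed fix:

        # Can the apache user actually write there?
        sudo -u apache test -w /var/lib/graphite-web/graphite.db && echo "db writable"
        sudo -u apache test -w /var/lib/graphite-web && echo "dir writable (needed for the -journal file)"
        # Is SELinux denying httpd the write?
        getenforce
        ls -Z /var/lib/graphite-web/
        ausearch -m avc -ts recent | tail
        # If SELinux is the blocker, one assumed remedy is relabelling the directory as read/write httpd content:
        # chcon -R -t httpd_sys_rw_content_t /var/lib/graphite-web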

    Read the article

  • Searching for literal "> \" using ack-grep

    - by Stephen Gornick
    I am looking for lines that literally have a greater-than character (a ">") followed by a space followed by a backslash character (a "\"), i.e., a line with this: > \ I thought escaping would allow this, and for the greater-than it does: $ ack-grep "> " returns lines that have "> " in them. But when I try to escape the backslash as well I get: $ ack-grep "> \" ack-grep: Invalid regex '> \': Trailing \ in regex m/> \/
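
    The backslash has to survive both the shell and ack's regex, so it ends up doubled; single quotes (or ack's literal-match option) keep that manageable. A hedged sketch:

        # Match a literal "> \" (greater-than, space, backslash):
        ack-grep '> \\'        # single quotes: ack receives the regex > \\ (an escaped backslash)
        ack-grep "> \\\\"      # double quotes: the shell halves the backslashes before ack sees them
        ack-grep -Q '> \'      # -Q / --literal treats the pattern as a fixed string, no escaping needed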

    Read the article

  • Getting USB boot to work in SmartOS on HP ProLiant N40L

    - by user126579
    I recently downloaded SmartOS and tried running it on my HP ProLiant N40L, but it always fails on boot. After dd'ing the image to the USB stick, I plug it into the internal USB header and turn the machine on. After selecting from GRUB, it displays the following: , bss=0x0 It sits there for 2-4 minutes, then finally boots the OS and displays the following: WARNING: Couldn't read ACPI SRAT table from BIOS. lgrp support will be limited to one group. SunOS Release 5.11 Version joyent_20120614T184600Z 64-bit Copyright (c) 2010-2012, Joyent Inc. All rights reserved. WARNING: kvm: no hardware support After that, it hangs. I've tried this with two different USB sticks. I've seen some mentions on the SmartOS website about people running it on an N40L, booting from USB, so maybe it's just broken hardware? Has anyone gotten this working?

    Read the article

  • Webserver horribly slow, sometimes incredibly fast

    - by dhanke
    i am running a small community ( 6000+ Members ) on a non-virtual 64-bit ubuntu 11.04 system. I am not a Linux-pro, not even advanced, i just tried to setup a webserver, which does nothing special actually. Delivering some dynamic PHP and RoR websites is its task. So it might be that my configuration files do look horrible bad. Also, i might use the wrong vocabulary, so in doubt, please ask. Having a current all-time record of 520 registered users (board-accounts, no system-users) online at same time, average server-load is about 2.0 - 5.0. Meantime (~250 users) average server load value is at about 0.4 - 0.8, sometimes, on some expensive searches a bit higher. everything fine. From time to time however, the load increases up to 120 (120.0, not 12.0 ;) ). In this time, its hard to even connect via SSH, but when i reach the server, and use top/htop/iotop to see whats happening, i cannot identify any process causing high CPU load. iotop tells me about a current reading/writing speed of about approx. 70kb/s, which is quite equal to power-off i think. Memory-Usage is max. at ~ 12GB of 16GB, so swap remains empty. now the odd (at least for me:) waiting some minutes ( since i always get a bit into a panic when this happens, it feels like 5 minutes, but i suppose its more like 20-30 minutes) and the server is back to normal. everything continues as normal. another odd fact: when i run hdparm -tT /dev/sda, i get answer like: /dev/sda: Timing cached reads: 7180 MB in 2.00 seconds = 3591.13 MB/sec Timing buffered disk reads: 348 MB in 3.02 seconds = 115.41 MB/sec when i run the same command while the server is "frozen", the answer is like /dev/sda: <- takes about 5 minutes until this line appears Timing cached reads: 7180 MB in 2.00 seconds = 3591.13 MB/sec <- 5 more minutes Timing buffered disk reads: 348 MB in 3.02 seconds = 115.41 MB/sec <- another 5 minutes so the values are the same, but the quoted time is completely wrong. using time command as prefix also tells me that ~ 15 minutes were used. I searched in dmesg, /var/log/[messages|syslog] - nothing found. /var/log/errors however tells me that: Jul 4 20:28:30 localhost kernel: [19080.671415] INFO: task php5-fpm:27728 blocked for more than 120 seconds. Jul 4 20:28:30 localhost kernel: [19080.671419] "echo 0 /proc/sys/kernel/hung_task_timeout_secs" disables this message. multiple times. now that message does tell me that php5-fpm task was blocked or did block ? - but not if that is the cause or just one of the results of that "freeze". Anyone? to cut the long story short, i dont know where even to start analyzing. So if you can give me any advice by looking at following specs and configs, or ask me to provide more information, i`d be glad. Specs: 6 Core AMD Phenom(tm) II X6 1055T Processor * 16 Gigabyte Ram 2x 1.5 TB Seagate ST1500DL003-9VT16L via SATA 3 via SoftwareRaid (i suppose) Services: (due to service --status-all, those with [ + ]) nginx Webserver 1.0.14 mySQL 5.1.63 Server Ruby on Rails 2.3.11 ( passenger-nginx-module ) php5-fpm 5.3.6-13ubuntu3.7 SSH ido2db Further services: default crontab + nightly backup. syslog-ng Website consists of 2 subdomains, forum. and www. where forum is a phpBB3.x PHP-Board, and www a Ruby on Rails 2.3.11 application (portal). Mini-Note: sometimes i notice that the forum is pretty slow, in contrast to the always-fast (except for this "freeze") portal. Both share the same Database, but the portal is using it read-only. 
The Webserver is nginx, using phusion passenger module to communicate with the ruby-application. Also, for the forum it communicates with php5-fpm via socket: relevant nginx configuration parts ( with comments/questions starting by ; ) ; in case of freeze due to too high Filesystem activity, maybe adding a limit? #worker_rlimit_nofile 50000; user www-data; ; 6 cores, so i read 6 fits. maybe already wrong? worker_processes 6; pid /var/run/nginx.pid; events { worker_connections 1024; } http { passenger_root /var/lib/gems/1.8/gems/passenger-3.0.11; passenger_ruby /usr/bin/ruby1.8; ; the forum once featured a chat, which was working w/o websockets. ; so it was a hell of pull requests (deactivated now, freeze still happening) keepalive_timeout 65; keepalive_requests 50; gzip on; server { listen 80; server_name www.domain.tld; root /var/www/domain/rails/public; passenger_enabled on; } server { listen 80; server_name forum.domain.tld; location / { root /var/www/domain/forum; index index.php; } ; satic stuff to be handled by nginx location ~* ^/style/.+.(jpg|jpeg|gif|css|png|js|ico|xml)$ { access_log off; expires 30d; root /var/www/domain/forum/; } ; now the php magic, note the "backend"-fcgi_pass location ~ .php$ { fastcgi_split_path_info ^(.+\.php)(.*)$; fastcgi_pass backend; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /var/www/domain/forum$fastcgi_script_name; include fastcgi_params; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_intercept_errors on; fastcgi_ignore_client_abort off; fastcgi_connect_timeout 60; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 256 16k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_max_temp_file_size 0; } location ~ /\.ht { deny all; } } ;the php5-fpm socket. i read that /dev/shm/ whould be the fastes place for this. bad idea in general? upstream backend { server unix:/dev/shm/phpfpm; } ... } php5-fpm settings (i changed this values due to php5-fpm error log messages higher and higher.. (freeze-problem was there before as well)* listen = /dev/shm/phpfpm user = www-data group = www-data pm = dynamic ; holy, 4000! well, shinking this value to earth-level gave me ; 100s of 502 bad gateway commands. this values were quite stable. ; since there are only max 520 users online i dont get it, why i would need ; as many children as configured here. due to keep-alive maybe? ; asking questions is easier for me since restarting server will make ; my community-members angry ;) pm.max_children = 4000 pm.start_servers = 100 pm.min_spare_servers = 50 pm.max_spare_servers = 150 pm.max_requests = 10 pm.status_path = /status ping.path = /ping ping.response = pong slowlog = log/$pool.log.slow ;should i use rlimit? ;rlimit_files = 1024 chdir = / mysql/my.cnf [client] port = 3306 socket = /var/run/mysqld/mysqld.sock [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking bind-address = 127.0.0.1 key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 myisam-recover = BACKUP ; high number, but less gives some phpBB errors. max_connections = 450 table_cache = 512 ; i read twice the cpu cores, bad? 
thread_concurrency = 12 join_buffer_size = 2084K concurrent_insert = 3 query_cache_limit = 64M query_cache_size = 512M query_cache_type = 1 log_error = /var/log/mysql/error.log log_slow_queries = /var/log/mysql/mysql-slow.log long_query_time = 2 expire_logs_days = 10 max_binlog_size = 100M low_priority_updates=1 [mysqldump] quick quote-names max_allowed_packet = 16M [isamchk] key_buffer = 16M !includedir /etc/mysql/conf.d/ I used smartctl already, hdds seem to be fine. /proc/mdstatus quotes: Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] md3 : active raid1 sda3[1] 1459264192 blocks [2/1] [_U] md1 : active raid1 sda1[0] 3911680 blocks [2/1] [U_] unused devices: ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 127727 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 127727 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited I quote some questions in my configuration files, these are not (intentional) directly problem-related, but would be nice for me to know wether they are indeed questionable or done right. One additional Fact: my MYSQL-database is at 12GB size. i dont know if that does matter, but mytop sometimes shows me 4-5 seconds long insert queries, some are 20-30 seconds long. Its just a feeling that i am unable to prove (because i dont know how), but when i disable the database, the freeze seems not to happen. Example: i created a dummy rails application to see the development log. the app made some sql-queries, reads and inserts. the log quite often was like: DbTest Load (0.3ms) SELECT * FROM `db_test` WHERE (`db_test`.`id` = 31722) LIMIT 1 SQL (0.1ms) BEGIN DbTest Update (0.3ms) UPDATE `db_test` SET `updated_at` = '2012-07-04 23:32:34' WHERE `id` = 31722 - now the log stands still for 5-60 seconds. SQL (49.1ms) COMMIT - SQL-Update time in the log does not include freeze time Rendering test/index Completed in 96ms (View: 16, DB: 59) | 200 OK [http://localhost:9000/test] Bad part is: this mini-freeze here only happens from time to time as well. note: meanwhile i cannot even upload files via scp. I currently feel like running form bad to worse and back by googling for my server-problem due to immense lack of knowledge regarding server configurations. It still makes me wonder, why those problems even appear, since 250 users a time is not such a high amount, right? So my questions: whats wrong and how to fix? ;) or: what information can i provide to make the situation more clear? can you point at some critical bad configuration-line which i should consider to catch up in the documentation? are there any tools i can run to see some possible bottlenecks? any further advice? (next to: "pay someone who knows what he does" - its a private project, server costs enough already. :)) Thanks for your time and help. Best Regards, Daniel P.S.: i renamed the configfiles to domain.tld since i dont want to have any % more load to the server until its fixed. might be a exaggeratedly thought.. P.P.S: if i asked a complete duplicate question, sorry. my search results seemed to be quite specific in their own way.
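
    For the "are there any tools" part, a hedged diagnostic sketch for the next freeze. With symptoms like these (low CPU in top, hung-task warnings, hdparm stalling) the usual suspects are I/O wait and the block layer rather than any single process, and the [2/1] arrays quoted from /proc/mdstat are worth a look as well:

        vmstat 1            # watch the "wa" (I/O wait) and "b" (blocked processes) columns
        iostat -x 1         # per-device utilisation and await times (sysstat package)
        dmesg | tail -50    # more "blocked for more than 120 seconds" traces, controller errors
        cat /proc/mdstat    # [2/1] [_U] means a RAID1 member is missing or rebuilding
        mdadm --detail /dev/md3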

    Read the article

  • Ghost application error

    - by yaar zeigerman
    Backstory for better understanding: I have a ghost image made by someone else, and today someone I work with made a new ghost image in the same folder as the regular ghost I work with. When I try to run the ghost now I get a 1808: Ghost Decompression Error, and after aborting I get this message: "Application Error 19225: Ghost has detected corruption in the image file. Please perform an integrity check on the image. If this problem persists, contact Symantec Technical Support at http://service.symantec.com". I would like to know if making the new ghost could have ruined the old ghost, and if not, where did my problem come from and how do I fix it?

    Read the article

  • What's the largest message size modern email systems generally support?

    - by Phil Hollenback
    I know that Yahoo and Google mail support 25MB email attachments. I have an idea from somewhere that 10MB email messages are generally supported by modern email systems. So if I'm sending an email between two arbitrary users on the internet, what's the safe upper bound on message size? 1MB? 10MB? 25MB? I know that one answer is 'don't send big emails, use some sort of drop box'. I'm looking for a guideline if you are limited to only using regular smtp email.

    Read the article

  • A record for www

    - by Manjoor
    My present DNS configuration for my website's A records is as below:
        Name                 Value
        example.com          67.45.xx.xx
        www.example.com      67.45.xx.xx
    In the above configuration a user can open the website either by example.com or by www.example.com. One of my SEO team members argues for a single point of access. According to him, a search engine's crawler sees 2 different names with the same content. That is not good, and we should configure the domain in such a way that if a user opens example.com then the browser automatically gets redirected to www.example.com. Now I have 2 questions: Is the above argument valid? If yes, what changes do I need to make in my DNS?
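
    If the goal is what the SEO person describes, DNS alone cannot do it: DNS has no concept of an HTTP redirect, so both names keep their records and the web server answers requests for example.com with a 301 to www.example.com. A hedged sketch assuming Apache with mod_rewrite; other servers have their own equivalent:

        # .htaccess sketch: permanently redirect the bare domain to www.
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
        RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]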

    Read the article

  • How do I make kdump use a permissible range of memory for the crash kernel?

    - by Philip Durbin
    I've read the Red Hat Knowledgebase article "How do I configure kexec/kdump on Red Hat Enterprise Linux 5?" at http://kbase.redhat.com/faq/docs/DOC-6039 and http://prefetch.net/blog/index.php/2009/07/06/using-kdump-to-get-core-files-on-fedora-and-centos-hosts/ The crashkernel=128M@16M kernel parameter works fine for me in a RHEL 6.0 beta VM, but not on the RHEL 5.5 hosts I've tried. dmesg shows me: Memory for crash kernel (0x0 to 0x0) notwithin permissible range disabling kdump Here's the line from grub.conf: kernel /vmlinuz-2.6.18-194.3.1.el5 ro root=/dev/md2 console=ttyS0,115200 panic=15 rhgb quiet crashkernel=128M@16M How do I make kdump use a permissible range of memory for the crash kernel?

    Read the article

  • Trying to install ffmpeg-php and having installation issues.

    - by dallasclark
    I've installed ffmpeg successfully using the ffmpeginstaller 3 series (http://www.ffmpeginstaller.com/download). ffmpeg is working fine without any known issues with bash. The ffmpeginstaller is meant to install ffmpeg-php but it cannot be found and I receive an error when I execute php -v PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/ffmpeg.so' - /usr/lib64/php/modules/ffmpeg.so: cannot open shared object file: No such file or directory in Unknown on line 0 Looking at the '/usr/lib64/php/modules/' folder, it doesn't contain the ffmpeg.so file. I've tried to install ffmpeg-php manually but I receive the following error checking for ffmpeg headers... configure: error: ffmpeg headers not found. Make sure you've built ffmpeg as shared libs using the --enable-shared option Should I install ffmpeg with series 4 or 5 of ffmpeginstaller or does someone know how to fix this issue? Thanks in advance ! System Specs cat /etc/redhat-release CentOS release 5.5 (Final) cat /proc/version Linux version 2.6.18-028stab068.5 (root@rhel5-64-build) (gcc version 4.1.2 20070626 (Red Hat 4.1.2-14)) #1 php -v PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/ffmpeg.so' - /usr/lib64/php/modules/ffmpeg.so: cannot open shared object file: No such file or directory in Unknown on line 0 PHP 5.2.13 (cli) (built: Mar 2 2010 18:08:48) Copyright (c) 1997-2010 The PHP Group Zend Engine v2.2.0, Copyright (c) 1998-2010 Zend Technologies Any other details you need, just let me know.
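
    The configure error is ffmpeg-php reporting that it cannot find shared ffmpeg libraries and headers. The usual sequence (a hedged sketch; prefixes and source directory names are assumptions, and it is independent of which ffmpeginstaller series you use) is to rebuild ffmpeg with --enable-shared, refresh the linker cache, then build the extension with phpize and point php.ini at the resulting ffmpeg.so:

        cd ffmpeg-source/
        ./configure --enable-shared --prefix=/usr/local
        make && make install
        ldconfig                               # pick up the new shared libraries

        cd ../ffmpeg-php-source/
        phpize
        ./configure && make && make install    # should place ffmpeg.so in the PHP modules dir

        # enable it in php.ini (or a conf.d snippet): extension=ffmpeg.so
        php -m | grep -i ffmpeg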

    Read the article

  • collectd does not work

    - by bery
    I have installed collectd-5.0.0 on Fedora12 server and would like to run its service for receiving data from clients. I have enabled network plugin and rddtool plugin as commented: collectd.conf in server: BaseDir "/opt/collectd/var/lib/collectd" LoadPlugin "logfile" LoadPlugin network LoadPlugin rrdtool <Plugin network> Listen "192.168.8.37" "25826" </Plugin> collectd.conf in client: LoadPlugin logfile LoadPlugin cpu LoadPlugin network LoadPlugin memory <Plugin network> Server"192.168.8.37" "25826" </Plugin> collectd.log in server: [2011-08-03 02:36:04] Exiting normally. [2011-08-03 02:36:04] rrdtool plugin: Shutting down the queue thread. [2011-08-03 02:36:04] network plugin: Stopping receive thread. [2011-08-03 02:36:04] network plugin: Stopping dispatch thread. [2011-08-03 02:37:11] Initialization complete, entering read-loop. collectd.log in client: [2011-08-02 17:31:44] Initialization complete, entering read-loop. results thst execute netstat on server: netstat -ulpn | grep 25826 udp 0 0 192.168.8.37:25826 0.0.0.0:* 4744/collectd problem: but there is noting in "/opt/collectd/var/lib/collectd/" on ser yes,I move the port number of "25826" as your propose(But I think this is the default port for coolectd).there is no rdd files recived on server. collectd.log in client collectd [2011-08-03 10:01:36] plugin_read_thread: Handling memory'. [2011-08-03 10:01:36] plugin_read_thread: Handlingcpu'. [2011-08-03 10:01:36] plugin_dispatch_values: time = 1312380096.431; interval = 10.000; host = uml; plugin = memory; plugin_instance = ; type = memory; type_instance = used; [2011-08-03 10:01:36] plugin_dispatch_values: time = 1312380096.431; interval = 10.000; host = uml; plugin = cpu; plugin_instance = 0; type = cpu; type_instance = user; [2011-08-03 10:01:36] uc_update: uml/memory/memory-used: ds[0] = 280412160.000000 [2011-08-03 10:01:36] plugin: plugin_write: Writing values via network. [2011-08-03 10:01:36] uc_update: uml/cpu-0/cpu-user: ds[0] = 0.100008 [2011-08-03 10:01:36] plugin: plugin_write: Writing values via network. [2011-08-03 10:01:36] plugin_dispatch_values: time = 1312380096.431; interval = 10.000; host = uml; plugin = memory; plugin_instance = ; type = memory; type_instance = buffered; [2011-08-03 10:01:36] plugin_dispatch_values: time = 1312380096.431; interval = 10.000; host = uml; plugin = cpu; plugin_instance = 0; type = cpu; type_instance = nice; [2011-08-03 10:01:36] uc_update: uml/memory/memory-buffered: ds[0] = 344182784.000000 [2011-08-03 10:01:36] plugin: plugin_write: Writing values via network. [2011-08-03 10:01:36] uc_update: uml/cpu-0/cpu-nice: ds[0] = 0.000000 [2011-08-03 10:01:36] plugin: plugin_write: Writing values via network. [2011-08-03 10:01:36] network plugin: flush_buffer: send_buffer_fill = 1340 [2011-08-03 10:01:36] network plugin: network_send_buffer: buffer_len = 1340 ... [2011-08-03 10:01:36] plugin_read_thread: Next read of the cpu plugin at 1312380106.429064774. collectd.log in server collectd: [2011-08-03 20:18:08] type = network [2011-08-03 20:18:08] type = rrdtool [2011-08-03 20:18:08] network plugin: sockent_open: node = 192.168.8.37; service = 25826; [2011-08-03 20:18:08] fd = 3; calling bind' [2011-08-03 20:18:08] Done parsing/opt/collectd//share/collectd/types.db' [2011-08-03 20:18:08] interval_g = 10; [2011-08-03 20:18:08] timeout_g = 2; [2011-08-03 20:18:08] hostname_g = localhost.localdomain; [2011-08-03 20:18:08] Initialization complete, entering read-loop. It looks like, data is sending but doesn't be recived. Where is the mistake?
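
    Before changing the configs further, it is worth confirming that the client's UDP packets actually reach the collectd server and that the rrdtool plugin has somewhere it can write. A hedged diagnostic sketch, addresses taken from the question:

        # On the server (192.168.8.37): do packets from the client arrive at all?
        tcpdump -ni any udp port 25826
        # Is anything blocking them?
        iptables -L -n | grep 25826
        # Where the rrdtool plugin should be writing (BaseDir from collectd.conf):
        ls -lR /opt/collectd/var/lib/collectd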

    Read the article

  • What's the risk of running a Domain Controller so that it is accessible from the internet?

    - by Adrian Grigore
    I have three remote dedicated web servers at different web hosts. Adding them to a common domain would make a lot of administration tasks much easier. Since two of the servers are running Windows 2008 R2 Standard, I thought about promoting them to domain controllers in order to set up the Windows domain. There's another thread at Serverfault that recommends this. At the same time I've read many times on different websites that this is not a good idea, because a domain controller should always be behind a firewall on a LAN. But I can't set up something like that because I don't have a LAN with a static IP accessible from the internet. In fact I don't even have a Windows server in my LAN. What I have not found out is why exposing a DC to the internet would be a bad idea. The only risk I can see is that if someone penetrates one of my web servers, it should be much easier to penetrate the others as well. But as far as I can see that's the worst-case scenario, since I am only joining my web servers to that domain, not any computers from my local network. Is this the only downside, or does it also make it easier to penetrate one of my web servers in the first place? Thanks, Adrian

    Read the article

  • I can't send email from my server to gmail addresses

    - by brianegge
    I'm using postfix, and have setup spf, dkim, and domainkeys. I can get my email to go to Yahoo, but not gmail. Here's the headers from an email send to Yahoo. Yahoo reports the email as domain key verified. X-Apparently-To: brianegge at yahoo.com via 68.142.206.167; Sat, 20 Mar 2010 05:29:19 -0700 Return-Path: <domains at theeggeadventure.com> X-YahooFilteredBulk: 67.207.137.114 X-YMailISG: x7_Rl9EWLDuugoqPcORhih0FeQMOaIIpz4qfuu9ttx1xbo3uKI2kz.CLUy2cJ1BTtHAwuJtrsGRsveHIx.Dx95avNGlPPGWy_cSpnEwWLXGxBciO.YgtSQxdURQiWLCLvbHej0QPjQIHFjAFjdeGhJd2Y8NgTW1wcExq45Sb7LMlOGvtGMjSQuc8QazwXUxpZrQbIxgSQUTmzQO1x30xaZ2Us6TQTab7Wpya6OgAX.emKOM3phfS5kfhYj9FLQ.qi32sFNWnAoFdVK596OTP2F63PAJOVM5qPsM2jIAbJylIBmnj94LO7hOVr3KOS6XLtCPRn2Oe X-Originating-IP: [67.207.137.114] Authentication-Results: mta1055.mail.mud.yahoo.com from=theeggeadventure.com; domainkeys=pass (ok); from=theeggeadventure.com; dkim=pass (ok) Received: from 127.0.0.1 (EHLO mail.theeggeadventure.com) (67.207.137.114) by mta1055.mail.mud.yahoo.com with SMTP; Sat, 20 Mar 2010 05:29:19 -0700 Received: by mail.theeggeadventure.com (Postfix, from userid 1003) id BB5B01C16A4; Sat, 20 Mar 2010 12:29:16 +0000 (UTC) DomainKey-Signature: a=rsa-sha1; s=2010; d=theeggeadventure.com; c=simple; q=dns; b=JHbK9VhqyQTfpQFqaXxJrKpEG9h9H0IZ0LdWoBooJEA7hv3SYWmFUtyE247EuwoaG gzApKJ1DuRhwESZ7PswrbzuaUL8poAUO8LmMvZ+OqnDolgNSJUYWu0FcO+fe3H4m9ZD grkj0xMpHw+uFjXV4plKO+sa8olJXJAmP+9cMEo= X-DKIM: Sendmail DKIM Filter v2.8.2 mail.theeggeadventure.com BB5B01C16A4 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=theeggeadventure.com; s=2010; t=1269088156; bh=bUlMldcnzFCmCmNT8qjpRl6fiY1YyjiZiC9jhCXASOw=; h=Subject:To:Message-Id:Date:From; b=EVNolTlh4Gch5/HIrrHaRQvcApl7wkB42gB44NsPcLZD2QrhuOvnhanhnEB4UbV0e A+3dAOjhX7LKzgGrn11jXNTiEjNX1vQDsX3HyG0fNra73aWiGTzr1nHJfnuEJ7Ph0j 5tp0HRL5jjikD1XJcvmsYzTpT22mxuz60HXYRB1s= Subject: cron To: <brianegge at yahoo.com> X-Mailer: mail (GNU Mailutils 1.2) Message-Id: <[email protected]> Date: Sat, 20 Mar 2010 12:29:16 +0000 (UTC) From: This sender is DomainKeys verified [email protected] (domains) View contact details Content-Length: 818 When I send to gmail, I see the following in my server log, but the message doesn't even reach my spam folder. Mar 20 12:59:12 Everest postfix/pickup[27802]: C81C61C16A4: uid=1000 from=<egge> Mar 20 12:59:12 Everest postfix/cleanup[27847]: C81C61C16A4: message-id=<[email protected]> Mar 20 12:59:13 Everest postfix/qmgr[27801]: C81C61C16A4: from=<[email protected]>, size=2784, nrcpt=1 (queue active) Mar 20 12:59:14 Everest postfix/smtp[27849]: C81C61C16A4: to=<brianegge at gmail.com>, relay=gmail-smtp-in.l.google.com[209.85.223.24]:25, delay=2.1, delays=0.39/0.28/0.13/1.3, dsn=2.0.0, status=sent (250 2.0.0 OK 1269089954 32si4566750iwn.51) Mar 20 12:59:14 Everest postfix/qmgr[27801]: C81C61C16A4: removed I've send to email to test services, and the report everything verifies ok. I've also checked all the RBL lists, and I'm not on any of them.
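
    The log above shows Gmail's MX accepting the message (status=sent, 250 2.0.0 OK), so the drop is happening after acceptance; the next things to rule out are the published DNS records themselves. A hedged sketch of the lookups, with the domain and selector taken from the headers above:

        dig +short TXT theeggeadventure.com                   # SPF record should cover 67.207.137.114
        dig +short TXT 2010._domainkey.theeggeadventure.com   # DKIM public key for selector "2010"
        dig +short -x 67.207.137.114                          # reverse DNS for the sending address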

    Read the article

  • How to disable all bounce back email in exim 4.69

    - by liame
    I have set up an email server to send out solicited newsletters. There should be no "regular" users of this server, so it is not desirable to send bounce notifications back to the sender, especially since I am tracking bounces myself by parsing the log files periodically. What I want is to unconditionally prevent exim from ever sending a bounce notification email back to a sender. How can I do this? Thank you! (I accidentally posted this to superuser before posting it here; disregard that if you come across it.) What I want is an email server that will accept all incoming emails, deliver them accordingly (that is, remotely or locally) and not send a bounce notification to the sender upon a bounce. I log bounces myself, in a database. The only function bounce messages have in my setting is to waste resources and bandwidth. I need to send emails fast; using exiwhat during a run, I see a significant number of deliveries to [email protected]. I could potentially increase my email throughput by 10~20% if all bounce emails were eliminated.
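
    One way this is commonly handled in Exim (a hedged sketch; the router name and its placement in the configure file are assumptions, and discarding mail this way is exactly as irreversible as it sounds) is a redirect router near the top of the routers section that matches the null envelope sender used by bounce messages and sends them to :blackhole::

        # Hedged Exim router sketch; "senders = :" matches the empty (null) envelope sender.
        discard_bounces:
          driver = redirect
          senders = :
          data = :blackhole: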

    Read the article

  • PostgreSQL server will not start

    - by Claudiu
    I'm on Windows 7. I restarted my computer. I then tried to connect to the database and got an error. I don't remember which one in particular but it was some connection issue. I decided to try to restart the server, so I clicked on "Restart server" from the start menu. This blocked. After a few minutes I killed the process and tried again, only to get a "The service is starting or stopping. Please try again later." message. I rebooted the computer again, tried to start again, and got the same error. I killed the pg_ctl process and tried starting it manually, but that didn't work either: C:\Users\DrClaud>cscript "C:\Program Files\PostgreSQL\8.3\scripts\serverctl.vbs" start wait Microsoft (R) Windows Script Host Version 5.8 Copyright (C) Microsoft Corporation. All rights reserved. The PostgreSQL Server 8.3 service is starting................................... ....................................... The PostgreSQL Server 8.3 service could not be started. The service did not report an error. More help is available by typing NET HELPMSG 3534. The start command returned an error (2) Any ideas?
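
    When the service wrapper gives no useful error, running pg_ctl by hand and reading the server log usually surfaces the real cause (a stale postmaster.pid after the unclean shutdown, a port already in use, shared memory settings). A hedged sketch; paths assume the default 8.3 install, and the command should be run as the account the service uses:

        REM Start the cluster directly and wait for the real error message:
        "C:\Program Files\PostgreSQL\8.3\bin\pg_ctl.exe" start -w -D "C:\Program Files\PostgreSQL\8.3\data"
        REM Check the server logs written by the logging collector:
        dir "C:\Program Files\PostgreSQL\8.3\data\pg_log"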

    Read the article
