Search Results

Search found 13697 results on 548 pages for 'linking errors'.

Page 391/548

  • Exchange 2010: Replication Service Still Trying to Replicate Deleted Mailbox Store

    - by ThaKidd
    In advance, thank you for your opinions! I just migrated from Server/Exchange 2003 to Server 2008 SR2 running Exchange 2010. I had an extra mailbox store that appeared with some system mailboxes in it. I used the EMS to move those mailboxes over and then deleted the store out of the EMC. Since then, every so often I get an error in Event Viewer:

        Source: MSExchangeRepl
        ID: 4098
        Error: The Microsoft Exchange Replication service couldn't find a valid configuration for database '5f012f40-3bad-4003-a373-dbc0ffb6736f' on server 'EXCHSERVER'. Error: (nothing after this)

    I can confirm that the above GUID is the mailbox store that I deleted. No other Exchange errors occur. How can I tell Exchange Replication to ignore this store? Setup: one Exchange Server 2003 transitioned over to 2010, no other Exchange servers. Is there a way to fix this? Do I need to change a setting to stop replication? I plan to add a second Exchange server in the next few days, so stopping replication would be a bad thing. Thanks again in advance. Jason

    Read the article

  • Why apache doesn't restart after configuring SSL?

    - by poz2k4444
    I've installed apache2 and then configured it to work with SSL following this and this tutorial. The problem comes when I try to restart the service; the following error is thrown:

        (98)Address already in use: make_sock: could not bind to address 0.0.0.0:443
        no listening sockets available, shutting down
        Unable to open logs

    The output of netstat -anp | grep 443 just displays firefox listening and nothing else. How could I solve this and get the service running? The output of ps -Af | grep <firefox PID> is:

        root 1949    1 11 18:42 tty1 00:20:55 /opt/firefox/firefox-bin
        root 2025 1949  4 18:43 tty1 00:08:39 /opt/firefox/plugin-container /root/.mozilla/plugins/libflashplayer.so -greomni /opt/firefox/omni.ja 1949 true plugin

    After closing firefox and then checking again for port 443, the output is:

        tcp  0  0 10.32.208.179:38923  74.125.139.155:443  TIME_WAIT  -
        tcp  0  0 10.32.208.179:45706  74.125.139.113:443  TIME_WAIT  -
        tcp  0  0 10.32.208.179:40456  74.125.139.156:443  TIME_WAIT  -
        tcp  0  0 10.32.208.179:56823  69.171.227.62:443   FIN_WAIT2  -
        unix 3 [ ] STREAM CONNECTED 12443 1721/dbus-daemon @/tmp/dbus-8ee35rmOOS

    Looking at the error logs (these entries are not from the time I'm doing this; they are the last errors recorded):

        [Tue Oct 02 18:41:54 2012] [error] Init: Unable to read server certificate from file /etc/apache2/ssl/sever.crt
        [Tue Oct 02 18:41:54 2012] [error] SSL Library Error: 218529960 error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag
        [Tue Oct 02 18:41:54 2012] [error] SSL Library Error: 218595386 error:0D07803A:asn1 encoding routines:ASN1_ITEM_EX_D2I:nested asn1 error
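
    A few shell checks can help narrow this down. This is only a rough diagnostic sketch; the certificate path is taken from the error log quoted above, so adjust it to whatever your SSLCertificateFile actually points at:

        # See which process, if any, is really bound to port 443
        sudo netstat -tlnp | grep ':443'
        sudo lsof -i :443            # alternative view, shows PID and command name

        # Look for a stray Apache instance that never released the socket
        ps -ef | grep apache2

        # Verify the certificate file is a readable PEM certificate; the ASN.1
        # "wrong tag" errors usually mean the file is empty, truncated, or in the
        # wrong format (note the log references "sever.crt", not "server.crt")
        sudo openssl x509 -in /etc/apache2/ssl/sever.crt -noout -text

        # Validate the configuration before trying to restart again
        sudo apachectl configtest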

    Read the article

  • Citrix client slow to launch

    - by user706837
    Was wondering if anyone else has experienced the Citrix client being very slow to launch. While I'm a Windows SA by trade, I consider myself Novice+ on Linux, but I doubt that's the problem. This is the simple scenario:

        1. Log in to the Citrix server to work from home.
        2. Click on the published application; this typically starts the local Citrix client.
        3. The Citrix client should start and you're off.

    The problem is between #2 and #3: I click on the application and 8 out of 9 times there is a 60-second delay and then I get an SSL connection error. I suspect this error is misleading since the connection took too long to open, but I don't know how to prove it (or fix it). I'm able to launch wfcmgr manually without errors, so this leads me to believe the Citrix client is installed correctly. I even leave it running thinking this may help, but I don't see a difference with or without it running first. The only times I'm able to connect successfully are when the Citrix client starts up a few seconds after clicking on the application. I've searched online for articles that might help and tried a number of fixes without much difference. I even tried "ln -sf /dev/urandom /dev/random" as suggested by this article, but no dice: http://forums.citrix.com/message.jspa?messageID=1381276

    My system (specs that may be relevant): Sony VAIO laptop VGN-NW270F, Linux Mint 11.04. The problem occurs using both Firefox and Chrome. Any help would be appreciated; I'm just trying to either find an answer or guidance on how to determine why it's taking so long to launch the Citrix client. Thanks
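
    Since the failure looks timing/SSL related, a couple of shell checks may help pinpoint where the 60 seconds go. This is a rough sketch, not a known fix: the gateway hostname and the .ica file path are placeholders, and the wfica path assumes the default ICAClient install location.

        # Time the TLS handshake to the Citrix gateway directly
        time openssl s_client -connect citrix.example.com:443 < /dev/null

        # Trace the client while it stalls to see whether it is stuck on DNS,
        # a certificate lookup, or a read from the network
        strace -f -tt -o /tmp/ica.trace /usr/lib/ICAClient/wfica /path/to/launch.ica
        grep -E 'connect|ETIMEDOUT|EAGAIN' /tmp/ica.trace | tail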

    Read the article

  • Problems configuring DB2 CLI/ODBC System DSN ODBC Data Source Administrator

    - by Komyg
    I am trying to create a System DSN ODBC connection to a DB2 9.5 database, but I am running into a very strange problem. I've looked around the internet and found the following page with some instructions on how I should proceed: http://www.ryslander.com/how-to-install-and-configure-db2-odbc-driver/. I followed these instructions and I am able to create a new System DSN, however when I try to configure it, it seems as if my configuration changes don't stick at all. For example, when I click on the "Configure" button on my System DSN and add a TCP/IP protocol configuration on the "Advanced Settings" tab and click "OK", no errors appear, but when I click on "Configure" again my TCP/IP setting has vanished. The same happens to all my other settings, such as database name, username, password etc. Could you help me figure out what I am doing wrong? Note: my user is in the administrator group and I am using Windows Server 2008 R2 Enterprise x64.

    UPDATE: I managed to create a User DSN and connect to the database. However, the problem with the System DSN remains.

    Read the article

  • mysql.proc has gone corrupt. How can I fix it?

    - by Metalcoder
    I have a server running Debian 5.0 and MySQL. Suddenly MySQL stopped working, and after many attempts to fix it I decided to reinstall it. I installed MySQL 5.1.63, and when started it goes into safe mode. I typed a few commands, and when I executed mysql_upgrade as root, it complained:

        ...
        Running 'mysql_fix_privilege_tables'...
        ERROR 1548 (HY000) at line 1111: Cannot load from mysql.proc. The table is probably corrupted
        ERROR 1064 (42000) at line 1112: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'sqlstate 'HY000' set message_text='Unexpected content found in the performance_s' at line 1
        ERROR 1548 (HY000) at line 1125: Cannot load from mysql.proc. The table is probably corrupted
        FATAL ERROR: Upgrade failed

    I checked the mysql.proc table, and its comment column was slightly different from my backup.

        -- My backup says:
        `comment` char(64) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '',
        -- But it was:
        `comment` text CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,

    So I restored my mysql database backup, and now they all match, but mysql_upgrade still triggers the same errors. I also tried to check and repair the mysql.proc table, but had no success.
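
    A minimal repair sketch, assuming the corruption is limited to mysql.proc; if the table definition itself still has an older layout, it is usually mysql_upgrade that has to rebuild it:

        # Check and repair the system table directly
        mysqlcheck -u root -p --check  mysql proc
        mysqlcheck -u root -p --repair mysql proc

        # Or, from the mysql client:
        #   CHECK TABLE mysql.proc;
        #   REPAIR TABLE mysql.proc USE_FRM;

        # Then force the upgrade to run again even if it thinks it already completed
        mysql_upgrade -u root -p --force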

    Read the article

  • User can't SFTP after chroot

    - by Dauntless
    Ubuntu 10.04.4 LTS. I'm trying to chroot the user 'sam'. According to all the tutorials out there this should work, but apparently I'm still doing something wrong.

    The user:

        sam:x:1005:1006::/home/sam:/bin/false

    I changed /etc/ssh/sshd_config like this (at the bottom of the file):

        #Subsystem sftp /usr/lib/openssh/sftp-server
        # CHROOT JAIL
        Subsystem sftp internal-sftp
        Match group users
            ChrootDirectory %h
            ForceCommand internal-sftp
            AllowTcpForwarding no

    I added sam to the users group:

        $ groups sam
        sam : sam users

    I changed the permissions for sam's home folder:

        $ ls -la /home/sam
        drwxr-xr-x 11 root root  4096 Sep 23 16:12 .
        drwxr-xr-x  8 root root  4096 Sep 22 16:29 ..
        drwxr-xr-x  2 sam  users 4096 Sep 23 16:10 awstats
        drwxr-xr-x  3 sam  users 4096 Sep 23 16:10 etc
        ...
        drwxr-xr-x  2 sam  users 4096 Sep 23 16:10 homes
        drwxr-x---  3 sam  users 4096 Sep 23 16:10 public_html

    I restarted ssh and now sam can't log in with SFTP. The session is created, but also closed immediately:

        Sep 24 12:55:15 ... sshd[9917]: Accepted password for sam from ...
        Sep 24 12:55:15 ... sshd[9917]: pam_unix(sshd:session): session opened for user sam by (uid=0)
        Sep 24 12:55:16 ... sshd[9928]: subsystem request for sftp
        Sep 24 12:55:17 ... sshd[9917]: pam_unix(sshd:session): session closed for user sam

    Cyberduck says "Unexpected end of sftp stream." and other clients give similar errors. What did I forget / what is going wrong? Thanks!
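
    A short diagnostic sketch, assuming OpenSSH's usual chroot rules (every component of the chroot path must be root-owned and not group- or world-writable); the spare port number is arbitrary:

        # Show owner/permissions of every path component sshd will check
        namei -l /home/sam

        # Run a second sshd in debug mode on a spare port to see the exact rejection reason
        sudo /usr/sbin/sshd -d -p 2222

        # ...and from another terminal, connect against it with verbose output
        sftp -vvv -oPort=2222 sam@localhost

        # Keep an eye on the auth log while testing
        sudo tail -f /var/log/auth.log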

    Read the article

  • solr php extension fails to run on newest Debian Wheezy

    - by hijarian
    I'm trying to use the Solr PHP extension on the recently upgraded Debian Wheezy. It installs both from PECL and from source flawlessly, but instead of giving me the expected functionality it gives me this on every PHP run:

        PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/solr.so' - /usr/lib/php5/20100525/solr.so: undefined symbol: curl_easy_getinfo in Unknown on line 0

    Also, scripts which use the extension throw an error:

        PHP Error[2]: include(SolrClient.php): failed to open stream: No such file or directory in file <...path to my autoloader...>

    My main point is that it was set up before and worked like a charm. In the upgrade, among the relevant packages only the versions of PHP and libcurl were changed; the Solr instance itself was left as is. I have all possible libcurl libraries:

        $ locate libcurl
        ...
        /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.3
        /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4
        /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.2.0
        /usr/lib/x86_64-linux-gnu/libcurl.a
        /usr/lib/x86_64-linux-gnu/libcurl.la
        /usr/lib/x86_64-linux-gnu/libcurl.so
        /usr/lib/x86_64-linux-gnu/libcurl.so.3
        /usr/lib/x86_64-linux-gnu/libcurl.so.4
        /usr/lib/x86_64-linux-gnu/libcurl.so.4.2.0
        ...
        /usr/lib32/libcurl.so.3
        /usr/lib32/libcurl.so.4
        /usr/lib32/libcurl.so.4.2.0
        ...

    I have installed the php5-curl package version 5.4.4-2 with aptitude. I installed the Solr extension both with sudo pecl install solr (with various combinations of the -f and -n flags, and tried solr-beta too) and with:

        wget ...
        cd ...
        phpize
        ./configure
        make
        make install

    I'm installing the 1.0.2 version of the extension because it worked before the upgrade from Squeeze to Wheezy. As I said earlier, the extension installs without any errors, and I have already added the extension=solr.so incantation to /etc/php5/mods-available/solr.ini. What magic should I do to make the solr extension work? Is it true that the only solution I have is to downgrade libcurl to the version from before the upgrade?
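
    An "undefined symbol: curl_easy_getinfo" at startup usually means the extension ended up not linked against libcurl. A rough sketch of how to confirm and rebuild; the package name is the stock Debian one and the rebuild step is an assumption, not a guaranteed fix:

        # Was solr.so linked against libcurl at all?
        ldd /usr/lib/php5/20100525/solr.so | grep -i curl

        # Which extensions does the CLI actually load, and which ini files are read?
        php -m | grep -Ei 'curl|solr'
        ls -l /etc/php5/conf.d/

        # If ldd shows no libcurl, rebuild the extension with the curl development headers installed
        sudo apt-get install libcurl4-openssl-dev
        sudo pecl uninstall solr
        sudo pecl install solr-1.0.2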

    Read the article

  • Windows NT from vmware to kvm

    - by Luca Rossi
    I'm trying to convert a couple of old Windows NT virtual servers from VMware to KVM. I tried almost all the guidelines and how-tos I found around the web, but with no luck. I have the VMware virtual disk Dlc1.vmdk, a partitioned image. I converted the vmdk into a qcow2 image with the qemu utility and tried to use it with KVM:

        kvm -hda test.qemu -vnc :1 -m 750

    but I receive "error loading operating system". I also tried with raw partitions I can mount through losetup and kpartx, but nothing changed. I also tried to create a brand new image file with:

        qemu-img create -f qcow2 test.qcow2 2G

    I partitioned the new image file and copied the original partition 1 to the new partition 1 with dd:

        dd if=/dev/mapper/loop1p1 of=/dev/mapper/loop0p1 bs=128M

    No luck again. I also tried with a single unpartitioned file:

        qemu-img create -f qcow2 test.qcow2 2G

    and copied partition 1 to the new image file:

        dd if=/dev/mapper/loop0p1 of=test.img bs=128M

    but when booting I receive a black screen and the virtual machine hangs. The bootloader is loaded successfully, because I also tried with a GRUB live ISO and I receive the same screens and errors. Note that GRUB sees the Windows setup and gives me the boot choice. I suspect the problem is that the VMware machine is probably a SCSI guest, and in CentOS 6 (my system) SCSI emulation is no longer supported. But in that case, what do I need to change in Windows? I'm not so skilled with MS systems. Thank you for the help. Luca Rossi
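
    For comparison, a minimal conversion-and-boot sketch (the output file name is arbitrary): qemu-img can read the vmdk directly, and an NT-era guest is most likely to boot from an emulated IDE disk, since it has no virtio drivers and a SCSI-only VMware guest may lack the boot driver for anything else.

        # Convert the VMware disk straight to qcow2
        qemu-img convert -f vmdk -O qcow2 Dlc1.vmdk nt4.qcow2
        qemu-img info nt4.qcow2          # sanity-check size and format

        # Boot it as an IDE disk; watch the console over VNC as before
        kvm -m 750 -drive file=nt4.qcow2,if=ide -vnc :1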

    Read the article

  • Using the RST3 plugin in the Leo Outliner

    - by T-Boy
    I'm currently trying out the Leo Outliner, and I've heard quite a bit about the rst3 plugin that it has. I'm not planning to use Leo to program just yet; at this point I'm wondering if it might be useful for generating HTML and PDF documents, as I'm currently quite enamored with RST and how it works. I'm using my Ubuntu Netbook Remix netbook (running 9.10, I believe). I think I've got it down pat, more or less: I've installed docutils using the Synaptic Package Manager, and I think I've got SilverCity installed as per the requirements (I downloaded the archive and then ran "sudo python setup.py install" with no errors). The thing is, I'm not exactly sure how to invoke the rst3 plugin itself. It doesn't appear in the Plugins menu for Leo right now, and the documentation I've managed to find doesn't seem to clearly explain how to use the plugin. Has anyone had any experience using the rst3 plugin in Leo? It's a little confusing right now, and searches on Google don't seem to be helping much any more. I'm using the latest 4.7.1 final version of Leo from the Synaptic Package Manager (I was informed that this would offer the best integration with UNR, so I figured, what the heck). Have I missed any steps here, and are there any useful tutorials on how to use the rst3 plugin?

    Read the article

  • Recent ImageMagick on CentOS 6.3

    - by organicveggie
    I'm having a terrible time trying to get a recent version of ImageMagick installed on a CentOS 6.3 x86_64 server. First, I downloaded the RPM from the ImageMagick site and tried to install it. That failed due to missing dependencies:

        error: Failed dependencies:
            libHalf.so.4()(64bit) is needed by ImageMagick-6.8.0-4.x86_64
            libIex.so.4()(64bit) is needed by ImageMagick-6.8.0-4.x86_64
            libIlmImf.so.4()(64bit) is needed by ImageMagick-6.8.0-4.x86_64
            libImath.so.4()(64bit) is needed by ImageMagick-6.8.0-4.x86_64
            libltdl.so.3()(64bit) is needed by ImageMagick-6.8.0-4.x86_64

    I have libtool-ltdl installed, but that includes libltdl.so.7, not libltdl.so.4. I have a similar problem with libHalf, libIex, libIlmImf and libImath. Typically, you can install OpenEXR to get those dependencies. Unfortunately, CentOS 6.3 includes OpenEXR 1.6.1, which includes ilmbase-devel 1.0.1, and that release of ilmbase-devel includes newer versions of those dependencies:

        libHalf.so.6
        libIex.so.6
        libIlmImf.so.6
        libImath.so.6

    I next tried following the instructions for installing ImageMagick from source. No luck there either; I get a build error:

        RPM build errors:
            File not found by glob: /home/sean/rpmbuild/BUILDROOT/ImageMagick-6.8.0-4.x86_64/usr/lib64/ImageMagick-6.8.0/modules-Q16/coders/djvu.*

    I even re-ran configure to explicitly exclude djvu and I still get the same error. At this point I'm pulling my hair out. What's the easiest way to get a relatively recent version of ImageMagick (> 6.7) installed on CentOS 6.3? Does someone offer RPMs with dependencies somewhere?
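
    One workable fallback is building from the source tarball and simply disabling the delegates whose EL6 libraries don't match. This is only a sketch, not a packaged solution: the download URL is from memory, so verify it, and add whichever --without-* flags your build still trips over.

        # Grab and unpack the current source release
        wget http://www.imagemagick.org/download/ImageMagick.tar.gz
        tar xzf ImageMagick.tar.gz
        cd ImageMagick-*

        # Skip the DjVu and OpenEXR delegates that caused the RPM failures
        ./configure --without-djvu --without-openexr
        make
        sudo make install
        sudo ldconfig /usr/local/lib     # make the new shared libraries visible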

    Read the article

  • Apache error: could not make child process 25105 exit, attempting to continue anyway

    - by Temnovit
    Hello! I have a web server based on Ubuntu Server 9.10 with this software: Apache 2, PHP 5.3, MySQL 5, Python 2.5. A few of my websites are PHP based, a few use Python/Django through mod_wsgi. For a month or so, every day my Apache server stops responding until I manually restart it. The error logs show:

        [Fri Mar 05 17:06:47 2010] [error] could not make child process 25059 exit, attempting to continue anyway
        [Fri Mar 05 17:06:47 2010] [error] could not make child process 25061 exit, attempting to continue anyway
        [Fri Mar 05 17:06:47 2010] [error] could not make child process 24930 exit, attempting to continue anyway
        [Fri Mar 05 17:06:47 2010] [error] could not make child process 25084 exit, attempting to continue anyway
        [Fri Mar 05 17:06:47 2010] [error] could not make child process 25105 exit, attempting to continue anyway

    and so on. I tried to google this problem, but it seems I can't find a solution there. How can I determine the cause of this error and how do I fix it? Thank you for your help.

    UPDATE: Updating mod_wsgi to version 3.1 didn't solve the problem. Updating PHP to 5.3 also didn't solve it. Here is a list of all installed modules:

        core mod_log_config mod_logio prefork http_core mod_so mod_alias mod_auth_basic mod_authn_file
        mod_authz_default mod_authz_groupfile mod_authz_host mod_authz_user mod_autoindex mod_cgi
        mod_deflate mod_dir mod_env mod_mime mod_negotiation mod_php5 mod_rewrite mod_setenvif
        mod_status mod_wsgi

    Here's how my virtual host with wsgi looks:

        <VirtualHost *:80>
            ServerName example.net
            DocumentRoot /var/www/example.net

            # wsgi script that serves all the thing
            WSGIScriptAlias / /var/www/example.net/index.wsgi
            WSGIDaemonProcess example user=wsgideamonuser group=root processes=1 threads=10
            WSGIProcessGroup example

            Alias /static /var/www/example.net/static
            # serving admin files
            Alias /media/ /usr/local/lib/python2.6/dist-packages/django/contrib/admin/media/

            <Location "/static">
                SetHandler None
            </Location>
            <Location "/media">
                SetHandler None
            </Location>

            ErrorLog /var/www/example.net/error.log
        </VirtualHost>

    The error log now contains two types of errors, one followed by the other:

        [error] child process 9486 still did not exit, sending a SIGKILL
        [error] could not make child process 9106 exit, attempting to continue anyway
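
    These messages mean the parent could not get its children to exit on a restart or shutdown, which usually points at requests stuck inside a child. A rough diagnostic sketch; the PID is just the one taken from the log above:

        # Show Apache children with how long they have been alive
        ps -o pid,ppid,etime,stat,cmd -C apache2

        # Attach to one child that refuses to exit and see what it is blocked on
        sudo strace -tt -f -p 25105

        # With LogLevel info, mod_wsgi logs "mod_wsgi (pid=...)" lines per request,
        # which shows whether the Django requests really run in the daemon process
        grep 'mod_wsgi (pid=' /var/www/example.net/error.log | tail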

    Read the article

  • Open Office crashes, recovers, crashes again

    - by Daniel R Hicks
    After completely reinstalling my laptop due to apparent registry corruption, I've encountered a problem with OpenOffice: I open a simple Calc spreadsheet, it comes up normally, but then after anywhere from 5 seconds to several minutes (without even touching the Calc window) OO crashes, then comes up through recovery. If I let it "recover" it will do so and bring the spreadsheet up again, only to repeat the crash scenario. If I kept clicking "OK" it would apparently do this all day. I reinstalled OO once and the problem went away for a while, but it came back. I then attempted to "reset" my profile (i.e., rename the OO user directory in App Data), but OO crashed during the first startup after that, then resumed the original behavior. If I open the same file using Excel it complains of errors in the file and "recovers" them, but the "error report" it generates contains no details. If I save the "recovered" file then OO Calc will open it, but the problem returns after saving again. Any ideas? (The system is Vista SP2, running OO 3.4.1.)

    How to reproduce:

        1. Start OpenOffice Calc.
        2. Save the workspace as "CrashTest.ods".
        3. From Task Manager, kill OpenOffice (soffice.exe/bin -- one of each).
        4. Double click on the saved "CrashTest.ods" in Explorer.
        5. OO puts up a message that recovery will occur -- allow it.
        6. When the Calc window comes up, don't touch it -- just wait about 10 seconds.
        7. The Calc window closes and OO puts up a message that recovery will occur; from now on the sequence will repeat.

    I suspect this behavior is limited to a few (recent) versions of OO, and very possibly only Calc. Reported as OpenOffice bug 1211094. Sigh! As much as it irritates me, I'm having to switch over to Excel for several things I used to do with Calc. Excel has a miserable UI, but at least it stays up for longer than 10 seconds.

    Read the article

  • Error when trying to deploy Windows XP SP3 with WDS

    - by Nic Young
    I have created a WDS server running Windows Server 2008 R2. I have built my custom images of Windows 7 using WAIK and MDT 2010, which are installed on the server; I used this guide to help me through the process. The Windows 7 images that I have created capture and deploy properly. I am attempting to follow the same steps from the guide I linked to in order to capture and deploy a Windows XP SP3 image. I am able to sysprep and capture the reference machine with no errors, and I am then able to import the custom .wim that I just captured into MDT 2010 with no issues either. However, when I try to deploy this image to a test virtual machine I get the following error:

        Deployment Error: [screenshot]

    I have made sure that the .iso I am importing the source files from (used originally to create the sysprep and capture sequence) is indeed a Windows XP SP3 ISO. When I first select a PE boot environment before I deploy, I select the x86 PE boot image that I created originally for my Windows 7 deployments. Could this be the issue? If so, how do I make a boot image specific to Windows XP SP3 deployments? I have Googled around for this error and some places point to the deployment image not being able to find setup.exe and other important system files for installing the operating system. If so, how do I add these to the image? Any ideas?

    Read the article

  • kvm works only when kvm-intel is unloaded

    - by Sathya
    I am new to KVM, and I have this strange issue. But before explaining the issue, here is my setup. I am trying to install a VM on my host, an Acer 5720 laptop with a T7500 Intel processor; the CPU flags indicate that virtualization is supported. I run Ubuntu 10.04 (Lucid) on it, which comes with KVM.

    Now, coming to the issue: I don't get any errors while executing "sudo modprobe kvm-intel", so I presume my processor does indeed support hardware virtualization. I use virt-manager and create a VM on which I install Ubuntu from an *.iso file. When I start the VM it says it is running, with no signs of any trouble, and I can see the domain in "virsh list". But when I try to connect to the VM through VNC, all I get to see is a blank screen (no cursor), and there is no response to any key press. I changed the video mode etc. and tried all different combinations, but none work.

    Strangely, if I shut down the VM and virt-manager and then unload the module by doing "sudo modprobe -r kvm-intel", everything works fine. That is, I can see the screen via VNC, and I am able to install the OS and so on. So what does this mean? Is hardware virtualization not supported? How come there is no error anywhere? dmesg | grep kvm doesn't report anything. Can someone throw light on what exactly is happening?
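
    A few quick checks can confirm whether VT-x is really available and in use. This is a minimal sketch; on 10.04 the kvm-ok helper may ship with qemu-kvm rather than the cpu-checker package, so it may or may not be present:

        # Does the CPU advertise VT-x/AMD-V at all?
        egrep -c '(vmx|svm)' /proc/cpuinfo

        # Is the module loaded, and did it complain when it loaded?
        lsmod | grep kvm
        dmesg | grep -i kvm        # "disabled by BIOS" would show up here

        # Ubuntu's one-shot check, if available
        sudo kvm-ok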

    Read the article

  • SQL Server Management Studio not scripting all objects

    - by Ian Boyd
    I've been attempting to script a database using SQL Server 2005 Management Studio, but I cannot get it to script some objects. It scripts others, but skips some. I can provide detailed screen shots of: the options being selected (including all tables), the folder where the script files will go, the folder being empty before scripting, the scripting process saying Success when scripting a table, the destination folder no longer empty (with a hundred or so script files), and the scripts of some tables not being in the folder. Earlier, SSMS would not script some views either. Is it a known thing that the Generate Scripts task does not generate scripts?

    Update: known issue on Microsoft Connect, but Microsoft couldn't repro the steps, so they closed the ticket. Fails on SQL Server 2005, also fails on SQL Server 2008.

    Update two, some basic questions:

        1. What version of SQL Server?
           Microsoft SQL Server 2000 - 8.00.194 (Intel X86)
           Microsoft SQL Server 2005 - 9.00.3042.00 (Intel X86)
           Microsoft SQL Server 2008 - 10.0.2531.0 (Intel X86)
           Microsoft SQL Server 2005 Management Studio: 9.00.4035.00
           Microsoft SQL Server 2008 Management Studio: 10.0.1600.22
        2. What O/S are you running on?
           Windows Server 2000, Windows Server 2003, Windows Server 2008
        3. How are you logging in to SQL Server?
           sa/password, trusted authentication
        4. Have you verified your account has full access to all objects?
           Yes, I have access to all objects.
        5. Can you use the objects that fail to script? (e.g.: select top(10) * from nonScriptingTable)
           Yes, all objects work fine. SQL Server Enterprise Manager can script the objects fine.

    Update three: they fail no matter what version of SQL Server you script against. It wasn't a problem in Enterprise Manager:

        Client Tools   SQL Server 2000   SQL Server 2005   SQL Server 2008
        ============   ===============   ===============   ===============
        2000           Yes               n/a               n/a
        2005           No                No                No
        2008           No                No                No

    Update four: no errors found in the database using:

        DBCC CHECKDB
        go
        DBCC CHECKCONSTRAINTS
        go
        DBCC CHECKFILEGROUP
        go
        DBCC CHECKIDENT
        go
        DBCC CHECKCATALOG
        go
        EXECUTE sp_msforeachtable 'DBCC CHECKTABLE (''?'')'

    Honk if you hate SSMS.

    Read the article

  • Exchange 2007 restore - Backup Exec Unable to Attach to a resource

    - by Andy
    I have been struggling with this one for months! Grateful for any advice. The setup is a Windows 2003 server network with 4 servers on the domain, two of them Exchange 2007 servers (only one with mailboxes still on it). Backup Exec (12.5) runs on a non-Exchange server with agents on the others. Backup Exec runs a full backup of Exchange across the network well, at pretty reasonable speeds. However, when you try any kind of restore (individual emails, mailboxes or whole system restore, whether to the same location or to an alternate server, RSG etc.) the following message is received within about 10-15 seconds of starting the job:

        Job ended: 24 December 2010 at 13:28:32
        Completed status: Failed
        Final error: 0xe000848c - Unable to attach to a resource. Make sure that all selected resources exist and are online, and then try again. If the server or resource no longer exists, remove it from the selection list. Edit the selection list properties, click the View Selection Details tab, and then remove the resource.
        Final error category: Resource Errors
        For additional information regarding this error refer to link V-79-57344-33932

    Things I have already tried:

        - Changed the account to the main administrator account (with all permissions)
        - Checked the versions of ese.dll on both servers - both the same
        - Checked that all VSS writers on both servers are stable / normal
        - Restoring to different locations

    Any advice anyone could give would be much appreciated. Many thanks, Andy

    Read the article

  • Installing Munin on Centos 6

    - by justinhj
    I've hit problems installing Munin on CentOS 6. This seems to be a conflict between parts of Perl; I think the version of Perl is newer on CentOS 6 (v5.10.1). When installing Munin via yum I get errors relating to Perl dependencies, as below. I'm not a big enough whiz at yum or rpm to figure out the issue, and the Munin documentation does not yet talk about installing on CentOS 6.0.

        Error: Package: munin-node-1.4.2-0.rpl1.el5.noarch (/munin-node-1.4.2-0.rpl1.el5.noarch) Requires: perl(Net::SNMP)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch) Requires: bitstream-vera-fonts
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch) Requires: perl(HTML::Template)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch) Requires: perl-Net-SNMP
        Error: Package: munin-common-1.4.2-0.rpl1.el5.noarch (/munin-common-1.4.2-0.rpl1.el5.noarch) Requires: perl(:MODULE_COMPAT_5.8.8)
        Error: Package: munin-node-1.4.2-0.rpl1.el5.noarch (/munin-node-1.4.2-0.rpl1.el5.noarch) Requires: perl(DBI)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch) Requires: perl(Log::Log4perl)
        Error: Package: munin-node-1.4.2-0.rpl1.el5.noarch (/munin-node-1.4.2-0.rpl1.el5.noarch) Requires: perl(LWP::Simple)
        Error: Package: munin-node-1.4.2-0.rpl1.el5.noarch (/munin-node-1.4.2-0.rpl1.el5.noarch) Requires: perl(:MODULE_COMPAT_5.8.8)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch) Requires: perl(RRDs)
        Error: Package: munin-node-1.4.2-0.rpl1.el5.noarch (/munin-node-1.4.2-0.rpl1.el5.noarch) Requires: perl-Net-Server
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch) Requires: perl(Date::Manip)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch) Requires: perl(:MODULE_COMPAT_5.8.8)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch) Requires: perl-Net-Server
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch) Requires: perl(CGI::Fast)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch) Requires: perl(Time::HiRes)
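
    Those packages are built for EL5 (note the .el5 suffix and the perl(:MODULE_COMPAT_5.8.8) requirement), so they will never resolve against CentOS 6's Perl 5.10. A sketch of the usual route, pulling Munin from EPEL instead; the exact epel-release URL and version here are an assumption, so check the current one on the EPEL wiki:

        # Enable the EPEL repository for EL6, then install the EL6-built packages
        sudo rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
        sudo yum clean all
        sudo yum install munin munin-node

        # Confirm yum now sees el6 builds rather than the el5 ones
        yum info munin | grep -E 'Version|Release'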

    Read the article

  • Mysql: Disk is full writing

    - by elma
    Hi there, I'm having some problems with my MySQL server lately, so I've decided to check the error logs:

        [root@LSN-D1179 log]# tail -10 mysqld.log
        100325 19:30:03 [ERROR] /usr/libexec/mysqld: Table './lfe/actions' is marked as crashed and should be repaired
        100325 19:30:03 [ERROR] /usr/libexec/mysqld: Table './lfe/actions' is marked as crashed and should be repaired
        100325 19:30:18 [ERROR] /usr/libexec/mysqld: Disk is full writing './omuz/ibf_task_logs.MYD' (Errcode: 122). Waiting for someone to free space... Retry in 60 secs
        100325 19:34:34 [ERROR] /usr/libexec/mysqld: Disk is full writing './omuz/ibf_profile_portal_views.MYD' (Errcode: 122). Waiting for someone to free space... Retry in 60 secs
        100325 19:39:46 [ERROR] /usr/libexec/mysqld: Disk is full writing './omuz/ibf_posts.TMD' (Errcode: 122). Waiting for someone to free space... Retry in 60 secs
        100325 19:40:18 [ERROR] /usr/libexec/mysqld: Disk is full writing './omuz/ibf_task_logs.MYD' (Errcode: 122). Waiting for someone to free space... Retry in 60 secs
        100325 19:44:34 [ERROR] /usr/libexec/mysqld: Disk is full writing './omuz/ibf_profile_portal_views.MYD' (Errcode: 122). Waiting for someone to free space... Retry in 60 secs
        100325 19:49:46 [ERROR] /usr/libexec/mysqld: Disk is full writing './omuz/ibf_posts.TMD' (Errcode: 122). Waiting for someone to free space... Retry in 60 secs
        100325 19:50:18 [ERROR] /usr/libexec/mysqld: Disk is full writing './omuz/ibf_task_logs.MYD' (Errcode: 122). Waiting for someone to free space... Retry in 60 secs
        100325 19:54:34 [ERROR] /usr/libexec/mysqld: Disk is full writing './omuz/ibf_profile_portal_views.MYD' (Errcode: 122). Waiting for someone to free space... Retry in 60 secs

    And here is my df -h output:

        [root@LSN-D1179 log]# df -h
        Filesystem                       Size  Used Avail Use% Mounted on
        /dev/mapper/VolGroup00-LogVol00  143G  6.2G  129G   5% /
        /dev/sda1                         99M   12M   83M  13% /boot
        tmpfs                            490M     0  490M   0% /dev/shm

    As you can see, I have plenty of free space, so I can't figure out these "Disk is full" errors in mysqld.log. Does anyone know what I should do to fix this? Ugur
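
    Errcode 122 is not "out of space"; on Linux, errno 122 is "Disk quota exceeded", so a quota on the mysql user (or a full inode table) is the more likely culprit than free bytes. A quick diagnostic sketch:

        # Ask MySQL's own perror utility what error 122 means on this system
        perror 122

        # Check disk quotas for the mysql user and overall
        quota -u mysql
        sudo repquota -a

        # Rule out inode exhaustion, which also reports as "disk full"
        df -i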

    Read the article

  • best practice to removing DC from Site that no longer connects via vpn in another city

    - by dasko
    Hi, I am looking for a recap of what I have done already to see if I missed anything. I had two cities connected by WAN using a persistent IPsec tunnel between gateways, with one domain controller (DC) in each city acting as a global catalog server (GC). They were set up to replicate, and I had them configured under Sites and Services with their own subnets etc. About 6 months ago the one city was removed and I was not able to gracefully remove, through dcpromo, the server that was there. It is no longer used and cannot be brought back; the company went from two sites down to a single site.

    The problem is I had a whole bunch of KCC errors and replication bugs in the event viewer. I wanted to clean up my Active Directory and decided to use the ntdsutil metadata cleanup commands. I removed the server from the specified site based on a procedure from the Petri website. I then removed the instances of the old DC and site from Sites and Services. Then I cleaned up DNS by removing the Host A records and the NS server name from both the local DNS forward lookup zone and _msdcs, and I also removed the reverse lookup zone for the subnet that no longer exists. Is there anything I missed? Thanks in advance for any help. gd

    Read the article

  • Move database from SQL Server 2012 to 2008

    - by Rich
    I have a database on a SQL Server 2012 instance which I would like to copy to a 2008 server. The 2008 server cannot restore backups created by a 2012 server (I have tried), and I cannot find any options in 2012 to create a 2008-compatible backup. Am I missing something? Is there an easy way to export the schema and data to a version-agnostic format which I can then import into 2008? The database does not use any 2012-specific features; it contains tables, data and stored procedures. Here is what I have tried so far:

    I tried "Tasks" - "Generate Scripts" on the 2012 server, and I was able to generate the schema (including stored procedures) as a SQL script. This didn't include any of the data, though. After creating that schema on my 2008 machine, I was able to open the "Export Data" wizard on the 2012 machine, and after configuring the 2012 as source machine and the 2008 as target machine, I was presented with a list of tables which I could copy. I selected all my tables (300+) and clicked through the wizard. Unfortunately it spends ages generating its scripts, then fails with errors like "Failure inserting into the read-only column 'FOO_ID'".

    I also tried the "Copy Database Wizard", which claims to be able to copy "from 2000 or later to 2005 or later". It has two modes:

        1. "Detach and attach", which failed with error:
           Message: Index was outside the bounds of the array.
           StackTrace: at Microsoft.SqlServer.Management.Smo.PropertyBag.SetValue(Int32 index, Object value)
           ... at Microsoft.SqlServer.Management.Smo.DataFile.get_FileName()
        2. SQL Management Object Method, which failed with error "Cannot read property IsFileStream. This property is not available on SQL Server 7.0."

    Read the article

  • Enable RemoteApp Full Desktop programmatically

    - by Scott Chamberlain
    I am writing a PowerShell script to set up some Hyper-V VMs, however there is one step I am having trouble automating. How do I check the box to allow Remote Desktop access from the RemoteApp settings programmatically? I can set up all of my customizations by doing:

        # build the security descriptor so the desktop only shows up for people who should be allowed to see it
        $remoteDesktopUsersSid = New-Object System.Security.Principal.SecurityIdentifier($remoteDesktopUsersGroup.objectSid[0],0)
        $aceTemplet = 'O:WDG:WDD:ARP(A;CIOI;CCLCSWLORCGR;;;{0})'
        $securityDescriptor = $aceTemplet -f $remoteDesktopUsersSid

        # get a copy of the WMI instance
        $tsRemoteDesktop = Get-WmiObject -Namespace root\CIMV2\TerminalServices -Class Win32_TSRemoteDesktop

        # set settings
        $tsRemoteDesktop.Name = $ServerDisplayName
        $tsRemoteDesktop.SecurityDescriptor = $securityDescriptor
        $tsRemoteDesktop.IconPath = $IconPath
        $tsRemoteDesktop.IconIndex = $IconIndex

        # push settings back to server
        Set-WmiInstance -InputObject $tsRemoteDesktop -PutType UpdateOnly

    However, the instance of that WMI object does not exist until after you have the above box checked. I attempted to use Set-WmiInstance to instantiate and set the settings at the same time, but I keep getting errors like:

        Set-WmiInstance : At line:53 char:16
        + Set-WmiInstance <<<< -Namespace root\CIMV2\TerminalServices -Class Win32_TSRemoteDesktop -Arguments @{Alias='TSRemoteDesktop';Name=$ServerDisplayName;ShowInPortal=$true;SecurityDescriptor=$securityDescriptor}
            + CategoryInfo : NotSpecified: (:) [Set-WmiInstance], ArgumentException
            + FullyQualifiedErrorId : System.ArgumentException,Microsoft.PowerShell.Commands.SetWmiInstance

    (Also, after running the command and getting the error, it will delete the instance of Win32_TSRemoteDesktop if it already existed and un-check the box in the properties setting.) Is there any way to programmatically check that box, or can anyone help with why Set-WmiInstance throws that error?

    Read the article

  • Scheduled tasks fail to start unless I'm logged in to the server

    - by Chuck
    Tasks need to open a CMD window and pass net use commands, then do a DIR command, piping the output to a file on the server. Logged in as either me (sysadmin) or one of the service accounts, the task will only run if I'm physically logged into the server. "Run as batch job" is set in the security properties for both users (me and the service account), security is granted to all directories, etc. It almost acts like, because the scheduled task is not physically connected to a display, it can't create a CMD window and pass the WinID so the command can be sent. I'm guessing.

    Does anyone know of a document that explains how the server handles initiation of a window when done via a scheduled task with no attached user associated with it? If I log onto the box and run the scheduled tasks they run fine, but otherwise they produce no errors or event log entries and just show that they ran successfully and set the next run time. I have tried with the "run only if logged on" checkbox both on and off and it makes no difference. Other tasks work fine, except that they are acting on local drives with no display writing or updating taking place, so I'm guessing the system either can't instantiate a window if no display is connected to a logged-on user, or it can't establish a point if it is trying to create a virtual screen. You'd think it is just creating a memory map and then mapping it to a device to display, but that doesn't seem to be the case, and I can find no documentation on how the system handles a scheduled task and how to invoke a fake or virtual screen that it could write to so it appears that a user was connected. This is driving me nuts and I've tried everything I can think of, as well as our network boys' ideas, and nothing seems to work. Thanks.

    Read the article

  • Passenger error: No such file or directory - config/environment.rb

    - by JJD
    I installed Redmine on Mac OS X Server 10.6.8 according to this installation description. So far everything works fine: when I start WEBrick, the server serves the Redmine pages. The gems and Redmine are installed under the user "redmine". After that I aimed at configuring Apache 2 with Passenger as described here. As suggested by the description, I also installed the Passenger Pane, which stores its virtual host configuration files in /private/etc/apache2/passenger_pane_vhosts. This is what I came up with after a lot of trial and error. At least I can now reach a Passenger error page.

        // redmine.vhost.conf
        <VirtualHost *:80>
            ServerName host
            ServerAlias localhost
            DocumentRoot "/Users/redmine/Sites/redmine"
            # RackEnv production
            # RackBaseURI /
            RailsEnv production
            RailsBaseURI /
            # PassengerUser www-data
            # PassengerGroup www-data
            <Directory "/Users/redmine/Sites/redmine">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    However, the Passenger module still runs into the following error:

        Error message: No such file or directory - config/environment.rb

    The /var/log/apache2/error_log of the web server states the following:

        [warn] NameVirtualHost *:80 has no VirtualHosts
        [notice] Apache/2.2.21 (Unix) Phusion_Passenger/3.0.12 configured -- resuming normal operations
        [ pid=21824 thr=2151905620 file=utils.rb:176 time=2012-06-01 18:22:07.126 ]: *** Exception Errno::ENOENT in PhusionPassenger::ClassicRails::ApplicationSpawner (No such file or directory - config/environment.rb) (process 21824, thread #<Thread:0x0000010086f2a8>):

    I experimented with the user-switching functionality of Passenger as described in the documentation (as you can tell from my configuration file), though I was not successful.
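
    A small sketch of the usual first checks, assuming a standard Passenger/Rails layout: the spawner runs relative to the application root, and for Rails/Rack apps Passenger expects DocumentRoot to point at the app's public/ directory rather than at the application root.

        # Does the file Passenger complains about exist, and can the Apache user read it?
        ls -l /Users/redmine/Sites/redmine/config/environment.rb
        sudo -u _www ls /Users/redmine/Sites/redmine/config    # _www is the Apache user on OS X

        # In the vhost, point DocumentRoot at public/ and keep RailsBaseURI /
        #   DocumentRoot "/Users/redmine/Sites/redmine/public"

        # Then restart Apache and watch the log
        sudo apachectl restart
        tail -f /var/log/apache2/error_log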

    Read the article

  • How does it hurt to use Linux (Ubuntu) as a guest OS for all my tasks?

    - by sauparna
    I have a machine running Windows, where the disk has two partitions, C (50 GB) and D (250 GB). I do research in information retrieval and need to work with a large corpus (more than 50 GB), and in Linux. So if I want to install Linux on the existing system, keeping the Windows installation intact, will it be fine to run it in a virtual machine (say QEMU, VMware, etc.)? An alternative is using Wubi; in that case the Linux installation has to be on drive C. Then, if I keep a small Linux installation (say 5 GB) on C and my corpus on D (mounted in Linux), how will it affect the performance of my programs which would be accessing the mounted Windows drive D? Is it feasible to use Linux this way? Which of the above is better, if at all they are a way out?

    Note: since my post in July 2010, I have been using Linux this way and have tried several approaches to maintaining a disk image that I can mount in Linux. I had a 100 GB qcow2 disk and a 100 GB raw disk, both formatted with an EXT3 file system. I was mounting and connecting to the qcow2 disk using qemu-nbd. The problem was that every now and then the connection to the disk would get lost and the running programs would throw disk I/O errors. The raw disk would mount and work fine as a loop-mounted device, but when writing data to it, the mount.ntfs program would hog the CPU and the process would take an enormous amount of time. I was in fact running make on a piece of software located on this raw disk, and after a point make was waiting while mount.ntfs showed 100% CPU usage.
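
    If the slow path really is ntfs-3g (the mount.ntfs process), mount options make a large difference. This is only a sketch; the device name and mount point below are placeholders:

        # big_writes lets ntfs-3g accept larger write requests, which usually cuts the CPU spin;
        # noatime avoids an extra metadata write on every access
        sudo mount -t ntfs-3g -o big_writes,noatime /dev/sdb1 /mnt/ntfs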

    Read the article

  • 426 Connection closed; transfer aborted.

    - by Jiaoziren
    Hi, I have an IIS FTP server set up on Windows 2003 SP2 (S1). Every day in the early morning, a script on another server (S2) runs and initiates an FTP transfer, pulling log files from S1 to S2. The FTP client we're using is the built-in FTP.exe of Windows 2000 on S2. Recently we replaced S1 with a new server but kept the IP address; there are multiple IP addresses on the new S1. Ever since the new S1 was put in place, '426 Connection closed; transfer aborted.' errors have been occurring randomly. The log indicates that the transfer starts OK but the file cannot be transferred completely, as per the log below:

        mget access*.log
        200 Type set to A.
        200 PORT command successful.
        150 Opening ASCII mode data connection for access02232010.log(205777167 bytes).
        426 Connection closed; transfer aborted.
        ftp: 20454832 bytes received in 283.95Seconds 72.04Kbytes/sec.

    The firewall monitor suggested that the connection was set up in passive mode; however, I've been told that MS FTP.exe doesn't support passive mode, though I can see the response of 'entering passive mode' from the server when typing in 'quote pasv'. My network admin has told me to try the transfer in active mode, but I don't know how to enable active mode on the client side. It's getting really frustrating. I hope someone here with the right knowledge/experience can shed some light. Cheers.

    Read the article
