Search Results

  • MySQL InnoDB Corruption after power outage, possible to recover?

    - by Tim Hackett
    Hey guys, I recently started trying to get Redmine up and running again after a power outage that seems to have corrupted our InnoDB database in MySQL. Redmine holds an extensive set of documentation that I would like to recover even if Redmine itself can't run. The service fails on startup. I have tried inserting innodb_force_recovery = 4 per the documentation at the URL in the error log (I also tried values 1 through 6, having backed up all directories after the corruption). I have verified through "mysqld-nt --print-defaults" that it is starting with the recovery option in the params. The machine is running Windows Server 2003 SP2 on a Xeon E5335 with 2GB RAM; MySQL is not mirrored to another machine, nor is the machine a mirror. I do not have any backups because the previous person did not set them up. Here is the error log:

      InnoDB: The log sequence number in ibdata files does not match
      InnoDB: the log sequence number in the ib_logfiles!
      100308 14:50:01  InnoDB: Database was not shut down normally!
      InnoDB: Starting crash recovery.
      InnoDB: Reading tablespace information from the .ibd files...
      InnoDB: Restoring possible half-written data pages from the doublewrite
      InnoDB: buffer...
      100308 14:50:02  InnoDB: Error: page 7 log sequence number 0 935521175
      InnoDB: is in the future! Current system log sequence number 0 933419020.
      InnoDB: Your database may be corrupt or you may have copied the InnoDB
      InnoDB: tablespace but not the InnoDB log files. See
      InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
      InnoDB: for more information.
      100308 14:50:02  InnoDB: Error: page 2 log sequence number 0 935517607
      InnoDB: is in the future! Current system log sequence number 0 933419020.
      InnoDB: Your database may be corrupt or you may have copied the InnoDB
      InnoDB: tablespace but not the InnoDB log files. See
      InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
      InnoDB: for more information.
      100308 14:50:02  InnoDB: Error: page 11 log sequence number 0 935517607
      InnoDB: is in the future! Current system log sequence number 0 933419020.
      InnoDB: Your database may be corrupt or you may have copied the InnoDB
      InnoDB: tablespace but not the InnoDB log files. See
      InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
      InnoDB: for more information.
      100308 14:50:02  InnoDB: Error: page 5 log sequence number 0 972973045
      InnoDB: is in the future! Current system log sequence number 0 933419020.
      InnoDB: Your database may be corrupt or you may have copied the InnoDB
      InnoDB: tablespace but not the InnoDB log files. See
      InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
      InnoDB: for more information.
      100308 14:50:02  InnoDB: Error: page 6 log sequence number 0 972984051
      InnoDB: is in the future! Current system log sequence number 0 933419020.
      InnoDB: Your database may be corrupt or you may have copied the InnoDB
      InnoDB: tablespace but not the InnoDB log files. See
      InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
      InnoDB: for more information.
      100308 14:50:02  InnoDB: Error: page 1577 log sequence number 0 972737368
      InnoDB: is in the future! Current system log sequence number 0 933419020.
      InnoDB: Your database may be corrupt or you may have copied the InnoDB
      InnoDB: tablespace but not the InnoDB log files. See
      InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
      InnoDB: for more information.
      InnoDB: Error: trying to access page number 4294965119 in space 0,
      InnoDB: space name .\ibdata1,
      InnoDB: which is outside the tablespace bounds.
      InnoDB: Byte offset 0, len 16384, i/o type 10.
      InnoDB: If you get this error at mysqld startup, please check that
      InnoDB: your my.cnf matches the ibdata files that you have in the
      InnoDB: MySQL server.
      100308 14:50:02  InnoDB: Assertion failure in thread 960 in file .\fil\fil0fil.c line 3959
      InnoDB: We intentionally generate a memory trap.
      InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
      InnoDB: If you get repeated assertion failures or crashes, even
      InnoDB: immediately after the mysqld startup, there may be
      InnoDB: corruption in the InnoDB tablespace. Please refer to
      InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
      InnoDB: about forcing recovery.
      100308 14:50:02 [ERROR] mysqld-nt: Got signal 11. Aborting!
      100308 14:50:02 [ERROR] Aborting
      100308 14:50:02 [Note] mysqld-nt: Shutdown complete
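
    If mysqld ever stays up under innodb_force_recovery, the usual salvage path is to dump everything immediately and rebuild the InnoDB files from scratch. A sketch, assuming a default Windows layout (the paths are placeholders, not from the post):

      # my.ini -- lowest forced-recovery level that still lets the server start
      [mysqld]
      innodb_force_recovery = 4

      rem dump while the server is up in forced-recovery mode
      mysqldump -u root -p --all-databases > C:\backup\alldb.sql
      rem then: stop MySQL, move ibdata1 and ib_logfile* aside,
      rem remove innodb_force_recovery from my.ini, restart, and reload
      mysql -u root -p < C:\backup\alldb.sql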

  • NginX GeoIP cause extra load?

    - by Miko
    Because Nginx requires the geoip_ directives to go into the main http { } block of the nginx.conf file, does that mean the geoip data is being pulled for every single request? In other words, does EngineX look up the geoip data for ALL of the requests coming in, even for those not needing the data? Also, nginx's documentation page lists "geoip_country" as a valid variable, but if I use it, EngineX throws the following error:

      [emerg]: unknown "geoip_country" variable
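
    For what it's worth, nginx evaluates variables lazily (on first use per request, then cached), so placing the geoip_ directives in http { } should not by itself force a lookup for requests that never reference a $geoip_* variable. Also, geoip_country is a directive, not a variable: it loads the country database, and the module then exposes variables such as $geoip_country_code and $geoip_country_name. A minimal sketch (the .dat path is an assumption):

      http {
          geoip_country /usr/share/GeoIP/GeoIP.dat;
          server {
              location / {
                  # two-letter code, e.g. "US"
                  add_header X-Country $geoip_country_code;
              }
          }
      }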

  • Conditional hotkey or include in AutoHotKey

    - by Bowen
    Is there a way to define a hotkey conditionally in AutoHotKey? I want different keyboard mappings for different machines with different physical keyboards. This is what I want to do:

      RegRead, ComputerName, HKEY_LOCAL_MACHINE, System\CurrentControlSet\Control\ComputerName, ComputerName
      If ( ComputerName = BDWELLE-DIM8300 )
      {
          #Include %A_ScriptDir%\Mappings-BDWELLE-DIM8300.ahk
      }

    or:

      RegRead, ComputerName, HKEY_LOCAL_MACHINE, System\CurrentControlSet\Control\ComputerName, ComputerName
      If ( ComputerName = BDWELLE-DIM8300 )
      {
          LWin::LAlt
          [more hotkey definitions that only apply to this machine]
      }

    but since AHK parses hotkey definitions and #Include statements BEFORE interpreting If statements, the hotkey definitions (whether buried in an #Include or not) do not respect the If condition. Thanks for pointing me to AutoHotKey_L! Do you have a specific example of how to conditionally define a hotkey? The syntax is very confusing. Here's what I'm trying (after having installed AutoHotKey_L.exe in place of AutoHotKey.exe):

      RegRead, ComputerName, HKEY_LOCAL_MACHINE, System\CurrentControlSet\Control\ComputerName, ComputerName
      #If ( ComputerName = BDWELLE-DIM8300 )
      LWin::LAlt

    but that doesn't seem to work...
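
    One likely culprit (an editor's guess, not from the thread): #If takes an expression, and in expression syntax an unquoted BDWELLE-DIM8300 is read as a variable name (empty here), so the comparison never matches. Quoting the literal, and closing the section, would look like:

      RegRead, ComputerName, HKEY_LOCAL_MACHINE, System\CurrentControlSet\Control\ComputerName, ComputerName
      #If (ComputerName = "BDWELLE-DIM8300")  ; literals must be quoted in an expression
      LWin::LAlt
      #If  ; end of the conditional hotkey section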

  • Spreadsheet RDBMS

    - by John Nilsson
    I'm looking for software (or a set of software) that will let me combine spreadsheet and database workflows: data entry in a spreadsheet to enable simple entry from the clipboard, analysis based on joins, unions and aggregates, and pivot/DataPilot summaries. So far I've only found either spreadsheets OR db applications but no good combination. OpenOffice Base with Calc for tables doesn't support aggregates, for example; Google Spreadsheet + Visualization API doesn't support unions or joins; Zoho DB doesn't let me paste from the clipboard. Any hints on software that could be used? Basically I'm trying to do some analysis of my personal bank transactions. Problem 1, ETL. The data has to be moved from my bank to a database. My current solution is to manually copy and paste the data into one spreadsheet per account from my internet bank. Pains: not very scriptable; lots of scrolling to reach the point to paste; have to apply sorting and formatting to the pasted data each time. Problem 2, analysis. I then want to aggregate the different accounts in one sweep to track transfers per type of transfer over all accounts. The actual aggregation is still unsolved because I can't find a UNION equivalent in the spreadsheets I've tried.
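
    In SQL terms, the missing UNION step would look something like this (table and column names are invented for illustration):

      SELECT type, SUM(amount) AS total
      FROM (
          SELECT type, amount FROM account_a
          UNION ALL
          SELECT type, amount FROM account_b
      ) AS all_accounts
      GROUP BY type;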

  • DPM 2010 PowerShell Script to Easily Restore Multiple Files

    - by bmccleary
    I’ve got what I thought would be a simple task with Data Protection Manager 2010 that is turning out to be quite frustrating. I have a file server on one server and it is the only server in a protection group. This file server is the repository for a document management application which stores the files according to the data within a SQL database. Sometimes users inadvertently delete files from within our application and we need to restore them. We have all the information needed to restore the files to include the file name, the folder that the file was stored in and the exact date that the file was deleted. It is easy for me to restore the file from within the DPM console since we have a recovery point created every day, I simply go to the day before the delete, browse to the proper folder and restore the file. The problem is that using the DPM console, the cumbersome wizard requires about 20 mouse clicks to restore a single file and it takes 2-4 minutes to get through all the windows. This becomes very irritating when a client needs 100’s of files restored… it takes all day of redundant mouse clicks to restore the files. Therefore, I want to use a PowerShell script (and I’m a novice at PowerShell) to automate this process. I want to be able to create a script that I pass in a file name, a folder, a recovery point date (and a protection group/server name if needed) and simply have the file restored back to its original location with some sort of success/failure notification. I thought it was a simple basic task of a backup solution, but I am having a heck of a time finding the right code. I have seen the sample code at http://social.technet.microsoft.com/wiki/contents/articles/how-to-use-a-windows-powershell-script-to-recover-an-item-in-data-protection-manager.aspx that I have tried to follow, but it doesn’t accomplish what I really want to do (it’s too simplistic) and there are errors in the sample code. Therefore, I would like to get some help writing a script to restore these files. An example of the known values to restore the data are: DPM Server: BACKUP01 Protection Group: Document Repository Data Protected Server: FILER01 File Path: R:\DocumentRepository\ToBackup\ClientName\Repository\2010\07\24\filename.pdf Date Deleted: 8/2/2010 (last recovery point = 8/1/2010) Bonus Points: If you can help me not only create this script, but also show me how to automate by providing a text file with the above information that the PowerShell script loops through, or even better, is able to query our SQL server for the needed data, then I would be more than willing to pay for this development.
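
    A starting-point sketch in that direction: the cmdlet names below are from the DPM 2010 Management Shell, but the parameter sets vary between DPM versions, so each call should be checked against Get-Help before relying on it.

      param([string]$FileName, [string]$Folder, [datetime]$RecoveryDate)

      Connect-DPMServer -DPMServerName "BACKUP01"
      $pg = Get-ProtectionGroup -DPMServerName "BACKUP01" |
            Where-Object { $_.FriendlyName -eq "Document Repository" }
      $ds = Get-Datasource -ProtectionGroup $pg |
            Where-Object { $_.ProductionServerName -match "FILER01" }
      # newest recovery point taken on or before the requested date
      $rp = Get-RecoveryPoint -Datasource $ds |
            Where-Object { $_.RepresentedPointInTime -le $RecoveryDate } |
            Sort-Object RepresentedPointInTime | Select-Object -Last 1
      $so = New-SearchOption -FromRecoveryPoint $rp.RepresentedPointInTime `
            -ToRecoveryPoint $rp.RepresentedPointInTime -SearchDetail FilesFolders `
            -SearchType Contains -SearchString $FileName -Location $Folder
      $item = Get-RecoverableItem -Datasource $ds -SearchOption $so
      $opt = New-RecoveryOption -TargetServer "FILER01" -RecoveryLocation OriginalServer `
             -FileSystem -RecoveryType Recover -OverwriteType Overwrite
      Recover-RecoverableItem -RecoverableItem $item -RecoveryOption $opt

    The text-file loop then falls out naturally: put FileName, Folder and RecoveryDate columns in a CSV and drive the script with Import-Csv restore-list.csv | ForEach-Object { ... }.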

  • Fox Pro Database File Locked by Shadow Copy?

    - by leeand00
    I'm using Process Explorer to determine what process has a lock on a particular Fox Pro Database file in windows. It's telling me that System has a lock on it. When I go to kill the "System" process (which if you ask me doesn't sound like a very good idea), it asks me if I'm sure I want to kill the System process. I haven't answered yes yet. It's a company server, and I'm thinking that maybe my only option is to tell everybody to get off of it and reboot. Do I have any other options?
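
    When Process Explorer blames "System", the handle is often held by the kernel on behalf of the Server service for an SMB client, and that network open can be closed without killing anything or rebooting. A sketch (the ID is a placeholder):

      rem list files opened over the network, with IDs and usernames
      net file
      rem or:
      openfiles /query /fo table
      rem close one open by its ID (that client may lose unsaved changes)
      net file 42 /close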

  • Keyboard doesn't work after upgrade to Debian Wheezy

    - by mikhail
    After an upgrade from Lenny to Wheezy, the keyboard and mouse don't work in X (the keyboard works before X starts). I looked around the internet for this issue and found some suggested solutions:

      remove xorg.conf (http://forums.debian.net/viewtopic.php?f=7&t=62880)
      update udev and base-files (http://forums.debian.net/viewtopic.php?f=6&t=64927&p=376136#p376136)
      remove the /run directory (http://forums.debian.net/viewtopic.php?f=6&t=64927&p=376136#p376136)
      reinstall xserver and xorg

    But nothing helped me :( The X server logs don't have any messages about keyboard or mouse errors. Below you can see the configuration of my system:

      krestyaninov@xxx# uname -a
      Linux xxx 3.0.0-1-686-pae #1 SMP Sat Aug 27 16:41:03 UTC 2011 i686 GNU/Linux
      krestyaninov@xxx# dpkg -l | grep udev
      ii  libgudev-1.0-0   172-1   GObject-based wrapper library for libudev
      ii  libudev0         172-1   libudev shared library
      ii  udev             172-1   /dev/ and hotplug management daemon
      krestyaninov@xxx# dpkg -l | grep base-files
      ii  base-files       6.5     Debian base system miscellaneous files
      krestyaninov@xxx# dpkg -l | grep xorg
      ii  xorg             1:7.6+8 X.Org X Window System
      ...
      ii  xserver-xorg     1:7.6+8 X.Org X server
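
    One avenue worth ruling out (a guess, not a confirmed fix): on Wheezy, X discovers keyboards and mice through udev and the evdev input driver, so if the input-driver packages went missing during the dist-upgrade, X starts cleanly but simply has no input devices. Something like:

      apt-get install --reinstall xserver-xorg-input-all xserver-xorg-input-evdev
      grep -iE "evdev|Adding input device" /var/log/Xorg.0.log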

  • sudo equivalent configuration on Solaris 10

    - by daedlus
    I am looking to configure Solaris 10 to achieve the below:

      user=jon  group=jtu  jon is owner of /opt/app
      user=ken  group=jtu  ken is owner of /data

    On Linux I have added the line below to sudoers so that jon is able to access /data/tmp and delete files:

      %jtu ALL= NOPASSWD: /bin/*, /usr/bin/*

    This doesn't work on Solaris 10 since there is no sudo by default. How do I configure Solaris 10 so jon can delete files in /data/tmp? Thanks
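
    If the only real goal is letting jon delete files under /data/tmp, plain group permissions (or an ACL) may be enough on Solaris 10, with no sudo equivalent involved. A sketch using the users and paths from the post:

      # give the jtu group write+search on the directory
      chgrp jtu /data/tmp
      chmod g+rwx /data/tmp
      # or grant it via a UFS ACL without changing the group owner
      # (-r recalculates the mask entry)
      setfacl -r -m g:jtu:rwx /data/tmp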

  • Is there a way to tell what the download speed is from a site/server

    - by Memor-X
    I'm looking into which ISP I should go with at the place I'm moving into. One ISP which I have been told good things about has data limits (which, when breached, drop your speed to dial-up speed) but multiple memberships which, apart from the cheapest membership, have the same data limits (the cheapest has a 10GB data limit). In their fine print, they say that each membership has a different port speed, and one particular part jumps out at me:

      These speeds are the NBN (National Broadband Network) port speed and not the actual Internet data speed which will vary based on numerous factors including destination you are reaching, your network equipment, network congestion etc.

    I plan to use the net to download DLC and patch updates for games (particularly the insanely large update for the Wii U) and games from Steam (if I find any good ones other than this one JRPG), and to download development resources from free sites like Deposit Files and Mediafire. Since one membership with a 1000GB data limit is $145 with a port speed of 12Mbps/1Mbps (cheapest) while another with the same limit is $190 with a port speed of 100Mbps/40Mbps (expensive), I am wondering how I can tell what the speed coming from a site is, since I don't want to be wasting money on speed that makes no difference (unlike memory, which I'd rather have to spare). NOTE: the speeds are for a fibre-optic network, whereas my new place can only connect via fixed wireless, which I may not be able to get with this ISP; but if I can get this network, then good. NOTE 2: most of the resources I get from Deposit Files are about 200 MB or less; if a resource pack is bigger, it's split into multiple archives (like .7z.part), while on Mediafire I have yet to see one bigger than 150MB. NOTE 3: one update patch for a PS3 game is close to 4 GB (Disgaea 4), which I need to get access to the DLC, and on the weekend I downloaded 5 GB for the Final Fantasy XIV open beta on the PS3, which took almost 5 hours.
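
    One practical way to tell is to time a sample download from each kind of source on an existing connection; curl can report the average transfer rate directly (the URL is a placeholder):

      curl -o /dev/null -w 'average: %{speed_download} bytes/sec\n' \
           http://example.com/some-large-file.zip

    As a sanity check against NOTE 3: 5 GB in roughly 5 hours works out to about 40,000 megabits over 18,000 seconds, or about 2.2 Mbps, so that PS3 download was nowhere near even the 12Mbps port speed.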

  • mdadm superblock hiding/shadowing partition

    - by Kjell Andreassen
    Short version: is it safe to do mdadm --zero-superblock /dev/sdd on a disk with a partition (/dev/sdd1), filesystem and data? Will the partition be mountable and the data still there?

    Longer version: I used to have a raid6 array but decided to dismantle it. The disks from the array are now used as non-raid disks. The superblocks were cleared:

      sudo mdadm --zero-superblock /dev/sdd

    The disks were repartitioned with fdisk and filesystems created with mkfs.ext4. All disks were mounted and everything worked fine. Today, a couple of weeks later, one of the disks is failing to be recognized when trying to mount it, or rather the single partition on it:

      sudo mount /dev/sdd1 /mnt/tmp
      mount: special device /dev/sdd1 does not exist

    fdisk claims there to be a partition on it:

      sudo fdisk -l /dev/sdd

      Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes
      255 heads, 63 sectors/track, 243201 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0xb06f6341

         Device Boot      Start         End      Blocks   Id  System
      /dev/sdd1               1      243201  1953512001   83  Linux

    Of course mount is right: the device /dev/sdd1 is not there. I'm guessing udev did not create it because of the mdadm data still on the disk:

      sudo mdadm --examine /dev/sdd
      /dev/sdd:
                Magic : a92b4efc
              Version : 1.2
          Feature Map : 0x0
           Array UUID : b164e513:c0584be1:3cc53326:48691084
                 Name : pringle:0  (local to host pringle)
        Creation Time : Sat Jun 16 21:37:14 2012
           Raid Level : raid6
         Raid Devices : 6

       Avail Dev Size : 3907027120 (1863.02 GiB 2000.40 GB)
           Array Size : 15628107776 (7452.06 GiB 8001.59 GB)
        Used Dev Size : 3907026944 (1863.02 GiB 2000.40 GB)
          Data Offset : 2048 sectors
         Super Offset : 8 sectors
                State : clean
          Device UUID : 3ccaeb5b:843531e4:87bf1224:382c16e2

          Update Time : Sun Aug 12 22:20:39 2012
             Checksum : 4c329db0 - correct
               Events : 1238535

               Layout : left-symmetric
           Chunk Size : 512K

         Device Role : Active device 3
         Array State : AA.AAA ('A' == active, '.' == missing)

    My mdadm --zero-superblock apparently didn't work. Can I safely try it again without losing data? If not, are there any suggestions on what to do? Not starting mdadm at all on boot might be a (somewhat unsatisfactory) solution.
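
    For what it's worth, the metadata shown above is version 1.2 with "Super Offset : 8 sectors", i.e. the superblock sits 4 KiB into the raw device, inside the gap before /dev/sdd1 (which per the fdisk output starts at sector 63). So zeroing it again should not touch the filesystem, though imaging the disk first is cheap insurance. A sketch:

      # clear the stale superblock on the whole disk (not the partition)
      sudo mdadm --zero-superblock /dev/sdd
      # re-read the partition table so udev recreates /dev/sdd1
      sudo partprobe /dev/sdd
      sudo mount /dev/sdd1 /mnt/tmp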

  • Project Server 2007 install issue - ProjectEventService won't start

    - by Brian Meinertz
    Trying to install PS2007 with SP1 on Server 2003. The install goes fine, but when running the SharePoint Configuration Wizard, it fails at stage 6 of 12 with the error:

      Failed to register SharePoint Services.
      An exception of type System.InvalidOperationException was thrown.
      Additional exception information: Cannot start service ProjectEventService on computer '.'.

    From the PSCDiagnostics log:

      Exception: System.InvalidOperationException: Cannot start service ProjectEventService on computer '.'.
      ---> System.ComponentModel.Win32Exception: The service did not respond to the start or control request in a timely fashion.

    The ProjectEventService (Microsoft Office Project Server Event) won't even start manually using the Network Service account. Starting the service with a domain account works, but subsequently running the Config Wizard causes the service to be removed and re-provisioned to run using the Network Service account, which again fails. Presumably Network Service needs elevated permissions, but even adding it to the local Admin group makes no difference. Anyone come across this sort of issue before?
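
    One generic knob for "did not respond to the start or control request in a timely fashion" failures is the Service Control Manager's 30-second start timeout, which can be raised via the registry (a standard Windows setting, not PS2007-specific; a reboot is required):

      reg add HKLM\SYSTEM\CurrentControlSet\Control /v ServicesPipeTimeout /t REG_DWORD /d 60000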

  • Set up FTP user with ProFTPD on Ubuntu

    - by kidrobot
    I want to set up a user "ftp" so they can upload and download files in my /home/httpd/mysite/public_html directory. All files in public_html are owned by user ftp and are in group www-data, so the ftp user looks like so:

      uid=108(ftp) gid=33(www-data) groups=33(www-data),65534(nogroup)

    When I try to connect via an FTP client I get:

      530 Login incorrect.
      ftp: Login failed.

    What do I need to uncomment/add to the proftpd.conf file to make this work?
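
    Two common causes of "530 Login incorrect" for a dedicated ftp user are worth checking (educated guesses, not a confirmed diagnosis): system users with a nologin shell are rejected unless RequireValidShell is off, and ProFTPD honours /etc/ftpusers by default (UseFtpUsers on), where "ftp" is frequently pre-listed as denied. A proftpd.conf sketch:

      # allow users whose shell is not listed in /etc/shells
      RequireValidShell off
      # jail members of the www-data group into the site directory
      DefaultRoot /home/httpd/mysite/public_html www-data

    Also remove "ftp" from /etc/ftpusers if it is listed there, or disable that check with "UseFtpUsers off".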

  • RBAC configuration on solaris10

    - by scot
    I am looking for an RBAC configuration on Solaris 10 to achieve the below:

      user=jon  group=jtu  jon is owner of /opt/app
      user=ken  group=jtu  ken is owner of /data

    On Linux I have added the line below to sudoers so that jon is able to access /data/tmp and delete files:

      %jtu ALL= NOPASSWD: /bin/*, /usr/bin/*

    This doesn't work on Solaris 10 since there is no sudo by default. How do I configure RBAC in Solaris 10 so that jon is able to delete files in /data/tmp? Thanks
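
    A minimal RBAC sketch for this (the profile name "Data Cleanup" is invented; the files are the stock Solaris 10 RBAC databases):

      # /etc/security/prof_attr -- declare a rights profile
      Data Cleanup:::Remove files under /data/tmp:

      # /etc/security/exec_attr -- let the profile run rm as ken, who owns /data
      Data Cleanup:solaris:cmd:::/usr/bin/rm:euid=ken

      # assign the profile to jon
      usermod -P "Data Cleanup" jon

      # jon then deletes via pfexec
      pfexec rm /data/tmp/somefile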

  • disk-to-disk backup without costly backup redundancy?

    - by AaronLS
    A good backup strategy involves a combination of: 1) disconnected backups/snapshots that will not be affected by bugs, viruses, and/or security breaches; 2) geographically distributed backups to protect against local disasters; 3) testing backups to ensure that they can be restored as needed. Generally I take an onsite backup daily and an offsite backup weekly, and do test restores periodically. In the rare circumstance that I need to restore files, I do so from the local backup. Should a catastrophic event destroy the servers and local backups, then the offsite weekly tape backup would be used to restore the files. I don't need multiple offsite backups with redundancy. I ALREADY HAVE REDUNDANCY THROUGH THE USE OF BOTH LOCAL AND REMOTE BACKUPS. I have recovery blocks and par files with the backups, so I already have protection against a small percentage of corrupt bits. I perform test restores to ensure the backups function properly. Should the remote backups experience a data loss, I can replace them with one of the local backups. There are historical offsite backups as well, so if a data loss was not noticed for a few weeks (such as a bug/security breach/virus), the data could be restored from an older backup. By doing this, the only scenario that poses a risk of complete data loss would be one where the local copies, the remote copies, and the servers all experienced a data loss in the same time period. I'm willing to risk that happening since the odds of that trifecta are negligibly small, and the data isn't THAT valuable to me. So I hope I have emphasized that I don't need redundancy in my offsite backups because I have covered all the bases. I know this exact technique is employed by numerous businesses. Of course there are some that take multiple offsite backups, because the data is so incredibly valuable that they don't even want to risk that trifecta disaster, but in the majority of cases the trifecta disaster is an accepted risk. I HAD TO COVER ALL THIS BECAUSE SOME PEOPLE DON'T READ!!! I think I have justified my backup strategy, and the majority of businesses who use offsite tape backups do not have any additional redundancy beyond what is mentioned above (recovery blocks, par files, historical snapshots). Now I would like to eliminate the use of tapes for offsite backups and instead use a backup service. Most, however, are extremely costly in $/GB/month of storage. I don't mind paying for transfer bandwidth, but the cost of storage is way too high. All of them advertise that they maintain backups of the data, and I imagine they use RAID as well. Obviously if you were using them to host servers this would all be necessary, but for my scenario, I am simply replacing my offsite backups with such a service. So there is no need for RAID, and absolutely no value in another layer of backups of backups. My one and only question: "Are there online data-storage/backup services that do not use redundancy or offer backups (backups of my backups) as part of their packages, and thus are more reasonably priced?" NOT my question: "Is this a flawed strategy?" I don't care if you think this is a good strategy or not. I know it's pretty standard. Very few people make an extra copy of their offsite backups. They already have local backups that they can use to replace the remote backups if something catastrophic happens at the remote site. Please limit your responses to the question posed.
    Sorry if I seem a little abrasive, but I had some trolls in my last post who didn't read my requirements or my question, and were trying to go off answering a totally different question. I made it pretty clear, but didn't try to justify my strategy, because I didn't ask whether my strategy was justifiable. So I apologize if this was lengthy, as it really didn't need to be, but there are so many trolls here who try to sidetrack questions by responding without addressing the question at hand.

  • Remove Downed Exchange Server from First Administrative Group

    - by Campo
    I had a server die. It is gone forever. It is still listed in the Servers folder of the First Administrative Group in the Exchange System Manager. When I click "All Tasks > Remove Server" I get the following error:

      The Server "SERVERNAME" cannot be removed because:
      - One or more users currently use a mailbox on this server. These users must be moved to a mailbox store on a different server or be mail disabled before uninstalling this server.
      Facility: Exchange System Manager
      ID no: c103f492
      Exchange System Manager

    Any help would be MUCH appreciated. I cannot access the mailbox stores anymore and I do not care about the lost mailboxes. We deleted the inactive and old users as well. So I am stumped on this one. I just need to remove the old machine. THANKS!

  • Alfresco Community Edition Consultants

    - by Talkincat
    I am in the process of putting together a document management system based on Alfresco Community 3.2r2. Because Alfresco will not allow its partners to work with the Community edition, I have found it devilishly tricky to find consultants that specialize in Alfresco to help me with this project. Can anyone point me in the direction of someone that can help me get this system up and running? I will mostly need help with integrating Alfresco with Active Directory (LDAP passthrough, user/group sync and SSO) and performance tuning the system. Any help is greatly appreciated.

  • HTTP Error 503. The service is unavailable

    - by user1671639
    I'm struggling to set up the environment in IIS 8. I searched a lot but couldn't find the right solution. I checked the error logs, but got no idea. From C:\Windows\System32\LogFiles\HTTPERR:

      2013-10-09 09:28:39 192.168.43.205 60172 192.168.43.205 80 HTTP/1.1 GET / 503 2 AppOffline qa.hti.local
      2013-10-09 09:28:39 192.168.43.205 60192 192.168.43.205 80 HTTP/1.1 GET /favicon.ico 503 2 AppOffline qa.hti.local

    Then in Event Viewer, warnings:

      A listener channel for protocol 'http' in worker process '11188' serving application pool 'qa.hti.local' reported a listener channel failure. The data field contains the error number.

    (the same warning repeats for worker processes '7492', '9088', '9964' and '7716'). I don't understand what the warning means. Error:

      Application pool 'qa.hti.local' is being automatically disabled due to a series of failures in the process(es) serving that application pool.

    Note: I learned that 5 consecutive failures lead to an app pool crash, and that this limit can be increased. I tried increasing it, but no success. OS: Windows Server 2012. IIS version: 8. Please share your thoughts.
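
    "AppOffline" in HTTPERR together with that error means rapid-fail protection stopped the pool, so after fixing the underlying worker-process failure the pool has to be restarted. A sketch with the stock IIS tooling:

      %windir%\system32\inetsrv\appcmd list apppool /state:Stopped
      %windir%\system32\inetsrv\appcmd start apppool /apppool.name:"qa.hti.local"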

  • Powershell script to send e-mails, losing text in e-mail attachment

    - by Beuy
    I have the following function to send e-mails in PowerShell with attachments:

      function SendMail() {
          if ($e -ine '') {
              $mail = New-Object System.Net.Mail.MailMessage
              $mail.From = $f
              $mail.To.Add($t)
              $mail.Subject = $s
              $att = New-Object System.Net.Mail.Attachment -ArgumentList $l
              $mail.Attachments.Add($att)
              $mail.Body = $b
              $smtp = New-Object System.Net.Mail.SmtpClient($serv)
              $smtp.Send($mail)
          }
      }

    When I try to send a .txt file attachment, it seems to strip out all of the information from within the text file. Any suggestions as to what's going wrong?
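
    A couple of things worth checking (guesses, not a confirmed diagnosis): the Attachment object opens the file at $l when it is constructed, so if the same script writes the .txt beforehand, its writer must be flushed and closed first; and calling $mail.Dispose() after $smtp.Send($mail) releases the handle on the attached file. A hypothetical invocation, since the function reads script-scope variables:

      # all values are placeholders
      $e    = 'send'
      $f    = 'reports@example.com'
      $t    = 'admin@example.com'
      $s    = 'Nightly log'
      $b    = 'Log attached.'
      $l    = 'C:\temp\nightly.txt'
      $serv = 'smtp.example.com'
      SendMail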

  • How to use most of memory available on MySQL

    - by Zilvinas
    I've got a MySQL server which has both InnoDB and MyISAM tables. The InnoDB tablespace is quite small, under 4 GB. MyISAM is big, ~250 GB in total, of which 50 GB is for indexes. Our server has 32 GB of RAM but it usually uses only ~8 GB. Our key_buffer_size is only 2 GB, but our key cache hit ratio is ~95%. I find it hard to believe... Here are our key statistics:

      | Key_blocks_not_flushed | 1868        |
      | Key_blocks_unused      | 109806      |
      | Key_blocks_used        | 1714736     |
      | Key_read_requests      | 19224818713 |
      | Key_reads              | 60742294    |
      | Key_write_requests     | 1607946768  |
      | Key_writes             | 64788819    |

    key_cache_block_size is at the default of 1024. We have 52 GB of index data, and a 2 GB key cache is enough to get a 95% hit ratio. Is that possible? On the other side, the data set is 200 GB, and since MyISAM uses OS (CentOS) caching I would expect it to use a lot more memory to cache accessed MyISAM data. But at this stage I see that the key_buffer is completely used, and our buffer pool size for InnoDB is 4 GB and is also completely used; that adds up to 6 GB. Which means data is cached using just 1 GB? My question is: how could I check where all the free memory could be used? How could I check if MyISAM hits the OS cache for data reads instead of disk?
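
    For what it's worth, the counters above imply an even better ratio than 95%; the usual formula is:

      key cache hit ratio = 1 - (Key_reads / Key_read_requests)
                          = 1 - (60742294 / 19224818713)
                          ≈ 0.997  (about 99.7%)

    Index access patterns are typically skewed towards a hot set of B-tree blocks far smaller than the 52 GB total, which is how a 2 GB key cache can score that high.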

  • Apache & SVN on Ubuntu - Post-commit hook fails silently, pre-commit hook “Permission Denied”

    - by 113169587962668775787
    I've been struggling for the past couple of days to get post-commit email notifications working on my SVN server (running via HTTP with Apache2 on Ubuntu 9.10). SVN commits work fine, but for some reason the hooks are not being properly executed. Here are the configuration settings. Users access the repo via HTTP with the Apache dav_svn module (I created users/passwords via htpasswd in a dav_svn.passwd file). dav_svn.conf:

      <Location /svn/repos>
        DAV svn
        SVNPath /home/svn/repos
        AuthType Basic
        AuthName "Subversion Repository"
        AuthUserFile /etc/apache2/dav_svn.passwd
        Require valid-user
      </Location>

    I created a post-commit hook file that writes a simple message to a file in the repository root, /home/svn/repos/hooks/post-commit:

      #!/bin/sh
      REPOS="$1"
      REV="$2"
      /bin/echo 'worked' > ${REPOS}/postcommit.log

    I set the entire repository to be owned by www-data (the Apache user), and assigned 755 permissions to the post-commit script. When I test the post-commit script as the www-data user in an empty environment, it works:

      sudo -u www-data env - /home/svn/repos/hooks/post-commit /home/svn/repos 7

    But when I commit on a client machine, the commit is successful, yet the post-commit script does not seem to be executed. I also tried running a simple script for the pre-commit hook, and I get an error, even with an empty pre-commit script:

      Commit failed (details follow):
      Can't create null stdout for hook '/home/svn/repos/hooks/pre-commit': Permission denied

    I did a few searches on Google for this error and I presume that this is an issue with the Apache user (www-data) not having adequate permissions, specifically to open /dev/null. I also read that the reason post-commit fails silently is that it doesn't report on stdout. Anyway, I've also tried giving the Apache user (www-data) ownership of the entire repository, and edited the Apache virtual host to allow operations on the server root, and I'm still getting permission denied. /etc/apache2/sites-available/primarydomain.conf:

      <Directory />
        Options FollowSymLinks
        AllowOverride None
        Order allow,deny
        Allow from all
      </Directory>

    Any ideas/suggestions would be greatly appreciated! Thanks
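
    Since the error complains about creating a null stdout rather than about the repository itself, it is worth verifying that /dev/null is still a proper, world-writable device node (it occasionally gets clobbered into a regular root-owned file by a stray redirect run as root):

      ls -l /dev/null
      # expected: crw-rw-rw- 1 root root 1, 3 ...
      # if it has been replaced by a plain file, recreate the device node:
      sudo rm /dev/null
      sudo mknod -m 666 /dev/null c 1 3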

  • Add a custom certificate authority to Ubuntu

    - by rmrobins
    Hello; I have created a custom root certificate authority for an internal network, example.com. Ideally, I would like to be able to deploy the CA certificate associated with this certificate authority to my Linux clients (running Ubuntu 9.04 and CentOS 5.3), such that all of the applications automatically recognize the certificate authority (i.e. I do not want to have to configure Firefox, Thunderbird, etc manually to trust this certificate authority). I have attempted this on Ubuntu by copying the PEM-encoded CA certificate to /etc/ssl/certs/ and /usr/share/ca-certificates/, as well as by modifying /etc/ca-certificates.conf and rerunning update-ca-certificates, however applications do not seem to recognize that I have added another trusted CA to the system. Therefore, is it possible to add a CA certificate once to a system, or is it necessary to manually add the CA to all of the possible applications that will attempt to make SSL connections to hosts signed by this CA in my network? If it is possible to add a CA certificate once to the system, where does it need to go? Thanks.
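
    Two details tend to matter here. For the OpenSSL-based system store on Ubuntu, update-ca-certificates only picks up files with a .crt extension (PEM contents are fine, but the suffix is required); and NSS-based applications such as Firefox and Thunderbird ignore that store entirely, keeping per-profile databases that can be loaded with certutil from libnss3-tools. A sketch (the file names and profile path are placeholders):

      # system store: curl, wget, openssl s_client, ...
      sudo cp example-ca.pem /usr/local/share/ca-certificates/example-ca.crt
      sudo update-ca-certificates

      # NSS apps: import into the profile's certificate database
      certutil -d "$HOME/.mozilla/firefox/<profile>" -A -t "C,," \
               -n "Example Internal CA" -i example-ca.pem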

  • Looking for a USB Thumbdrive / Flash drive encryption solution (not TrueCrypt)

    - by Max888
    I am looking for a USB thumbdrive / flash drive encryption solution. I have searched the net but I have never come across a solution which meets the following:

      1. Must handle at least a 4GB volume
      2. If possible, fully portable (no install required)
      3. Does not require admin rights in order to access/write encrypted files on the flash drive
      4. Does not corrupt data should the flash drive be removed from a USB port while the data is in an 'unencrypted' status
      5. Data is automatically encrypted if the flash drive is removed from a USB port while the data is in an 'unencrypted' status
      6. Portable apps must be able to run from the 'unencrypted' volume (in non-admin mode)

    PLEASE do not mention TrueCrypt, as I am not considering it (especially because of wish-list item #3). Many thanks! Update 5th October 2009: still unresolved.

  • Sharepoint 2007 Event ID 6482

    - by Dave M
    Our two-server SharePoint 2007 SP2 farm has an issue. Event ID 6482 appears in the Application log of the web front end many times a day, often many times a minute. The full error, from Office SharePoint Server:

      Event Type: Error
      Event Source: Office SharePoint Server
      Event Category: Office Server Shared Services
      Event ID: 6482
      Date: 11/12/2009
      Time: 3:05:22 PM
      User: N/A
      Computer: XXXXXX
      Description: Application Server Administration job failed for service instance Microsoft.Office.Server.Search.Administration.SearchServiceInstance (36a9b7ef-59aa-4f94-8887-8bf7b56f2f91).
      Reason: Error during encryption or decryption. System error code 0.
      Technical Support Details:
      System.ArgumentException: Error during encryption or decryption. System error code 0.
         at Microsoft.Office.Server.Search.Administration.SearchServiceInstance.SynchronizeDefaultContentSource(IDictionary applications)
         at Microsoft.Office.Server.Search.Administration.SearchServiceInstance.Synchronize()
         at Microsoft.Office.Server.Administration.ApplicationServerJob.ProvisionLocalSharedServiceInstances(Boolean isAdministrationServiceJob)
      For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    The SharePoint site appears to be functioning normally and Search returns expected results. Any suggestions would be appreciated.

  • configuration transfer over scp on commit not working on Juniper EX-2200 switch

    - by liv2hak
    I am making a series of configuration changes on a Junos EX-2200 switch. The switch is connected to a PC via an ethernet cable. The IP address of the switch is 192.168.1.1, and I am able to ping between the switch (192.168.1.1) and the PC (192.168.1.10) in both directions. After the changes I make, I run the following commands:

      set system archival configuration transfer-on-commit
      set system archival configuration archive-sites "scp://karthik@192.168.1.10:/home/karthik/ws_karthik/sw1_config_1.txt" password godfather
      commit

    There is a user with username "karthik" and password "godfather", and the path shown above also exists on the PC. However, I don't see the configuration file sw1_config_1.txt created at the path specified. I have also verified that sshd is running on the PC (192.168.1.10). Am I doing something wrong here? It would be great if anyone could help me out.
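
    One thing worth checking (an educated guess, not a verified fix): Junos treats each archive-sites entry as a destination directory and generates the transferred file's name itself (typically hostname_timestamp_juniper.conf.gz), so pointing the URL at a specific file name like sw1_config_1.txt can make the transfer fail. The stanza would then look like:

      system {
          archival {
              configuration {
                  transfer-on-commit;
                  archive-sites {
                      "scp://karthik@192.168.1.10:/home/karthik/ws_karthik" password "godfather";
                  }
              }
          }
      }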
