Search Results for 'data persistence'

  • MySQL slow query log logging all queries

    - by Blanka
    We have a MySQL 5.1.52 Percona Server 11.6 instance that suddenly started logging every single query to the slow query log. The long_query_time configuration is set to 1, yet suddenly we're seeing every single query (e.g. we just saw one that took 0.000563s!). As a result, our log files are growing at an insane pace; we just had to truncate a 180G slow query log file. I tried setting the long_query_time variable to a really large number (1000000) to see if the logging stopped altogether, but got the same result.

      show global variables like 'general_log%';
      +------------------+--------------------------+
      | Variable_name    | Value                    |
      +------------------+--------------------------+
      | general_log      | OFF                      |
      | general_log_file | /usr2/mysql/data/db4.log |
      +------------------+--------------------------+
      2 rows in set (0.00 sec)

      show global variables like 'slow_query_log%';
      +----------------------------------------+-------------------------------+
      | Variable_name                          | Value                         |
      +----------------------------------------+-------------------------------+
      | slow_query_log                         | ON                            |
      | slow_query_log_file                    | /usr2/mysql/data/db4-slow.log |
      | slow_query_log_microseconds_timestamp  | OFF                           |
      +----------------------------------------+-------------------------------+
      3 rows in set (0.00 sec)

      show global variables like 'long%';
      +-----------------+----------+
      | Variable_name   | Value    |
      +-----------------+----------+
      | long_query_time | 1.000000 |
      +-----------------+----------+
      1 row in set (0.00 sec)

  • Cannot find SQL instance at all (while installing an ASP.NET app on IIS)

    - by giddy
    So I'm really not a DBA, I'm an app dev. I had to install my ASP.NET MVC 3 app on my client's (a large company) IIS6 + Win2k3 machine, with absolutely no help from their sysadmins. The final problem now is SQL Server 2008 R2: after figuring out how to create a login from Windows, my app and sqlcmd.exe always complain that they cannot find a SQL Server instance. All the SQL services (in services.msc) are running and log on as the local system. I can log in fine with SQL Server Management Studio using Windows Auth. I created my database, and my ASP.NET app needs/uses Windows auth. But for the love of God, whatever I do, my app always complains it cannot find the instance. (I also tried running sqlcmd and it complains of the same thing!) My database connection string looks like this:

      Data Source=machinename\username;Initial Catalog=myDataStore;Integrated Security=True;MultipleActiveResultSets=True

    Machinename\user is the same thing that shows up on the SQL Server Management Studio login if I choose Windows Authentication, right?

  • Enterprise online backup providers

    - by PHLiGHT
    We've used Iron Mountain's LiveVault service but found that it was only good for file-level backups, though we liked how it backed up every 15 minutes. It doesn't support Exchange 2007-2010, and the web interface was very poor. What is everyone else using? The most notable names in online backup, such as Mozy and Carbonite, don't really seem suitable for larger companies. We have SQL, Exchange and SharePoint servers and are looking to virtualize in the near future; until then, bare-metal restore capability would be nice. We are currently using Backup Exec 12.5, but that can be so troublesome at times. We have about 2 TB of data, of which 1 TB is archival.

  • 404 when doing safe-upgrade on a Lucid 64-bit box?

    - by Millisami
    Why do I see 404 errors when doing sudo aptitude safe-upgrade on my Lucid 64-bit box?

      deploy@li167-251:~$ sudo aptitude safe-upgrade
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Reading extended state information
      Initializing package states... Done
      The following packages will be upgraded:
        apache2 apache2-mpm-prefork apache2-threaded-dev apache2-utils apache2.2-bin apache2.2-common apt apt-utils
        base-files binutils bzip2 dpkg dpkg-dev gzip ifupdown krb5-multidev language-pack-en language-pack-en-base
        language-selector-common libatk1.0-0 libatk1.0-dev libavahi-client3 libavahi-common-data libavahi-common3
        libbz2-1.0 libc-bin libc-dev-bin libc6 libc6-dev libc6-i686 libcups2 libfreetype6 libfreetype6-dev
        libglib2.0-0 libglib2.0-dev libgssapi-krb5-2 libgssrpc4 libgtk2.0-0 libgtk2.0-common libgtk2.0-dev
        libk5crypto3 libkadm5clnt-mit7 libkadm5srv-mit7 libkdb5-4 libkrb5-3 libkrb5-dev libkrb5support0
        libldap-2.4-2 libldap2-dev libmysqlclient-dev libmysqlclient16 libnotify-dev libnotify1 libpam-modules
        libpam-runtime libpam0g libparted0debian1 libpng12-0 libpng12-dev libpq-dev libpq5 libssl-dev libssl0.9.8
        libtiff4 libudev0 libusb-0.1-4 linux-libc-dev mountall mysql-client mysql-client-5.1 mysql-client-core-5.1
        mysql-common mysql-server mysql-server-5.1 mysql-server-core-5.1 openssh-client openssh-server openssl
        parted python-apt sudo tzdata udev upstart ureadahead wget xulrunner-1.9.2 xulrunner-1.9.2-dev
      The following packages are RECOMMENDED but will NOT be installed:
        colibri debhelper fakeroot hicolor-icon-theme libatk1.0-data libglib2.0-data libgtk2.0-bin
        libhtml-template-perl manpages-dev notification-daemon notify-osd ssl-cert xauth xfce4-notifyd
      88 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
      Need to get 85.8MB of archives. After unpacking 1712kB will be used.
      Do you want to continue? [Y/n/?] y
      Writing extended state information... Done
      Get:1 http://security.ubuntu.com/ubuntu/ lucid-updates/main libpam-modules 1.1.1-2ubuntu5 [358kB]
      Get:2 http://security.ubuntu.com/ubuntu/ lucid-updates/main base-files 5.0.0ubuntu20.10.04.2 [70.2kB]
      Get:3 http://security.ubuntu.com/ubuntu/ lucid-updates/main gzip 1.3.12-9ubuntu1.1 [102kB]
      Err http://security.ubuntu.com/ubuntu/ lucid-updates/main libc-bin 2.11.1-0ubuntu7.2
        404 Not Found [IP: 91.189.88.37 80]
      Err http://security.ubuntu.com/ubuntu/ lucid-updates/main libc6 2.11.1-0ubuntu7.2
        404 Not Found [IP: 91.189.88.37 80]
      Err http://security.ubuntu.com/ubuntu/ lucid-updates/main libc6-i686 2.11.1-0ubuntu7.2
      .........

  • How to restore one contact from Address Book with Time Machine

    - by doekman
    I want to restore one contact from my Address Book with Time Machine. To do so, I select the contact in Address Book, then press the Time Machine icon in the dock, and my address book is "taken into space". However, when I browse back in time (either by pressing the back arrow or by selecting a time on the right), the contact details do not change, and I am sure the data changed between those dates. Also, when I do press Restore, I still get the new data, not the backup. Is this a bug, or am I doing something wrong? I'm using OS X 10.6.3 in combination with an external USB drive on an iMac.

  • Optimal Disk Setup for OLTP SQL Server

    - by Chris
    We have a high-transaction (lots of reads and writes) database server (running SQL 2005) that is currently set up with a RAID 1 OS partition (C:) and a RAID 5 data/log/tempdb partition (D:). The C: has 2 drives and the D: has 4 drives. The server hosts around 300 databases ranging from 10MB to 2GB in size. I have been reading up on best practices for partitioning the disks, but would like some opinions on our setup since we are so limited in the number of disks. It seems like RAID 10 is popular, but I don't think we could use it with only 6 total disks to work with. Thanks.

    Update: I went with 3 RAID 1 partitions (2 disks each):

      Partition 1: OS, TempDB, Backups
      Partition 2: Logs
      Partition 3: Data

  • Google Chrome doesn't stay logged in to Google sites when using pinned tabs

    - by Nick T
    Despite checking "stay logged in" or the like on Gmail or Docs, Chrome refuses to stay logged in when I close and re-open it with Google sites pinned. If they're not pinned, it works fine. The "Clear cookies and other site and plug-in data when I close my browser" checkbox in the settings is not checked, and I don't have any cookie exceptions; all settings are defaults, and incognito mode is not being used. This occurs on all my computers using Chrome. I have deleted my cookies file (%userprofile%\AppData\Local\Google\Chrome\User Data\Default\Cookies) with no effect (other than losing the logins that ordinarily work fine). Of note: when I relaunch Chrome with Gmail pinned and it asks me to log in, doing so once will fail (nothing happens; no errors), but it will work on the second attempt. If I refresh the window before logging in, it will work on the first attempt.

  • Windows folder encryption

    - by Razor
    My situation: I know that BitLocker is meant to encrypt whole drives, but I have a hard drive that is already fully partitioned and contains data. I'd like to encrypt part of one partition, leaving the rest of the partition accessible. I would very much like to avoid programs like Norton PartitionMagic (which resize/split partitions), because every time I used them I had problems with the stored data.

    Question: Is there any way, built-in alternative, or 3rd-party app that integrates with the Windows login to encrypt one subset of a partition?

    Edit: I've heard horror stories about EFS, which is why I don't want to use it unless there have been reliability improvements in Windows 8. Some highlights from that article:

      "In fact I've only used EFS twice in the last ten years on my own computers and on both occasions I've lost files and documents. I therefore cannot recommend you ever encrypt your files with this Windows feature."

      "Unfortunately, because of incompatibilities with some differing versions of EFS, files can end up scrambled and unrecoverable."

  • Recommendations for an efficient offsite remote backup solution for VMs

    - by senorsmile
    I am looking for recommendations for backing up my current 6 VMs (soon to grow to up to 20). Currently I am running a two-node Proxmox cluster (a Debian base using KVM for virtualization, with a custom web front end for administration). I have two nearly identical boxes with AMD Phenom II X4s and Asus motherboards. Each has four 500 GB SATA2 HDDs: one for the OS and other data for the Proxmox install, and three using mdadm+drbd+lvm to share 1.5 TB of storage between the two machines. I mount LVM images to KVM for all of the virtual machines, and I currently have the ability to do live transfer from one machine to the other, typically within seconds (it takes about 2 minutes on the largest VM, running Win2008 with MS SQL Server).

    I am using Proxmox's built-in vzdump utility to take snapshots of the VMs and store those on an external hard drive on the network. I then have the JungleDisk service (using Rackspace) sync the vzdump folder for remote offsite backup.

    This is all fine and dandy, but it's not very scalable. For one, the backups themselves can take up to a few hours every night. With JungleDisk's block-level incremental transfers, the sync only transfers a small portion of the data offsite, but that still takes at least half an hour. The much better solution would of course be something that allows me to instantly take the difference between two points in time (say, what was written from 6am to 7am), zip it, then send that difference file to the backup server, which would instantly transfer it to the remote storage on Rackspace.

    I have looked a little into ZFS and its ability to do send/receive. That, coupled with a pipe of the data through bzip or something, would seem perfect. However, it seems that implementing a Nexenta server with ZFS would essentially require at least one or two more dedicated storage servers to serve iSCSI block volumes (via zvols?) to the Proxmox servers. I would prefer to keep the setup as minimal as possible (i.e. NOT having separate storage servers) if at all possible. I have also briefly read about Zumastor. It looks like it could also do what I want, but it appears to have halted development in 2008. So: ZFS, Zumastor, or something else?

  • How do I determine the image compression algorithm?

    - by klijo
    I have a folder containing images in TIFF format, and I need to determine the image compression algorithm used in them. Is there a program I can use to do this? A program that runs on Windows or Linux is fine. When I run file on them, it gives:

      100 (2).tif: TIFF image data, little-endian
      100.tif: TIFF image data, little-endian

    It doesn't say which algorithm is used, whether it's lossy or lossless, or what its name is.
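
    One way to read this programmatically, assuming the Pillow library is available, is to inspect TIFF tag 259, which stores the compression scheme; the tag number and code meanings come from the TIFF 6.0 spec. A minimal sketch:

      # Read TIFF tag 259 (Compression) with Pillow (pip install Pillow).
      # The code-to-name mapping below covers only the common schemes.
      from PIL import Image

      COMPRESSION = {
          1: "none (lossless)",
          5: "LZW (lossless)",
          6: "old-style JPEG (lossy)",
          7: "JPEG (lossy)",
          8: "Deflate/ZIP (lossless)",
          32773: "PackBits (lossless)",
      }

      im = Image.open("100.tif")
      code = im.tag_v2[259]  # raises KeyError if the tag is absent
      print(COMPRESSION.get(code, f"unknown code {code}"))

    From the command line, libtiff's tiffinfo and ImageMagick's identify -verbose report the same field.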

  • Which software to use for RAMDISK on Windows 2008?

    - by Tony_Henrich
    I am building a server machine with lots of RAM, at least 16 GB, and I am planning to put my frequently read and written data in RAM, so I am looking for software for creating RAM disks. This is for Windows Server 2008 R2 Standard 64-bit. Any recommendations? I would like one where I can flush the disk image to persistent storage on demand, for example when Windows shuts down. (I am aware of all the consequences of data loss when power is lost.)

  • Why does Linux software RAID run checkarray on the first Sunday of the month?

    - by mgjk
    It looks like Debian defaults to running checkarray on the first Sunday of the month. This causes massive performance problems and heavy disk usage for 12 hours on my 2TB mirror. Doing this "just in case" is bizarre to me: discovering data out of sync between the two disks, with no quorum to decide which copy is right, would be a failure anyway. This massive check could only tell me that I have an unrecoverable drive failure and corrupt data, which is nice, but not all that helpful. Is it necessary? Given that I have no disk errors and no reason to believe my disks have failed, why is this check necessary? Should I take it out of my cron?

      /etc/cron.d# tail -1 /etc/cron.d/mdadm
      57 0 * * 0 root [ -x /usr/share/mdadm/checkarray ] && [ $(date +\%d) -le 7 ] && /usr/share/mdadm/checkarray --cron --all --quiet

    Thanks for any insight,
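
    For reference, the cron entry fires at 00:57 every Sunday (day-of-week 0), and the [ $(date +\%d) -le 7 ] guard lets it proceed only when that Sunday falls within the first seven days of the month, i.e. the first Sunday. A minimal Python sketch of the same test:

      # The "first Sunday of the month" test from the cron entry above:
      # it is Sunday, and the day of the month is at most 7.
      from datetime import date

      def is_first_sunday(d: date) -> bool:
          return d.weekday() == 6 and d.day <= 7  # weekday() 6 is Sunday

      print(is_first_sunday(date(2024, 6, 2)))  # True: first Sunday of June 2024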

  • How frequent are network partitions on cloud services?

    - by roja
    Much is made of the CAP trade-off for data storage, where conflicts can be introduced if there is a network partition. My question is: is there any evidence that this problem arises with any significant frequency in modern cloud IaaS services, e.g. EC2, Azure, Rackspace? Is it a problem which, despite being a theoretical roadblock in constructing idealised distributed systems, is in fact a non-issue for all practical concerns? Has anyone experienced a network partition within one of these systems (within a single data centre)? If so, would you be willing to share any details?

  • Minimize VirtualBox hard disk size

    - by Aviv
    I have Ubuntu Server 10.04 LTS installed on a virtual machine in VirtualBox. The hard drive is a dynamically growing one with a maximum of 32GB. At the beginning I had 4GB on the hard drive and the size of the .vdi was 4GB. Lately the data on the disk amounts to 15GB, but the size of the .vdi is almost 32GB. Why is that? How can I pack / optimize / defrag the virtual disk so it will be the same size as the data on it? Thanks.

  • How to remove SelectionLinks extension from Chrome on Windows?

    - by Faustas
    It's not possible to remove it using the standard extension disablement/removal features in Chrome; the checkbox is disabled. I also found that the extension gets installed under C:\Users\<user>\AppData\Local\Google\Chrome\User Data\Default\Extensions\ and tried to delete it there, but it's still active. In fact, I tried deleting the whole C:\Users\<user>\AppData\Local\Google\Chrome\User Data\ directory, but the next time Chrome starts, it recreates the directory and the extension gets recreated. It seems like something running in Windows keeps detecting that the Chrome extension is not there anymore and reinstates it. Any ideas how to get rid of it?

  • Testing performance from around the world: how do I get a Linux shell easily in multiple countries?

    - by Matthew O'Riordan
    We are building a socket-based service where latency is paramount, and as such we have servers distributed across 7 data centres around the world. However, whilst we know we're bringing the servers closer to the clients, it's very difficult to know how effective this is and, importantly, what difference it makes compared to our competitors. As such, we want to run simple scripts that test latency and throughput for both our service and our competitors'. This is easy enough using Amazon, but Amazon only has 7 data centres, and we would like to know how we perform in locations all over the world, such as South Africa, Australia, China, Peru, etc. Does anyone know of a service that would let us piggyback on their global infrastructure and run some scripts to test this performance? The obvious contenders are people like Monitis, but I don't think they would allow us to run custom scripts, only standard protocol monitors. Thanks for your help. Matt

  • What is the easiest way to do a direct file transfer of an extremely large file over the Internet?

    - by Kenneth Cochran
    I would like to transfer a 20+ GB file to a friend. I would like the transfer to:

      1. Be fast
      2. Ensure data integrity
      3. Not require opening ports in either end's firewall
      4. Be free
      5. Not broadcast the file's existence to everyone on the Internet

    I've looked at several technologies and nothing seems to fit:

      Gnutella, BitTorrent, et al. satisfy 1, 2 and 4
      JetBytes... 1, 3, 4 and 5
      Yahoo Messenger, AIM, etc. 3, 4 and 5
      FTP, SFTP... 1?, 4 and 5
      rsync... 1, 2, 4 and 5

    For a file this size, speed and data integrity are the most important. No one wants a 20 GB file to fail an MD5 check after spending two days downloading it. Is there anything that meets all these requirements?
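
    Whichever transport ends up being used, the integrity requirement can be checked independently of it. A minimal Python sketch that both sides can run and compare (the file name is hypothetical):

      # Chunked SHA-256 of a large file. Reading 1 MiB at a time means a
      # 20 GB file never has to fit in memory; sender and receiver compare
      # the resulting hex digests.
      import hashlib

      def file_digest(path: str, chunk_size: int = 1 << 20) -> str:
          h = hashlib.sha256()
          with open(path, "rb") as f:
              for chunk in iter(lambda: f.read(chunk_size), b""):
                  h.update(chunk)
          return h.hexdigest()

      print(file_digest("bigfile.bin"))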

  • Why does my Excel document have 960,000 empty rows?

    - by C-dizzle
    I have an Excel document, Office 2007, on a Windows 7 machine (if that part matters at all; I'm not sure, but I'm just throwing it out there). It is a list of all employee phone numbers. If I need to generate a new page, I can click on page 2 and the table will automatically generate again. The problem is, someone messed it up (it's on a network drive), and it now shows over 960,000 rows of data when I really don't have that many! I did Ctrl+End to see if any data was in the last cell, so I cleared it out and deleted that row and column, but that still didn't fix it; it almost seems like it duplicates itself after the deletion. How can I fix this instead of recreating the entire document?

  • Getting Excel to handle CRLFs correctly in CSV

    - by Ben Fulton
    I am creating CSV files to be opened in Excel. The rows are separated by CRLF, and that's fine, but some of the input data contains CRLF sequences within fields as well. Per the usual standards, I surround such fields with quotes, but Excel doesn't seem to recognize the CR character and puts a little box with a question mark in it instead. I can strip the CRs out of the CSV file, but it seems like an unnecessary step. Is there an easy way to get Excel to recognize a CRLF inside a row of a CSV file?
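
    For what it's worth, the quoting side is easy to reproduce mechanically. A minimal Python sketch (the output file name is made up) that writes a field containing an embedded line break:

      # Write a CSV row whose second field contains an embedded CRLF.
      # The csv module quotes such fields automatically; newline="" keeps
      # Python from translating the terminators a second time.
      import csv

      with open("out.csv", "w", newline="", encoding="utf-8") as f:
          writer = csv.writer(f, lineterminator="\r\n")
          writer.writerow(["id", "note"])
          writer.writerow([1, "line one\r\nline two"])

    One commonly cited workaround for the question-mark box is to emit a bare LF (no CR) inside quoted fields, since Excel represents in-cell line breaks with LF alone (what Alt+Enter inserts).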

  • Date-based sum in Excel / Google Docs spreadsheets

    - by alumb
    I have a bunch of rows with a date and a dollar amount (expenses), and I want to produce a list of the days of the month and the balance of expenses as of each day. For example, the 5th entry in the list would be 8/5/2008 and the sum of all the expenses that occurred on or before 8/5/2008. Approximately this is =sumif(D4:D30-A5,">0",E4:E30), but of course that doesn't work (the source dates are in D4:D30 and the expenses are in E4:E30). Notes: the source data can't be sorted, for various reasons, and this must work in Google Spreadsheets, which supports a fairly complete subset of Excel's functions.
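
    A sketch of the spreadsheet side, with the ranges assumed from the question: the SUMIF criterion can compare dates directly, as in =SUMIF(D$4:D$30,"<="&A5,E$4:E$30), which both Excel and Google Spreadsheets accept. The same running-balance logic in a minimal Python sketch with made-up sample data:

      # Running balance: for each day, sum every expense dated on or before it.
      from datetime import date

      expenses = [  # (date, amount) pairs; order does not matter
          (date(2008, 8, 3), 12.50),
          (date(2008, 8, 5), 3.00),
          (date(2008, 8, 1), 20.00),
      ]

      def balance_on(day: date) -> float:
          return sum(amount for d, amount in expenses if d <= day)

      print(balance_on(date(2008, 8, 5)))  # 35.5, since all three rows qualify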

  • Forgot to unmount/eject external hard drive, lost moved files. Mac OS X

    - by balupton
    So I was using my Mac with my external hard drive connected via USB. I moved about 10 GB of data to it (via drag and drop while holding down the Command key, to move the files rather than copy them). They moved to the drive all right, but I was having some issues and the Finder crashed after the transfer, so I was unable to eject the volume; later everything froze, and I had to do a hard restart (hold the power button). When I remounted the volume (plugged the external hard drive back in), it no longer had any of the files I had moved onto it. As it was a lot of data, how can I recover these files?

  • FreeBSD after motherboard replacement; should I have any concerns?

    - by cc
    So after three years my motherboard (Asus M2N-0MX) has died. As I go shopping for its replacement tomorrow, I have a concern about the data currently on the drives within. I'm running FreeBSD 6.2 and am wondering: would there be any concern with installing a new OS on that system, would it be better to just install the latest FreeBSD version, and are there any pitfalls I should watch for to make sure I don't end up losing 750 GB of data? The setup consists of the following (to the best of my knowledge):

      Pioneer DVD drive
      3ware RAID card
      four 250 GB SATA drives in a RAID 5 config

    Thanks to anyone who can offer some advice, or just confirm whether I am overthinking things.

  • Reverse Engineer Formula

    - by aaronls
    Are there any free programs or web services for reverse engineering a formula given a set of inputs and outputs? Consider if I had 3 columns of data, where the first two numbers are inputs and the last one is an output:

      3,4,7
      1,4,5
      4,2,6

    The outputs could be produced simply by a+b, but of course many formulas would give the same result. I am talking about data without any error or deviation, and I think the formula would only need basic operations (divide, multiply, add, subtract) and possibly one of floor/ceiling/round.
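
    For the purely arithmetic case, this amounts to fitting candidate formulas to the data; a linear combination can be recovered with least squares. A minimal NumPy sketch over the three sample rows (an exact fit here, recovering a + b):

      # Fit output = c1*a + c2*b + c0 by least squares. With clean data,
      # coefficients of ~[1, 1, 0] mean the formula is simply a + b.
      import numpy as np

      rows = np.array([[3, 4, 7],
                       [1, 4, 5],
                       [4, 2, 6]], dtype=float)
      A = np.column_stack([rows[:, 0], rows[:, 1], np.ones(len(rows))])
      coeffs, _, _, _ = np.linalg.lstsq(A, rows[:, 2], rcond=None)
      print(coeffs)  # approximately [1. 1. 0.]

    Formulas involving floor/ceiling/round are a search problem rather than a fit; symbolic-regression tools (Eureqa was the best-known example at the time) explore that operator space automatically.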

  • Is an Android-based Phone a Suitable MP3 Player for Music Streamed over the Internet?

    - by James McFarland
    I am considering getting an HTC phone running Android from Verizon Wireless when I next upgrade my phone. I also have an online account with a music vendor, where I have the right to listen to my collection but not to download the MP3s. Further, I have an unlimited data plan and Wi-Fi, so I have full access to bandwidth without any volume concerns. I am especially interested in mounting the phone in a car kit and streaming my online music to the car's sound system while driving. If you have experience with this scenario, or have tried it: is it reasonable to expect my HTC Android phone to provide streaming music via my cell data plan anywhere I get cell service?

  • I need some MySQL lookup table advice

    - by Gary Beam
    I have a MySQL database with about 200 tables, 50 of which are small 2-field 'id-data' lookup tables. Several of these databases are hosted on a shared server, and I have been informed that I need to reduce the total number of tables in the shared hosting environment because of performance issues relating to having too many tables. My question is: could/should the 50 2-field lookup tables be combined into a single 3-field table with 'id-field_name-data' fields? Even if this can be done, I will have a lot of work to do on the PHP user application. My other choice is moving the databases to a dedicated server at a much higher hosting cost. I don't believe my 200-table databases are actually causing any performance issues on this shared hosting server, at least not from the user application's standpoint; there are never more than 10 of these tables joined in any single query, although I have seen some very slow queries generated by phpMyAdmin on these databases.
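
    A rough sketch of the consolidation, assuming each lookup table has id and data columns (the table names here are made up); it prints the migration SQL rather than executing it:

      # Fold many 2-field lookup tables into one 3-field table keyed on
      # (field_name, id). Adjust names and types to the real schema.
      lookup_tables = ["status_codes", "country_codes", "category_codes"]

      print("""CREATE TABLE lookup (
        field_name VARCHAR(64) NOT NULL,
        id INT NOT NULL,
        data VARCHAR(255) NOT NULL,
        PRIMARY KEY (field_name, id)
      );""")
      for t in lookup_tables:
          print(f"INSERT INTO lookup (field_name, id, data) "
                f"SELECT '{t}', id, data FROM {t};")

    Each former join then filters on field_name as well as id, so the PHP changes are mechanical, if numerous.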
