Search Results

Search found 59643 results on 2386 pages for 'data migration'.


  • Chrome browser caching

    - by Kyle B.
    I do a lot of development on my local machine and would like to start using Chrome; however, I cannot seem to do a hard refresh (Ctrl+F5) or any other key combination to get the browser to forcibly refresh all content at http://localhost. I change projects frequently in IIS, and this presents a problem because I see stylesheet and image data from my previous project, with no way to get the page to reload short of forcibly dumping all cache data from the settings menu. Is there another key combination I am missing, or is there a place where I can turn off caching on a site-by-site basis? I'd prefer not to have to clear out the browser's temporary files every time I switch projects. Thanks, Kyle


  • Simple SQL Server 2005 Replication - "D-1" server used for heavy queries/reports

    - by Ricardo Pardini
    Hello. We have two SQL Server 2005 machines. One is used for production data, and the other is used for running queries/reports. Every night, the production machine dumps (backs up) its database to disk, and the other one restores it. This is called the D-1 process. I think there must be a more efficient way of doing this, since SQL Server 2005 has many forms of replication. Some requirements:

    1) No need for instant replication; there can be (some) delay
    2) All changes (including schemas, data, constraints, indexes) need to be replicated without manual intervention
    3) It is used for a single database only
    4) There is a third server available if needed
    5) There is high bandwidth (gigabit Ethernet) available between the servers
    6) There isn't shared storage (SAN) available

    What would be a good alternative to this daily backup/restore routine? Thanks!
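
    For reference, the nightly dump/restore described above boils down to two T-SQL statements; below is a minimal sketch of the job driven from Python with pyodbc. The server names, database name, and share path are hypothetical placeholders.

        # A minimal sketch of the nightly "D-1" dump/restore described above,
        # driven from Python with pyodbc. Server names, the database name, and
        # the share path are hypothetical placeholders.
        import pyodbc

        BACKUP_SQL = ("BACKUP DATABASE [AppDb] "
                      "TO DISK = N'\\\\backupshare\\AppDb.bak' WITH INIT")
        RESTORE_SQL = ("RESTORE DATABASE [AppDb] "
                       "FROM DISK = N'\\\\backupshare\\AppDb.bak' WITH REPLACE")

        def run(server, sql):
            # BACKUP/RESTORE cannot run inside a transaction, hence autocommit
            conn = pyodbc.connect("DRIVER={SQL Server};SERVER=%s;"
                                  "Trusted_Connection=yes" % server,
                                  autocommit=True)
            conn.execute(sql)
            conn.close()

        run("PROD", BACKUP_SQL)      # dump on the production machine
        run("REPORTS", RESTORE_SQL)  # restore on the reporting machine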


  • Messages stuck in SMTP queue - Exchange 2003

    - by Diav
    I need your help people ;-) I have a problem with messages coming into our Exchange server and ones going out through it. Basically, the messages are stuck in the SMTP queue. A message will come into the server, and I can see it listed under Exchange System Manager, but if you list the properties of the message queue it says something like:

        00:10 SMTP Message queued for local delivery
        00:10 SMTP Message delivered locally to [email protected]
        00:10 SMTP Message scheduled to retry local delivery
        00:11 SMTP Message delivered locally to [email protected]
        00:11 SMTP Message scheduled to retry local delivery
        etc. etc.

    For an outgoing message the list looks like this:

        10:55 SMTP: Message Submitted to Advanced Queuing
        10:55 SMTP: Started Message Submission to Advanced Queue
        10:55 SMTP: Message Submitted to Categorizer
        10:55 SMTP: Message Categorized and Queued for Routing
        10:55 SMTP: Message Routed and Queued for Remote Delivery

    And that's the end: the status hasn't changed since, the message stays in the queue, and I force the connection from time to time but without effect. I checked the connection to the smarthost (used telnet for that) and everything seems to work correctly, so the problem is probably on the Exchange side. I am using Exchange Server 2003 running on Small Business Server 2003. I don't have any antivirus installed on the server. Remaining free space on each partition is over 3 GB; on the partition with the databases it is over 12 GB. All was working well and without problems since 2005; the problems started in the middle of this June, when messages started getting stuck almost randomly (I don't see a pattern yet: some go out, some do not, some go out after several hours). I don't know what to do or what to check next, so please, any ideas? Best regards, D.

    Edit: Priv1.edb is 14.5 GB and priv1.stm is 2.6 GB; together those files are more than 16 GB. Can that be the reason? If yes, then what? Indeed, I hadn't thought it could have something in common with my problem, but several users reported recent problems with Outlook Web Access: they can log in and see the list of their mails, but they can't see the content of their emails. When they connect with Outlook 2003/2007 there is no such problem; it happens only with OWA.

    Edit 2: So... it works now, and I have to admit that I am not really sure what the problem was (I hope it won't come back). What I did:

    1. Cleaned up some mailboxes to reduce their size
    2. Dismounted the Information Store
    3. Defragmented the database files (I used eseutil: c:\program files\exchsrvr\bin eseutil /d g:\data base\Exchsrvr\MDBDATA\priv1.edb)
    4. Mounted the Information Store back

    ...and before I managed to do anything else, my queue started moving: elements which had been kept there for days started moving, and after a few minutes everything was sent, both outside and locally. But priv1.edb is still big (13,884,203,008 bytes) and priv1.stm as well (2,447,384,576 bytes), so this was probably not an issue of file size. And if not that, what was it? And if it was an issue of file size, then it will soon repeat; is there something I can do to avoid it?


  • Importing multiline cells from a CSV file into Excel

    - by Unreason
    I have a CSV file (comma delimited and quoted). When the CSV file is opened directly from Explorer, Excel correctly interprets the cells that are multiline, but it messes up the character encoding (UTF-8). Therefore I have to use the import function (Data / Get External Data / From Text). However, when I use the text import function in Excel (where I can set the file encoding explicitly), it interprets each newline as the start of a new row instead of putting the multiline text into a single cell, and breaks the file layout. Can I somehow overcome the situation by either:

    - forcing the Explorer open command to use the 65001: Unicode (UTF-8) encoding, or
    - forcing the Text Import Wizard to ignore quoted line breaks as record delimiters?
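
    For comparison, this is the behavior being asked of Excel's import wizard; Python's csv module, for instance, does treat a quoted line break as part of the cell. A small sketch (the file names are hypothetical examples):

        # Python's csv module treats a quoted, embedded line break as part of
        # the cell, which is the behavior the question wants from Excel's
        # import wizard. File names here are hypothetical examples.
        import csv

        with open("export.csv", encoding="utf-8", newline="") as f:
            for row in csv.reader(f):
                print(row)  # a multiline cell arrives as one field containing "\n"

        # Re-saving with a UTF-8 byte-order mark ("utf-8-sig") is one way to
        # hint the encoding to Excel when the file is opened directly.
        with open("export.csv", encoding="utf-8", newline="") as src, \
             open("export-bom.csv", "w", encoding="utf-8-sig", newline="") as dst:
            csv.writer(dst, quoting=csv.QUOTE_ALL).writerows(csv.reader(src))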


  • Zabbix - Some of the monitored items don't refresh

    - by Niro
    I'm experiencing a strange issue with Zabbix monitoring a MySQL server. Most of the data from the server, such as MySQL queries per second, MySQL uptime, buffer memory, etc., update nicely, while some data, like CPU iowait time (avg1), host local time, MySQL number of threads, and other items which were monitored in the past, have a last-check time of about a week ago. I can't find any logic in this; for example, MySQL number of threads and MySQL queries per second are obtained in a similar way, so it does not make sense that one of them is monitored and the other is not. Please help: how can I fix this? Update: I used zabbix_get from the Zabbix server to check one of the items on the Zabbix client and it works, so the problem must be on the Zabbix server side.


  • One domain hiding two servers

    - by George DSeas
    For our SaaS web app we have two identical servers in two geographically separated data centers. FOO_1 is the production server and does real-time (MySQL master-slave) replication to its backup FOO_2. We want our users to always go to THEFOO.COM, which somehow points to the production server. That way, even if FOO_1 dies, we can just switch THEFOO.COM to point to FOO_2 so the failure is transparent. This switch can be manual or automatic, but without failback (if FOO_1 somehow becomes available again). Is there a way to do this with DNS? I am getting stuck with ANAME and CNAME configuration. We don't use subdomains, just the bare domain. If not, what are the other options? Does it make sense to just have a web server at LOVELY_FOO.COM and redirect all traffic? I also looked at load balancers, but didn't see a solution that works across data centers/network providers.
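
    For illustration, if the zone were hosted with a DNS provider that has an API, the manual switch could be a short script. The sketch below uses Amazon Route 53 purely as an example provider; the hosted zone ID, record name, and IP address are hypothetical.

        # A sketch of the manual DNS switch, assuming the zone is hosted with a
        # DNS provider that has an API -- Amazon Route 53 is used here only as
        # an example. Zone ID, record name, and the IP are hypothetical.
        import boto3

        def point_domain_at(ip_address):
            # UPSERT either creates the record or overwrites it in place.
            route53 = boto3.client("route53")
            route53.change_resource_record_sets(
                HostedZoneId="Z_EXAMPLE_ZONE_ID",
                ChangeBatch={"Changes": [{
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "thefoo.com.",
                        "Type": "A",
                        "TTL": 60,  # a low TTL keeps the failover window short
                        "ResourceRecords": [{"Value": ip_address}],
                    },
                }]},
            )

        point_domain_at("203.0.113.2")  # point the apex record at FOO_2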


  • TCP Zero Window with no corresponding Window Update

    - by Gandalf
    I am trying to debug a network issue and am using Wireshark and tcpdump to grab packets from my server. I have one client application that is grabbing all my available connections and then holding them, trying to send A LOT of data and essentially causing an unintentional DOS attack. While debugging I notice that I see my server sending "Window Closed" and "Zero Window" TCP packets - but never sending any "Window Update" packets. I am guessing this is why the client never lets go of the connections (it still has more data to send and is waiting). Has anyone ever seen this type of behavior before? Let's not get into the reasons why I haven't set up an iptables rule to limit concurrent connections (yeah I know). I also recently changed the MTU from 1500 to 9000 - could this have such a negative effect? (Linux) Thanks.


  • In SSIS Convert European Currency Format to United States Currency Format

    - by Rob
    I have an interesting problem. I have an SSIS package that processes account data, and we are now processing files from Europe. These files are in CSV format with text qualifiers. An example of the problem: in the United States the currency format is 123456.99 (we purposely leave the thousands separator out). The files sent from Europe are coming in with two formats: one is 123456,99 and the other is 123.456,00. SSIS attempts to parse the text and place it into a NUMERIC(20,2) field, which causes a parsing error even with the text qualifiers. If I change the field to CURRENCY, it throws a conversion error. I would like SSIS to deal with this directly, without requiring the data to be in the United States format. Has anyone had this problem? Any help will be greatly appreciated. Rob
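
    The normalization logic itself is small; here it is sketched in Python for clarity (inside the package it would presumably live in an SSIS Script Component doing the same string handling):

        # The European-to-US normalization logic, sketched in Python for
        # clarity. Inside the package the same steps would run in an SSIS
        # Script Component.
        from decimal import Decimal

        def parse_european(amount):
            # "123.456,00" -> drop the thousands separators, then turn the
            # decimal comma into a decimal point
            return Decimal(amount.replace(".", "").replace(",", "."))

        assert parse_european("123456,99") == Decimal("123456.99")
        assert parse_european("123.456,00") == Decimal("123456.00")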


  • Optimizing MySQL for small VPS

    - by Chris M
    I'm trying to optimize my MySQL config for a verrry small VPS. The VPS is also running NGINX/PHP-FPM and Magento, all with a limit of 250MB of RAM. This is the output of MySQLTuner:

        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.1.41-3ubuntu12.8
        [OK] Operating on 64-bit architecture

        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 1M (Tables: 14)
        [--] Data in InnoDB tables: 29M (Tables: 301)
        [--] Data in MEMORY tables: 1M (Tables: 17)
        [!!] Total fragmented tables: 301

        -------- Security Recommendations -------------------------------------------
        [OK] All database users have passwords assigned

        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 2d 11h 14m 58s (1M q [8.038 qps], 33K conn, TX: 2B, RX: 618M)
        [--] Reads / Writes: 83% / 17%
        [--] Total buffers: 122.0M global + 8.6M per thread (100 max threads)
        [!!] Maximum possible memory usage: 978.2M (404% of installed RAM)
        [OK] Slow queries: 0% (37/1M)
        [OK] Highest usage of available connections: 6% (6/100)
        [OK] Key buffer size / total MyISAM indexes: 32.0M/282.0K
        [OK] Key buffer hit rate: 99.7% (358K cached / 1K reads)
        [OK] Query cache efficiency: 83.4% (1M cached / 1M selects)
        [!!] Query cache prunes per day: 48301
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 144K sorts)
        [OK] Temporary tables created on disk: 13% (27K on disk / 203K total)
        [OK] Thread cache hit rate: 99% (6 created / 33K connections)
        [!!] Table cache hit rate: 0% (32 open / 51K opened)
        [OK] Open file limit used: 1% (20/1K)
        [OK] Table locks acquired immediately: 99% (1M immediate / 1M locks)
        [!!] InnoDB data size / buffer pool: 29.2M/8.0M

        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            Reduce your overall MySQL memory footprint for system stability
            Enable the slow query log to troubleshoot bad queries
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
          *** MySQL's maximum memory usage is dangerously high ***
          *** Add RAM before increasing MySQL buffer variables ***
            query_cache_size (> 64M)
            table_cache (> 32)
            innodb_buffer_pool_size (>= 29M)

    And this is the config:

        #
        # The MySQL database server configuration file.
        #
        # You can copy this to one of:
        # - "/etc/mysql/my.cnf" to set global options,
        # - "~/.my.cnf" to set user-specific options.
        #
        # One can use all long options that the program supports.
        # Run program with --help to get a list of available options and with
        # --print-defaults to see which it would actually understand and use.
        #
        # For explanations see
        # http://dev.mysql.com/doc/mysql/en/server-system-variables.html

        # This will be passed to all mysql clients
        # It has been reported that passwords should be enclosed with ticks/quotes
        # escpecially if they contain "#" chars...
        # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
        [client]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock

        # Here is entries for some specific programs
        # The following values assume you have at least 32M ram

        # This was formally known as [safe_mysqld]. Both versions are currently parsed.
        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0

        [mysqld]
        #
        # * Basic Settings
        #
        #
        # * IMPORTANT
        #   If you make changes to these settings and your system uses apparmor, you may
        #   also need to also adjust /etc/apparmor.d/usr.sbin.mysqld.
        #
        user = mysql
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        skip-external-locking
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        bind-address = 127.0.0.1
        #
        # * Fine Tuning
        #
        key_buffer = 32M
        max_allowed_packet = 16M
        thread_stack = 192K
        thread_cache_size = 8
        sort_buffer_size = 4M
        read_buffer_size = 4M
        myisam_sort_buffer_size = 16M
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        max_connections = 100
        table_cache = 32
        tmp_table_size = 128M
        #thread_concurrency = 10
        #
        # * Query Cache Configuration
        #
        #query_cache_limit = 1M
        query_cache_type = 1
        query_cache_size = 64M
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        # As of 5.1 you can enable the log at runtime!
        #general_log_file = /var/log/mysql/mysql.log
        #general_log = 1
        log_error = /var/log/mysql/error.log
        # Here you can see queries with especially long duration
        #log_slow_queries = /var/log/mysql/mysql-slow.log
        #long_query_time = 2
        #log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        # note: if you are setting up a replication slave, see README.Debian about
        # other settings you may need to change.
        #server-id = 1
        #log_bin = /var/log/mysql/mysql-bin.log
        expire_logs_days = 10
        max_binlog_size = 100M
        #binlog_do_db = include_database_name
        #binlog_ignore_db = include_database_name
        #
        # * InnoDB
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer = 16M

        #
        # * IMPORTANT: Additional settings that can override those from this file!
        #   The files must end with '.cnf', otherwise they'll be ignored.
        #
        !includedir /etc/mysql/conf.d/

    The setup contains one WordPress site (so lots of MyISAM, but mostly static content, as it's not changing all that often; a WordPress cache plugin deals with this) and the Magento site, which consists of a lot of InnoDB tables, some MyISAM and some MEMORY. The "read" side seems to be running pretty well with a mass of optimizations I've used on Magento, the NGINX setup and PHP-FPM + XCACHE. I'd love a kick in the right direction with the MySQL config so I'm not blindly altering it based on MySQLTuner without understanding what I'm changing. Thanks


  • htaccess password protection error

    - by nute
    I have an .htaccess as follows:

        AuthUserFile /home/nasht00/.htmydomain
        AuthName "EnterPassword"
        AuthType Basic
        Require valid-user

    When I try it, the password pop-up appears, but whatever I enter into it, I get a 500 Internal Server Error. My password file is at /home/nasht00/.htmydomain. Its owner is nasht00:www-data (nasht00 is my user; www-data is the group that apache2 belongs to). File permissions on that file are 775. What am I missing? Without the .htaccess it works fine, of course. I have Ubuntu 9.10 with apache2.


  • Issues connecting to Ubuntu via PuTTY

    - by user1787262
    I'm not sure this is the appropriate Stack Exchange site to post this question on; if not, please flag it for migration. I am trying to use PuTTY to SSH into my Ubuntu machine, which is wirelessly connected to the same network. I originally ran ifconfig to get my Ubuntu machine's private network IP address. I then verified that SSH was running; I even ssh'd into my school network and then into the Ubuntu machine itself. No problems yet. On my Windows 8 machine I ran ipconfig to get my private network IPv4 address. I then pinged my Ubuntu machine's IP and 100% of packets were received. I figured, "OK, we are ready to use PuTTY to connect to my Ubuntu machine." Keep in mind this was my first time using PuTTY. I tried entering the IP of my Ubuntu machine in the PuTTY config GUI, but I got a connection timeout. At this moment I don't really know what's going on: SSH is running on port 22 of my Ubuntu machine and I can ping the machine, so why is it not connecting? (I tried [username]@ip too.) So I went on my Ubuntu machine and ran nmap -sP 192.168.0.1/24 and found that my Windows machine's IP did not show up; the host is down. I'm at a loss in something I am not very familiar with. Would anyone be able to help me or direct me to some resources that would troubleshoot my problem? Thank you

    EDIT (ADDITION):

        tyler@tyler-Aspire-5250:~$ nmap -v 192.168.0.123

        Starting Nmap 6.40 ( http://nmap.org ) at 2014-06-06 01:56 MDT
        Initiating Ping Scan at 01:56
        Scanning 192.168.0.123 [2 ports]
        Completed Ping Scan at 01:56, 3.00s elapsed (1 total hosts)
        Nmap scan report for 192.168.0.123 [host down]
        Read data files from: /usr/bin/../share/nmap
        Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
        Nmap done: 1 IP address (0 hosts up) scanned in 3.14 seconds

        tyler@tyler-Aspire-5250:~$ nmap -Pn 192.168.0.123

        Starting Nmap 6.40 ( http://nmap.org ) at 2014-06-06 01:56 MDT
        Nmap scan report for 192.168.0.123
        Host is up (0.022s latency).
        Not shown: 998 filtered ports
        PORT     STATE SERVICE
        2869/tcp open  icslap
        5357/tcp open  wsdapi

        Nmap done: 1 IP address (1 host up) scanned in 72.51 seconds


  • Defragment an Exchange Volume

    - by IceMage
    The scenario: I use a dedicated volume (a RAID volume) to store all of the data for my Exchange 2007 server. Today, out of curiosity, I decided to check how fragmented the files on this data volume were. To my surprise, the answer is: extremely. So, a three-part question:

    First and foremost, SHOULD I defragment this volume (after a full backup, of course)? Be specific as to why not if I should not, or why I absolutely should if I should.

    Second, about how much time should I allow per gigabyte for this maintenance window? The drives are all 7200 RPM SATA drives on a hardware RAID 5 controller (PERC 5i or 6i, I can't remember), and the files are extremely fragmented (over 5000 file fragments per gigabyte).

    Third, is there something wrong here? It seems to me the drive shouldn't be this fragmented. Could something be configured incorrectly that is causing this to happen?


  • Merging Two KML Files to Display Them with Different Marker Icons on Google Maps

    - by Maxim Z.
    Let's say that I have two spreadsheets with addresses. I uploaded these spreadsheets into Google Fusion Tables, geocoded the addresses, and exported the results as KML files. Now I want to take these two KML files and merge them, while maintaining the location data and using it to map the points with Google Maps. Well, I found a way to easily merge the KML files: import both of them into a "My Maps" map in Google Maps! However, my problem is this: when I do that, all of the locations in my data get the same marker icon on the map. From past experience, I know that these markers can somehow be defined inside the KML files. Is it possible to combine the two KML files while giving one file's points one marker icon and the other file's points another marker icon? In case my question is confusing: what I mean is giving the first set of points blue markers, for example, and the other set red markers, so that they can be overlaid.
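
    As an illustration of the mechanism: KML allows a shared Style (with an IconStyle) to be declared once per document and referenced from each placemark via a styleUrl element. Below is a rough sketch of such a merge in Python; the file names and icon URLs are hypothetical examples.

        # A sketch of the merge: copy both files' placemarks into one document
        # and point each source's placemarks at its own shared style. File
        # names and icon URLs are hypothetical; Style, IconStyle, and styleUrl
        # are standard KML elements.
        import xml.etree.ElementTree as ET

        KML_NS = "http://www.opengis.net/kml/2.2"
        ET.register_namespace("", KML_NS)

        def q(tag):
            # qualified tag name in the KML namespace
            return "{%s}%s" % (KML_NS, tag)

        def styled_placemarks(path, style_id):
            # Pull every Placemark out of one file and reference a shared style.
            # (Placemarks that already carry a styleUrl may need it removed first.)
            for pm in ET.parse(path).iter(q("Placemark")):
                ET.SubElement(pm, q("styleUrl")).text = "#" + style_id
                yield pm

        merged = ET.Element(q("kml"))
        doc = ET.SubElement(merged, q("Document"))

        for style_id, icon_url in [("blue", "http://example.com/blue.png"),
                                   ("red", "http://example.com/red.png")]:
            style = ET.SubElement(doc, q("Style"), id=style_id)
            icon = ET.SubElement(ET.SubElement(style, q("IconStyle")), q("Icon"))
            ET.SubElement(icon, q("href")).text = icon_url

        for path, style_id in [("first.kml", "blue"), ("second.kml", "red")]:
            doc.extend(styled_placemarks(path, style_id))

        ET.ElementTree(merged).write("merged.kml", encoding="UTF-8",
                                     xml_declaration=True)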


  • Auto-Attach an EBS Volume to a New Spot Instance?

    - by Jeff
    I am experimenting with EC2 spot instances, and am needing some data to be retained between terminations. Now as I understand it, when the current price goes above my max. bid, it will be automatically terminated. I assume any init scripts I have will be run on shutdown so I can push data off to the EBS before unmounting. My question is, how can I automatically mount the same EBS volume on the new spot instance once the price goes down, since it won't have any of my init scripts that I would've loaded onto the root volume the first time? Do I have to create a custom AMI, or is there some other way to achieve this?
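
    For reference, the re-attach itself is a single API call; the sketch below uses boto3 as an example SDK, with hypothetical volume, instance, and device names. Running something like this from the new instance's boot scripts (user data) is one way to avoid baking the logic into a custom AMI.

        # The re-attach is one API call. The volume ID, instance ID, and the
        # device name are hypothetical placeholders; boto3 is used here as an
        # example SDK.
        import boto3

        ec2 = boto3.client("ec2")
        ec2.attach_volume(
            VolumeId="vol-0123456789abcdef0",
            InstanceId="i-0123456789abcdef0",
            Device="/dev/sdf",
        )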


  • How to view special characters in SQL Management Studio

    - by B Z
    SQL Server 2005. I have a text column that has special characters stored in it (e.g., CR, LF), but I don't know exactly which ones. I would like to view these characters in Management Studio, something like Notepad++'s Show Symbol > Show All Characters. My goal: I am working on a data conversion from one database to another. When the data is converted and viewed in the native application, it displays some funky characters, like a pipe character. I would like to eliminate these characters during the conversion process.
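
    As a cross-check outside Management Studio, pulling the column into a scripting language and printing each value's repr makes the control characters explicit. A minimal sketch with pyodbc; the connection string, table, and column are hypothetical:

        # Print the escape sequences of a suspect value so CR, LF, tabs, etc.
        # become visible. The connection string and query are hypothetical.
        import pyodbc

        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=myserver;"
                              "DATABASE=mydb;Trusted_Connection=yes")
        for (value,) in conn.execute("SELECT TOP 10 notes FROM dbo.records"):
            print(repr(value))  # e.g. 'line one\r\nline two' -- CR/LF shown as escapes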


  • What would happen in a Software Raid 1 of one HDD and one SSD?

    - by Adrian Grigore
    Hi, I'm running my Windows 7 installation and all of my apps from an SSD for performance reasons. Since an SSD can die instantly at any moment, I'm looking for some kind of data backup strategy. Right now I regularly back up the drive image to a hard disk, but that only happens once per day, which is not enough for my taste. So I got an idea: what if I created a software RAID 1 from the SSD and a partition on my hard disk? All data would be mirrored on both drives, making this a lot safer. But what about performance? Will Windows 7 detect that the SSD is faster than the hard drive and always read from the SSD? Or will it read from both at random, thus reducing read performance? Thanks, Adrian. Edit: I just found this article, which basically answers my question. Feel free to close this post.


  • Slow Memcached: Average 10ms memcached `get`

    - by Chris W.
    We're using New Relic to measure our Python/Django application performance, and it reports that across our system "Memcached" takes an average of 12ms to respond to commands. Drilling down into the top dozen or so web views (by number of requests), I can see that some memcache gets take up to 30ms; I can't find a single memcache get that returns in less than 10ms. More details on the system architecture: we currently have four application servers, each of which has a memcached member, and all four memcached members participate in a memcache cluster. We're running on a cloud hosting provider, and all traffic runs across the "internal" network (via "internal" IPs). When I ping from one application server to another, the responses are in ~0.5ms. Isn't 10ms a slow response time for memcached? As far as I understand, if you think "memcache is too slow" then "you're doing it wrong". So am I doing it wrong? Here's the output of the memcache-top command:

        memcache-top v0.7 (default port: 11211, color: on, refresh: 3 seconds)

        INSTANCE        USAGE   HIT %   CONN    TIME    EVICT/s GETS/s  SETS/s  READ/s  WRITE/s
        cache1:11211    37.1%   62.7%   10      5.3ms   0.0     73      9       3958    84.6K
        cache2:11211    42.4%   60.8%   11      4.4ms   0.0     46      12      3848    62.2K
        cache3:11211    37.5%   66.5%   12      4.2ms   0.0     75      17      6056    170.4K

        AVERAGE:        39.0%   63.3%   11      4.6ms   0.0     64      13      4620    105.7K

        TOTAL:          0.1GB/0.4GB     33      13.9ms  0.0     193     38      13.5K   317.2K
        (ctrl-c to quit.)

    And here is the output of the top command on one machine (roughly the same on all cluster machines; as you can see there is very low CPU utilization, because these machines only run memcache):

        top - 21:48:56 up 1 day, 4:56, 1 user, load average: 0.01, 0.06, 0.05
        Tasks: 70 total, 1 running, 69 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.3%st
        Mem: 501392k total, 424940k used, 76452k free, 66416k buffers
        Swap: 499996k total, 13064k used, 486932k free, 181168k cached

         PID  USER    PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
        6519  nobody  20   0  384m  74m  880 S  1.0 15.3  18:22.97 memcached
           3  root    20   0     0    0    0 S  0.3  0.0   0:38.03 ksoftirqd/0
           1  root    20   0 24332 1552  776 S  0.0  0.3   0:00.56 init
           2  root    20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd
           4  root    20   0     0    0    0 S  0.0  0.0   0:00.00 kworker/0:0
           5  root    20   0     0    0    0 S  0.0  0.0   0:00.02 kworker/u:0
           6  root    RT   0     0    0    0 S  0.0  0.0   0:00.00 migration/0
           7  root    RT   0     0    0    0 S  0.0  0.0   0:00.62 watchdog/0
           8  root     0 -20     0    0    0 S  0.0  0.0   0:00.00 cpuset
           9  root     0 -20     0    0    0 S  0.0  0.0   0:00.00 khelper
        ...output truncated...
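
    One way to take the instrumentation out of the equation is to time a raw get directly. A minimal sketch using the python-memcached client; the host names follow the memcache-top output above, and the probe key is hypothetical:

        # Time raw gets directly, to separate client/instrumentation overhead
        # from the servers themselves. Host names follow the memcache-top
        # output above; the probe key is a hypothetical example.
        import time
        import memcache  # the python-memcached client

        mc = memcache.Client(["cache1:11211", "cache2:11211", "cache3:11211"])
        mc.set("probe-key", "x")

        samples = []
        for _ in range(100):
            start = time.time()
            mc.get("probe-key")
            samples.append((time.time() - start) * 1000.0)

        print("avg %.2f ms, max %.2f ms"
              % (sum(samples) / len(samples), max(samples)))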


  • Ubuntu and Windows 8 shared partition gets corrupted

    - by Bruno-P
    I have a dual-boot (Ubuntu 12.04 and Windows 8) system. Both systems have access to an NTFS "DATA" partition which contains all my images, documents, music and some application data (like the Chrome and Thunderbird profiles used by both OSes). Everything was working fine in my dual-boot Ubuntu/Windows 7 setup, but after updating to Windows 8 I am having a lot of trouble. First, sometimes I add some files from Ubuntu to my DATA partition but they don't show up in Windows. Sometimes I can't even use the DATA partition from Windows: when I try to save a file it gives the error "The directory or file is corrupted or unreadable". I need to run chkdsk to fix it, but after some time the same error appears again. Before upgrading to Windows 8, I also installed a new hard drive and copied the old data using Clonezilla (full disk clone). Here is the log of my last chkdsk:

        Chkdsk was executed in read/write mode.
        Checking file system on D:
        Volume dismounted. All opened handles to this volume are now invalid.
        Volume label is DATA.

        CHKDSK is verifying files (stage 1 of 3)...
        Deleted corrupt attribute list entry with type code 128 in file 67963.
        Unable to find child frs 0x12a3f with sequence number 0x15.
        The attribute of type 0x80 and instance tag 0x2 in file 0x1097b has allocated length of 0x560000 instead of 0x427000.
        Deleted corrupt attribute list entry with type code 128 in file 67963.
        Unable to locate attribute with instance tag 0x2 and segment reference 0x1e00000001097b. The expected attribute type is 0x80.
        Deleting corrupt attribute record (128, "") from file record segment 67963.
        Attribute record of type 0x80 and instance tag 0x3 is cross linked starting at 0x2431b2 for possibly 0x20 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x3 in file 0x1791e is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 96542.
        Attribute record of type 0x80 and instance tag 0x4 is cross linked starting at 0x6bc7 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x4 in file 0x17e83 is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 97923.
        Attribute record of type 0x80 and instance tag 0x4 is cross linked starting at 0x1f7cec for possibly 0x5 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x4 in file 0x17eaf is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 97967.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x441bd7f for possibly 0x9 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x32085 is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 204933.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4457850 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x320be is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 204990.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4859249 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x3726b is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 225899.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x485d309 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x3726c is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 225900.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x48a47de for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37286 is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 225926.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x48ac80b for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37287 is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 225927.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x48ae7ef for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37288 is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 225928.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x48af7f8 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x3728a is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 225930.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x48c39b6 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37292 is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 225938.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x495d37a for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x372d7 is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 226007.
        Attribute record of type 0xa0 and instance tag 0x5 is cross linked starting at 0x4d0bd38 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0xa0 and instance tag 0x5 in file 0x372dc is already in use.
        Deleting corrupt attribute record (160, $I30) from file record segment 226012.
        Attribute record of type 0xa0 and instance tag 0x5 is cross linked starting at 0x4c2d9bc for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0xa0 and instance tag 0x5 in file 0x372ed is already in use.
        Deleting corrupt attribute record (160, $I30) from file record segment 226029.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4a4c1c3 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37354 is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 226132.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4a8e639 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37376 is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 226166.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4a8f6eb for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37379 is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 226169.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4ae1aa8 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37391 is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 226193.
        Attribute record of type 0xa0 and instance tag 0x5 is cross linked starting at 0x4b00d45 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0xa0 and instance tag 0x5 in file 0x37396 is already in use.
        Deleting corrupt attribute record (160, $I30) from file record segment 226198.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4b02d50 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x3739c is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 226204.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4b3407a for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x373a8 is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 226216.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4bd8a1b for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x373db is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 226267.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4bd9a28 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x373dd is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 226269.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4c2fb24 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x373f3 is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 226291.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4cb67e9 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37424 is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 226340.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4cba829 for possibly 0x2 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37425 is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 226341.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4cbe868 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37427 is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 226343.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4cbf878 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37428 is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 226344.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4cc58d8 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x3742a is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 226346.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4ccc943 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x3742b is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 226347.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4cd199b for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x3742d is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 226349.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4cd29a8 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x3742f is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 226351.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4cd39b8 for possibly 0x2 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37430 is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 226352.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4cd49c8 for possibly 0x2 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37432 is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 226354.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4cd9a16 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37435 is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 226357.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4cdca46 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37436 is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 226358.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4ce0a78 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37437 is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 226359.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4ce6ad9 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x3743a is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 226362.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4cebb28 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x3743b is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 226363.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4ceeb67 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x3743d is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 226365.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4cf4bc6 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x3743e is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 226366.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4cfbc3a for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37440 is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 226368.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4cfcc48 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37442 is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 226370.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4d02ca9 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37443 is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 226371.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4d06ce8 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37444 is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 226372.
        Attribute record of type 0xa0 and instance tag 0x5 is cross linked starting at 0x4d9a608 for possibly 0x2 clusters.
        Some clusters occupied by attribute of type 0xa0 and instance tag 0x5 in file 0x37449 is already in use.
        Deleting corrupt attribute record (160, $I30) from file record segment 226377.
        Attribute record of type 0xa0 and instance tag 0x5 is cross linked starting at 0x4d844ab for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0xa0 and instance tag 0x5 in file 0x3744b is already in use.
        Deleting corrupt attribute record (160, $I30) from file record segment 226379.
        Attribute record of type 0xa0 and instance tag 0x5 is cross linked starting at 0x4d6c32b for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0xa0 and instance tag 0x5 in file 0x3744c is already in use.
        Deleting corrupt attribute record (160, $I30) from file record segment 226380.
        Attribute record of type 0xa0 and instance tag 0x5 is cross linked starting at 0x4d2af25 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0xa0 and instance tag 0x5 in file 0x3744e is already in use.
        Deleting corrupt attribute record (160, $I30) from file record segment 226382.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4d0fd78 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x37451 is already in use.
        Deleting corrupt attribute record (128, "") from file record segment 226385.
        Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x4d16ef8 for possibly 0x1 clusters.
        Some clusters occupied by attribute of type 0x8

    Can anyone help? Thank you


  • Selectively Disable APC Caching

    - by Victor
    I installed APC on my VPS and it works great with the W3 Cache WordPress plugin. My problem is that there is one MySQL database that the client side polls every few seconds to see if there are new updates. This database contains time-sensitive information and hence can't be part of the cached data. How can I disable APC for this database/these files? Or can I set a very short expiry for certain types of data? Any help is highly appreciated.


  • Netdom to restore machine secret

    - by icelava
    I have a number of virtual machines that have not been switched on for over a month, and some others which have been rolled back to an older state. They are members of a domain and have expired their machine secrets, and are thus unable to authenticate with the domain any longer.

        Event Type: Warning
        Event Source: LSASRV
        Event Category: SPNEGO (Negotiator)
        Event ID: 40960
        Date: 14/05/2009
        Time: 10:24:54 AM
        User: N/A
        Computer: TFS2008WDATA
        Description:
        The Security System detected an authentication error for the server ldap/iceland.icelava.home. The failure code from authentication protocol Kerberos was "The attempted logon is invalid. This is either due to a bad username or authentication information. (0xc000006d)". For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
        Data: 0000: c000006d

        Event Type: Warning
        Event Source: LSASRV
        Event Category: SPNEGO (Negotiator)
        Event ID: 40960
        Date: 14/05/2009
        Time: 10:24:54 AM
        User: N/A
        Computer: TFS2008WDATA
        Description:
        The Security System detected an authentication error for the server cifs/iceland.icelava.home. The failure code from authentication protocol Kerberos was "The attempted logon is invalid. This is either due to a bad username or authentication information. (0xc000006d)". For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
        Data: 0000: c000006d

        Event Type: Error
        Event Source: NETLOGON
        Event Category: None
        Event ID: 3210
        Date: 14/05/2009
        Time: 10:24:54 AM
        User: N/A
        Computer: TFS2008WDATA
        Description:
        This computer could not authenticate with \\iceland.icelava.home, a Windows domain controller for domain ICELAVA, and therefore this computer might deny logon requests. This inability to authenticate might be caused by another computer on the same network using the same name or the password for this computer account is not recognized. If this message appears again, contact your system administrator. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
        Data: 0000: c0000022

    So I tried to use netdom to re-register the machine back into the domain:

        C:\Documents and Settings\Administrator>netdom reset tfs2008wdata /domain:icelava /UserO:enterpriseadmin /PasswordO:mypassword
        Logon Failure: The target account name is incorrect.
        The command failed to complete successfully.

    But I have not been successful. I wonder what else needs to be done?


  • RAID HDD Boot Up

    - by user234695
    My server is a PowerEdge 1950 running Server 2008 32-bit, using 3 physical SAS HDDs configured as a RAID 5 virtual disk, partitioned into a C drive for the OS and a D drive for data. I'm planning to format and install Server 2008 R2 64-bit, so I inserted a new physical disk, configured it as RAID 5, and cloned the C and D drives onto it. Now I need to test that the new hard disk is able to boot Windows and works as expected. How do I test this? I am not able to boot Windows by choosing the new hard disk; the BIOS shows only the existing HDD, not the new one. Also, if I remove the old three hard disks and leave only the new one, will I be able to boot the machine, and if I do this, do my existing RAID 5 configuration and the data on the hard disk still remain?


  • Hard Disk Space Changes

    - by Write.
    I am currently running Windows 7 x64, and have observed that my hard disk space is acting a little weird. My hard disk has 3 partitions: C:, D:, and E:. Before I deleted a huge folder (30GB of data) from my D: drive, my C: drive had about 1GB left, while my E: drive had about 5GB left. After deleting the 30GB of data from the D: drive, the space on D: was recovered (though I'm not sure it was fully recovered), my C: drive, which had only about 1GB left, increased to 3GB, while my E: drive, which had 5GB left, dropped to 1GB. I was wondering if it has something to do with fragmentation or whatever else I always hear about with hard disks. Has anyone encountered similar issues, or have an explanation for why this could be happening?


  • Does a router have a receiving range?

    - by Aadit M Shah
    So my dad bought a TP-Link router (model no. TL-WA7510N) which apparently has a transmitting range of 1km, and he believes that it also has a receiving range of 1km. So he's arguing with me that the router (which is a transceiver) can communicate with any device in a range of 1km, whether or not that device has a transmitting range of 1km. To put it graphically:

        +----+                 1km                  +----+
        |    |------------------------------------->|    |
        | TR |                                      | TR |
        |    |<-----                                |    |
        +----+  100m                                +----+

    So here's the problem: the two devices are 1km apart. The first device has a transmitting range of 1km. The second device only has a transmitting range of 100m. According to my dad, the two devices can talk to each other. He says that the first device has a transmitting and a receiving range of 1km, which means that it can both send data to devices 1km away and receive data from devices 1km away. To me this makes no sense. If the second device can only send data to devices 100m away, then how can the first device catch the transmission? He further argues that for bidirectional communication, both the sender and the receiver only need overlapping areas of transmission. According to him, if two devices have an overlapping area of transmission then they can communicate, even when neither device has enough transmission power to reach the other, because each has enough receiving power to capture the other's transmission. Obviously this makes absolutely no sense to me. How can a device sense a transmission which hasn't even reached it, then go out, capture it, and bring it back? To me, a transceiver only has transmission power; it has zero "receiving power". Hence, from my point of view, both devices need a transmission range far enough to reach the other for bidirectional communication to be possible; but no matter how much I try to explain this to my dad, he adamantly disagrees. So, to put an end to this debate once and for all, who is correct? Is there even such a thing as a receiving range? Can a device fetch a transmission that would otherwise never reach it? I would like a canonical answer on this.
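
    For illustration, the model argued for in the question reduces to a one-line predicate; a minimal sketch:

        # Under the question's model, a link direction works only when the
        # sender's transmit range covers the distance, so bidirectional
        # communication needs BOTH transmit ranges to cover it.
        def bidirectional(distance_m, tx_a_m, tx_b_m):
            return tx_a_m >= distance_m and tx_b_m >= distance_m

        print(bidirectional(1000, 1000, 100))  # False: the 100m device cannot reach back
        print(bidirectional(100, 1000, 100))   # True: both senders cover 100m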


  • Why is my pendrive showing used space when it is actually empty?

    - by Deb
    I recently got a virus on my pendrive. I removed it using Microsoft Security Essentials and cleared all the data, but the drive still shows 47.1MB of space as used. Is this a leftover effect of the virus? There are no hidden files on the pendrive, and I have also enabled "Show hidden files" in the Folder and Search Options. Permission is also granted to everyone. I also used Disk Management to check for any issue, but found nothing. I think formatting would solve the problem, but in that case I would not be able to see the data. Is there any command to see the files in cmd, or any other way?

