Search Results

Search found 38640 results on 1546 pages for 'full table scan'.

Page 311/1546 | < Previous Page | 307 308 309 310 311 312 313 314 315 316 317 318  | Next Page >

  • Does LINQ require significantly more processing cycles and memory than lower-level data iteration techniques?

    - by Matthew Patrick Cashatt
    Background: I am currently in the process of enduring grueling tech interviews for positions that use the .NET stack, some of which include silly questions like this one, and some questions that are more valid. I recently came across an issue that may be valid but I want to check with the community here to be sure. When asked by an interviewer how I would count the frequency of words in a text document and rank the results, I answered that I would (1) use a stream object to put the text file in memory as a string, (2) split the string into an array on spaces while ignoring punctuation, and (3) use LINQ against the array to .GroupBy() and .Count(), then OrderBy() said count. I got this answer wrong for two reasons: (1) Streaming an entire text file into memory could be disastrous. What if it was an entire encyclopedia? Instead I should stream one block at a time and begin building a hash table. (2) LINQ is too expensive and requires too many processing cycles. I should have built a hash table instead and, for each iteration, only added a word to the hash table if it didn't already exist, then incremented its count. The first reason seems, well, reasonable. But the second gives me more pause. I thought that one of the selling points of LINQ is that it simply abstracts away lower-level operations like hash tables but that, under the veil, it is still the same implementation. Question: Aside from a few additional processing cycles to call any abstracted methods, does LINQ require significantly more processing cycles to accomplish a given data iteration task than a lower-level approach (such as building a hash table) would?
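    For the curious, here is a minimal C# sketch of both approaches from the interview (not from the original post; the punctuation handling is deliberately crude). The dictionary version streams the file line by line, and the LINQ version does the same grouping, since GroupBy builds a hash-based lookup internally:
        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;

        class WordFrequency
        {
            static readonly char[] Separators = { ' ', '\t', ',', '.', ';', ':', '!', '?', '"' };

            static void Main(string[] args)
            {
                string path = args[0];

                // Hash-table approach: stream the file line by line, never holding it all in memory.
                var counts = new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase);
                foreach (string line in File.ReadLines(path))
                    foreach (string word in line.Split(Separators, StringSplitOptions.RemoveEmptyEntries))
                    {
                        counts.TryGetValue(word, out int n);
                        counts[word] = n + 1;
                    }

                foreach (var entry in counts.OrderByDescending(kv => kv.Value).Take(10))
                    Console.WriteLine($"{entry.Key}: {entry.Value}");

                // LINQ approach: the grouping work is comparable because GroupBy uses a hash-based
                // lookup internally; the extra cost is mostly allocations and delegate calls.
                var rankedLinq = File.ReadLines(path)
                    .SelectMany(l => l.Split(Separators, StringSplitOptions.RemoveEmptyEntries))
                    .GroupBy(w => w, StringComparer.OrdinalIgnoreCase)
                    .OrderByDescending(g => g.Count())
                    .Take(10);

                foreach (var g in rankedLinq)
                    Console.WriteLine($"{g.Key}: {g.Count()}");
            }
        }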

    Read the article

  • New Success Story: McGrath RentCorp Improves Business Reporting and Analytics Capabilities with Cloud-based Business Intelligence Solution

    - by LanaProut
    McGrath RentCorp worked with Jade Global, an Oracle Platinum Partner, to scope, design, and execute the deployment, using its Oracle Accelerate solution to jumpstart the process and accelerate the time to value. Click here to view the full story.

    Read the article

  • dhcp3-server (dhcpd) is tampering with host NIC

    - by user61000
    Hi all, I have a Debian box that is serving as a router (using iptables NAT). When first turned on, everything works fine for a few minutes. Then the DHCP server assigns an IP (other than 192.168.0.1) to its host NIC, eth0. This is NOT what I want. I just want dhcp3-server to listen on eth0, not assign it an IP or change the kernel routing table. This of course ruins the NAT capabilities of the box. How can I tell dhcp3-server NOT to do this? Thanks.
    Before dhcp3-server tampers with eth0, the IP is 192.168.0.1 and the routing table looks like this:
        ~# netstat -r
        Kernel IP routing table
        Destination     Gateway         Iface
        192.168.0.0     *               eth0
        173.33.220.0    *               eth1
        default         173.33.220.1    eth1
    After dhcp3-server tampers with eth0, the IP is 192.168.0.3 and the routing table looks like this:
        ~# netstat -r
        Kernel IP routing table
        Destination     Gateway         Iface
        192.168.0.0     *               eth0
        173.33.220.0    *               eth1
        default         192.168.0.1     eth0
        default         173.33.220.1    eth1
    SETUP: Outbound NIC is eth1, internal NIC is eth0.
    /etc/network/interfaces
        ...
        iface eth0 inet static
            address 192.168.0.1
            netmask 255.255.255.0
    /etc/default/dhcp3-server
        INTERFACES="eth0"

    Read the article

  • Corrupt mysql system tables

    - by psynnott
    I am having issues with the columns_priv table in the mysql system database. I cannot add new users currently. I have tried repairing it using mysqlcheck --auto-repair --all-databases --password but I get the following output:
        mysql.columns_priv
        Error : Incorrect file format 'columns_priv'
        error : Corrupt
    Is there any other way to repair this table, or how do I go about replacing it with a blank table? What would I lose by doing that? Thank you
    Edit (Additional Info): mysqld is currently using 100% CPU constantly. Looking at show processlist, I get:
        mysql> show processlist;
        | Id  | User             | Host      | db    | Command | Time | State          | Info                                                                                          |
        | 5   | debian-sys-maint | localhost | mysql | Query   | 1589 | Opening tables | ALTER TABLE tables_priv MODIFY Column_priv set('Select','Insert','Update','References') COLL |
        | 752 | root             | localhost | NULL  | Query   | 0    | NULL           | show processlist                                                                              |
        2 rows in set (0.00 sec)
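    Not from the thread, but one hedged avenue: the mysql.* grant tables are MyISAM, so after killing the ALTER that is stuck in "Opening tables" and taking a file-level copy of the data directory, REPAIR TABLE ... USE_FRM (the documented fallback when REPAIR reports "Incorrect file format") may rebuild columns_priv from its .frm definition:
        KILL 5;                                   -- the stuck debian-sys-maint ALTER from the processlist
        -- back up /var/lib/mysql/mysql/columns_priv.* at the file level before touching anything
        REPAIR TABLE mysql.columns_priv USE_FRM;  -- rebuild the MyISAM data/index files from the .frm
        FLUSH PRIVILEGES;                         -- reload the grant tables once the repair succeeds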

    Read the article

  • How to make Windows 7 use the internet connection that I specify

    - by user138957
    I have a LAN adapter and a USB wireless internet connection. When both are connected, Windows 7 always uses the USB. I tried changing the metric values but no luck. Let me explain the steps I took. Currently the automatic metric is set on all adapters. LAN connected: ipconfig shows it is connected to the correct IP/DNS/gateway etc., and the IPv4 route table shows metric 24. Then I connected the USB: ipconfig shows USB connectivity, then LAN, in that order. Internet is now through USB. The IPv4 route table shows metric 4249 for LAN and 41 for USB. The gateway for USB shows "on-link". netstat -rn shows the USB device on top. I changed the LAN metric to 5 and now the route table shows LAN as 9 (not sure why it added 4) and USB as 41. netstat shows LAN then USB. ipconfig shows LAN then USB. But the connection is still through USB. How do I know? Task Manager shows utilization only through USB, and the speed is around 1 Mbps rather than the LAN's 10 Mbps. How can I get Windows 7 to use the LAN while the USB is connected? I am just trying to use the USB as a backup in case I lose the LAN connection. Please help! I thought I would set the USB metric manually to, say, 10, but it says I have to reconnect for it to take effect. Currently USB still shows below LAN and still has 9 and 41 in the table. I disconnected the USB; the table shows the LAN metric as 24 (not sure why it got changed from 9 and the setting reverted to automatic). I reconnected the USB; now the setting still shows 10 and the route table shows 11 for USB, while LAN shows 4249 (the setting shows 4245, 4 less). For some reason restarting the USB resets the LAN setting when reconnected. Thanks
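    Not from the post, but one thing that might help (hedged; the interface names below are placeholders to be replaced with the names shown by the first command): set the interface metrics explicitly from an elevated command prompt instead of relying on the automatic metric, giving the LAN the lower value:
        :: elevated command prompt; interface names are examples only
        netsh interface ipv4 show interfaces
        netsh interface ipv4 set interface "Local Area Connection" metric=5
        netsh interface ipv4 set interface "Wireless Broadband Connection" metric=50
        route print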

    Read the article

  • WHERE x = @x OR @x IS NULL

    - by steveh99999
    Every SQL DBA and developer should read the blog of MVP Erland Sommarskog, particularly his article on dynamic search conditions in T-SQL. I've linked above to his SQL 2005 article but his 2008 version is also a must-read. I seem to regularly come across uses of the SQL in the title above… Erland's article explains in detail why this is inefficient, but I came across a nice example recently… A stored procedure contained the following code:
        WHERE @Name IS NULL OR [Name] LIKE @Name
    As a nonclustered index exists on the Name column, you might assume this would be handled efficiently by SQL Server. However, I got the following output from SET STATISTICS IO:
        Table 'xxxxx'. Scan count 15, logical reads 47760, physical reads 9, read-ahead reads 13872, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Note the high number of logical reads… After a bit of investigation, we found that @Name could never actually be set to NULL in this particular example, ie the @x IS NULL was spurious… So, we changed the call to:
        WHERE [Name] LIKE @Name
    Now, how much more efficient is this code?
        Table 'xxxxx'. Scan count 3, logical reads 24, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    A nice easy win in this case… a full index scan has been replaced by a significantly more efficient index seek. I managed to recreate the same behaviour on AdventureWorks – here's a quick query to demonstrate:
        USE adventureworks
        SET STATISTICS IO ON
        DECLARE @id INT = 51721
        SELECT * FROM Sales.SalesOrderDetail WHERE @id IS NULL OR salesorderid = @id
        SELECT * FROM Sales.SalesOrderDetail WHERE salesorderid = @id
    Take a look at the STATISTICS IO output and compare the actual query plans used to prove the impact of WHERE @id IS NULL. And just to follow some of Erland's advice, here's how you could get similar performance if it was possible that @id could actually sometimes contain NULL:
        DECLARE @sql NVARCHAR(4000), @parameterlist NVARCHAR(4000)
        DECLARE @id INT = 51721 -- or change to NULL to prove the query is functionally correct
        SET @sql = 'SELECT * FROM Sales.SalesOrderDetail WHERE 1 = 1'
        IF @id IS NOT NULL
            SET @sql = @sql + ' AND salesorderid = @id'
        IF @id IS NULL
            SET @sql = @sql + ' AND salesorderid IS NULL'
        SET @parameterlist = '@id INT'
        EXEC sp_executesql @sql, @parameterlist, @id
    Sometimes I think we focus too much on hardware and SQL Server configuration, when really the answer is to focus on writing efficient SQL.
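    For completeness, Erland's 2008 article also covers a static alternative that avoids dynamic SQL; a hedged sketch (it relies on the parameter-embedding behaviour of OPTION (RECOMPILE), which needs a reasonably recent SQL Server 2008 build or later):
        DECLARE @id INT = 51721 -- or NULL
        SELECT *
        FROM Sales.SalesOrderDetail
        WHERE (@id IS NULL OR salesorderid = @id)
        OPTION (RECOMPILE) -- the plan is compiled with the actual value of @id,
                           -- so the spurious branch is optimised away on each call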

    Read the article

  • Program to swap files between drives?

    - by josi
    Has anyone built a program/script to transfer files between two hard drives when both are nearly full? That is, one copies a file over, then the other copies a file back, then each deletes the files that were copied. Kind of annoying: I have a 6 TB RAID at about 4 TB full and a 4.5 TB drive that is basically full, and I can't really swap their contents easily without doing many copies and deletes of files. Anyone know a way to make them just swap? lol

    Read the article

  • How do you handle the task of changing the schema of a production MySQL database?

    - by Continuation
    One of the biggest complaints I have heard about MySQL is that it locks up a table if you try to change its schema, like adding a column or adding an index. By "locking up the table" does it mean I can neither read nor write to the table? Sometimes for hours? That seems a pretty severe limitation. I was going to use MySQL for my new project but this gives me pause. Is there a workaround for this? How do you handle the task of changing the schema of your production MySQL database? By the way, someone told me PostgreSQL doesn't have this problem. Is that true - I can both read and write to a PostgreSQL table while changing its schema? Is there any performance penalty incurred? Would love to hear your experiences.
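    For what it's worth, two common workarounds (hedged, and version-dependent): pt-online-schema-change from Percona Toolkit rebuilds the table through triggers while it stays readable and writable, and MySQL 5.6+ InnoDB can perform many schema changes online if you ask for it explicitly. A sketch, with made-up table and column names:
        -- MySQL 5.6+ / InnoDB online DDL; the statement fails up front
        -- if the requested algorithm/lock level cannot be honoured.
        ALTER TABLE orders
            ADD COLUMN shipped_at DATETIME NULL,
            ALGORITHM=INPLACE, LOCK=NONE;
        ALTER TABLE orders
            ADD INDEX idx_shipped_at (shipped_at),
            ALGORITHM=INPLACE, LOCK=NONE;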

    Read the article

  • external hard drive not detected after ubuntu crash [restarting machine]

    - by Netmoon
    Today I tried to watch a movie [with VLC media player] from my external hard drive [Expansion Portable 500 GB], and it played fine; then I paused the movie, played it again, and my Ubuntu crashed! [Ubuntu 12.04] I had to restart the machine, so I did, but after that Ubuntu can't recognize this hard drive! I changed the USB cable but it made no difference. This is my dmesg command result:
        [ 191.281630] usb 2-1.3: new full-speed USB device number 9 using ehci_hcd
        [ 191.353527] usb 2-1.3: device descriptor read/64, error -32
        [ 191.529115] usb 2-1.3: device descriptor read/64, error -32
        [ 191.704669] usb 2-1.3: new full-speed USB device number 10 using ehci_hcd
        [ 191.776524] usb 2-1.3: device descriptor read/64, error -32
        [ 191.952202] usb 2-1.3: device descriptor read/64, error -32
        [ 192.127772] usb 2-1.3: new full-speed USB device number 11 using ehci_hcd
        [ 192.534742] usb 2-1.3: device not accepting address 11, error -32
        [ 192.606749] usb 2-1.3: new full-speed USB device number 12 using ehci_hcd
        [ 193.013696] usb 2-1.3: device not accepting address 12, error -32
        [ 193.013906] hub 2-1:1.0: unable to enumerate USB device on port 3
    Note: I am able to hear the sound of the external drive lens. When I attach the hard drive to the USB port, the status light goes on, but it is dim and I think its power is low. I tried to mount it in Microsoft Windows 7, but nothing happened.

    Read the article

  • Canon MP280 All in One prints but scanner not recognized

    - by Chris
    I recently installed Ubuntu 12.04 LTS after not using any Linux distro for several years. My Canon MP280 worked perfectly as far as printing goes and I was even able to set it up on a Samba share and print from a Windows 7 machine. However when I open Simple Scan it does not detect a scanner. I did find some instructions online but the scanner is still not recognized in Simple Scan even after installing a couple .deb packages from the Canon site. I also couldn't seem to find the ScanGear package after installing those .deb packages as well but I haven't had a chance to delve fully into it. Thanks for your time!

    Read the article

  • Do cross reference database tables have a place in domain driven design?

    - by Mike Cellini
    First some background. Let's say we have a system where a customer is placing an order in a web interface. The items that customer is ordering can be priced in various ways, sometimes including the cost of delivery and sometimes not at all. That pricing effectively depends on a variety of factors, including the vendor's own pricing model, that vendor's individual contracts with customers, as well as that vendor's contracts with its own suppliers. Let's assume that once a customer places an order for a particular item and chooses a contract (if any), the method of delivery can be determined by variables on those contracts. Those delivery methods also live in their own table in the database and have various properties consumed downstream. It makes sense that a cross reference or lookup table would store that information. That table would be loaded into the domain and could then be used to apply the appropriate delivery method while processing the order. Does this make sense in the context of domain driven design? Or is my thinking too relational? Is this logic that should be built into its own class/method (I mean beyond applying the cross reference table data)?
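    One hedged way to keep the relational lookup out of the ordering logic (every name below is invented for illustration): model DeliveryMethod as its own domain concept, hide the cross-reference table behind an interface, and let the order ask for a delivery method instead of joining tables itself:
        // Hypothetical sketch -- all names are placeholders, not a prescribed design.
        public class Contract { public int Id { get; set; } }

        public class OrderItem
        {
            public int ProductId { get; set; }
            public DeliveryMethod DeliveryMethod { get; set; }
        }

        // One row of the cross-reference table, promoted to a domain concept.
        public class DeliveryMethod
        {
            public string Code { get; private set; }
            public bool IncludesDeliveryCost { get; private set; }

            public DeliveryMethod(string code, bool includesDeliveryCost)
            {
                Code = code;
                IncludesDeliveryCost = includesDeliveryCost;
            }
        }

        // The domain depends on this abstraction; the relational lookup stays in the
        // infrastructure layer that implements it.
        public interface IDeliveryMethodResolver
        {
            DeliveryMethod Resolve(Contract contract, OrderItem item);
        }

        public class Order
        {
            private readonly IDeliveryMethodResolver _resolver;

            public Order(IDeliveryMethodResolver resolver) { _resolver = resolver; }

            public void AddItem(OrderItem item, Contract contract)
            {
                // The order asks for a delivery method; it never sees the table itself.
                item.DeliveryMethod = _resolver.Resolve(contract, item);
            }
        }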

    Read the article

  • MySQL Unions/Subselects not utilizing keys from associated tables

    - by Brett
    I've noticed by doing EXPLAINs that when a MySQL union between two tables is used, MySQL creates a temporary table, but the temp table does not use keys, so queries are slowed considerably. Here is an example:
        SELECT * FROM (
            SELECT `part_number`, `part_manufacturer_clean`, `part_number_clean`, `part_heci`, `part_manufacturer`, `part_description`
            FROM `new_products` AS `a`
            UNION
            SELECT `part` AS `part_number`, `manulower` AS `part_manufacturer_clean`, `partdeluxe` AS `part_number_clean`, `heci` AS `part_heci`, `manu` AS `part_manufacturer`, `description` AS `part_description`
            FROM `warehouse` AS `b`
        ) AS `c`
        WHERE `part_manufacturer_clean` = 'adc'
    EXPLAIN yields this:
        id  select_type   table        type  possible_keys  key     key_len  ref     rows   Extra
        1   PRIMARY       <derived2>   ALL   (NULL)         (NULL)  (NULL)   (NULL)  17206  Using where
        2   DERIVED       a            ALL   (NULL)         (NULL)  (NULL)   (NULL)  17743
        3   UNION         b            ALL   (NULL)         (NULL)  (NULL)   (NULL)  5757   (NULL)
            UNION RESULT  <union2,3>   ALL   (NULL)         (NULL)  (NULL)   (NULL)  (NULL)
    In this case, part_manufacturer_clean and manulower are keys in both tables. When I don't use the subselects and union, and just use one table, everything works fine. I'm not sure if the issue is with the union or with the subselects. Is there any way to union two tables and still use keys/indexes for performance?
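    The usual workaround (not in the question, just standard advice): the derived table produced by the UNION is materialised without any indexes, so filter inside each branch, where each base table's own key can be used, and union the already-filtered rows:
        SELECT `part_number`, `part_manufacturer_clean`, `part_number_clean`,
               `part_heci`, `part_manufacturer`, `part_description`
        FROM `new_products`
        WHERE `part_manufacturer_clean` = 'adc'   -- can use the key on new_products

        UNION

        SELECT `part`, `manulower`, `partdeluxe`, `heci`, `manu`, `description`
        FROM `warehouse`
        WHERE `manulower` = 'adc';                -- can use the key on warehouse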

    Read the article

  • Cloud Strategy for Partners Announced at OOW

    - by Cinzia Mascanzoni
    Oracle made a significant announcement about its Cloud Strategy for partners: Oracle has unveiled a comprehensive new set of Oracle Cloud partner programs and enablement resources that help partners to speed time to market with new cloud-based services and solutions and deliver increased value to customers. New Oracle PartnerNetwork Cloud offerings include an Oracle Cloud Referral Program, Oracle Cloud Specialization featuring RapidStart and Oracle Cloud Builder Specializations, Oracle Cloud Resale Program and Oracle Platform Services for Independent Software Vendors (ISVs).

    Read the article

  • Strategy for Incremental Datasource fetchings in Excel

    - by user1352530
    I am in a scenario with a table that is refreshed by a third-party app every week. I need to keep accumulating all data in Excel, using an ODBC connection to the database. I am wondering: Approach 1: Is there a way to force Excel to append results for every update (this update would be triggered according to a parameter that indicates the week)? I tried to define the table which the connection loads using a dynamic reference, but once it is anchored the first time, the table position is never redefined. Approach 2: Use an ETL to accumulate all weekly results into a staging table and then connect Excel to it in real time. But I would need a mechanism for caching old data, as I cannot let Excel's opening time grow without bound. Imagine that after 10 years, Excel would need to load 10 years of data at opening before showing anything. Is there a way to store already-fetched data and increment it in real time (when the workbook is opened) by selecting only new data (with a query/filter or something)? Thanks. EDIT: Maybe it's better to ask it this way: What is the optimal strategy for a table that keeps growing and needs to be read in real time by Excel? I just don't want to fetch absolutely all data after some months...
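    A sketch of approach 2 (all table and column names are invented): append only the not-yet-loaded week into the staging table on each weekly refresh, and point the Excel ODBC query at the staging table, so opening the workbook only reads rows that have already been accumulated:
        -- Run once per weekly refresh, before Excel connects.
        INSERT INTO staging_results (week_id, item_id, value)
        SELECT w.week_id, w.item_id, w.value
        FROM weekly_results AS w
        WHERE w.week_id NOT IN (SELECT DISTINCT week_id FROM staging_results);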

    Read the article

  • Website (X)HTML Code Change Detection [closed]

    - by 0pt1m1z3
    I am looking for an enterprise-grade service or a tool that can be used to scan / fingerprint websites and notify when major XHTML code changes are detected. The tool should be able to continuously scan thousands of websites and determine the percentage of HTML markup that has been modified since the last run, and then either save the data where it can be easily accessed or send periodic notifications. I know of services like ChangeDetect.com, but they don't do markup-only changes and instead focus on everything, including content. We don't really care about content changes, because a lot of the sites we need to cover are updated frequently with new content.
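    If a home-grown fallback is acceptable, one crude way to fingerprint markup while ignoring the text content is to strip everything between tags and hash what is left; a naive C# sketch (a real crawler would parse the DOM and compute a proper diff percentage rather than a single hash):
        using System;
        using System.Net.Http;
        using System.Security.Cryptography;
        using System.Text;
        using System.Text.RegularExpressions;
        using System.Threading.Tasks;

        class MarkupFingerprint
        {
            static async Task Main(string[] args)
            {
                using (var http = new HttpClient())
                using (var sha = SHA256.Create())
                {
                    string html = await http.GetStringAsync(args[0]);

                    // Drop the text between tags so content edits don't change the fingerprint,
                    // then collapse whitespace; only the markup skeleton gets hashed.
                    string markupOnly = Regex.Replace(html, ">[^<]*<", "><");
                    markupOnly = Regex.Replace(markupOnly, @"\s+", " ");

                    byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(markupOnly));
                    Console.WriteLine(BitConverter.ToString(hash).Replace("-", ""));
                }
            }
        }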

    Read the article

  • REST API wrapper - class design for 'lite' object responses

    - by sasfrog
    I am writing a class library to serve as a managed .NET wrapper over a REST API. I'm very new to OOP, and this task is an ideal opportunity for me to learn some OOP concepts in a real-life situation that makes sense to me. Some of the key resources/objects that the API returns are returned with different levels of detail depending on whether the request is for a single instance, a list, or part of a "search all resources" response. This is obviously a good design for the REST API itself, so that full objects aren't returned (thus increasing the size of the response and therefore the time taken to respond) unless they're needed. So, to be clear: .../car/1234.json returns the full Car object for 1234, all its properties like colour, make, model, year, engine_size, etc. Let's call this full. .../cars.json returns a list of Car objects, but only with a subset of the properties returned by .../car/1234.json. Let's call this lite. ...search.json returns, among other things, a list of car objects, but with minimal properties (only ID, make and model). Let's call this lite-lite. I want to know what the pros and cons of each of the following possible designs are, and whether there is a better design that I haven't covered: Create a Car class that models the lite-lite properties, and then have each of the more detailed responses inherit and extend this class. Create separate CarFull, CarLite and CarLiteLite classes corresponding to each of the responses. Create a single Car class that contains (nullable?) properties for the full response, and create constructors for each of the responses which populate it to the extent possible (and maybe include a property that returns the response type from which the instance was created). I expect among other things there will be use cases for consumers of the wrapper where they will want to iterate through lists of Cars, regardless of which response type they were created from, such that the three response types can contribute to the same list. Happy to be pointed to good resources on this sort of thing, and/or even told the name of the concept I'm describing so I can better target my research.
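    For what it's worth, option 3 could look roughly like this (the property names are invented); Full, Lite and LiteLite instances can then share a single List<Car>, and callers can check DetailLevel before relying on the optional fields:
        public enum CarDetailLevel { LiteLite, Lite, Full }

        public class Car
        {
            // Present in every response shape.
            public int Id { get; private set; }
            public string Make { get; private set; }
            public string Model { get; private set; }

            // Only populated by lite/full responses; null means "not returned", not "empty".
            public string Colour { get; private set; }
            public int? Year { get; private set; }
            public decimal? EngineSize { get; private set; }

            // Records which response shape built this instance.
            public CarDetailLevel DetailLevel { get; private set; }

            public static Car FromSearchResult(int id, string make, string model) =>
                new Car { Id = id, Make = make, Model = model, DetailLevel = CarDetailLevel.LiteLite };

            public static Car FromFullResponse(int id, string make, string model,
                                               string colour, int year, decimal engineSize) =>
                new Car
                {
                    Id = id, Make = make, Model = model,
                    Colour = colour, Year = year, EngineSize = engineSize,
                    DetailLevel = CarDetailLevel.Full
                };
        }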

    Read the article

  • InnoDb Overhead?

    - by Rimary
    I just converted several large tables from MyISAM to InnoDB. When I view the tables in phpMyAdmin, they are showing a significant amount of overhead (one table has 6.8 GB). Optimizing the tables (which isn't a supported command on InnoDB) has no effect like it does on MyISAM. Is this a result of InnoDB's ever-growing data file that never returns space even after deletes? If that's the case, I've never seen overhead like this before from other InnoDB tables. Is there a way to clean this up? Edit: Here are the things I've tried (with no success): Optimize table; reorder table by primary key; defragment table.
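    A hedged note on the clean-up question: OPTIMIZE TABLE is in fact accepted for InnoDB (it is internally mapped to a rebuild), and a null ALTER does the same thing; but unless innodb_file_per_table was on when the table was created, the freed pages go back to the shared ibdata tablespace rather than to the operating system, which can keep showing up as "overhead" in phpMyAdmin. The table name below is a placeholder:
        -- Either of these rebuilds the table and its indexes from scratch.
        OPTIMIZE TABLE big_table;              -- on InnoDB: "recreate + analyze" under the hood
        ALTER TABLE big_table ENGINE=InnoDB;   -- the classic "null ALTER" rebuild

        -- Space only returns to the OS if the table lives in its own .ibd file:
        SHOW VARIABLES LIKE 'innodb_file_per_table';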

    Read the article

  • Rebuild the index with REINDEX [closed]

    - by kuttyarif
    WARNING: index "pk_alarmid" contains 1363436 row versions, but table contains 26 row versions
    HINT: Rebuild the index with REINDEX.
    WARNING: index "alarm_uei_idx" contains 1363434 row versions, but table contains 26 row versions
    HINT: Rebuild the index with REINDEX.
    WARNING: index "alarm_nodeid_idx" contains 1363434 row versions, but table contains 26 row versions
    HINT: Rebuild the index with REINDEX.
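    Those messages are PostgreSQL warnings about badly bloated indexes, and the hint can be followed literally; a short sketch (the table name is a guess inferred from the index names, and REINDEX takes an exclusive lock on the table while it runs):
        -- Rebuild one index at a time...
        REINDEX INDEX pk_alarmid;
        -- ...or every index on the table in one statement (adjust to the actual table name).
        REINDEX TABLE alarms;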

    Read the article

  • Bacula stops writing to disk volume after 2GB

    - by m.list
    Bacula version: 5.2.5. I have configured Bacula to write volumes to disk; however, Bacula stops writing to the volume as soon as it reaches 2 GB. The file system is not an issue, as I have stored files larger than 2 GB on it.
        06-Dec 17:22 backup-sd JobId 8421: End of Volume "Full-Monthly-0005" at 0:2147475577 on device "FileStorage" (/nfs/backup-pool). Write of 64512 bytes got 8069.
        06-Dec 17:22 backup-sd JobId 8421: End of medium on Volume "Full-Monthly-0005" Bytes=2,147,475,578 Blocks=33,288 at 06-Dec-2012 17:22.
        backup1@backup:/nfs/backup-pool$ ls -alh Full-Monthly-0005
        -rw-r----- 1 bacula tape 2.0G Dec 3 16:14 Full-Monthly-0005
    bacula-dir.conf:
        Pool {
            Name = Full-Monthly
            Pool Type = Backup
            Recycle = yes
            Volume Retention = 5 months
            Volume Use Duration = 1 day
            Maximum Volumes = 5
            Maximum Volume Bytes = 12gb
        }
    bacula-sd.conf:
        Device {
            Name = FileStorage
            Media Type = File
            Archive Device = /nfs/backup-pool
            LabelMedia = yes          # lets Bacula label unlabeled media
            Random Access = Yes
            RemovableMedia = no
            AlwaysOpen = no
            Label media = yes
            Maximum Volume Size = 12gb
        }
    In my original configuration Maximum Volume Bytes and Maximum Volume Size were not set at all, so they should have defaulted to no maximum, but that did not work either.

    Read the article

  • You cannot do cross joins in SQL Azure but there is a way around that....

    - by SeanBarlow
    So I was asked today how to do cross joins in SQL Azure using LINQ. Well, the simple answer is you can't do it. It is not supported, but there are ways around that. The solution is actually very simple and easy to implement. So here is what I did and how I did it. I created two SQL Azure databases. The first database is called AccountDb and has a single table named Account, which has Id, CompanyId and Name columns. The second database I called CompanyDb and it contains two tables. The first table I named Company and the second I named Address. The Company table has Id and Name columns. The Address table has Id and CompanyId columns. Since we cannot do cross joins in Azure, we have to have one of the models preloaded with data. I simply put the Accounts into a list of accounts and use that in my join.
        var accounts = new AccountsModelContainer().Accounts.ToList();
        var companies = new CompanyModelContainer().Companies;
        var query = from account in accounts
                    join company in
                        (
                            from c in companies
                            select c
                        ) on account.CompanyId equals company.Id
                    select new AccountView()
                    {
                        AccountName = account.Name,
                        CompanyName = company.Name,
                        Addresses = company.Addresses
                    };
        return query.ToList();
    So as long as you have your data loaded from one of the contexts, you can still execute your queries and get the data back that you want.

    Read the article

  • How compilers know about other classes and their properties?

    - by OnResolve
    I'm writing my first programming language that is object oriented, and so far so good with creating a single 'class'. But let's say I want to have two classes, say ClassA and ClassB. Provided these two have nothing to do with each other, then all is good. However, say ClassA creates a ClassB; this poses two related questions: How would the compiler know, when compiling ClassA, that ClassB even exists, and, if it does, how does it know its properties? My thought thus far had been: instead of compiling one class at a time (i.e. scan, parse and generate code) for each "file" (not really a file, per se, but a "class"), do I need to scan and parse each first, then generate code for all?
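    The usual answer is exactly that: two passes. First parse every file and record each class and its members in a shared symbol table, then generate code once every declaration is known, so ClassA can reference ClassB regardless of file order. A rough sketch of the driver (all type names are invented placeholders, not a real compiler API):
        using System.Collections.Generic;

        // Minimal placeholder types -- a real front end would have a much richer AST.
        public class ClassDeclaration
        {
            public string Name;
            public List<string> Members = new List<string>();
        }

        public class SymbolTable
        {
            private readonly Dictionary<string, ClassDeclaration> _classes =
                new Dictionary<string, ClassDeclaration>();
            public void Register(ClassDeclaration c) => _classes[c.Name] = c;
            public ClassDeclaration Lookup(string name) => _classes[name];
        }

        public class Compiler
        {
            // sourceUnits: one already-parsed class per input file.
            public void Compile(IEnumerable<ClassDeclaration> sourceUnits)
            {
                var symbols = new SymbolTable();
                var units = new List<ClassDeclaration>(sourceUnits);

                // Pass 1: register every declaration before generating anything,
                // so ClassA can reference ClassB regardless of file order.
                foreach (var unit in units)
                    symbols.Register(unit);

                // Pass 2: with the symbol table complete, resolve cross-class
                // references and emit code for each unit.
                foreach (var unit in units)
                    EmitCode(unit, symbols);
            }

            private void EmitCode(ClassDeclaration unit, SymbolTable symbols)
            {
                // Code generation would go here; symbols.Lookup(...) answers
                // "does ClassB exist and what are its members?".
            }
        }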

    Read the article

  • Mysql InnoDB and quickly applying large updates

    - by Tim
    Basically my problem is that I have a large table of about 17,000,000 products to which I need to apply a bunch of updates really quickly. The table has 30 columns with the id set as int(10) AUTO_INCREMENT. I have another table in which all of the updates for this table are stored; these updates have to be pre-calculated as they take a couple of days to compute. That table is in the format of [ product_id int(10), update_value int(10) ]. The strategy I'm taking to issue these 17 million updates quickly is to load all of them into memory in a Ruby script and group them in a hash of arrays so that each update_value is a key and each array is a list of sorted product_ids:
        { 150 => [1,2,3,4,5,6], 160 => [7,8,9,10] }
    Updates are then issued in the format of:
        UPDATE product SET update_value = 150 WHERE product_id IN (1,2,3,4,5,6);
        UPDATE product SET update_value = 160 WHERE product_id IN (7,8,9,10);
    I'm pretty sure I'm doing this correctly in the sense that issuing the updates on sorted batches of product_ids should be the optimal way to do it with MySQL / InnoDB. I'm hitting a weird issue though: when I was testing with updating ~13 million records, this only took around 45 minutes. Now I'm testing with more data, ~17 million records, and the updates are taking closer to 120 minutes. I would have expected some sort of speed decrease here, but not to the degree that I'm seeing. Any advice on how I can speed this up or what could be slowing me down with this larger record set? As far as server specs go they're pretty good, heaps of memory / CPU, and the whole DB should fit into memory with plenty of room to grow.
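    One alternative worth measuring (not what the poster did): skip the Ruby grouping entirely, keep the pre-calculated table on the same server, and let a single multi-table UPDATE apply the values via a join, batched by primary-key range so each transaction stays bounded. The updates table name is assumed here:
        -- product_updates: the pre-calculated table (product_id INT PRIMARY KEY, update_value INT)
        UPDATE product AS p
        JOIN product_updates AS u ON u.product_id = p.id
        SET p.update_value = u.update_value
        WHERE p.id BETWEEN 1 AND 1000000;   -- repeat per range so each transaction stays small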

    Read the article

  • MySQL reclaim index space after large delete?

    - by cdunn
    After performing a large delete in MySQL, I understand you need to run a null ALTER to reclaim disk space. Is this also true for reclaiming index space? We have tables using 10 GB of index space, we have deleted/archived large chunks of this data, and we are unsure if we need to rebuild the table in order to decrease the size of the index. Can anyone offer any advice? We are trying to avoid rebuilding the table since it would take quite a while and lock the table. Thanks!
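    A hedged note rather than a definitive answer: the null ALTER rebuilds the clustered data and every secondary index in the same pass, so index space is reclaimed by that one rebuild; there is no separate index-only shrink in InnoDB, and if lock time is the worry, pt-online-schema-change performs the same rebuild while the table stays writable. information_schema can show the before/after sizes (table name is a placeholder):
        -- Check data vs index size before and after the rebuild.
        SELECT table_name,
               ROUND(data_length  / 1024 / 1024) AS data_mb,
               ROUND(index_length / 1024 / 1024) AS index_mb
        FROM information_schema.tables
        WHERE table_schema = DATABASE() AND table_name = 'big_table';

        -- The null ALTER rebuilds the data and all indexes in one pass.
        ALTER TABLE big_table ENGINE=InnoDB;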

    Read the article

  • Generating CMakeLists.txt

    - by vanna
    I've got a bunch of C++ source files and headers. They may use external libraries such as Boost. I am interested in the process of building binaries for Windows and *nix. Makefiles (*nix) and .vcproj files (Windows) call compilers with certain specifications such as the order of compilation, compilation options, and so on. CMakeLists.txt can be used by CMake to generate either makefiles or .vcproj files, and it offers very helpful commands such as recursive search for files, automatic linkage with known libraries, installers, variables that can be used in source files... Is there any existing tool that would generate a CMakeLists.txt from specified options? Options could be like: scan this folder and make a library out of it, then scan this other folder and make an executable, automatically link both with Boost, and produce a user-friendly installer with generated INSTALL.txt and README.txt.
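    For reference, the top-level CMakeLists.txt for the layout described is fairly short to write by hand; a hedged sketch (directory names, target names, and the Boost components are assumptions):
        # Top-level CMakeLists.txt -- "lib/" becomes a library, "app/" an executable,
        # both linked against Boost. Paths and target names are placeholders.
        cmake_minimum_required(VERSION 2.8)
        project(MyProject CXX)

        find_package(Boost REQUIRED COMPONENTS filesystem system)
        include_directories(${Boost_INCLUDE_DIRS})

        # Recursive source search, as mentioned in the question.
        file(GLOB_RECURSE LIB_SOURCES lib/*.cpp lib/*.h)
        file(GLOB_RECURSE APP_SOURCES app/*.cpp app/*.h)

        add_library(mycore ${LIB_SOURCES})
        add_executable(myapp ${APP_SOURCES})
        target_link_libraries(myapp mycore ${Boost_LIBRARIES})

        # install() rules feed CPack, which can then build an installer package.
        install(TARGETS myapp mycore
                RUNTIME DESTINATION bin
                LIBRARY DESTINATION lib
                ARCHIVE DESTINATION lib)
        install(FILES README.txt INSTALL.txt DESTINATION doc)
        include(CPack)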

    Read the article

  • Does "I securely erased my drive" really work with Truecrypt partitions?

    - by TheLQ
    When you look at TrueCrypt's Plausible Deniability page, it says that one of the explanations for a partition containing solely random data is that you securely erased your drive. But what about the partition table with full disk encryption? How can you explain why the partition table says there's a partition of unknown type (with my limited knowledge of partition tables, I think they store each partition's filesystem type) that is filled with solely random data? It seems that if you're going to securely erase the drive you would destroy everything, including the partition table. And even if you just wiped the partition, the partition table would still say that the partition was originally NTFS, which it isn't anymore. Does the "I securely erased my drive" excuse still work here? (Note: I know that there are hidden TrueCrypt volumes, but I'm avoiding them due to the high risk of data loss.)

    Read the article
