Search Results

Search found 296 results on 12 pages for 'optimizer'.

Page 4 of 12

  • PostgreSQL - fetch the row which has the Max value for a column

    - by Joshua Berry
    I'm dealing with a Postgres table (called "lives") that contains records with columns for time_stamp, usr_id, transaction_id, and lives_remaining. I need a query that will give me the most recent lives_remaining total for each usr_id There are multiple users (distinct usr_id's) time_stamp is not a unique identifier: sometimes user events (one by row in the table) will occur with the same time_stamp. trans_id is unique only for very small time ranges: over time it repeats remaining_lives (for a given user) can both increase and decrease over time example: time_stamp|lives_remaining|usr_id|trans_id ----------------------------------------- 07:00 | 1 | 1 | 1 09:00 | 4 | 2 | 2 10:00 | 2 | 3 | 3 10:00 | 1 | 2 | 4 11:00 | 4 | 1 | 5 11:00 | 3 | 1 | 6 13:00 | 3 | 3 | 1 As I will need to access other columns of the row with the latest data for each given usr_id, I need a query that gives a result like this: time_stamp|lives_remaining|usr_id|trans_id ----------------------------------------- 11:00 | 3 | 1 | 6 10:00 | 1 | 2 | 4 13:00 | 3 | 3 | 1 As mentioned, each usr_id can gain or lose lives, and sometimes these timestamped events occur so close together that they have the same timestamp! Therefore this query won't work: SELECT b.time_stamp,b.lives_remaining,b.usr_id,b.trans_id FROM (SELECT usr_id, max(time_stamp) AS max_timestamp FROM lives GROUP BY usr_id ORDER BY usr_id) a JOIN lives b ON a.max_timestamp = b.time_stamp Instead, I need to use both time_stamp (first) and trans_id (second) to identify the correct row. I also then need to pass that information from the subquery to the main query that will provide the data for the other columns of the appropriate rows. This is the hacked up query that I've gotten to work: SELECT b.time_stamp,b.lives_remaining,b.usr_id,b.trans_id FROM (SELECT usr_id, max(time_stamp || '*' || trans_id) AS max_timestamp_transid FROM lives GROUP BY usr_id ORDER BY usr_id) a JOIN lives b ON a.max_timestamp_transid = b.time_stamp || '*' || b.trans_id ORDER BY b.usr_id Okay, so this works, but I don't like it. It requires a query within a query, a self join, and it seems to me that it could be much simpler by grabbing the row that MAX found to have the largest timestamp and trans_id. The table "lives" has tens of millions of rows to parse, so I'd like this query to be as fast and efficient as possible. I'm new to RDBM and Postgres in particular, so I know that I need to make effective use of the proper indexes. I'm a bit lost on how to optimize. I found a similar discussion here. Can I perform some type of Postgres equivalent to an Oracle analytic function? Any advice on accessing related column information used by an aggregate function (like MAX), creating indexes, and creating better queries would be much appreciated! P.S. You can use the following to create my example case: create TABLE lives (time_stamp timestamp, lives_remaining integer, usr_id integer, trans_id integer); insert into lives values ('2000-01-01 07:00', 1, 1, 1); insert into lives values ('2000-01-01 09:00', 4, 2, 2); insert into lives values ('2000-01-01 10:00', 2, 3, 3); insert into lives values ('2000-01-01 10:00', 1, 2, 4); insert into lives values ('2000-01-01 11:00', 4, 1, 5); insert into lives values ('2000-01-01 11:00', 3, 1, 6); insert into lives values ('2000-01-01 13:00', 3, 3, 1);
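
    A Postgres-specific alternative to the string-concatenation trick is DISTINCT ON, which keeps exactly one row per usr_id according to the ORDER BY. The sketch below assumes the lives table from the question; the index is only a suggestion with a made-up name, not something that already exists:

      -- One row per usr_id: the row with the greatest (time_stamp, trans_id) pair.
      SELECT DISTINCT ON (usr_id)
             time_stamp, lives_remaining, usr_id, trans_id
      FROM   lives
      ORDER  BY usr_id, time_stamp DESC, trans_id DESC;

      -- A supporting index lets Postgres read the rows already in a useful order:
      CREATE INDEX lives_usr_ts_trans_idx ON lives (usr_id, time_stamp DESC, trans_id DESC);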

    Read the article

  • Wordpress OptimizePress (Theme) error when creating new page

    - by user594777
    I just installed WordPress newest version, also installed OptimizePress Theme. I am getting the following error when trying to add a new page in Word Press. Any help would be appreciated. Warning: mkdir() [function.mkdir]: Permission denied in /home/admin/domains/mywebsite.com/public_html/wp-content/themes/OptimizePress/admin/clscustomfields.php on line 1578 Warning: mkdir() [function.mkdir]: No such file or directory in /home/admin/domains/mywebsite.com/public_html/wp-content/themes/OptimizePress/admin/clscustomfields.php on line 1581 Warning: mkdir() [function.mkdir]: No such file or directory in /home/admin/domains/mywebsite.com/public_html/wp-content/themes/OptimizePress/admin/clscustomfields.php on line 1584 Warning: mkdir() [function.mkdir]: Permission denied in /home/admin/domains/mywebsite.com/public_html/wp-content/themes/OptimizePress/admin/clsblogfields.php on line 174 Warning: mkdir() [function.mkdir]: No such file or directory in /home/admin/domains/mywebsite.com/public_html/wp-content/themes/OptimizePress/admin/clsblogfields.php on line 177 Warning: mkdir() [function.mkdir]: No such file or directory in /home/admin/domains/mywebsite.com/public_html/wp-content/themes/OptimizePress/admin/clsblogfields.php on line 180 Warning: mkdir() [function.mkdir]: Permission denied in /home/admin/domains/mywebsite.com/public_html/wp-content/themes/OptimizePress/admin/clslpcustomfields.php on line 1725 Warning: mkdir() [function.mkdir]: No such file or directory in /home/admin/domains/mywebsite.com/public_html/wp-content/themes/OptimizePress/admin/clslpcustomfields.php on line 1728 Warning: mkdir() [function.mkdir]: No such file or directory in /home/admin/domains/mywebsite.com/public_html/wp-content/themes/OptimizePress/admin/clslpcustomfields.php on line 1731 Warning: Cannot modify header information - headers already sent by (output started at /home/admin/domains/mywebsite.com/public_html/wp-content/themes/OptimizePress/admin/clscustomfields.php:1578) in /home/admin/domains/mywebsite.com/public_html/wp-includes/functions.php on line 830 Warning: Cannot modify header information - headers already sent by (output started at /home/admin/domains/mywebsite.com/public_html/wp-content/themes/OptimizePress/admin/clscustomfields.php:1578) in /home/admin/domains/mywebsite.com/public_html/wp-includes/functions.php on line 831

    Read the article

  • Optimizing simple search script in PowerShell

    - by cc0
    I need to create a script to search through just under a million files of text, code, etc. to find matches and then output all hits on a particular string pattern to a CSV file. So far I have made this: $location = 'C:\Work*' $arr = "foo", "bar" #Where "foo" and "bar" are string patterns I want to search for (separately) for($i=0;$i -lt $arr.length; $i++) { Get-ChildItem $location -recurse | select-string -pattern $($arr[$i]) | select-object Path | Export-Csv "C:\Work\Results\$($arr[$i]).txt" } This returns a CSV file named "foo.txt" with a list of all files with the word "foo" in it, and a file named "bar.txt" with a list of all files containing the word "bar". Is there any way anyone can think of to optimize this script to make it work faster? Or ideas on how to make an entirely different, but equivalent script that just works faster? All input appreciated!

    Read the article

  • [MySQL] Optimize Query

    - by bordeux
    Hello. I have a problem optimizing this query: SET @SEARCH = "dokumentalne"; SELECT SQL_NO_CACHE `AA`.`version` AS `Version` , `AA`.`contents` AS `Contents` , `AA`.`idarticle` AS `AdressInSQL` , `AA` .`topic` AS `Topic` , MATCH (`AA`.`topic` , `AA`.`contents`) AGAINST (@SEARCH) AS `Relevance` , `IA`.`url` AS `URL` FROM `xv_article` AS `AA` INNER JOIN `xv_articleindex` AS `IA` ON ( `AA`.`idarticle` = `IA`.`adressinsql` ) INNER JOIN ( SELECT `idarticle` , MAX( `version` ) AS `version` FROM `xv_article` WHERE MATCH (`topic` , `contents`) AGAINST (@SEARCH) GROUP BY `idarticle` ) AS `MG` ON ( `AA`.`idarticle` = `MG`.`idarticle` ) WHERE `IA`.`accepted` = "yes" AND `AA`.`version` = `MG`.`version` ORDER BY `Relevance` DESC LIMIT 0 , 30 Right now this query takes about 20 seconds. How can I optimize it? EXPLAIN gives this: 1 PRIMARY AA ALL NULL NULL NULL NULL 11169 Using temporary; Using filesort 1 PRIMARY ALL NULL NULL NULL NULL 681 Using where 1 PRIMARY IA ALL accepted NULL NULL NULL 11967 Using where 2 DERIVED xv_article fulltext topic topic 0 1 Using where; Using temporary; Using filesort This is an example server with my data: user: bordeux_4prog password: 4prog phpmyadmin: http://phpmyadmin.bordeux.net/ chive: http://chive.bordeux.net/
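
    Given that the EXPLAIN output shows full scans on both xv_article and xv_articleindex, a first step (before touching the query text) would be indexes that support the two joins. This is only a sketch based on the column names in the query; the index names are made up, and the effect should be confirmed with EXPLAIN afterwards:

      ALTER TABLE xv_article      ADD INDEX idx_idarticle_version (idarticle, version);
      ALTER TABLE xv_articleindex ADD INDEX idx_adressinsql_accepted (adressinsql, accepted);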

    Read the article

  • Does query plan optimizer works well with joined/filtered table-valued functions?

    - by smoothdeveloper
    In SQLSERVER 2005, I'm using table-valued function as a convenient way to perform arbitrary aggregation on subset data from large table (passing date range or such parameters). I'm using theses inside larger queries as joined computations and I'm wondering if the query plan optimizer work well with them in every condition or if I'm better to unnest such computation in my larger queries. Does query plan optimizer unnest table-valued functions if it make sense? If it doesn't, what do you recommend to avoid code duplication that would occur by manually unnesting them? If it does, how do you identify that from the execution plan? code sample: create table dbo.customers ( [key] uniqueidentifier , constraint pk_dbo_customers primary key ([key]) ) go /* assume large amount of data */ create table dbo.point_of_sales ( [key] uniqueidentifier , customer_key uniqueidentifier , constraint pk_dbo_point_of_sales primary key ([key]) ) go create table dbo.product_ranges ( [key] uniqueidentifier , constraint pk_dbo_product_ranges primary key ([key]) ) go create table dbo.products ( [key] uniqueidentifier , product_range_key uniqueidentifier , release_date datetime , constraint pk_dbo_products primary key ([key]) , constraint fk_dbo_products_product_range_key foreign key (product_range_key) references dbo.product_ranges ([key]) ) go . /* assume large amount of data */ create table dbo.sales_history ( [key] uniqueidentifier , product_key uniqueidentifier , point_of_sale_key uniqueidentifier , accounting_date datetime , amount money , quantity int , constraint pk_dbo_sales_history primary key ([key]) , constraint fk_dbo_sales_history_product_key foreign key (product_key) references dbo.products ([key]) , constraint fk_dbo_sales_history_point_of_sale_key foreign key (point_of_sale_key) references dbo.point_of_sales ([key]) ) go create function dbo.f_sales_history_..snip.._date_range ( @accountingdatelowerbound datetime, @accountingdateupperbound datetime ) returns table as return ( select pos.customer_key , sh.product_key , sum(sh.amount) amount , sum(sh.quantity) quantity from dbo.point_of_sales pos inner join dbo.sales_history sh on sh.point_of_sale_key = pos.[key] where sh.accounting_date between @accountingdatelowerbound and @accountingdateupperbound group by pos.customer_key , sh.product_key ) go -- TODO: insert some data -- this is a table containing a selection of product ranges declare @selectedproductranges table([key] uniqueidentifier) -- this is a table containing a selection of customers declare @selectedcustomers table([key] uniqueidentifier) declare @low datetime , @up datetime -- TODO: set top query parameters . select saleshistory.customer_key , saleshistory.product_key , saleshistory.amount , saleshistory.quantity from dbo.products p inner join @selectedproductranges productrangeselection on p.product_range_key = productrangeselection.[key] inner join @selectedcustomers customerselection on 1 = 1 inner join dbo.f_sales_history_..snip.._date_range(@low, @up) saleshistory on saleshistory.product_key = p.[key] and saleshistory.customer_key = customerselection.[key] I hope the sample makes sense. Much thanks for your help!
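
    For what it's worth, the function in the sample is an inline (single-statement) table-valued function, and SQL Server expands that kind of function into the calling query much like a view, so the optimizer can cost the underlying tables together with the outer query; in the execution plan you see operators on point_of_sales and sales_history rather than a single function operator. The case that does not get unnested is a multi-statement function, which is costed with a fixed, tiny row estimate. A sketch of that non-inlined form, purely for contrast (hypothetical name):

      create function dbo.f_sales_history_multi
      (
        @accountingdatelowerbound datetime,
        @accountingdateupperbound datetime
      )
      returns @result table
      (
        customer_key uniqueidentifier,
        product_key  uniqueidentifier,
        amount       money,
        quantity     int
      )
      as
      begin
        -- the body runs as its own statement; the optimizer cannot merge it with the caller
        insert @result (customer_key, product_key, amount, quantity)
        select pos.customer_key, sh.product_key, sum(sh.amount), sum(sh.quantity)
        from dbo.point_of_sales pos
        inner join dbo.sales_history sh on sh.point_of_sale_key = pos.[key]
        where sh.accounting_date between @accountingdatelowerbound and @accountingdateupperbound
        group by pos.customer_key, sh.product_key;
        return;
      end
      go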

    Read the article

  • Is the C++ compiler optimizer allowed to break my destructor's ability to be called multiple times?

    - by sharptooth
    We once had an interview with a very experienced C++ developer who couldn't answer the following question: is it necessary to call the base class destructor from the derived class destructor in C++? Obviously the answer is no, C++ will call the base class destructor automagically anyway. But what if we attempt to do the call? As I see it, the result will depend on whether the base class destructor can be called twice without invoking erroneous behavior. For example in this case: class BaseSafe { public: ~BaseSafe() { } private: int data; }; class DerivedSafe : public BaseSafe { public: ~DerivedSafe() { BaseSafe::~BaseSafe(); } }; everything will be fine - the BaseSafe destructor can be called twice safely and the program will run all right. But in this case: class BaseUnsafe { public: BaseUnsafe() { buffer = new char[100]; } ~BaseUnsafe () { delete[] buffer; } private: char* buffer; }; class DerivedUnsafe : public BaseUnsafe { public: ~DerivedUnsafe () { BaseUnsafe::~BaseUnsafe(); } }; the explicit call will run fine, but then the implicit (automagic) call to the destructor will trigger double-delete and undefined behavior. Looks like it is easy to avoid the UB in the second case. Just set buffer to a null pointer after delete[]. But will this help? I mean the destructor is expected to only be run once on a fully constructed object, so the optimizer could decide that setting buffer to a null pointer makes no sense and eliminate that code, exposing the program to double-delete. Is the compiler allowed to do that?

    Read the article

  • What are the types and inner workings of a query optimizer?

    - by Frank Developer
    As I understand it, most query optimizers are cost-based. Some can be influenced by hints like FIRST_ROWS(). Others are tailored for OLAP. Is it possible to know more detailed logic about how Informix IDS and SE's optimizers decide what's the best route for processing a query, other than SET EXPLAIN? Is there any documentation which illustrates the ranking of SELECT statements? I would imagine that "SELECT col FROM table WHERE ROWID = n" ranks 1st. What are the rest of them?.. If I'm not mistaken, Informix's ROWID is a SERIAL(INT) which allows for a max. of 2GB nrows, or maybe it uses INT9 for TB's nrows?.. However, I think Oracle uses HEX values for ROWID. Too bad ROWID can't be often used, since a row's ROWID can change. So maybe ROWID is used by the optimizer as a counter? Perhaps it could be used for implementing the query progress idea I mentioned in my "Begin viewing query results before query completes" question? For some reason, I feel it wouldn't be that difficult to report a query's progress while it is being processed, perhaps at the expense of some slight overhead, but it would be nice to know ahead of time: a "Google-like" estimate of how many rows meet a query's criteria, display its progress every 100, 200, 500 or 1,000 rows, give users the ability to cancel it at any time, and start displaying the qualifying rows as they are being put into the current list, while it continues searching?.. This is just one example; perhaps we could think of other neat/useful features, the ingredients are more or less there. Perhaps we could fine-tune each query with more granularity than currently available? OLTP queries tend to be mostly static and pre-defined. The "what-ifs" are more OLAP, so let's try to add more control and intelligence to it? So, therefore, being able to more precisely control, not "hint-influence", a query is what's needed, and therefore it would be necessary to know how the optimizer's logic is programmed. We can then have Dynamic SELECT and other statements for specific situations! Maybe even tell IDS to read blocks of index nodes at a time instead of one by one, etc. etc.
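
    On the concrete part of the question: beyond SET EXPLAIN there isn't much published detail on how IDS/SE pick a plan, but the optimizer can be steered per session or per statement. A rough sketch, with syntax recalled from IDS and best treated as an assumption to verify against your engine version:

      SET EXPLAIN ON;                 -- plan text is written to sqexplain.out
      SET OPTIMIZATION FIRST_ROWS;    -- session-wide bias toward returning initial rows quickly

      SELECT {+FIRST_ROWS} col        -- or an inline optimizer directive on a single statement
      FROM   tab
      WHERE  rowid = 1;

      SET EXPLAIN OFF;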

    Read the article

  • Review of ComponentOne Silverlight Controls (Free License Giveaway).

    - by mbcrump
    ComponentOne has several great products that target Silverlight Developers. One of them is their Silverlight Controls and the other is the XAP Optimizer. I decided that I would check out the controls and Xap Optimizer and feature them on my blog. After talking with ComponentOne, they agreed to take part in my Monthly Silverlight giveaway. The details are listed below: ----------------------------------------------------------------------------------------------------------------------------------------------------------- Win a FREE developer’s license of ComponentOne Silverlight Controls + XAP Optimizer! (the winner also gets a license to Silverlight Spy) Random winner will be announced on March 1st, 2011! To be entered into the contest do the following things: Subscribe to my feed. Leave a comment below with a valid email account (I WILL NOT share this info with anyone.) Retweet the following : I just entered to win free #Silverlight controls from @mbcrump and @ComponentOne http://mcrump.me/fTSmB8 ! Don’t change the URL because this will allow me to track the users that Tweet this page. Don’t forget to visit ComponentOne because they made this possible. MichaelCrump.Net provides Silverlight Giveaways every month. You can also see the latest giveaway by bookmarking http://giveaways.michaelcrump.net . ---------------------------------------------------------------------------------------------------------------------------------------------------------- Before we get started with the Silverlight Controls, here is a couple of links to bookmark: The Live Demos of the Silverlight Controls is located here. The XAP Optimizer page is here. One thing that I liked about the help documentation is that you can grab a PDF that only contains documentation for that control. This allows you to get the information you need without going through several hundred pages. You can also download the full documentation from their site.  ComponentOne Silverlight Controls I recently built a hobby project and decided to use ComponentOne Silverlight Controls. The main reason for this is that the controls are heavily documented, they look great and getting help was just a tweet or forum click away. So, the first question that you may ask is, “What is included?” Here is the official list below. I wanted to show several of the controls that I think developers will use the most. 1) ComponentOne’s Image Control – Display animated GIF images on your Silverlight pages as you would in traditional Web apps. Add attractive visuals with minimal effort. 2) HTML Host - Render HTML and arbitrary URI content from within Silverlight. 3) Chart3D - Create 3D surface charts with options for contour levels, zones, a chart legend and more. 4) PDFViewer - View PDF files in Silverlight! That is just a fraction of the controls available. If you want to check out several of them in a “real” application then check out my Silverlight page at http://michaelcrump.info. This brings me to the second part of the giveaway. XAP Optimizer – Is designed to reduce the size of your XAP File. It also includes built-in obfuscation and signing. With my personal project, I decided to use the XAP Optimizer by ComponentOne. It was so easy to use. You basically give it your .XAP file and it provides an output file. If you prefer to prune unused references manually then you can prune your XAP file manually by selecting the option below. I went ahead and added Obfuscation just to try it out and it worked great. 
You may notice from the screenshot below that I only obfuscated assemblies that I built. The other dlls anyone can grab off the net so we have no reason to obfuscate them. You also have the option to automatically sign your .xap with the SN.exe tool. So how did it turn out? Well, I reduced my XAP size from 2.4 to 1.8 with simply a click of a button. I added obfuscation with a click of a button: Screenshot of no obfuscation on my XAP File   Screenshot of obfuscation on my XAP File with XAP Optimizer.   So, with 2 button clicks, I reduce my XAP file and obfuscated my assembly. What else can you want? Well, they provide a nice HTML report that gives you an optimization summary. So what if you don’t want to launch this tool every time you deploy a Silverlight application? Well the official documentation provided a way to do it in your built event in Visual Studio. Click the Build Events tab on the left side of the Properties window. Enter the following command in the Post-build event command line: $Program Files\ComponentOne\XapOptimizer\XapOptimizer.exe /cmd /p:$(ProjectDir)$(ProjectName).xoproj In the end, this is a great product. I love code that I don’t have to write and utilities that just work. ComponentOne delivers with both the Silverlight Controls and the XAP Optimizer. Don’t forget to leave a comment below in order to win a set of the controls! Subscribe to my feed

    Read the article

  • Day 4 of Oracle OpenWorld 2012 October 3rd

    - by Maria Colgan
    Thanks to all those who stopped by the demogrounds over the last two days to chat with the Optimizer developers and to check out what is new in the Oracle Optimizer. Remember, today is the last day of the demogrounds, so if you haven't had a chance to stop by and collect your bumper sticker yet, do so today. The Optimizer developers will be there from 9:45 am until 4 pm. Don't forget our second technical session (Session CON8457) is tomorrow at 12:45 pm. +Maria Colgan

    Read the article

  • Network will not work properly after running TCP Optimizer, but safe mode settings work perfectly. How to restore?

    - by michele
    I was experiencing some issues with my connection while playing online, so I tried to optimize it by running TCP Optimizer on my PC (Windows 7 64-bit Professional). I thought maybe the situation could improve, but it didn't. Actually, I now get extremely slow page loading times, probably due to a very low RWIN value of 1024. I understand that Windows 7 has a system to automatically adjust the RWIN value when needed. The setting from netsh is "normal", so I guess something else must be wrong. I tried every automatic tool out there to restore Windows' default values, but I had no success. I currently have what should be labeled as "default values" for everything TCP Optimizer initially changed, but the problem persists. The thing is, I just found out that running Windows in safe mode SOLVES the problem completely. The problem is that as soon as I reboot, I get the same issue all over again. So my question is: is there a way to use SAFE MODE network settings in NORMAL mode?

    Read the article

  • Unexpected SQL Server 2008 Performance Tip: Avoid local variables in WHERE clause

    - by Jim Duffy
    Sometimes an application needs to have every last drop of performance it can get, others not so much. We’re in the process of converting some legacy Visual FoxPro data into SQL Server 2008 for an application and ran into a situation that required some performance tweaking. I figured the Making Microsoft SQL Server 2008 Fly session that Yavor Angelov (SQL Server Program Manager – Query Processing) presented at PDC 2009 last November would be a good place to start. I was right. One tip among the list of incredibly useful tips Yavor presented was “local variables are bad news for the Query Optimizer and they cause the Query Optimizer to guess”. What that means is you should be avoiding code like this in your stored procs even though it seems such an intuitively good idea. DECLARE @StartDate datetime SET @StartDate = '20091125' SELECT * FROM Orders WHERE OrderDate = @StartDate Instead you should be referencing the value directly in the WHERE clause so the Query Optimizer can create a better execution plan. SELECT * FROM Orders WHERE OrderDate = '20091125' My first thought about this one was we reference variables in the form of passed in parameters in WHERE clauses in many of our stored procs. Not to worry though because parameters ARE available to the Query Optimizer as it compiles the execution plan. I highly recommend checking out Yavor’s session for additional tips to help you squeeze every last drop of performance out of your queries. Have a day. :-|
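
    If the value genuinely has to arrive through a local variable, one commonly used alternative (not from Yavor's list, so verify it against your own workload) is OPTION (RECOMPILE), which compiles the statement at execution time so the optimizer can see the variable's actual value instead of guessing:

      DECLARE @StartDate datetime
      SET @StartDate = '20091125'
      SELECT * FROM Orders WHERE OrderDate = @StartDate OPTION (RECOMPILE)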

    Read the article

  • Disabling the automatic optimizer statistics collection job in 11g

    - by Liu Maclean(???)
    In 11g the automatic optimizer statistics gathering job runs as an autotask ('auto optimizer stats collection') rather than as the scheduler job used in 10g. You can check its status in DBA_AUTOTASK_CLIENT and disable it with DBMS_AUTO_TASK_ADMIN: SQL> select client_name,status from DBA_AUTOTASK_CLIENT; CLIENT_NAME STATUS ---------------------------------------------------------------- -------- auto optimizer stats collection ENABLED auto space advisor ENABLED sql tuning advisor ENABLED begin DBMS_AUTO_TASK_ADMIN.DISABLE(client_name => 'auto optimizer stats collection', operation => NULL, window_name => NULL); end; / PL/SQL procedure successfully completed. SQL> select client_name,status from DBA_AUTOTASK_CLIENT; CLIENT_NAME STATUS ---------------------------------------------------------------- -------- auto optimizer stats collection DISABLED auto space advisor ENABLED sql tuning advisor ENABLED
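
    The documented counterpart for turning the task back on is DBMS_AUTO_TASK_ADMIN.ENABLE with the same arguments:

      begin
        DBMS_AUTO_TASK_ADMIN.ENABLE(client_name => 'auto optimizer stats collection',
                                    operation   => NULL,
                                    window_name => NULL);
      end;
      /

      select client_name, status from DBA_AUTOTASK_CLIENT;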

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 25 (sys.dm_db_missing_index_details)

    - by Tamarick Hill
    The sys.dm_db_missing_index_details Dynamic Management View is used to return information about missing indexes on your SQL Server instances. These indexes are ones that the optimizer has identified as indexes it would like to use but did not have. You may also see these same indexes indicated in other tools such as query execution plans or the Database tuning advisor. Let’s execute this DMV so we can review the information it provides us. I do not have any missing index information for my AdventureWorks2012 database, but for the purposes of illustrating the result set of this DMV, I will present the results from my msdb database. SELECT * FROM sys.dm_db_missing_index_details The first column presented is the index_handle which uniquely identifies a particular missing index. The next two columns represent the database_id and the object_id for the particular table in question. Next is the ‘equality_columns’ column which gives you a list of columns (comma separated) that would be beneficial to the optimizer for equality operations. By equality operation I mean for any queries that would use a filter or join condition such as WHERE A = B. The next column, ‘inequality_columns’, gives you a comma separated list of columns that would be beneficial to the optimizer for inequality operations. An inequality operation is anything other than A = B. For example, “WHERE A != B”, “WHERE A > B”, “WHERE A < B”, and “WHERE A <> B” would all qualify as inequality. Next is the ‘included_columns’ column which list all columns that would be beneficial to the optimizer for purposes of providing a covering index and preventing key/bookmark lookups. Lastly is the ‘statement’ column which lists the name of the table where the index is missing. This DMV can help you identify potential indexes that could be added to improve the performance of your system. However, I will advise you not to just take the output of this DMV and create an index for everything you see. Everything listed here should be analyzed and then tested on a Development or Test system before implementing into a Production environment. For more information on this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms345434.aspx Follow me on Twitter @PrimeTimeDBA
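
    A common companion query (a sketch, worth sanity-checking before acting on the numbers) joins this DMV to sys.dm_db_missing_index_groups and sys.dm_db_missing_index_group_stats so that the suggestions can be ranked by estimated benefit rather than reviewed one at a time:

      SELECT d.statement,
             d.equality_columns,
             d.inequality_columns,
             d.included_columns,
             gs.user_seeks,
             gs.avg_total_user_cost,
             gs.avg_user_impact,
             gs.user_seeks * gs.avg_total_user_cost * (gs.avg_user_impact / 100.0) AS est_benefit
      FROM sys.dm_db_missing_index_details d
      JOIN sys.dm_db_missing_index_groups g ON g.index_handle = d.index_handle
      JOIN sys.dm_db_missing_index_group_stats gs ON gs.group_handle = g.index_group_handle
      ORDER BY est_benefit DESC;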

    Read the article

  • Showplan Operator of the Week - Lazy Spool

    Continuing to illuminate the depths of SQL Server's Query Optimizer, Fabiano shines a light on the sixth major Showplan Operator on his list: the Lazy Spool. What does the Lazy Spool do that's so special, how does the Query Optimizer use it, and why is it so Lazy? Fabiano explains all...

    Read the article

  • Showplan Operator of the Week - Merge Interval

    When Fabiano agreed to undertake the epic task of describing each showplan operator, none of us quite predicted the interesting ways that the series helps to understand how the query optimizer works. With the Merge Interval, Fabiano comes up with some insights about the way that the Query optimizer handles overlapping ranges efficiently.

    Read the article

  • Class member functions instantiated by traits

    - by Jive Dadson
    I am reluctant to say I can't figure this out, but I can't figure this out. I've googled and searched Stack Overflow, and come up empty. The abstract, and possibly overly vague form of the question is, how can I use the traits-pattern to instantiate non-virtual member functions? The question came up while modernizing a set of multivariate function optimizers that I wrote more than 10 years ago. The optimizers all operate by selecting a straight-line path through the parameter space away from the current best point (the "update"), then finding a better point on that line (the "line search"), then testing for the "done" condition, and if not done, iterating. There are different methods for doing the update, the line-search, and conceivably for the done test, and other things. Mix and match. Different update formulae require different state-variable data. For example, the LMQN update requires a vector, and the BFGS update requires a matrix. If evaluating gradients is cheap, the line-search should do so. If not, it should use function evaluations only. Some methods require more accurate line-searches than others. Those are just some examples. The original version instantiates several of the combinations by means of virtual functions. Some traits are selected by setting mode bits that are tested at runtime. Yuck. It would be trivial to define the traits with #define's and the member functions with #ifdef's and macros. But that's so twenty years ago. It bugs me that I cannot figure out a whiz-bang modern way. If there were only one trait that varied, I could use the curiously recurring template pattern. But I see no way to extend that to arbitrary combinations of traits. I tried doing it using boost::enable_if, etc.. The specialized state information was easy. I managed to get the functions done, but only by resorting to non-friend external functions that have the this-pointer as a parameter. I never even figured out how to make the functions friends, much less member functions. The compiler (VC++ 2008) always complained that things didn't match. I would yell, "SFINAE, you moron!" but the moron is probably me. Perhaps tag-dispatch is the key. I haven't gotten very deeply into that. Surely it's possible, right? If so, what is best practice? UPDATE: Here's another try at explaining it. I want the user to be able to fill out an order (manifest) for a custom optimizer, something like ordering off of a Chinese menu - one from column A, one from column B, etc.. Waiter, from column A (updaters), I'll have the BFGS update with Cholesky-decompositon sauce. From column B (line-searchers), I'll have the cubic interpolation line-search with an eta of 0.4 and a rho of 1e-4, please. Etc... UPDATE: Okay, okay. Here's the playing-around that I've done. I offer it reluctantly, because I suspect it's a completely wrong-headed approach. It runs okay under vc++ 2008. 
#include <boost/utility.hpp> #include <boost/type_traits/integral_constant.hpp> namespace dj { struct CBFGS { void bar() {printf("CBFGS::bar %d\n", data);} CBFGS(): data(1234){} int data; }; template<class T> struct is_CBFGS: boost::false_type{}; template<> struct is_CBFGS<CBFGS>: boost::true_type{}; struct LMQN {LMQN(): data(54.321){} void bar() {printf("LMQN::bar %lf\n", data);} double data; }; template<class T> struct is_LMQN: boost::false_type{}; template<> struct is_LMQN<LMQN> : boost::true_type{}; struct default_optimizer_traits { typedef CBFGS update_type; }; template<class traits> class Optimizer; template<class traits> void foo(typename boost::enable_if<is_LMQN<typename traits::update_type>, Optimizer<traits> >::type& self) { printf(" LMQN %lf\n", self.data); } template<class traits> void foo(typename boost::enable_if<is_CBFGS<typename traits::update_type>, Optimizer<traits> >::type& self) { printf("CBFGS %d\n", self.data); } template<class traits = default_optimizer_traits> class Optimizer{ friend typename traits::update_type; //friend void dj::foo<traits>(typename Optimizer<traits> & self); // How? public: //void foo(void); // How??? void foo() { dj::foo<traits>(*this); } void bar() { data.bar(); } //protected: // How? typedef typename traits::update_type update_type; update_type data; }; } // namespace dj int main_() { dj::Optimizer<> opt; opt.foo(); opt.bar(); std::getchar(); return 0; }

    Read the article

  • Animation effect in C#

    - by Optimizer
    Can someone point me to a C# open source implementation of simple image animations? E.g. I feed the input image to the animator, and the animation code produces a few dozen images which, if displayed sequentially, look like an animation. I am not after something extremely fancy - a simple DirectX-filter-like animation would do. Thank you.

    Read the article

  • How to set _optimizer_search_limit and _optimizer_max_permutations in Oracle 10g

    - by user52856
    I am working on a product that must support both MSSQL and Oracle (10g and 11g). I have some very complex queries that seem to run without issue on MSSQL 2005/2008, but run very, very slowly on Oracle. The CPU on the Oracle server skyrockets for long periods of time, and it seems like the optimizer may be trying to find the best execution plan for the very complex query. I did some Googling to figure out how to limit the amount of time the optimizer spends on this, and came up with _optimizer_search_limit and _optimizer_max_permutations. Both of these parameters are hidden in Oracle 10g, and setting them in init.ora doesn't seem to make any difference. How do I set these parameters in Oracle? Or am I just totally barking up the wrong tree with the assumption that the optimizer is spending several minutes finding an execution plan? Thanks.
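
    For the record, hidden (underscore) parameters can generally be set with ALTER SESSION or ALTER SYSTEM provided the parameter name is double-quoted; the values below are placeholders, and this kind of change is best made only under Oracle Support's guidance. Note also that if the instance starts from an spfile, edits to init.ora are never read, which may be why they appeared to do nothing:

      ALTER SESSION SET "_optimizer_search_limit" = 5;
      ALTER SESSION SET "_optimizer_max_permutations" = 2000;

      -- or instance-wide, when an spfile is in use:
      ALTER SYSTEM SET "_optimizer_search_limit" = 5 SCOPE = SPFILE;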

    Read the article

  • Heaps of Trouble?

    - by Paul White NZ
    If you’re not already a regular reader of Brad Schulz’s blog, you’re missing out on some great material.  In his latest entry, he is tasked with optimizing a query run against tables that have no indexes at all.  The problem is, predictably, that performance is not very good.  The catch is that we are not allowed to create any indexes (or even new statistics) as part of our optimization efforts. In this post, I’m going to look at the problem from a slightly different angle, and present an alternative solution to the one Brad found.  Inevitably, there’s going to be some overlap between our entries, and while you don’t necessarily need to read Brad’s post before this one, I do strongly recommend that you read it at some stage; he covers some important points that I won’t cover again here. The Example We’ll use data from the AdventureWorks database, copied to temporary unindexed tables.  A script to create these structures is shown below: CREATE TABLE #Custs ( CustomerID INTEGER NOT NULL, TerritoryID INTEGER NULL, CustomerType NCHAR(1) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, ); GO CREATE TABLE #Prods ( ProductMainID INTEGER NOT NULL, ProductSubID INTEGER NOT NULL, ProductSubSubID INTEGER NOT NULL, Name NVARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, ); GO CREATE TABLE #OrdHeader ( SalesOrderID INTEGER NOT NULL, OrderDate DATETIME NOT NULL, SalesOrderNumber NVARCHAR(25) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, CustomerID INTEGER NOT NULL, ); GO CREATE TABLE #OrdDetail ( SalesOrderID INTEGER NOT NULL, OrderQty SMALLINT NOT NULL, LineTotal NUMERIC(38,6) NOT NULL, ProductMainID INTEGER NOT NULL, ProductSubID INTEGER NOT NULL, ProductSubSubID INTEGER NOT NULL, ); GO INSERT #Custs ( CustomerID, TerritoryID, CustomerType ) SELECT C.CustomerID, C.TerritoryID, C.CustomerType FROM AdventureWorks.Sales.Customer C WITH (TABLOCK); GO INSERT #Prods ( ProductMainID, ProductSubID, ProductSubSubID, Name ) SELECT P.ProductID, P.ProductID, P.ProductID, P.Name FROM AdventureWorks.Production.Product P WITH (TABLOCK); GO INSERT #OrdHeader ( SalesOrderID, OrderDate, SalesOrderNumber, CustomerID ) SELECT H.SalesOrderID, H.OrderDate, H.SalesOrderNumber, H.CustomerID FROM AdventureWorks.Sales.SalesOrderHeader H WITH (TABLOCK); GO INSERT #OrdDetail ( SalesOrderID, OrderQty, LineTotal, ProductMainID, ProductSubID, ProductSubSubID ) SELECT D.SalesOrderID, D.OrderQty, D.LineTotal, D.ProductID, D.ProductID, D.ProductID FROM AdventureWorks.Sales.SalesOrderDetail D WITH (TABLOCK); The query itself is a simple join of the four tables: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #Prods P JOIN #OrdDetail D ON P.ProductMainID = D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID JOIN #OrdHeader H ON D.SalesOrderID = H.SalesOrderID JOIN #Custs C ON H.CustomerID = C.CustomerID ORDER BY P.ProductMainID ASC OPTION (RECOMPILE, MAXDOP 1); Remember that these tables have no indexes at all, and only the single-column sampled statistics SQL Server automatically creates (assuming default settings).  The estimated query plan produced for the test query looks like this (click to enlarge): The Problem The problem here is one of cardinality estimation – the number of rows SQL Server expects to find at each step of the plan.  The lack of indexes and useful statistical information means that SQL Server does not have the information it needs to make a good estimate.  
Every join in the plan shown above estimates that it will produce just a single row as output.  Brad covers the factors that lead to the low estimates in his post. In reality, the join between the #Prods and #OrdDetail tables will produce 121,317 rows.  It should not surprise you that this has rather dire consequences for the remainder of the query plan.  In particular, it makes a nonsense of the optimizer’s decision to use Nested Loops to join to the two remaining tables.  Instead of scanning the #OrdHeader and #Custs tables once (as it expected), it has to perform 121,317 full scans of each.  The query takes somewhere in the region of twenty minutes to run to completion on my development machine. A Solution At this point, you may be thinking the same thing I was: if we really are stuck with no indexes, the best we can do is to use hash joins everywhere. We can force the exclusive use of hash joins in several ways, the two most common being join and query hints.  A join hint means writing the query using the INNER HASH JOIN syntax; using a query hint involves adding OPTION (HASH JOIN) at the bottom of the query.  The difference is that using join hints also forces the order of the join, whereas the query hint gives the optimizer freedom to reorder the joins at its discretion. Adding the OPTION (HASH JOIN) hint results in this estimated plan: That produces the correct output in around seven seconds, which is quite an improvement!  As a purely practical matter, and given the rigid rules of the environment we find ourselves in, we might leave things there.  (We can improve the hashing solution a bit – I’ll come back to that later on). Faster Nested Loops It might surprise you to hear that we can beat the performance of the hash join solution shown above using nested loops joins exclusively, and without breaking the rules we have been set. The key to this part is to realize that a condition like (A = B) can be expressed as (A <= B) AND (A >= B).  Armed with this tremendous new insight, we can rewrite the join predicates like so: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #OrdDetail D JOIN #OrdHeader H ON D.SalesOrderID >= H.SalesOrderID AND D.SalesOrderID <= H.SalesOrderID JOIN #Custs C ON H.CustomerID >= C.CustomerID AND H.CustomerID <= C.CustomerID JOIN #Prods P ON P.ProductMainID >= D.ProductMainID AND P.ProductMainID <= D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID ORDER BY D.ProductMainID OPTION (RECOMPILE, LOOP JOIN, MAXDOP 1, FORCE ORDER); I’ve also added LOOP JOIN and FORCE ORDER query hints to ensure that only nested loops joins are used, and that the tables are joined in the order they appear.  The new estimated execution plan is: This new query runs in under 2 seconds. Why Is It Faster? The main reason for the improvement is the appearance of the eager Index Spools, which are also known as index-on-the-fly spools.  If you read my Inside The Optimiser series you might be interested to know that the rule responsible is called JoinToIndexOnTheFly. An eager index spool consumes all rows from the table it sits above, and builds a index suitable for the join to seek on.  Taking the index spool above the #Custs table as an example, it reads all the CustomerID and TerritoryID values with a single scan of the table, and builds an index keyed on CustomerID.  The term ‘eager’ means that the spool consumes all of its input rows when it starts up.  
The index is built in a work table in tempdb, has no associated statistics, and only exists until the query finishes executing. The result is that each unindexed table is only scanned once, and just for the columns necessary to build the temporary index.  From that point on, every execution of the inner side of the join is answered by a seek on the temporary index – not the base table. A second optimization is that the sort on ProductMainID (required by the ORDER BY clause) is performed early, on just the rows coming from the #OrdDetail table.  The optimizer has a good estimate for the number of rows it needs to sort at that stage – it is just the cardinality of the table itself.  The accuracy of the estimate there is important because it helps determine the memory grant given to the sort operation.  Nested loops join preserves the order of rows on its outer input, so sorting early is safe.  (Hash joins do not preserve order in this way, of course). The extra lazy spool on the #Prods branch is a further optimization that avoids executing the seek on the temporary index if the value being joined (the ‘outer reference’) hasn’t changed from the last row received on the outer input.  It takes advantage of the fact that rows are still sorted on ProductMainID, so if duplicates exist, they will arrive at the join operator one after the other. The optimizer is quite conservative about introducing index spools into a plan, because creating and dropping a temporary index is a relatively expensive operation.  It’s presence in a plan is often an indication that a useful index is missing. I want to stress that I rewrote the query in this way primarily as an educational exercise – I can’t imagine having to do something so horrible to a production system. Improving the Hash Join I promised I would return to the solution that uses hash joins.  You might be puzzled that SQL Server can create three new indexes (and perform all those nested loops iterations) faster than it can perform three hash joins.  The answer, again, is down to the poor information available to the optimizer.  Let’s look at the hash join plan again: Two of the hash joins have single-row estimates on their build inputs.  SQL Server fixes the amount of memory available for the hash table based on this cardinality estimate, so at run time the hash join very quickly runs out of memory. This results in the join spilling hash buckets to disk, and any rows from the probe input that hash to the spilled buckets also get written to disk.  The join process then continues, and may again run out of memory.  This is a recursive process, which may eventually result in SQL Server resorting to a bailout join algorithm, which is guaranteed to complete eventually, but may be very slow.  The data sizes in the example tables are not large enough to force a hash bailout, but it does result in multiple levels of hash recursion.  You can see this for yourself by tracing the Hash Warning event using the Profiler tool. The final sort in the plan also suffers from a similar problem: it receives very little memory and has to perform multiple sort passes, saving intermediate runs to disk (the Sort Warnings Profiler event can be used to confirm this).  Notice also that because hash joins don’t preserve sort order, the sort cannot be pushed down the plan toward the #OrdDetail table, as in the nested loops plan. Ok, so now we understand the problems, what can we do to fix it?  
We can address the hash spilling by forcing a different order for the joins: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #Prods P JOIN #Custs C JOIN #OrdHeader H ON H.CustomerID = C.CustomerID JOIN #OrdDetail D ON D.SalesOrderID = H.SalesOrderID ON P.ProductMainID = D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID ORDER BY D.ProductMainID OPTION (MAXDOP 1, HASH JOIN, FORCE ORDER); With this plan, each of the inputs to the hash joins has a good estimate, and no hash recursion occurs.  The final sort still suffers from the one-row estimate problem, and we get a single-pass sort warning as it writes rows to disk.  Even so, the query runs to completion in three or four seconds.  That’s around half the time of the previous hashing solution, but still not as fast as the nested loops trickery. Final Thoughts SQL Server’s optimizer makes cost-based decisions, so it is vital to provide it with accurate information.  We can’t really blame the performance problems highlighted here on anything other than the decision to use completely unindexed tables, and not to allow the creation of additional statistics. I should probably stress that the nested loops solution shown above is not one I would normally contemplate in the real world.  It’s there primarily for its educational and entertainment value.  I might perhaps use it to demonstrate to the sceptical that SQL Server itself is crying out for an index. Be sure to read Brad’s original post for more details.  My grateful thanks to him for granting permission to reuse some of his material. Paul White Email: [email protected] Twitter: @PaulWhiteNZ

    Read the article

  • New Whitepaper: Best Practices for Gathering EBS Database Statistics

    - by Elke Phelps (Oracle Development)
    Most Oracle Applications DBAs and E-Business Suite users understand the importance of accurate database statistics.  Missing, stale or skewed statistics can adversely affect performance.  Oracle E-Business Suite statistics should only be gathered using FND_STATS or the Gather Statistics concurrent request. Gathering statistics with DBMS_STATS or the desupported ANALYZE command may result in suboptimal executions plans for E-Business Suite. Our E-Business Suite Performance Team has been busy implementing and testing new features for gathering statistics using FND_STATS in Oracle E-Business Suite databases.  The new features and guidelines for when and how to gather statistics are published in the following whitepaper: Best Practices for Gathering Statistics with Oracle E-Business Suite (Note 1586374.1) The new white paper details the following options for gathering statistics using FND_STATS and the Gather Statistics concurrent request:: History Mode - backup existing statistics prior to gather new statistics GATHER_AUTO Option - gather statistics for tables based upon % change Histograms - collect statistics for histograms AUTO Sampling - use the new FND_STATS feature that supports the AUTO option for using AUTO sample size Extended Statistics - use the new FND_STATS feature that supports the creation of column groups and automatic statistics collection on the column groups when table statistics are gathered Incremental Statistics - gather incremental statistics for partitioned tables The new white paper also includes examples and performance test cases for the following: Extended Optimizer Statistics Incremental Statistics Gathering Concurrent Statistics Gathering This white paper includes details about the standalone Oracle E-Business Suite Release 11i and 12 patches that are required to take advantage of this new functionality. Your feedback is welcome We would be very interested in hearing about your experiences with these new options for gathering statistics.  Please feel free to post your comments here or drop us a line privately.Related Oracle OpenWorld 2013 Session Getting Optimal Performance from Oracle E-Business Suite (CON8485) Related My Oracle Support Notes Collecting Statistics with Oracle EBS 11i and R12 (Note 368252.1) Non-EBS Related Blogs, White Papers and My Oracle Support Notes  Oracle Optimizer Blog Understanding Optimizer Statistic (white paper) Fixed Objects Statistics(GATHER_FIXED_OBJECTS_STATS) Considerations (Note 798257.1)
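
    As a reminder of what the supported interface looks like, E-Business Suite statistics are gathered through the Gather Statistics concurrent request or the FND_STATS package; a minimal sketch (the schema and table names are placeholders, and the optional arguments such as percent and degree are left at their defaults):

      begin
        FND_STATS.GATHER_SCHEMA_STATS('AR');
      end;
      /

      begin
        FND_STATS.GATHER_TABLE_STATS('AR', 'RA_CUSTOMER_TRX_ALL');
      end;
      /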

    Read the article

  • When row estimation goes wrong

    - by Dave Ballantyne
    Whilst working at a client site, I hit upon one of those issues where you are not sure if this is something entirely new, a bug, or a gap in your knowledge. The client had a large query that needed optimizing.  The query itself looked pretty good: no udfs, UNION ALL was used rather than UNION, and most of the predicates were sargable other than one or two minor ones.  There were a few extra joins that could be eradicated, and having fixed up the query I then started to dive into the plan. I could see all manner of spills in the hash joins and the sort operations; these are caused when SQL Server has not reserved enough memory and has to write to tempdb.  A VERY expensive operation that is generally avoidable.  These, however, are a symptom of a bad row estimation somewhere else, and when that bad estimation is combined with other estimation errors, chaos can ensue. Working my way back down the plan, I found the cause, and the more I thought about it the more I became convinced that the optimizer could be making a much more intelligent choice. The first step is to reproduce, and I was able to simplify the query down to a single join between two tables, Product and ProductStatus, which from a business point of view is quite fundamental: find the status of particular products to show if ‘active’, ’inactive’ or whatever. The query itself couldn’t be any simpler. The estimated plan looked like this: Ignore the “!” warning, which is a missing index, but notice that Products has 27,984 rows and the join outputs 14,000. The actual plan shows how bad that estimation of 14,000 is: So every row in Products has a corresponding row in ProductStatus.  This is unsurprising; in fact it is guaranteed, as there is a trusted FK relationship between the two columns.  There is no way that the actual output of the join can be different from the input. The optimizer is already partly aware of the foreign key metadata, and that can be seen in the simplification stage. If we drop the Description column from the query: the join to ProductStatus is optimized out. It serves no purpose to the query, there is no data required from the table, and the optimizer knows that the FK will guarantee that a matching row will exist, so it has been removed. Surely the same should be applied to the row estimations in the initial example, right?  If you think so, please upvote this connect item. So what are our options in fixing this error? Simply changing the join to a left join will cause the optimizer to think that we could allow the rows not to exist, or a subselect would also work. However, this is a client site; I'm not able to change each and every query where this join takes place, but there is a more global switch that will fix this error: TraceFlag 2301. This is described as, perhaps loosely, “Enable advanced decision support optimizations”. We can test this on the original query in isolation by using the “QueryTraceOn” option, and lo and behold our estimated plan now has the ‘correct’ estimation. Many thanks go to Paul White (b|t) for his help and keeping me sane through this
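
    For reference, the per-statement form of the test looks like the sketch below. The table and column names are invented for illustration (the article does not give the schema), QUERYTRACEON needs elevated rights, and DBCC TRACEON (2301, -1) enables the flag instance-wide, which is how a fix like this would normally be rolled out after testing:

      SELECT p.ProductID, ps.Description
      FROM dbo.Product p
      JOIN dbo.ProductStatus ps ON ps.StatusID = p.StatusID
      OPTION (QUERYTRACEON 2301);

      -- instance-wide:
      DBCC TRACEON (2301, -1);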

    Read the article

  • Ant build from Android-generated build file fails - how to fix?

    - by Eno
    Building our Android app from Ant fails with this error: [apply] [apply] UNEXPECTED TOP-LEVEL ERROR: [apply] java.lang.OutOfMemoryError: Java heap space [apply] at java.util.HashMap.<init>(HashMap.java:209) [apply] at java.util.HashSet.<init>(HashSet.java:86) [apply] at com.android.dx.ssa.Dominators.compress(Dominators.java:96) [apply] at com.android.dx.ssa.Dominators.eval(Dominators.java:132) [apply] at com.android.dx.ssa.Dominators.run(Dominators.java:213) [apply] at com.android.dx.ssa.DomFront.run(DomFront.java:84) [apply] at com.android.dx.ssa.SsaConverter.placePhiFunctions(SsaConverter.java:265) [apply] at com.android.dx.ssa.SsaConverter.convertToSsaMethod(SsaConverter.java:51) [apply] at com.android.dx.ssa.Optimizer.optimize(Optimizer.java:100) [apply] at com.android.dx.ssa.Optimizer.optimize(Optimizer.java:74) [apply] at com.android.dx.dex.cf.CfTranslator.processMethods(CfTranslator.java:269) [apply] at com.android.dx.dex.cf.CfTranslator.translate0(CfTranslator.java:131) [apply] at com.android.dx.dex.cf.CfTranslator.translate(CfTranslator.java:85) [apply] at com.android.dx.command.dexer.Main.processClass(Main.java:297) [apply] at com.android.dx.command.dexer.Main.processFileBytes(Main.java:276) [apply] at com.android.dx.command.dexer.Main.access$100(Main.java:56) [apply] at com.android.dx.command.dexer.Main$1.processFileBytes(Main.java:228) [apply] at com.android.dx.cf.direct.ClassPathOpener.processArchive(ClassPathOpener.java:245) [apply] at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:130) [apply] at com.android.dx.cf.direct.ClassPathOpener.process(ClassPathOpener.java:108) [apply] at com.android.dx.command.dexer.Main.processOne(Main.java:245) [apply] at com.android.dx.command.dexer.Main.processAllFiles(Main.java:183) [apply] at com.android.dx.command.dexer.Main.run(Main.java:139) [apply] at com.android.dx.command.dexer.Main.main(Main.java:120) [apply] at com.android.dx.command.Main.main(Main.java:87) BUILD FAILED Ive tried giving Ant more memory by setting ANT_OPTS="-Xms256m -Xmx512m". (This build machine has 1Gb RAM). Do I just need more memory or is there anything else I can try?

    Read the article

  • How to fix "OutOfMemoryError: java heap space" while compiling MonoDroid App in MonoDevelop

    - by Rodja
    When I try to compile one of my projects, I recently get the following error: Tool /usr/bin/java execution started with arguments: -jar /Applications/android-sdk-mac_x86/platform-tools/lib/dx.jar --no-strict --dex --output=obj/Debug/android/bin/classes.dex obj/Debug/android/bin/classes /Developer/MonoAndroid/usr/lib/mandroid/platforms/android-8/mono.android.jar FlurryAnalytics/Jars/FlurryAgent.jar Jars/android-support-v4.jar UNEXPECTED TOP-LEVEL ERROR: java.lang.OutOfMemoryError: Java heap space at com.android.dx.rop.code.RegisterSpecSet.<init>(RegisterSpecSet.java:49) at com.android.dx.rop.code.RegisterSpecSet.mutableCopy(RegisterSpecSet.java:383) at com.android.dx.ssa.LocalVariableInfo.mutableCopyOfStarts(LocalVariableInfo.java:169) at com.android.dx.ssa.LocalVariableExtractor.processBlock(LocalVariableExtractor.java:104) at com.android.dx.ssa.LocalVariableExtractor.doit(LocalVariableExtractor.java:90) at com.android.dx.ssa.LocalVariableExtractor.extract(LocalVariableExtractor.java:56) at com.android.dx.ssa.SsaConverter.convertToSsaMethod(SsaConverter.java:50) at com.android.dx.ssa.Optimizer.optimize(Optimizer.java:99) at com.android.dx.ssa.Optimizer.optimize(Optimizer.java:73) at com.android.dx.dex.cf.CfTranslator.processMethods(CfTranslator.java:273) at com.android.dx.dex.cf.CfTranslator.translate0(CfTranslator.java:134) at com.android.dx.dex.cf.CfTranslator.translate(CfTranslator.java:87) at com.android.dx.command.dexer.Main.processClass(Main.java:487) at com.android.dx.command.dexer.Main.processFileBytes(Main.java:459) at com.android.dx.command.dexer.Main.access$400(Main.java:67) at com.android.dx.command.dexer.Main$1.processFileBytes(Main.java:398) at com.android.dx.cf.direct.ClassPathOpener.processArchive(ClassPathOpener.java:245) at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:131) at com.android.dx.cf.direct.ClassPathOpener.process(ClassPathOpener.java:109) at com.android.dx.command.dexer.Main.processOne(Main.java:422) at com.android.dx.command.dexer.Main.processAllFiles(Main.java:333) at com.android.dx.command.dexer.Main.run(Main.java:209) at com.android.dx.command.dexer.Main.main(Main.java:174) at com.android.dx.command.Main.main(Main.java:91) Other projects build as expected. I think I need to increase the heap size for this java build step? But how?

    Read the article

  • Class member functions instantiated by traits [policies, actually]

    - by Jive Dadson
    I am reluctant to say I can't figure this out, but I can't figure this out. I've googled and searched Stack Overflow, and come up empty. The abstract, and possibly overly vague form of the question is, how can I use the traits-pattern to instantiate member functions? [Update: I used the wrong term here. It should be "policies" rather than "traits." Traits describe existing classes. Policies prescribe synthetic classes.] The question came up while modernizing a set of multivariate function optimizers that I wrote more than 10 years ago. The optimizers all operate by selecting a straight-line path through the parameter space away from the current best point (the "update"), then finding a better point on that line (the "line search"), then testing for the "done" condition, and if not done, iterating. There are different methods for doing the update, the line-search, and conceivably for the done test, and other things. Mix and match. Different update formulae require different state-variable data. For example, the LMQN update requires a vector, and the BFGS update requires a matrix. If evaluating gradients is cheap, the line-search should do so. If not, it should use function evaluations only. Some methods require more accurate line-searches than others. Those are just some examples. The original version instantiates several of the combinations by means of virtual functions. Some traits are selected by setting mode bits that are tested at runtime. Yuck. It would be trivial to define the traits with #define's and the member functions with #ifdef's and macros. But that's so twenty years ago. It bugs me that I cannot figure out a whiz-bang modern way. If there were only one trait that varied, I could use the curiously recurring template pattern. But I see no way to extend that to arbitrary combinations of traits. I tried doing it using boost::enable_if, etc.. The specialized state information was easy. I managed to get the functions done, but only by resorting to non-friend external functions that have the this-pointer as a parameter. I never even figured out how to make the functions friends, much less member functions. The compiler (VC++ 2008) always complained that things didn't match. I would yell, "SFINAE, you moron!" but the moron is probably me. Perhaps tag-dispatch is the key. I haven't gotten very deeply into that. Surely it's possible, right? If so, what is best practice? UPDATE: Here's another try at explaining it. I want the user to be able to fill out an order (manifest) for a custom optimizer, something like ordering off of a Chinese menu - one from column A, one from column B, etc.. Waiter, from column A (updaters), I'll have the BFGS update with Cholesky-decompositon sauce. From column B (line-searchers), I'll have the cubic interpolation line-search with an eta of 0.4 and a rho of 1e-4, please. Etc... UPDATE: Okay, okay. Here's the playing-around that I've done. I offer it reluctantly, because I suspect it's a completely wrong-headed approach. It runs okay under vc++ 2008. 
#include <boost/utility.hpp> #include <boost/type_traits/integral_constant.hpp> namespace dj { struct CBFGS { void bar() {printf("CBFGS::bar %d\n", data);} CBFGS(): data(1234){} int data; }; template<class T> struct is_CBFGS: boost::false_type{}; template<> struct is_CBFGS<CBFGS>: boost::true_type{}; struct LMQN {LMQN(): data(54.321){} void bar() {printf("LMQN::bar %lf\n", data);} double data; }; template<class T> struct is_LMQN: boost::false_type{}; template<> struct is_LMQN<LMQN> : boost::true_type{}; // "Order form" struct default_optimizer_traits { typedef CBFGS update_type; // Selection from column A - updaters }; template<class traits> class Optimizer; template<class traits> void foo(typename boost::enable_if<is_LMQN<typename traits::update_type>, Optimizer<traits> >::type& self) { printf(" LMQN %lf\n", self.data); } template<class traits> void foo(typename boost::enable_if<is_CBFGS<typename traits::update_type>, Optimizer<traits> >::type& self) { printf("CBFGS %d\n", self.data); } template<class traits = default_optimizer_traits> class Optimizer{ friend typename traits::update_type; //friend void dj::foo<traits>(typename Optimizer<traits> & self); // How? public: //void foo(void); // How??? void foo() { dj::foo<traits>(*this); } void bar() { data.bar(); } //protected: // How? typedef typename traits::update_type update_type; update_type data; }; } // namespace dj int main() { dj::Optimizer<> opt; opt.foo(); opt.bar(); std::getchar(); return 0; }

    Read the article
