Search Results

Search found 15364 results on 615 pages for 'static assert'.

  • issue with std::advance on std::sets

    - by tim
    I've stumbled upon what I believe is a bug in the stl algorithm advance. When I'm advancing the iterator off of the end of the container, I get inconsistent results. Sometimes I get container.end(), sometimes I get the last element. I've illustrated this with the following code: #include <algorithm> #include <cstdio> #include <set> using namespace std; typedef set<int> tMap; int main(int argc, char** argv) { tMap::iterator i; tMap the_map; for (int i=0; i<10; i++) the_map.insert(i); #if EXPERIMENT==1 i = the_map.begin(); #elif EXPERIMENT==2 i = the_map.find(4); #elif EXPERIMENT==3 i = the_map.find(5); #elif EXPERIMENT==4 i = the_map.find(6); #elif EXPERIMENT==5 i = the_map.find(9); #elif EXPERIMENT==6 i = the_map.find(2000); #else i = the_map.end(); #endif advance(i, 100); if (i == the_map.end()) printf("the end\n"); else printf("wuh? %d\n", *i); return 0; } Which I get the following unexpected (according to me) behavior in experiment 3 and 5 where I get the last element instead of the_map.end(). [tim@saturn advance]$ uname -srvmpio Linux 2.6.18-1.2798.fc6 #1 SMP Mon Oct 16 14:37:32 EDT 2006 i686 athlon i386 GNU/Linux [tim@saturn advance]$ g++ --version g++ (GCC) 4.1.1 20061011 (Red Hat 4.1.1-30) Copyright (C) 2006 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. [tim@saturn advance]$ g++ -DEXPERIMENT=1 advance.cc [tim@saturn advance]$ ./a.out the end [tim@saturn advance]$ g++ -DEXPERIMENT=2 advance.cc [tim@saturn advance]$ ./a.out the end [tim@saturn advance]$ g++ -DEXPERIMENT=3 advance.cc [tim@saturn advance]$ ./a.out wuh? 9 [tim@saturn advance]$ g++ -DEXPERIMENT=4 advance.cc [tim@saturn advance]$ ./a.out the end [tim@saturn advance]$ g++ -DEXPERIMENT=5 advance.cc [tim@saturn advance]$ ./a.out wuh? 9 [tim@saturn advance]$ g++ -DEXPERIMENT=6 advance.cc [tim@saturn advance]$ ./a.out the end [tim@saturn advance]$ g++ -DEXPERIMENT=7 advance.cc [tim@saturn advance]$ ./a.out the end [tim@saturn advance]$ From the sgi website (see link at top), it has the following example: list<int> L; L.push_back(0); L.push_back(1); list<int>::iterator i = L.begin(); advance(i, 2); assert(i == L.end()); I would think that the assertion should apply to other container types, no? What am I missing? Thanks!
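    A note on the behavior above: for a set's bidirectional iterators, std::advance simply increments the iterator n times, and incrementing an iterator past end() is undefined behavior, so "sometimes end(), sometimes the last element" is exactly what undefined behavior looks like. (The SGI list example only asserts successfully because advancing by 2 from begin() lands exactly on end().) A minimal sketch of a bounded advance that cannot walk off the container, written for a C++98-era toolchain like the g++ 4.1 shown above:

        // Advance 'it' by at most 'n' steps, stopping cleanly at 'last'.
        template <typename ForwardIt, typename Distance>
        void safe_advance(ForwardIt& it, ForwardIt last, Distance n)
        {
            while (n-- > 0 && it != last)
                ++it;
        }

    With safe_advance(i, the_map.end(), 100), the comparison i == the_map.end() then holds reliably in every experiment.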

  • Grouped table view cells not loading.

    - by Tejaswi Yerukalapudi
    Hi, I'm working on creating a grouped table view. The data is being loaded all right, but in the grouped view there are a lot of white empty spaces; they only get populated after I scroll up and down a few times. Help? Here's my tableView:cellForRowAtIndexPath: method:

        static NSString *Id = @"CustomDiagChargeID";
        CustomCellDiagCharges *cell = (CustomCellDiagCharges *)[tableView dequeueReusableCellWithIdentifier:Id];
        if (cell == nil) {
            NSArray *nib = [[NSBundle mainBundle] loadNibNamed:@"CustomCellDiagCharges" owner:self options:nil];
            for (id oneObject in nib) {
                if ([oneObject isKindOfClass:[CustomCellDiagCharges class]])
                    cell = (CustomCellDiagCharges *)oneObject;
            }
        }
        NSUInteger row = [indexPath row];
        DiagDetails *rowData = [preferences getDiagElementAt:indexPath.section row:row];
        cell.code.text = rowData.ICD9Code;
        cell.desc.text = rowData.ICD9Desc;
        return cell;

    Thanks, Teja.

  • MVC2 hosted on IIS6: The incoming request does not match any route.

    - by Sefer KILIÇ
    I have to host my project on IIS6 and I cannot change the IIS settings on the server, so I modified Global.asax as below. But when I browse the project I get the error "The incoming request does not match any route." Any ideas? Thanks.

        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
            routes.MapRoute("Default", // Route name
                "{controller}.aspx/{action}/{id}",
                new { controller = "Home", action = "Index", id = "" } // Parameter defaults
            );
            routes.MapRoute("Detail", // Route name
                "{controller}.aspx/{action}/{id}/{sid}",
                new { controller = "Home", action = "Index", id = "", sid = "" } // Parameter defaults
            );
            routes.MapRoute("ForGoogle", // Route name
                "{controller}.aspx/{action}/{friendlyUrl}/{id}/{partialName}",
                new { controller = "Home", action = "Index", friendlyUrl = "", id = "", partialName = "" } // Parameter defaults
            );
            routes.MapRoute("PostFeed",
                "Feed/{type}",
                new { controller = "Product", action = "PostFeed", type = "rss" }
            );
        }
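    One likely cause worth checking (an assumption, since the failing URL isn't shown): with ".aspx"-style patterns like these, a request for the site root ("/") matches none of the registered routes, which produces exactly this error. A minimal sketch of a root route, registered before the others:

        // Hypothetical fix: let "/" reach HomeController.Index explicitly.
        routes.MapRoute(
            "Root",   // Route name
            "",       // An empty URL pattern matches the application root
            new { controller = "Home", action = "Index" }
        );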

  • Play! - Expecting a stack map frame in method controllers.Secure$Security.authentify

    - by Benny
    I am using the Secure module for my Play! application and had it working at one point, but something I did made it stop working. I am getting the following error:

        Execution exception
        VerifyError occurred: Expecting a stack map frame in method
        controllers.Secure$Security.authentify(Ljava/lang/String;Ljava/lang/String;)Z at offset 33
        In {module:secure}/app/controllers/Secure.java (around line 61)

    I saw the post below, and even though I am using Java 7, it looks like Play! works OK with 7 now. I am using Play 1.2.4.

    VerifyError; Expecting a stack map frame in method controllers.Secure$Security.authentify

    Here is my Security controller:

        package controllers;

        import models.*;

        public class Security extends Secure.Security {
            public static boolean authenticate(String username, String password) {
                User user = User.find("byEmail", username).first();
                return user != null && user.password.equals(password);
            }
        }
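    A hedged workaround for this class of error: the Java 7 verifier requires stack map frames, and bytecode rewritten by Play 1.x's enhancers can fail that check. The commonly cited fix is to disable the split verifier, assuming your Play 1.2.4 setup lets you pass extra JVM flags (for example via jvm.memory in conf/application.conf):

        # The flag itself is real; whether your installation needs it is an assumption.
        jvm.memory=-XX:-UseSplitVerifier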

  • Why does an exception occur at LoadTypeLibEx? System.ArgumentException: Value does not fall within the expected range

    - by Usman
    Hello, I am loading a type library in C++/CLI. In C# it loads successfully, but in managed C++/CLI it keeps throwing the following exception:

        exception occurred at LoadTypeLibEx
        System.ArgumentException: Value does not fall within the expected range
        at LoadTypeLib(String strTypeLibName, ITypeLib typeLib)

    Here's the P/Invoke signature:

        [DllImport("oleaut32.dll", CharSet = CharSet::Unicode, PreserveSig = false)]
        static void LoadTypeLib(String^ strTypeLibName,
            [MarshalAs(UnmanagedType::Interface)] [Out] System::Runtime::InteropServices::ComTypes::ITypeLib^ typeLib);

    and the call:

        ITypeLib^ oTypeLib;
        LoadTypeLib(TLB, oTypeLib);

    I am stuck here; kindly show me a way around to get rid of this exception. Regards, Usman
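    One concrete problem with the signature as posted (hedged, since it may not be the only issue): in C++/CLI, an [Out] managed parameter must be declared as a tracking reference (%), otherwise the marshaler has no way to write the interface pointer back into the caller's variable. A sketch of the corrected declaration and call:

        [DllImport("oleaut32.dll", CharSet = CharSet::Unicode, PreserveSig = false)]
        static void LoadTypeLib(String^ strTypeLibName,
            [MarshalAs(UnmanagedType::Interface)] [Out]
            System::Runtime::InteropServices::ComTypes::ITypeLib^% typeLib); // note the '%'

        // ...
        System::Runtime::InteropServices::ComTypes::ITypeLib^ oTypeLib;
        LoadTypeLib(TLB, oTypeLib); // with the tracking reference, oTypeLib receives the loaded library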

  • Sharepoint Web Part OnPreRender PostBack Dynamic TextBoxes Not Accessible In Button Handler

    - by Matt
    I have a custom web part which extends System.Web.UI.WebControls.WebPart and implements an EditorPart. I add all the static controls in the overridden CreateChildControls method, as I know this makes them persistent across PostBacks. However, I add some TextBoxes in the overridden OnPreRender method, because which TextBoxes I add depends on the data returned from a Web Service that I call in OnPreRender. The Web Service has to be called in OnPreRender because I need some of the property values that are set in the EditorPart. If I build this logic into CreateChildControls, the data is not available on the first PostBack after an edit is applied, because the PostBack events are restored after CreateChildControls. This means the page has to be posted twice before the data is refreshed, and that is just kludgy. How can I make these TextBoxes, along with their entered text values, persist across PostBacks so that I can access them in the button handler? Thanks for any help, Matt
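    One common pattern for this (a sketch under assumptions, not the only answer): cache whatever the Web Service returns in ViewState, and have CreateChildControls rebuild the same TextBoxes from that cache with stable IDs, so ASP.NET restores their posted values automatically. CallEditorPartService below is a hypothetical stand-in for the real Web Service call.

        protected override void CreateChildControls()
        {
            base.CreateChildControls();
            // ... add the static controls as before ...
            string[] names = (string[])ViewState["DynamicFieldNames"];
            if (names != null)
                foreach (string name in names)
                    Controls.Add(new TextBox { ID = "txt" + name }); // same IDs => postback data flows back in
        }

        protected override void OnPreRender(EventArgs e)
        {
            base.OnPreRender(e);
            if (ViewState["DynamicFieldNames"] == null)
            {
                string[] names = CallEditorPartService(); // hypothetical helper wrapping the Web Service
                ViewState["DynamicFieldNames"] = names;
                foreach (string name in names)
                    Controls.Add(new TextBox { ID = "txt" + name }); // first render only
            }
        }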

  • System.Reflection - Global methods aren't available for reflection

    - by mrjoltcola
    I have an issue with a semantic gap between the CLR and System.Reflection. System.Reflection does not (AFAIK) support reflecting on global methods in an assembly. At the assembly level, I must start with the root types. My compiler can produce assemblies with global methods, and my standard bootstrap lib is a dll that includes some global methods. My compiler uses System.Reflection to import assembly metadata at compile time. It seems if I depend on System.Reflection, global methods are not a possibility. The cleanest solution is to convert all of my standard methods to class static methods, but the point is, my language allows global methods, and the CLR supports it, but System.Reflection leaves a gap. ildasm shows the global methods just fine, but I assume it does not use System.Reflection itself and goes right to the metadata and bytecode. Besides System.Reflection, is anyone aware of any other 3rd party reflection or disassembly libs that I could make use of (assuming I will eventually release my compiler as free, BSD licensed open source).
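    Before reaching for a third-party library, one corner of System.Reflection may be worth re-checking (this is a documented fact about the API, though whether it covers your compiler's output is for you to verify): global methods are exposed at the module level rather than the type level, via Module.GetMethods. A minimal sketch:

        using System;
        using System.Reflection;

        class GlobalMethodDump
        {
            static void Main(string[] args)
            {
                Assembly asm = Assembly.LoadFrom(args[0]);
                foreach (Module module in asm.GetModules())
                    foreach (MethodInfo method in module.GetMethods()) // global (module-level) methods
                        Console.WriteLine(method);
            }
        }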

  • Hello Operator, My Switch Is Bored

    - by Paul White
    This is a post for T-SQL Tuesday #43 hosted by my good friend Rob Farley. The topic this month is Plan Operators. I haven’t taken part in T-SQL Tuesday before, but I do like to write about execution plans, so this seemed like a good time to start. This post is in two parts. The first part is primarily an excuse to use a pretty bad play on words in the title of this blog post (if you’re too young to know what a telephone operator or a switchboard is, I hate you). The second part of the post looks at an invisible query plan operator (so to speak). 1. My Switch Is Bored Allow me to present the rare and interesting execution plan operator, Switch: Books Online has this to say about Switch: Following that description, I had a go at producing a Fast Forward Cursor plan that used the TOP operator, but had no luck. That may be due to my lack of skill with cursors, I’m not too sure. The only application of Switch in SQL Server 2012 that I am familiar with requires a local partitioned view: CREATE TABLE dbo.T1 (c1 int NOT NULL CHECK (c1 BETWEEN 00 AND 24)); CREATE TABLE dbo.T2 (c1 int NOT NULL CHECK (c1 BETWEEN 25 AND 49)); CREATE TABLE dbo.T3 (c1 int NOT NULL CHECK (c1 BETWEEN 50 AND 74)); CREATE TABLE dbo.T4 (c1 int NOT NULL CHECK (c1 BETWEEN 75 AND 99)); GO CREATE VIEW V1 AS SELECT c1 FROM dbo.T1 UNION ALL SELECT c1 FROM dbo.T2 UNION ALL SELECT c1 FROM dbo.T3 UNION ALL SELECT c1 FROM dbo.T4; Not only that, but it needs an updatable local partitioned view. We’ll need some primary keys to meet that requirement: ALTER TABLE dbo.T1 ADD CONSTRAINT PK_T1 PRIMARY KEY (c1);   ALTER TABLE dbo.T2 ADD CONSTRAINT PK_T2 PRIMARY KEY (c1);   ALTER TABLE dbo.T3 ADD CONSTRAINT PK_T3 PRIMARY KEY (c1);   ALTER TABLE dbo.T4 ADD CONSTRAINT PK_T4 PRIMARY KEY (c1); We also need an INSERT statement that references the view. Even more specifically, to see a Switch operator, we need to perform a single-row insert (multi-row inserts use a different plan shape): INSERT dbo.V1 (c1) VALUES (1); And now…the execution plan: The Constant Scan manufactures a single row with no columns. The Compute Scalar works out which partition of the view the new value should go in. The Assert checks that the computed partition number is not null (if it is, an error is returned). The Nested Loops Join executes exactly once, with the partition id as an outer reference (correlated parameter). The Switch operator checks the value of the parameter and executes the corresponding input only. If the partition id is 0, the uppermost Clustered Index Insert is executed, adding a row to table T1. If the partition id is 1, the next lower Clustered Index Insert is executed, adding a row to table T2…and so on. In case you were wondering, here’s a query and execution plan for a multi-row insert to the view: INSERT dbo.V1 (c1) VALUES (1), (2); Yuck! An Eager Table Spool and four Filters! I prefer the Switch plan. My guess is that almost all the old strategies that used a Switch operator have been replaced over time, using things like a regular Concatenation Union All combined with Start-Up Filters on its inputs. Other new (relative to the Switch operator) features like table partitioning have specific execution plan support that doesn’t need the Switch operator either. This feels like a bit of a shame, but perhaps it is just nostalgia on my part, it’s hard to know. Please do let me know if you encounter a query that can still use the Switch operator in 2012 – it must be very bored if this is the only possible modern usage! 2. 
Invisible Plan Operators The second part of this post uses an example based on a question Dave Ballantyne asked using the SQL Sentry Plan Explorer plan upload facility. If you haven’t tried that yet, make sure you’re on the latest version of the (free) Plan Explorer software, and then click the Post to SQLPerformance.com button. That will create a site question with the query plan attached (which can be anonymized if the plan contains sensitive information). Aaron Bertrand and I keep a close eye on questions there, so if you have ever wanted to ask a query plan question of either of us, that’s a good way to do it. The problem The issue I want to talk about revolves around a query issued against a calendar table. The script below creates a simplified version and adds 100 years of per-day information to it: USE tempdb; GO CREATE TABLE dbo.Calendar ( dt date NOT NULL, isWeekday bit NOT NULL, theYear smallint NOT NULL,   CONSTRAINT PK__dbo_Calendar_dt PRIMARY KEY CLUSTERED (dt) ); GO -- Monday is the first day of the week for me SET DATEFIRST 1;   -- Add 100 years of data INSERT dbo.Calendar WITH (TABLOCKX) (dt, isWeekday, theYear) SELECT CA.dt, isWeekday = CASE WHEN DATEPART(WEEKDAY, CA.dt) IN (6, 7) THEN 0 ELSE 1 END, theYear = YEAR(CA.dt) FROM Sandpit.dbo.Numbers AS N CROSS APPLY ( VALUES (DATEADD(DAY, N.n - 1, CONVERT(date, '01 Jan 2000', 113))) ) AS CA (dt) WHERE N.n BETWEEN 1 AND 36525; The following query counts the number of weekend days in 2013: SELECT Days = COUNT_BIG(*) FROM dbo.Calendar AS C WHERE theYear = 2013 AND isWeekday = 0; It returns the correct result (104) using the following execution plan: The query optimizer has managed to estimate the number of rows returned from the table exactly, based purely on the default statistics created separately on the two columns referenced in the query’s WHERE clause. (Well, almost exactly, the unrounded estimate is 104.289 rows.) There is already an invisible operator in this query plan – a Filter operator used to apply the WHERE clause predicates. We can see it by re-running the query with the enormously useful (but undocumented) trace flag 9130 enabled: Now we can see the full picture. The whole table is scanned, returning all 36,525 rows, before the Filter narrows that down to just the 104 we want. Without the trace flag, the Filter is incorporated in the Clustered Index Scan as a residual predicate. It is a little bit more efficient than using a separate operator, but residual predicates are still something you will want to avoid where possible. The estimates are still spot on though: Anyway, looking to improve the performance of this query, Dave added the following filtered index to the Calendar table: CREATE NONCLUSTERED INDEX Weekends ON dbo.Calendar(theYear) WHERE isWeekday = 0; The original query now produces a much more efficient plan: Unfortunately, the estimated number of rows produced by the seek is now wrong (365 instead of 104): What’s going on? The estimate was spot on before we added the index! Explanation You might want to grab a coffee for this bit. Using another trace flag or two (8606 and 8612) we can see that the cardinality estimates were exactly right initially: The highlighted information shows the initial cardinality estimates for the base table (36,525 rows), the result of applying the two relational selects in our WHERE clause (104 rows), and after performing the COUNT_BIG(*) group by aggregate (1 row). 
All of these are correct, but that was before cost-based optimization got involved :)

Cost-based optimization

When cost-based optimization starts up, the logical tree above is copied into a structure (the ‘memo’) that has one group per logical operation (roughly speaking). The logical read of the base table (LogOp_Get) ends up in group 7; the two predicates (LogOp_Select) end up in group 8 (with the details of the selections in subgroups 0-6). These two groups still have the correct cardinalities, as the trace flag 8608 output (initial memo contents) shows:

During cost-based optimization, a rule called SelToIdxStrategy runs on group 8. Its job is to match logical selections to indexable expressions (SARGs). It successfully matches the selections (theYear = 2013, isWeekday = 0) to the filtered index, and writes a new alternative into the memo structure. The new alternative is entered into group 8 as option 1 (option 0 was the original LogOp_Select):

The new alternative is to do nothing (PhyOp_NOP = no operation), but to instead follow the new logical instructions listed below the NOP. The LogOp_GetIdx (full read of an index) goes into group 21, and the LogOp_SelectIdx (selection on an index) is placed in group 22, operating on the result of group 21. The definition of the comparison ‘theYear = 2013’ (ScaOp_Comp downwards) was already present in the memo starting at group 2, so no new memo groups are created for that.

New Cardinality Estimates

The new memo groups require two new cardinality estimates to be derived. First, LogOp_GetIdx (full read of the index) gets a predicted cardinality of 10,436. This number comes from the filtered index statistics: DBCC SHOW_STATISTICS (Calendar, Weekends) WITH STAT_HEADER; The second new cardinality derivation is for the LogOp_SelectIdx applying the predicate (theYear = 2013). To get a number for this, the cardinality estimator uses statistics for the column ‘theYear’, producing an estimate of 365 rows (there are 365 days in 2013!): DBCC SHOW_STATISTICS (Calendar, theYear) WITH HISTOGRAM; This is where the mistake happens. Cardinality estimation should have used the filtered index statistics here, to get an estimate of 104 rows: DBCC SHOW_STATISTICS (Calendar, Weekends) WITH HISTOGRAM; Unfortunately, the logic has lost sight of the link between the read of the filtered index (LogOp_GetIdx) in group 21, and the selection on that index (LogOp_SelectIdx) that it is deriving a cardinality estimate for, in group 22. The correct cardinality estimate (104 rows) is still present in the memo, attached to group 8, but that group now has a PhyOp_NOP implementation.

Skipping over the rest of cost-based optimization (in a belated attempt at brevity) we can see the optimizer’s final output using trace flag 8607: This output shows the (incorrect, but understandable) 365-row estimate for the index range operation, and the correct 104 estimate still attached to its PhyOp_NOP. This tree still has to go through a few post-optimizer rewrites and ‘copy out’ from the memo structure into a tree suitable for the execution engine. One step in this process removes PhyOp_NOP, discarding its 104-row cardinality estimate as it does so. To finish this section on a more positive note, consider what happens if we add an OVER clause to the query aggregate.
This isn’t intended to be a ‘fix’ of any sort, I just want to show you that the 104 estimate can survive and be used if later cardinality estimation needs it: SELECT Days = COUNT_BIG(*) OVER () FROM dbo.Calendar AS C WHERE theYear = 2013 AND isWeekday = 0; The estimated execution plan is: Note the 365 estimate at the Index Seek, but the 104 lives again at the Segment! We can imagine the lost predicate ‘isWeekday = 0’ as sitting between the seek and the segment in an invisible Filter operator that drops the estimate from 365 to 104. Even though the NOP group is removed after optimization (so we don’t see it in the execution plan) bear in mind that all cost-based choices were made with the 104-row memo group present, so although things look a bit odd, it shouldn’t affect the optimizer’s plan selection. I should also mention that we can work around the estimation issue by including the index’s filtering columns in the index key: CREATE NONCLUSTERED INDEX Weekends ON dbo.Calendar(theYear, isWeekday) WHERE isWeekday = 0 WITH (DROP_EXISTING = ON); There are some downsides to doing this, including that changes to the isWeekday column may now require Halloween Protection, but that is unlikely to be a big problem for a static calendar table ;)  With the updated index in place, the original query produces an execution plan with the correct cardinality estimation showing at the Index Seek: That’s all for today, remember to let me know about any Switch plans you come across on a modern instance of SQL Server! Finally, here are some other posts of mine that cover other plan operators: Segment and Sequence Project Common Subexpression Spools Why Plan Operators Run Backwards Row Goals and the Top Operator Hash Match Flow Distinct Top N Sort Index Spools and Page Splits Singleton and Range Seeks Bitmaps Hash Join Performance Compute Scalar © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

  • Updating multiple Sprites - AS3 performance best practices

    - by dani
    Within the container "BubbleContainer" I have multiple "Bubble sprites". Each bubble's graphics object (a circle) is updated on a timer event. Let's say I have 50 Bubble sprites and each circle's radius should be updated with a mathematical formula. How do I organize this logic? How do I update all Bubble sprites within the BubbleContainer? (should I call a bubble.update() function or make a temporary reference to the graphics object?) Where do I put the Math logic? (as static functions?)
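    A minimal sketch of the pattern the question hints at (assuming Bubble is your sprite class and the bubbles are children of BubbleContainer): give each bubble its own update() that redraws its graphics, and let a single timer handler walk the container's children.

        // One timer tick updates every Bubble child of the container.
        private function onTick(event:TimerEvent):void {
            for (var i:int = 0; i < bubbleContainer.numChildren; i++) {
                var bubble:Bubble = bubbleContainer.getChildAt(i) as Bubble;
                if (bubble != null)
                    bubble.update(); // Bubble.update() recomputes its radius and redraws its circle
            }
        }

    Keeping the radius formula in static helper functions is fine; the cost to watch with 50 sprites is the per-tick clear()/drawCircle() work, not the function-call overhead.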

  • How to use WSDL2Java generated files?

    - by vikasde
    I generated the .java files using wsdl2java found in axis2-1.5. Now it generated the files in this folder structure: src/net/mycompany/www/services/ The files in the services folder are: SessionIntegrationStub and SessionIntegrationCallbackHandler. I would like to consume the webservice now. I added the net folder to the CLASSPATH environment variable. My java file now imports the webservice using: import net.mycompany.www.services; public class test { public static void main(String[] args) { SessionIntegrationStub stub = new SessionIntegrationStub(); System.out.println(stub.getSessionIntegration("test")); } } Now when I try to compile this using: javac test.java I get: package net.mycompany.www does not exist. Any idea?
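    Two things are likely wrong here, neither specific to Axis2 (hedged: assuming the generated sources really live under src/net/mycompany/www/services). First, Java cannot import a bare package name; import the class itself or use a wildcard. Second, the classpath entry must be the directory that contains the net folder (i.e., src), not the net folder itself, plus the Axis2 jars. A sketch:

        // Import the generated class (or net.mycompany.www.services.*), not the package name alone.
        import net.mycompany.www.services.SessionIntegrationStub;

        public class test {
            public static void main(String[] args) throws Exception {
                SessionIntegrationStub stub = new SessionIntegrationStub();
                System.out.println(stub.getSessionIntegration("test"));
            }
        }

        // Compile with the source root and the Axis2 libraries on the classpath, e.g.:
        //   javac -cp "src:/path/to/axis2-1.5/lib/*" test.java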

  • Generic method to create deep copy of all elements in a collection

    - by bwarner
    I have various ObservableCollections of different object types. I'd like to write a single method that will take a collection of any of these object types and return a new collection where each element is a deep copy of the elements in the given collection. Here is an example for a specific class:

        private static ObservableCollection<PropertyValueRow> DeepCopy(ObservableCollection<PropertyValueRow> list)
        {
            ObservableCollection<PropertyValueRow> newList = new ObservableCollection<PropertyValueRow>();
            foreach (PropertyValueRow rec in list)
            {
                newList.Add((PropertyValueRow)rec.Clone());
            }
            return newList;
        }

    How can I make this method generic for any class which implements ICloneable?
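    A minimal generic version (a sketch; note that ICloneable's Clone contract does not itself guarantee a deep copy, so this is only as deep as each element's Clone implementation):

        private static ObservableCollection<T> DeepCopy<T>(ObservableCollection<T> list)
            where T : ICloneable
        {
            ObservableCollection<T> newList = new ObservableCollection<T>();
            foreach (T item in list)
            {
                newList.Add((T)item.Clone());
            }
            return newList;
        }

    Type inference keeps call sites unchanged: DeepCopy(myPropertyValueRows) returns an ObservableCollection<PropertyValueRow>.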

  • Currency Conversion in Oracle BI applications

    - by Saurabh Verma
    Authored by Vijay Aggarwal and Hichem Sellami A typical data warehouse contains Star and/or Snowflake schema, made up of Dimensions and Facts. The facts store various numerical information including amounts. Example; Order Amount, Invoice Amount etc. With the true global nature of business now-a-days, the end-users want to view the reports in their own currency or in global/common currency as defined by their business. This presents a unique opportunity in BI to provide the amounts in converted rates either by pre-storing or by doing on-the-fly conversions while displaying the reports to the users. Source Systems OBIA caters to various source systems like EBS, PSFT, Sebl, JDE, Fusion etc. Each source has its own unique and intricate ways of defining and storing currency data, doing currency conversions and presenting to the OLTP users. For example; EBS stores conversion rates between currencies which can be classified by conversion rates, like Corporate rate, Spot rate, Period rate etc. Siebel stores exchange rates by conversion rates like Daily. EBS/Fusion stores the conversion rates for each day, where as PSFT/Siebel store for a range of days. PSFT has Rate Multiplication Factor and Rate Division Factor and we need to calculate the Rate based on them, where as other Source systems store the Currency Exchange Rate directly. OBIA Design The data consolidation from various disparate source systems, poses the challenge to conform various currencies, rate types, exchange rates etc., and designing the best way to present the amounts to the users without affecting the performance. When consolidating the data for reporting in OBIA, we have designed the mechanisms in the Common Dimension, to allow users to report based on their required currencies. OBIA Facts store amounts in various currencies: Document Currency: This is the currency of the actual transaction. For a multinational company, this can be in various currencies. Local Currency: This is the base currency in which the accounting entries are recorded by the business. This is generally defined in the Ledger of the company. Global Currencies: OBIA provides five Global Currencies. Three are used across all modules. The last two are for CRM only. A Global currency is very useful when creating reports where the data is viewed enterprise-wide. Example; a US based multinational would want to see the reports in USD. The company will choose USD as one of the global currencies. OBIA allows users to define up-to five global currencies during the initial implementation. The term Currency Preference is used to designate the set of values: Document Currency, Local Currency, Global Currency 1, Global Currency 2, Global Currency 3; which are shared among all modules. There are four more currency preferences, specific to certain modules: Global Currency 4 (aka CRM Currency) and Global Currency 5 which are used in CRM; and Project Currency and Contract Currency, used in Project Analytics. When choosing Local Currency for Currency preference, the data will show in the currency of the Ledger (or Business Unit) in the prompt. So it is important to select one Ledger or Business Unit when viewing data in Local Currency. More on this can be found in the section: Toggling Currency Preferences in the Dashboard. Design Logic When extracting the fact data, the OOTB mappings extract and load the document amount, and the local amount in target tables. It also loads the exchange rates required to convert the document amount into the corresponding global amounts. 
If the source system only provides the document amount in the transaction, the extract mapping does a lookup to get the Local currency code, and the Local exchange rate. The Load mapping then uses the local currency code and rate to derive the local amount. The load mapping also fetches the Global Currencies and looks up the corresponding exchange rates. The lookup of exchange rates is done via the Exchange Rate Dimension provided as a Common/Conforming Dimension in OBIA. The Exchange Rate Dimension stores the exchange rates between various currencies for a date range and Rate Type. Two physical tables W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G are used to provide the lookups and conversions between currencies. The data is loaded from the source system’s Ledger tables. W_EXCH_RATE_G stores the exchange rates between currencies with a date range. On the other hand, W_GLOBAL_EXCH_RATE_G stores the currency conversions between the document currency and the pre-defined five Global Currencies for each day. Based on the requirements, the fact mappings can decide and use one or both tables to do the conversion. Currency design in OBIA also taps into the MLS and Domain architecture, thus allowing the users to map the currencies to a universal Domain during the implementation time. This is especially important for companies deploying and using OBIA with multiple source adapters. Some Gotchas to Look for It is necessary to think through the currencies during the initial implementation. 1) Identify various types of currencies that are used by your business. Understand what will be your Local (or Base) and Documentation currency. Identify various global currencies that your users will want to look at the reports. This will be based on the global nature of your business. Changes to these currencies later in the project, while permitted, but may cause Full data loads and hence lost time. 2) If the user has a multi source system make sure that the Global Currencies and Global Rate Types chosen in Configuration Manager do have the corresponding source specific counterparts. In other words, make sure for every DW specific value chosen for Currency Code or Rate Type, there is a source Domain mapping already done. Technical Section This section will briefly mention the technical scenarios employed in the OBIA adaptors to extract data from each source system. In OBIA, we have two main tables which store the Currency Rate information as explained in previous sections. W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G are the two tables. W_EXCH_RATE_G stores all the Currency Conversions present in the source system. It captures data for a Date Range. W_GLOBAL_EXCH_RATE_G has Global Currency Conversions stored at a Daily level. However the challenge here is to store all the 5 Global Currency Exchange Rates in a single record for each From Currency. Let’s voyage further into the Source System Extraction logic for each of these tables and understand the flow briefly. EBS: In EBS, we have Currency Data stored in GL_DAILY_RATES table. As the name indicates GL_DAILY_RATES EBS table has data at a daily level. However in our warehouse we store the data with a Date Range and insert a new range record only when the Exchange Rate changes for a particular From Currency, To Currency and Rate Type. Below are the main logical steps that we employ in this process. (Incremental Flow only) – Cleanup the data in W_EXCH_RATE_G. Delete the records which have Start Date > minimum conversion date Update the End Date of the existing records. 
Compress the daily data from GL_DAILY_RATES table into Range Records. Incremental map uses $$XRATE_UPD_NUM_DAY as an extra parameter. Generate Previous Rate, Previous Date and Next Date for each of the Daily record from the OLTP. Filter out the records which have Conversion Rate same as Previous Rates or if the Conversion Date lies within a single day range. Mark the records as ‘Keep’ and ‘Filter’ and also get the final End Date for the single Range record (Unique Combination of From Date, To Date, Rate and Conversion Date). Filter the records marked as ‘Filter’ in the INFA map. The above steps will load W_EXCH_RATE_GS. Step 0 updates/deletes W_EXCH_RATE_G directly. SIL map will then insert/update the GS data into W_EXCH_RATE_G. These steps convert the daily records in GL_DAILY_RATES to Range records in W_EXCH_RATE_G. We do not need such special logic for loading W_GLOBAL_EXCH_RATE_G. This is a table where we store data at a Daily Granular Level. However we need to pivot the data because the data present in multiple rows in source tables needs to be stored in different columns of the same row in DW. We use GROUP BY and CASE logic to achieve this. Fusion: Fusion has extraction logic very similar to EBS. The only difference is that the Cleanup logic that was mentioned in step 0 above does not use $$XRATE_UPD_NUM_DAY parameter. In Fusion we bring all the Exchange Rates in Incremental as well and do the cleanup. The SIL then takes care of Insert/Updates accordingly. PeopleSoft:PeopleSoft does not have From Date and To Date explicitly in the Source tables. Let’s look at an example. Please note that this is achieved from PS1 onwards only. 1 Jan 2010 – USD to INR – 45 31 Jan 2010 – USD to INR – 46 PSFT stores records in above fashion. This means that Exchange Rate of 45 for USD to INR is applicable for 1 Jan 2010 to 30 Jan 2010. We need to store data in this fashion in DW. Also PSFT has Exchange Rate stored as RATE_MULT and RATE_DIV. We need to do a RATE_MULT/RATE_DIV to get the correct Exchange Rate. We generate From Date and To Date while extracting data from source and this has certain assumptions: If a record gets updated/inserted in the source, it will be extracted in incremental. Also if this updated/inserted record is between other dates, then we also extract the preceding and succeeding records (based on dates) of this record. This is required because we need to generate a range record and we have 3 records whose ranges have changed. Taking the same example as above, if there is a new record which gets inserted on 15 Jan 2010; the new ranges are 1 Jan to 14 Jan, 15 Jan to 30 Jan and 31 Jan to Next available date. Even though 1 Jan record and 31 Jan have not changed, we will still extract them because the range is affected. Similar logic is used for Global Exchange Rate Extraction. We create the Range records and get it into a Temporary table. Then we join to Day Dimension, create individual records and pivot the data to get the 5 Global Exchange Rates for each From Currency, Date and Rate Type. Siebel: Siebel Facts are dependent on Global Exchange Rates heavily and almost none of them really use individual Exchange Rates. In other words, W_GLOBAL_EXCH_RATE_G is the main table used in Siebel from PS1 release onwards. As of January 2002, the Euro Triangulation method for converting between currencies belonging to EMU members is not needed for present and future currency exchanges. 
However, the method is still available in Siebel applications, as are the old currencies, so that historical data can be maintained accurately. The following description applies only to historical data needing conversion prior to the 2002 switch to the Euro for the EMU member countries. If a country is a member of the European Monetary Union (EMU), you should convert its currency to other currencies through the Euro. This is called triangulation, and it is used whenever either currency being converted has EMU Triangulation checked. Due to this, there are multiple extraction flows in SEBL ie. EUR to EMU, EUR to NonEMU, EUR to DMC and so on. We load W_EXCH_RATE_G through multiple flows with these data. This has been kept same as previous versions of OBIA. W_GLOBAL_EXCH_RATE_G being a new table does not have such needs. However SEBL does not have From Date and To Date columns in the Source tables similar to PSFT. We use similar extraction logic as explained in PSFT section for SEBL as well. What if all 5 Global Currencies configured are same? As mentioned in previous sections, from PS1 onwards we store Global Exchange Rates in W_GLOBAL_EXCH_RATE_G table. The extraction logic for this table involves Pivoting data from multiple rows into a single row with 5 Global Exchange Rates in 5 columns. As mentioned in previous sections, we use CASE and GROUP BY functions to achieve this. This approach poses a unique problem when all the 5 Global Currencies Chosen are same. For example – If the user configures all 5 Global Currencies as ‘USD’ then the extract logic will not be able to generate a record for From Currency=USD. This is because, not all Source Systems will have a USD->USD conversion record. We have _Generated mappings to take care of this case. We generate a record with Conversion Rate=1 for such cases. Reusable Lookups Before PS1, we had a Mapplet for Currency Conversions. In PS1, we only have reusable Lookups- LKP_W_EXCH_RATE_G and LKP_W_GLOBAL_EXCH_RATE_G. These lookups have another layer of logic so that all the lookup conditions are met when they are used in various Fact Mappings. Any user who would want to do a LKP on W_EXCH_RATE_G or W_GLOBAL_EXCH_RATE_G should and must use these Lookups. A direct join or Lookup on the tables might lead to wrong data being returned. Changing Currency preferences in the Dashboard: In the 796x series, all amount metrics in OBIA were showing the Global1 amount. The customer needed to change the metric definitions to show them in another Currency preference. Project Analytics started supporting currency preferences since 7.9.6 release though, and it published a Tech note for other module customers to add toggling between currency preferences to the solution. List of Currency Preferences Starting from 11.1.1.x release, the BI Platform added a new feature to support multiple currencies. The new session variable (PREFERRED_CURRENCY) is populated through a newly introduced currency prompt. This prompt can take its values from the xml file: userpref_currencies_OBIA.xml, which is hosted in the BI Server installation folder, under :< home>\instances\instance1\config\OracleBIPresentationServicesComponent\coreapplication_obips1\userpref_currencies.xml This file contains the list of currency preferences, like“Local Currency”, “Global Currency 1”,…which customers can also rename to give them more meaningful business names. There are two options for showing the list of currency preferences to the user in the dashboard: Static and Dynamic. 
In Static mode, all users will see the full list as in the user preference currencies file. In the Dynamic mode, the list shown in the currency prompt drop down is a result of a dynamic query specified in the same file. Customers can build some security into the rpd, so the list of currency preferences will be based on the user roles…BI Applications built a subject area: “Dynamic Currency Preference” to run this query, and give every user only the list of currency preferences required by his application roles. Adding Currency to an Amount Field When the user selects one of the items from the currency prompt, all the amounts in that page will show in the Currency corresponding to that preference. For example, if the user selects “Global Currency1” from the prompt, all data will be showing in Global Currency 1 as specified in the Configuration Manager. If the user select “Local Currency”, all amount fields will show in the Currency of the Business Unit selected in the BU filter of the same page. If there is no particular Business Unit selected in that filter, and the data selected by the query contains amounts in more than one currency (for example one BU has USD as a functional currency, the other has EUR as functional currency), then subtotals will not be available (cannot add USD and EUR amounts in one field), and depending on the set up (see next paragraph), the user may receive an error. There are two ways to add the Currency field to an amount metric: In the form of currency code, like USD, EUR…For this the user needs to add the field “Apps Common Currency Code” to the report. This field is in every subject area, usually under the table “Currency Tag” or “Currency Code”… In the form of currency symbol ($ for USD, € for EUR,…) For this, the user needs to format the amount metrics in the report as a currency column, by specifying the currency tag column in the Column Properties option in Column Actions drop down list. Typically this column should be the “BI Common Currency Code” available in every subject area. Select Column Properties option in the Edit list of a metric. In the Data Format tab, select Custom as Treat Number As. Enter the following syntax under Custom Number Format: [$:currencyTagColumn=Subjectarea.table.column] Where Column is the “BI Common Currency Code” defined to take the currency code value based on the currency preference chosen by the user in the Currency preference prompt.
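To make the pivot described in the technical section concrete, here is a minimal sketch of the GROUP BY/CASE approach used to fold per-currency daily rows into one row carrying the five global-rate columns (table and column names are illustrative assumptions, not the real warehouse schema):

    -- One output row per (from_currency, rate_date), five global rates as columns.
    SELECT from_currency,
           rate_date,
           MAX(CASE WHEN to_currency = 'USD' THEN exch_rate END) AS global_rate_1,
           MAX(CASE WHEN to_currency = 'EUR' THEN exch_rate END) AS global_rate_2,
           MAX(CASE WHEN to_currency = 'GBP' THEN exch_rate END) AS global_rate_3,
           MAX(CASE WHEN to_currency = 'JPY' THEN exch_rate END) AS global_rate_4,
           MAX(CASE WHEN to_currency = 'INR' THEN exch_rate END) AS global_rate_5
    FROM   stg_daily_rates
    GROUP  BY from_currency, rate_date;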

  • Does Java support dynamic method invocation?

    - by eSKay
    class A { void F() { System.out.println("a"); }} class B extends A { void F() { System.out.println("b"); }} public class X { public static void main(String[] args) { A objA = new B(); objA.F(); } } Here, F() is being invoked dynamically, isn't it? This article says ... the Java bytecode doesn’t support dynamic method invocation. There are three supported invocations modes : invokestatic, invokespecial, invokeinterface or invokevirtual. These modes allows to call methods with known signature. We talk of strongly typed language. This allows to to make some checks directly at compile time. On the other side, the dynamic languages use dynamic types. So we can call a method unknown at the compile time, but that’s completely impossible with the Java bytecode. What am I missing?
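    What the example shows is dynamic (virtual) dispatch, which the bytecode supports via invokevirtual; what the article means by "dynamic method invocation" is calling a method whose signature is not known at compile time. A two-line illustration of the difference:

        A objA = new B();
        objA.F();     // compiles: F() is declared on A; *which* body runs is decided at run time
        // objA.G(); // would not compile if G() is not declared on A; this is the call shape that
                     // dynamically typed languages allow and that classic Java bytecode cannot express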

  • Avoiding unsafe cast for generic situation involving runtime passing of class

    - by Bart van Heukelom
    public class AutoKeyMap<K,V> { public interface KeyGenerator<K> { public K generate(); } private KeyGenerator<K> generator; public AutoKeyMap(Class<K> keyType) { // WARNING: Unchecked cast from AutoKeyMap.IntKeyGen to AutoKeyMap.KeyGenerator<K> if (keyType == Integer.class) generator = (KeyGenerator<K>) new IntKeyGen(); else throw new RuntimeException("Cannot generate keys for " + keyType); } public void put(V value) { K key = generator.generate(); ... } private static class IntKeyGen implements KeyGenerator<Integer> { private final AtomicInteger ai = new AtomicInteger(1); @Override public Integer generate() { return ai.getAndIncrement(); } } } In the code sample above, what is the correct way to prevent the given warning, without adding a @SuppressWarnings, if any?
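    One way to remove the cast entirely (a sketch of a common restructuring, not the only option): move the type decision into a static factory method, so K is pinned to Integer at the exact point where IntKeyGen is chosen.

        public class AutoKeyMap<K, V> {
            private final KeyGenerator<K> generator;

            private AutoKeyMap(KeyGenerator<K> generator) {
                this.generator = generator;
            }

            // K = Integer is fixed here, so no unchecked cast is needed anywhere.
            public static <V> AutoKeyMap<Integer, V> withIntKeys() {
                return new AutoKeyMap<Integer, V>(new IntKeyGen());
            }
        }

    With the Class<K>-based constructor as written, the compiler cannot prove that IntKeyGen matches K, so the only in-place option is a narrowly scoped @SuppressWarnings("unchecked").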

  • Why the following Java code has different outputs each time?

    - by Maxood
    I don't know much about threads in Java. I'd like to know what is happening in this code, because each time it runs it produces a different output:

        public class TwoThreadsDemo {
            public static void main(String[] args) {
                new SimpleThread("Java Programmer").start();
                new SimpleThread("Java Programmer").start();
            }
        }

        class SimpleThread extends Thread {
            public SimpleThread(String str) {
                super(str);
            }

            public void run() {
                for (int i = 0; i < 10; i++) {
                    System.out.println(i + " " + getName());
                    try {
                        sleep((long)(Math.random() * 1000));
                    } catch (InterruptedException e) {
                    }
                }
                System.out.println("Done!" + getName());
            }
        }
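    The short answer: the two threads are scheduled independently, and each sleeps a random 0 to 1000 ms between prints, so the interleaving of their output legitimately differs on every run. If a fixed ordering is wanted, one thread has to wait for the other; a minimal sketch:

        SimpleThread first = new SimpleThread("Java Programmer");
        first.start();
        first.join(); // blocks until 'first' finishes; declare or catch InterruptedException
        new SimpleThread("Java Programmer").start();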

  • LibraryContainer in a ScatterViewItem: resizing and background rectangle...

    - by Rob Fleming
    Simple one: Want to add a LibraryContainer to a Surface ScatterView. Know I have to add the container inside a ScatterViewItem to get the rotate/move features.. but the SVI adds a rectangle box around the control, and it does not size correctly. Think I'm missing something simple but can't figure it... My current XAML is as follows: Background="{StaticResource WindowBackground}" AllowDrop="True" . . . Any thoughts are appreciated... I've been looking at the how-to samples but the library controls that are shown are static item. (ie they are not movable)... Rob

  • reporting services: use a custom assembly with a local (RDLC) report

    - by JMarsch
    Hello: I am designing a report that will be used in local mode (an RDLC file) in a Winform app. I have a custom assembly with a static class that has some functions that I want to use inside of the report (as expressions). I have found all sorts of help for doing this with RDL reports, but I'm running into a permissions problem with my RDLC report. I get the following error at runtime: "The report references the code module (my module), which is not a trusted assembly". I know that this is some kind of a code security issue, but I'm not sure what to do to fix it. The documentation that I have seen online is aimed at RDL reports, and it instructs me to edit a SQL Server-specific policy file. I'm using RDLC, so there is no sql server involved. What do I need to do to acquire the appropriate permissions?
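    For local (RDLC) reports there is no SQL Server policy file involved; trust is granted through the LocalReport API instead. A hedged sketch of the classic approach for the 2005/2008-era WinForms ReportViewer (the method is real, but whether it fits your version and security requirements is for you to verify):

        // Run report expressions in the current AppDomain so your custom assembly is trusted.
        reportViewer.LocalReport.ExecuteReportInCurrentAppDomain(
            AppDomain.CurrentDomain.Evidence);

    Later ReportViewer versions sandbox report code instead; the counterpart there is LocalReport.AddFullTrustModuleInSandboxAppDomain, passing your assembly's StrongName.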

  • What is the best way to provide an AutoMappingOverride for an interface in FluentNHibernate automapping?

    - by Tom
    In my quest for a version-wide database filter for an application, I have written the following code: using System; using System.Collections.Generic; using System.Linq; using System.Text; using FluentNHibernate.Automapping; using FluentNHibernate.Automapping.Alterations; using FluentNHibernate.Mapping; using MvcExtensions.Model; using NHibernate; namespace MvcExtensions.Services.Impl.FluentNHibernate { public interface IVersionAware { string Version { get; set; } } public class VersionFilter : FilterDefinition { const string FILTERNAME = "MyVersionFilter"; const string COLUMNNAME = "Version"; public VersionFilter() { this.WithName(FILTERNAME) .WithCondition("Version = :"+COLUMNNAME) .AddParameter(COLUMNNAME, NHibernateUtil.String ); } public static void EnableVersionFilter(ISession session,string version) { session.EnableFilter(FILTERNAME).SetParameter(COLUMNNAME, version); } public static void DisableVersionFilter(ISession session) { session.DisableFilter(FILTERNAME); } } public class VersionAwareOverride : IAutoMappingOverride<IVersionAware> { #region IAutoMappingOverride<IVersionAware> Members public void Override(AutoMapping<IVersionAware> mapping) { mapping.ApplyFilter<VersionFilter>(); } #endregion } } But, since overrides do not work on interfaces, I am looking for a way to implement this. Currently I'm using this (rather cumbersome) way for each class that implements the interface : public class SomeVersionedEntity : IModelId, IVersionAware { public virtual int Id { get; set; } public virtual string Version { get; set; } } public class SomeVersionedEntityOverride : IAutoMappingOverride<SomeVersionedEntity> { #region IAutoMappingOverride<SomeVersionedEntity> Members public void Override(AutoMapping<SomeVersionedEntity> mapping) { mapping.ApplyFilter<VersionFilter>(); } #endregion } I have been looking at IClassmap interfaces etc, but they do not seem to provide a way to access the ApplyFilter method, so I have not got a clue here... Since I am probably not the first one who has this problem, I am quite sure that it should be possible; I am just not quite sure how this works.. EDIT : I have gotten a bit closer to a generic solution: This is the way I tried to solve it : Using a generic class to implement alterations to classes implementing an interface : public abstract class AutomappingInterfaceAlteration<I> : IAutoMappingAlteration { public void Alter(AutoPersistenceModel model) { model.OverrideAll(map => { var recordType = map.GetType().GetGenericArguments().Single(); if (typeof(I).IsAssignableFrom(recordType)) { this.GetType().GetMethod("overrideStuff").MakeGenericMethod(recordType).Invoke(this, new object[] { model }); } }); } public void overrideStuff<T>(AutoPersistenceModel pm) where T : I { pm.Override<T>( a => Override(a)); } public abstract void Override<T>(AutoMapping<T> am) where T:I; } And a specific implementation : public class VersionAwareAlteration : AutomappingInterfaceAlteration<IVersionAware> { public override void Override<T>(AutoMapping<T> am) { am.Map(x => x.Version).Column("VersionTest"); am.ApplyFilter<VersionFilter>(); } } Unfortunately I get the following error now : [InvalidOperationException: Collection was modified; enumeration operation may not execute.] 
System.ThrowHelper.ThrowInvalidOperationException(ExceptionResource resource) +51 System.Collections.Generic.Enumerator.MoveNextRare() +7661017 System.Collections.Generic.Enumerator.MoveNext() +61 System.Linq.WhereListIterator`1.MoveNext() +156 FluentNHibernate.Utils.CollectionExtensions.Each(IEnumerable`1 enumerable, Action`1 each) +239 FluentNHibernate.Automapping.AutoMapper.ApplyOverrides(Type classType, IList`1 mappedProperties, ClassMappingBase mapping) +345 FluentNHibernate.Automapping.AutoMapper.MergeMap(Type classType, ClassMappingBase mapping, IList`1 mappedProperties) +43 FluentNHibernate.Automapping.AutoMapper.Map(Type classType, List`1 types) +566 FluentNHibernate.Automapping.AutoPersistenceModel.AddMapping(Type type) +85 FluentNHibernate.Automapping.AutoPersistenceModel.CompileMappings() +746 EDIT 2 : I managed to get a bit further; I now invoke "Override" using reflection for each class that implements the interface : public abstract class PersistenceOverride<I> { public void DoOverrides(AutoPersistenceModel model,IEnumerable<Type> Mytypes) { foreach(var t in Mytypes.Where(x=>typeof(I).IsAssignableFrom(x))) ManualOverride(t,model); } private void ManualOverride(Type recordType,AutoPersistenceModel model) { var t_amt = typeof(AutoMapping<>).MakeGenericType(recordType); var t_act = typeof(Action<>).MakeGenericType(t_amt); var m = typeof(PersistenceOverride<I>) .GetMethod("MyOverride") .MakeGenericMethod(recordType) .Invoke(this, null); model.GetType().GetMethod("Override").MakeGenericMethod(recordType).Invoke(model, new object[] { m }); } public abstract Action<AutoMapping<T>> MyOverride<T>() where T:I; } public class VersionAwareOverride : PersistenceOverride<IVersionAware> { public override Action<AutoMapping<T>> MyOverride<T>() { return am => { am.Map(x => x.Version).Column(VersionFilter.COLUMNNAME); am.ApplyFilter<VersionFilter>(); }; } } However, for one reason or another my generated hbm files do not contain any "filter" fields.... Maybe somebody could help me a bit further now ??

  • Can I use a class as an object in C# to create instances of that class?

    - by Troy
    I'm writing a fairly uncomplicated program which can "connect" to several different types of data sources including text files and various databases. I've decided to implement each of these connection types as a class inherited from an interface I called iConnection. So, for example, I have TextConnection, MySQLConnection, &c... as classes. In another static class I've got a dictionary with human-readable names for these connections as keys. For the value of each dictionary entry, I want the class itself. That way, I can do things like: newConnection = new dict[connectionTypeString](); Is there a way to do something like this? I'm fairly new to C# so I'd appreciate any help.
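    That exact syntax isn't legal C# (a class is not a value), but the standard substitute is very close to what you wrote: store System.Type objects in the dictionary and instantiate through Activator.CreateInstance. A minimal sketch reusing the names from the question:

        using System;
        using System.Collections.Generic;

        static class ConnectionFactory
        {
            static readonly Dictionary<string, Type> Types = new Dictionary<string, Type>
            {
                { "Text file", typeof(TextConnection) },
                { "MySQL",     typeof(MySQLConnection) },
            };

            public static iConnection Create(string connectionTypeString)
            {
                // CreateInstance invokes the type's parameterless constructor.
                return (iConnection)Activator.CreateInstance(Types[connectionTypeString]);
            }
        }

    An alternative with compile-time safety is a Dictionary<string, Func<iConnection>> holding factories like () => new TextConnection().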

  • XFBML fb:login_button only loading 20% of the time

    - by kalpaitch
    I have a fb:login-button that works, but the button only displays about 20% of the time I load the page. Have a look here to see what I mean, bearing in mind I have never had to refresh the page more than 20 times before it finally turns up. What am I doing wrong?

    Head:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xmlns:fb="http://www.facebook.com/2008/fbml">

    Body:

        <script src="http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php/en_GB" type="text/javascript"></script>
        <script type="text/javascript">FB.init("sd89897sf98d9d9d9d8s98798798s7d");</script>
        <fb:login-button v='2' autologoutlink='true' size='medium' onlogin='window.location=\"/PHP/FBcheckLogin.php\";'>Connect with Facebook</fb:login-button>

  • Creating a Compilation Unit with type bindings

    - by hizki
    I am working with the AST API in java, and I am trying to create a Compilation Unit with type bindings. I wrote the following code: private static CompilationUnit parse(ICompilationUnit unit) { ASTParser parser = ASTParser.newParser(AST.JLS3); parser.setKind(ASTParser.K_COMPILATION_UNIT); parser.setSource(unit); parser.setResolveBindings(true); CompilationUnit compiUnit = (CompilationUnit) parser.createAST(null); return compiUnit; } Unfortunately, when I run this code in debug mode and inspect compiUnit I find that compiUnit.resolver.isRecoveringBindings is false. Can anyone think of a reason why it wouldn't be true, as I specified it to be? Thank you
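    One thing worth checking (the API detail is factual, though it may not be your only problem): resolver.isRecoveringBindings reflects a different switch than setResolveBindings. Binding recovery has to be enabled separately:

        parser.setResolveBindings(true);
        parser.setBindingsRecovery(true); // this is the flag that isRecoveringBindings reports

    setBindingsRecovery(true) lets the parser hand back partially resolved ("recovered") bindings for code containing errors; without it, isRecoveringBindings stays false even when ordinary binding resolution is switched on.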

  • Using the Data Form Web Part (SharePoint 2010) Site Agnostically!

    - by David Jacobus
    Originally posted on: http://geekswithblogs.net/djacobus/archive/2013/10/24/154465.aspx

    As a Developer who has worked closely with web designers (Power Users) in a SharePoint environment, I have come across the issue of making the Data Form Web Part reusable across the site collection! In SharePoint 2007 it was very easy, and this blog pointed the way to make it happen: Josh Gaffey's Blog. In SharePoint 2010 something changed! That method failed except for a Data Form Web Part that pointed to a list in the Site Collection root! I am framing this discussion around a developer who creates a solution (WSP) with all the artifacts embedded, so the user shouldn't have any involvement in the process except to activate features.

    The Scenario:
    1. A Power User creates a Data Form Web Part using SharePoint Designer 2010! It is a great web part that uses all the power of SharePoint Designer and XSLT (conditional formatting, etc.).
    2. Other users in the site collection want to use that specific web part in sub sites of the site collection, pointing to a list with the same name, not the one at the site collection root!

    The Issues:
    1. The Data Form Web Part data source uses a List ID (GUID) to point to a specific list. This means a sub site will have a list with a GUID different from the one against which the web part was created in SharePoint Designer! Obviously, the list needs to be the same list (fields, content types, etc.) with different data.
    2. How can we make this web part site agnostic, dependent only on the list's name?

    I had this problem come up over and over and decided to put my solution forward!

    The Solution:
    1. Use the XSL of the Data Form Web Part created by the Power User in SharePoint Designer!
    2. Extend the OOTB Data Form Web Part to use this XSL and point to a list by name.

    This is a hybrid solution that requires some coding (Developer) and the XSL (Power User) artifacts put together in a Visual Studio SharePoint solution. Here are the solution steps in summary:
    1. Create an empty SharePoint project in Visual Studio.
    2. Create a Module and Feature and put the XSL file created by the Power User into it.
       a. Scope the feature to web.
    3. Create a Feature Receiver to create the list (the same list from which the Power User created the Data Form Web Part).
       a. Scope the feature to web.
    4. Create a Web Part extending the Data Form Web Part.
       a. Point the Data Form Web Part to the list by name.
       b. Point the Data Form Web Part XSL link to the XSL added using the Module feature.
       c. Scope the feature to site.
          i. This is because all web parts live in the site collection web part gallery.

    So, in narrative summary: we are creating a list in code which has the same name and site columns as the list from which the Power User created the Data Form Web Part using SharePoint Designer, and we are creating a Web Part in code which extends the OOTB Data Form Web Part to point to a list by name and use the XSL created by the Power User. Okay! Here are the steps with images and code! At the end of this post I will provide a link to the code for a solution which works in any site! I want to TOOT the HORN for the power of this solution! It is the mantra I use with all my clients! What is a basic skill for a SharePoint Developer? Creating an application that uses the data from a SharePoint list and makes that data visible to the user in a manner which meets requirements!
Create an Empty SharePoint 2010 Project. Here I am naming my project DJ.DataFormWebPart. Create a Code folder, then copy and paste the Extension and Utilities classes into it (found in the solution provided at the end of this post) and change the namespace to match this project. The list from which the Power User built the Data Form Web Part XSL in SharePoint Designer is now going to be created in code! If it is already created in code, then all the better! Here I am going to create the list in the site collection root and add some data to it. For the purposes of this discussion I will actually create this list in code before using SharePoint Designer, for simplicity. So here I create the list and deploy it within this solution before I do anything else. I will use a list I created before for demo purposes: the Footer List, which is used within the footer of my master page. Add a new Feature. Here I name the Feature FooterList and add a Feature Event Receiver. Here is the code for the Event Receiver (I have a previous blog post about adding lists in code, so I will not take time to narrate it): using System; using System.Runtime.InteropServices; using System.Security.Permissions; using Microsoft.SharePoint; using DJ.DataFormWebPart.Code; namespace DJ.DataFormWebPart.Features.FooterList { /// <summary> /// This class handles events raised during feature activation, deactivation, installation, uninstallation, and upgrade. /// </summary> /// <remarks> /// The GUID attached to this class may be used during packaging and should not be modified. /// </remarks> [Guid("a58644fd-9209-41f4-aa16-67a53af7a9bf")] public class FooterListEventReceiver : SPFeatureReceiver { SPWeb currentWeb = null; SPSite currentSite = null; const string columnGroup = "DJ"; const string ctName = "FooterContentType";
public override void FeatureActivated(SPFeatureReceiverProperties properties) { using (SPWeb spWeb = properties.GetWeb() as SPWeb) { using (SPSite site = new SPSite(spWeb.Site.ID)) { using (SPWeb rootWeb = site.OpenWeb(site.RootWeb.ID)) { //add the fields addFields(rootWeb); //add content type SPContentType testCT = rootWeb.ContentTypes[ctName]; // we will not create the content type if it exists if (testCT == null) { //the content type does not exist add it addContentType(rootWeb, ctName); } if ((spWeb.Lists.TryGetList("FooterList") == null)) { //create the list if it dosen't to exist CreateFooterList(spWeb, site); } } } } } #region ContentType public void addFields(SPWeb spWeb) { Utilities.addField(spWeb, "Link", SPFieldType.URL, false, columnGroup); Utilities.addField(spWeb, "Information", SPFieldType.Text, false, columnGroup); } private static void addContentType(SPWeb spWeb, string name) { SPContentType myContentType = new SPContentType(spWeb.ContentTypes["Item"], spWeb.ContentTypes, name) { Group = columnGroup }; spWeb.ContentTypes.Add(myContentType); addContentTypeLinkages(spWeb, myContentType); myContentType.Update(); } public static void addContentTypeLinkages(SPWeb spWeb, SPContentType ct) { Utilities.addContentTypeLink(spWeb, "Link", ct); Utilities.addContentTypeLink(spWeb, "Information", ct); } private void CreateFooterList(SPWeb web, SPSite site) { Guid newListGuid = web.Lists.Add("FooterList", "Footer List", SPListTemplateType.GenericList); SPList newList = web.Lists[newListGuid]; newList.ContentTypesEnabled = true; var footer = site.RootWeb.ContentTypes[ctName]; newList.ContentTypes.Add(footer); newList.ContentTypes.Delete(newList.ContentTypes["Item"].Id); newList.Update(); var view = newList.DefaultView; //add all view fields here //view.ViewFields.Add("NewsTitle"); view.ViewFields.Add("Link"); view.ViewFields.Add("Information"); view.Update(); } } } Basically created a content type with two site columns Link and Information. I had to change some code as we are working at the SPWeb level and need Content Types at the SPSite level! I’ll use a new Site Collection for this demo (Best Practice) keep old artifacts from impinging on development: Next we will add this list to the root of the site collection by deploying this solution, add some data and then use SharePoint Designer to create a Data Form Web Part. The list has been added, now let’s add some data: Okay let’s add a Data Form Web Part in SharePoint Designer. Create a new web part page in the site pages library: I will name it TestWP.aspx and edit it in advanced mode: Let’s add an empty Data Form Web Part to the web part zone: Click on the web part to add a data source: Choose FooterList in the Data Source menu: Choose appropriate fields and select insert as multiple item view: Here is what it look like after insertion: Let’s add some conditional formatting if the information filed is not blank: Choose Create (right side) apply formatting: Choose the Information Field and set the condition not null: Click Set Style: Here is the result: Okay! Not flashy but simple enough for this demo. Remember this is the job of the Power user! All we want from this web part is the XLS-Style Sheet out of SharePoint Designer. We are going to use it as the XSL for our web part which we will be creating next. Let’s add a web part to our project extending the OOTB Data Form Web Part. 
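    As promised, here is a rough idea of what the Utilities helpers called by the receiver might look like. The real class ships in the downloadable solution and is not reproduced in this post, so treat these bodies as assumptions that merely match the call sites above:

        using Microsoft.SharePoint;

        namespace DJ.DataFormWebPart.Code
        {
            // Hypothetical sketch of the Utilities helpers referenced by the receiver.
            // Only the signatures are taken from the call sites; the bodies are a
            // plausible implementation, not the author's actual code.
            public static class Utilities
            {
                public static void addField(SPWeb web, string name, SPFieldType type, bool required, string group)
                {
                    // Add the site column once, then file it under the given group.
                    if (!web.Fields.ContainsField(name))
                    {
                        web.Fields.Add(name, type, required);
                        SPField field = web.Fields.GetField(name);
                        field.Group = group;
                        field.Update();
                    }
                }

                public static void addContentTypeLink(SPWeb web, string fieldName, SPContentType ct)
                {
                    // Link an existing site column to the content type.
                    SPField field = web.Fields.GetField(fieldName);
                    if (ct.FieldLinks[field.Id] == null)
                    {
                        ct.FieldLinks.Add(new SPFieldLink(field));
                    }
                }
            }
        }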
    Now let's add a web part to our project extending the OOTB Data Form Web Part. Add a new item from the Visual Studio Add menu and choose Web Part. Change WebPart to DataFormWebPart (oh well, my namespace needs some improvement, but it sure makes the class readily identifiable as an extended web part!). Below is the code for this web part:

        using System;
        using System.ComponentModel;
        using System.Web;
        using System.Web.UI;
        using System.Web.UI.WebControls;
        using System.Web.UI.WebControls.WebParts;
        using Microsoft.SharePoint;
        using Microsoft.SharePoint.WebControls;
        using System.Text;

        namespace DJ.DataFormWebPart.DataFormWebPart
        {
            [ToolboxItemAttribute(false)]
            public class DataFormWebPart : Microsoft.SharePoint.WebPartPages.DataFormWebPart
            {
                protected override void OnInit(EventArgs e)
                {
                    base.OnInit(e);
                    this.ChromeType = PartChromeType.None;
                    this.Title = "FooterListDF";
                    try
                    {
                        SPWeb web = SPContext.Current.Web;

                        // The whole point: resolve the list by name in whatever web we are on.
                        SPList list = web.Lists.TryGetList("FooterList");
                        if (list != null)
                        {
                            string queryList1 = "<Query><Where><IsNotNull><FieldRef Name='Title' /></IsNotNull></Where><OrderBy><FieldRef Name='Title' Ascending='True' /></OrderBy></Query>";
                            uint maximumRowList1 = 10;
                            SPDataSource dataSourceList1 = GetDataSource(list.Title, web.Url, list, queryList1, maximumRowList1);
                            this.DataSources.Add(dataSourceList1);

                            // Note the "Assests" spelling: it must match the module folder name.
                            this.XslLink = web.Url + "/Assests/Footer.xsl";
                            this.ParameterBindings = BuildDataFormParameters();
                            this.DataBind();
                        }
                    }
                    catch (Exception ex)
                    {
                        this.Controls.Add(new LiteralControl("ERROR: " + ex.Message));
                    }
                }

                private SPDataSource GetDataSource(string dataSourceId, string webUrl, SPList list, string query, uint maximumRow)
                {
                    SPDataSource dataSource = new SPDataSource();
                    dataSource.UseInternalName = true;
                    dataSource.ID = dataSourceId;
                    dataSource.DataSourceMode = SPDataSourceMode.List;
                    dataSource.List = list;

                    // "<View>" wrapper restored here; the angle-bracket tags appear to
                    // have been stripped when this post was rendered as plain text.
                    dataSource.SelectCommand = "<View>" + query + "</View>";

                    Parameter listIdParam = new Parameter("ListID");
                    listIdParam.DefaultValue = list.ID.ToString("B").ToUpper();
                    Parameter maximumRowsParam = new Parameter("MaximumRows");
                    maximumRowsParam.DefaultValue = maximumRow.ToString();
                    QueryStringParameter rootFolderParam = new QueryStringParameter("RootFolder", "RootFolder");

                    dataSource.SelectParameters.Add(listIdParam);
                    dataSource.SelectParameters.Add(maximumRowsParam);
                    dataSource.SelectParameters.Add(rootFolderParam);
                    dataSource.UpdateParameters.Add(listIdParam);
                    dataSource.DeleteParameters.Add(listIdParam);
                    dataSource.InsertParameters.Add(listIdParam);
                    return dataSource;
                }

                private string BuildDataFormParameters()
                {
                    StringBuilder parameters = new StringBuilder("<ParameterBindings><ParameterBinding Name=\"dvt_apos\" Location=\"Postback;Connection\"/><ParameterBinding Name=\"UserID\" Location=\"CAMLVariable\" DefaultValue=\"CurrentUserName\"/><ParameterBinding Name=\"Today\" Location=\"CAMLVariable\" DefaultValue=\"CurrentDate\"/>");
                    parameters.Append("<ParameterBinding Name=\"dvt_firstrow\" Location=\"Postback;Connection\"/>");
                    parameters.Append("<ParameterBinding Name=\"dvt_nextpagedata\" Location=\"Postback;Connection\"/>");
                    parameters.Append("<ParameterBinding Name=\"dvt_adhocmode\" Location=\"Postback;Connection\"/>");
                    parameters.Append("<ParameterBinding Name=\"dvt_adhocfiltermode\" Location=\"Postback;Connection\"/>");
                    parameters.Append("</ParameterBindings>");
                    return parameters.ToString();
                }
            }
        }

    The OnInit method is where we point the web part at the list (by name) and set its XslLink property.
    We do not yet have the XSL in our solution, so we will add it now. Add a Module from the Visual Studio Add menu, rename Sample.txt in the module to footer.xsl, and copy in the XSL from SharePoint Designer. Look at elements.xml to see where footer.xsl is being provisioned to, which is Assests/footer.xsl, and make sure the web part's XslLink points to that URL. (Okay, I see, a spelling issue in the folder name, but it won't affect this demo as long as the link and the module path match.)

    Okay, we are good to go! Let's check our features and package: the DataFormWebPart feature should be scoped to Site and contain the web part, and the FooterList feature should be scoped to Web and contain the Assests module. If everything is correct, we should be able to click a couple of sub-site feature activations and have our list and web part in a sub site. (In fact, this solution can be activated anywhere.)

    Here is the list created at SubSite1, with new data in it. Next, let's add the web part to a test page and see if it works as expected. It does! So we now have a repeatable way to use a WSP to move a Data Form Web Part around our sites! Here is a link to the code: DataFormWebPart Solution
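    For reference, the module's elements.xml ends up looking roughly like this. This is a sketch: the actual file is generated by the Visual Studio item template, and the folder name simply has to match whatever the web part's XslLink expects:

        <?xml version="1.0" encoding="utf-8"?>
        <Elements xmlns="http://schemas.microsoft.com/sharepoint/">
          <!-- Provisions footer.xsl to <web url>/Assests/footer.xsl, matching
               the XslLink assigned in the web part's OnInit. -->
          <Module Name="Assests" Url="Assests">
            <File Path="Assests\footer.xsl" Url="footer.xsl" />
          </Module>
        </Elements>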

  • Problem loading XMLDocument with non-standard tags

    - by David Conde
    Hi, I have code that needs to load an XML document from a reader, something like this:

        private static XmlDocument GetDocumentStream(string xmlAddress)
        {
            var settings = new XmlReaderSettings();
            settings.DtdProcessing = DtdProcessing.Ignore;
            settings.ValidationFlags = XmlSchemaValidationFlags.None;

            var reader = XmlReader.Create(xmlAddress, settings);
            var document = new XmlDocument();
            document.Load(reader);
            return document;
        }

    But in my XML document I have nodes like this one:

        <link rel="edit-media" title="Package" href="Packages(Id='51Degrees.mobi',Version='0.1.11.9')/$value" />

    It is my understanding that the node should instead look like:

        <link rel="edit-media" title="Package"></link>

    I don't create the XML document and I certainly don't want to change it, but when I try to load it, the document.Load line throws an exception. To be more specific, the XML file is the RSS source for the nuPack project. Any ideas on how to read this document properly would be very much appreciated.
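    Worth noting: a self-closing element is well-formed XML and is exactly equivalent to an empty start/end tag pair, so the <link ... /> syntax by itself will not make XmlDocument.Load throw. A quick sanity check, self-contained, with the same element pasted into a literal:

        using System;
        using System.Xml;

        class SelfClosingCheck
        {
            static void Main()
            {
                // Self-closing elements load without error; "<link ... />" and
                // "<link ...></link>" represent the same content.
                var doc = new XmlDocument();
                doc.LoadXml("<root><link rel=\"edit-media\" title=\"Package\" " +
                            "href=\"Packages(Id='51Degrees.mobi',Version='0.1.11.9')/$value\" /></root>");
                Console.WriteLine(doc.DocumentElement.FirstChild.OuterXml);
            }
        }

    If that holds on your machine too, the exception most likely comes from something else, such as the feed URL, authentication, or other content in the document, rather than from the self-closing tags.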

  • PerformanceCounters on .NET 4.0 & Windows 7

    - by scott
    I have a program that works fine on VS2008 and Vista, but I'm trying it on Windows 7 and VS2010 / .NET Framework 4.0 and it's not working. Ultimately the problem is that System.Diagnostics.PerformanceCounterCategory.GetCategories() (and other PerformanceCounterCategory methods) is not working: I'm getting a System.InvalidOperationException with the message "Cannot load Counter Name data because an invalid index '' was read from the registry." I can reproduce this with the very simple program shown below:

        class Program
        {
            static void Main(string[] args)
            {
                foreach (var pc in System.Diagnostics.PerformanceCounterCategory.GetCategories())
                {
                    Console.WriteLine(pc.CategoryName);
                }
            }
        }

    I did make sure I'm running the program as an admin. It doesn't matter whether I run it with VS/the debugger attached or not. I don't have another machine with Windows 7 or VS2010 to test on, so I'm not sure which of the two is complicating things here (or both?). It is Windows 7 x64, and I've tried forcing the app to run as both x86 and x64 but get the same results.
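    For what it's worth, that exact exception is usually a sign of corrupted performance counter data in the registry on that particular machine, not a .NET 4.0 or VS2010 defect. A commonly suggested repair, worth reading up on before running since it rewrites the counter configuration from the system backup store, is to rebuild the counters from an elevated command prompt:

        lodctr /R

    On 64-bit Windows it can matter whether lodctr is invoked from System32 or SysWOW64, so if one copy reports an error, try the other.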

  • How to log subsonic3 sql

    - by bastos.sergio
    Hi, I'm starting to develop a new ASP.NET application based on SubSonic 3 (for queries) and log4net (for logs), and I would like to know how to interface SubSonic 3 with log4net so that log4net logs the underlying SQL that SubSonic generates. This is what I have so far:

        public static IEnumerable<arma_ocorrencium> ListArmasOcorrencia()
        {
            if (logger.IsInfoEnabled)
            {
                logger.Info("ListarArmasOcorrencia: start");
            }

            var db = new BdvdDB();
            var select = from p in db.arma_ocorrencia
                         select p;
            var results = select.ToList<arma_ocorrencium>(); // the query executes here

            if (logger.IsInfoEnabled)
            {
                // log sql here
            }

            if (logger.IsInfoEnabled)
            {
                logger.Info("ListarArmasOcorrencia: end");
            }

            return results;
        }
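    One possible approach, offered with a caveat: if I remember correctly, SubSonic 3's data provider exposes a TextWriter Log property that echoes the generated SQL, but verify that against your SubSonic version before relying on it. If it is there, a small TextWriter that forwards to log4net bridges the two libraries:

        using System.IO;
        using System.Text;
        using log4net;

        // Minimal sketch: forwards whatever SubSonic writes to log4net.
        // Assumes the provider exposes a 'Log' TextWriter property (verify!).
        public class Log4NetTextWriter : TextWriter
        {
            private static readonly ILog logger = LogManager.GetLogger(typeof(Log4NetTextWriter));

            public override Encoding Encoding
            {
                get { return Encoding.UTF8; }
            }

            public override void WriteLine(string value)
            {
                if (logger.IsInfoEnabled)
                {
                    logger.Info(value);
                }
            }
        }

        // Hypothetical wiring inside the repository method:
        //     var db = new BdvdDB();
        //     db.Provider.Log = new Log4NetTextWriter();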
