Search Results

Search found 6079 results on 244 pages for 'define'.


  • How to deploy global managed beans

    - by frank.nimphius
    "Global managed" beans is the term I use in this post to describe beans that are used across applications. Global managed beans contain helper or utility methods, similar to JSFUtils and ADFUtils. The difference between global managed beans and static helper classes like JSFUtils and ADFUtils is that they are EL accessible, providing reusable functionality that is ready to use on UI components and, if accessed from Java, in other managed beans.

    For example, the ADF Faces page template (af:pageTemplate) allows you to define attributes for the consuming page to pass object references or strings into it. It does not have method attributes that allow command components contained in a template to invoke listeners in managed beans and the ADF binding layer, or to execute actions. To create templates that provide global button or menu functionality, like logon, logout, print, etc., one option for developers is to deploy managed beans with the ADF Faces page templates.

    To deploy a managed bean with a page template, create an ADF library from the project containing the template definition and import this ADF library into the target project using the Resource palette. When importing an ADF library, all its content is added to the project, including page template definitions, managed bean sources, and configurations. More about page templates: http://download.oracle.com/docs/cd/E17904_01/apirefs.1111/e12419/tagdoc/af_pageTemplate.html

    Another use case for globally configured managed beans is creating helper methods to be used in many applications. Instead of creating a base managed bean class that is then extended by all managed beans used in applications, you can deploy a managed bean in a JAR file and add the faces-config.xml file with the managed bean configuration to the JAR's META-INF folder as shown below. Using a globally configured managed bean allows you to use Expression Language in the UI to access common functionality, but also to use Java in application-specific managed beans. Storing the faces-config.xml file in the JAR file's META-INF directory automatically makes it available when the JAR file is found in the class path of an application.
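    The original post shows this configuration in a screenshot that is not reproduced here. As a rough sketch (the bean name and class below are hypothetical), a faces-config.xml packaged in the JAR's META-INF folder could look like this:

        <?xml version="1.0" encoding="UTF-8"?>
        <faces-config version="1.2" xmlns="http://java.sun.com/xml/ns/javaee">
          <!-- Hypothetical global managed bean; name and class are examples only -->
          <managed-bean>
            <managed-bean-name>globalHelper</managed-bean-name>
            <managed-bean-class>adf.sample.view.GlobalHelperBean</managed-bean-class>
            <managed-bean-scope>request</managed-bean-scope>
          </managed-bean>
        </faces-config>

    With this in place, EL such as #{globalHelper.someMethod} resolves in any application that has the JAR on its class path.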

    Read the article

  • Asynchrony in C# 5 (Part I)

    - by javarg
    I’ve been playing around with the new Async CTP preview available for download from Microsoft. It’s amazing how language trends are influencing the evolution of Microsoft’s development platform. More effort is being put in at the language level today than in previous versions of .NET. In this post series I’ll review some major features contained in this release: asynchronous functions, TPL Dataflow, and the Task-based Asynchronous Pattern.

    Part I: Asynchronous Functions

    This is a means of expressing asynchronous operations. This kind of function must return void or Task/Task<> (functions returning void let us implement fire-and-forget asynchronous operations). The two new keywords introduced are async and await.

    async: marks a function as asynchronous, indicating that some part of its execution may take place some time later (after the method call has returned). Thus, all async functions must include some kind of asynchronous operation. This keyword on its own does not make a function asynchronous though; its nature depends on its implementation.

    await: allows us to define operations inside a function that will be awaited for continuation (more on this later).

    Async function sample:

        async void ShowDateTimeAsync()
        {
            while (true)
            {
                var client = new ServiceReference1.Service1Client();
                var dt = await client.GetDateTimeTaskAsync();
                Console.WriteLine("Current DateTime is: {0}", dt);
                await TaskEx.Delay(1000);
            }
        }

    The previous sample is a typical usage scenario for these new features. Suppose we query some external Web Service to get data (in this case the current DateTime) and we do so at regular intervals in order to refresh the user's UI. Note the async and await keywords working together. The ShowDateTimeAsync method indicates its asynchronous nature to the caller using the keyword async (that it may complete after returning control to its caller). The await keyword indicates that the flow control of the method will continue executing asynchronously after client.GetDateTimeTaskAsync returns. The latter is the most important thing to understand about the behavior of this method and how it actually works: the flow control of the method will be reconstructed after any asynchronous operation completes (specified with the keyword await). This reconstruction of flow control is the real magic behind the scenes, and it is done by the C#/VB compilers. Note how we didn't use any of the existing async patterns and we've defined the method very much like a synchronous one.

    Now, compare the following traditional UI async snippet in contrast to the previous async/await version:

        void ComplicatedShowDateTime()
        {
            var client = new ServiceReference1.Service1Client();
            client.GetDateTimeCompleted += (s, e) =>
            {
                Console.WriteLine("Current DateTime is: {0}", e.Result);
                client.GetDateTimeAsync();
            };
            client.GetDateTimeAsync();
        }

    The previous implementation is somewhat similar to the first one shown, but more complicated. Note how the while loop is implemented as a chained callback to the same method (client.GetDateTimeAsync) inside the event handler (please, do not do this in your own application; this is just an example).

    How does it work? Using a state workflow (or jump table, actually), the compiler expands our code and creates the necessary steps to execute it, resuming pending operations after any asynchronous one. The intention of the new async/await pattern is to let us think and code as we normally do when designing an algorithm. It also allows us to preserve the logical flow control of the program (without using any tricky coding patterns to accomplish this). The compiler will then create the necessary workflow to execute operations as they happen in time.
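    To round this out, here is a minimal sketch (assuming the Async CTP; the method names are hypothetical) of an async function that produces a value through Task<int>, and a caller that awaits it:

        // A hedged sketch: an async function returning a value via Task<int>.
        async Task<int> GetAnswerAsync()
        {
            await TaskEx.Delay(500); // TaskEx is the CTP helper; later releases use Task.Delay
            return 42;               // the compiler wraps this result in a Task<int>
        }

        // Awaiting unwraps the Task<int> back into an int.
        async void PrintAnswerAsync()
        {
            int answer = await GetAnswerAsync();
            Console.WriteLine("The answer is: {0}", answer);
        }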

    Read the article

  • Advanced Record-Level Business Intelligence with Inner Queries

    - by gt0084e1
    While business intelligence is generally applied at an aggregate level to large data sets, it's often useful to provide a more streamlined insight into individual records, or to be able to sort and rank them. For instance, a salesperson looking at a specific customer could benefit from basic stats on that account. A marketer trying to define an ideal customer could pull the top entries and look for insights or patterns. Inner queries let you do sophisticated analysis without the overhead of traditional BI or OLAP technologies like Analysis Services.

    Example - Order History Constancy

    Let's assume that management has realized that the best thing for our business is to have customers ordering every month. We'll need to identify and rank customers based on how consistently they buy and when their last purchase was, so sales & marketing can respond accordingly. Our current application may not be able to provide this, and adding an OLAP server like SSAS may be overkill for our needs. Luckily, SQL Server provides the ability to do relatively sophisticated analytics via inner queries. Here's the kind of output we'd like to see.

    Creating the Queries

    Before you create a view, you need to create the SQL query that does the calculations. Here we are calculating the total number of orders as well as the number of months since the last order. These fields might be very useful to sort by but may not be available in the app. This approach provides a very streamlined and high-performance method of delivering actionable information without radically changing the application. It also works very well with self-service reporting tools like Izenda.

        SELECT CustomerID, CompanyName,
            (SELECT COUNT(OrderID) FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) AS Orders,
            DATEDIFF(mm,
                (SELECT MAX(OrderDate) FROM Orders
                 WHERE Orders.CustomerID = Customers.CustomerID),
                GETDATE()) AS MonthsSinceLastOrder
        FROM Customers

    Creating Views

    To turn this or any query into a view, just put CREATE VIEW [ViewName] AS before it. If you want to change it, use ALTER VIEW [ViewName] AS.

    Creating Computed Columns

    If you'd prefer not to create a view, inner queries can also be applied by using computed columns. Place your SQL in the (Formula) field of the Computed Column Specification, or check out this article here.

    Advanced Scoring and Ranking

    One of the best uses for this approach is to score leads based on multiple fields. For instance, you may be in a business where customers that don't order every month require more persistent follow-up. You could devise a simple formula that shows the continuity of an account. If they ordered every month since their first order, they would be at 100, indicating that they have been ordering 100% of the time. Here's the query that would calculate that. It uses a few SQL tricks to make this happen: we extract the count of unique months ordered and then divide by the months since the initial order. This query will give you the following information, which can be used to help sales and marketing know where to focus. You could sort by this percentage to know where to start calling or to find patterns describing your best customers.

    Number of orders
    First Order Date
    Last Order Date
    Percentage of months an order was placed since the first order
        SELECT CustomerID,
            (SELECT COUNT(OrderID) FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) AS Orders,
            (SELECT MAX(OrderDate) FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) AS LastOrder,
            (SELECT MIN(OrderDate) FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) AS FirstOrder,
            DATEDIFF(mm,
                (SELECT MIN(OrderDate) FROM Orders
                 WHERE Orders.CustomerID = Customers.CustomerID),
                GETDATE()) AS MonthsSinceFirstOrder,
            100 * (SELECT COUNT(DISTINCT 100 * DATEPART(yy, OrderDate) + DATEPART(mm, OrderDate))
                   FROM Orders
                   WHERE Orders.CustomerID = Customers.CustomerID)
                / DATEDIFF(mm,
                    (SELECT MIN(OrderDate) FROM Orders
                     WHERE Orders.CustomerID = Customers.CustomerID),
                    GETDATE()) AS OrderPercent
        FROM Customers
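    To illustrate the "Creating Views" step above, here is a hedged sketch wrapping the first query in a view (the view name CustomerOrderStats is hypothetical):

        CREATE VIEW CustomerOrderStats AS
        SELECT CustomerID, CompanyName,
            (SELECT COUNT(OrderID) FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) AS Orders,
            DATEDIFF(mm,
                (SELECT MAX(OrderDate) FROM Orders
                 WHERE Orders.CustomerID = Customers.CustomerID),
                GETDATE()) AS MonthsSinceLastOrder
        FROM Customers

    Once created, the view can be queried and sorted like any table, e.g. SELECT * FROM CustomerOrderStats ORDER BY MonthsSinceLastOrder.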

    Read the article

  • Is it bad practice to make an iterator that is aware of its own end

    - by aaronman
    For some background on why I am asking this question, here is an example. In Python, the chain function chains an arbitrary number of iterables together, making them behave as one without making copies. Here is a link in case you don't understand it. I decided I would implement chain in C++ using variadic templates. As far as I can tell, the only way to make an iterator for chain that will successfully go to the next container is for each iterator to know about the end of its container (I thought of a sort of hack where, when != is called against the end, it will know to go to the next container, but the first way seemed easier, safer, and more versatile). My question is whether there is anything inherently wrong with an iterator knowing about its own end. My code is in C++, but this can be language agnostic since many languages have iterators.

        #ifndef CHAIN_HPP
        #define CHAIN_HPP
        #include "iterator_range.hpp"
        namespace iter {
            template <typename ... Containers>
            struct chain_iter;

            template <typename Container>
            struct chain_iter<Container> {
            private:
                using Iterator = decltype(((Container*)nullptr)->begin());
                Iterator begin;
                const Iterator end; // never really used but kept for consistency
            public:
                chain_iter(Container & container, bool is_end=false) :
                    begin(container.begin()), end(container.end())
                {
                    if (is_end) begin = container.end();
                }
                chain_iter & operator++() {
                    ++begin;
                    return *this;
                }
                auto operator*() -> decltype(*begin) {
                    return *begin;
                }
                bool operator!=(const chain_iter & rhs) const {
                    return this->begin != rhs.begin;
                }
            };

            template <typename Container, typename ... Containers>
            struct chain_iter<Container, Containers...> {
            private:
                using Iterator = decltype(((Container*)nullptr)->begin());
                Iterator begin;
                const Iterator end;
                bool end_reached = false;
                chain_iter<Containers...> next_iter;
            public:
                chain_iter(Container & container, Containers& ... rest, bool is_end=false) :
                    begin(container.begin()), end(container.end()), next_iter(rest..., is_end)
                {
                    if (is_end) begin = container.end();
                }
                chain_iter & operator++() {
                    if (begin == end) ++next_iter;
                    else ++begin;
                    return *this;
                }
                auto operator*() -> decltype(*begin) {
                    if (begin == end) return *next_iter;
                    else return *begin;
                }
                bool operator!=(const chain_iter & rhs) const {
                    if (begin == end) return this->next_iter != rhs.next_iter;
                    else return this->begin != rhs.begin;
                }
            };

            template <typename ... Containers>
            iterator_range<chain_iter<Containers...>> chain(Containers& ... containers) {
                auto begin = chain_iter<Containers...>(containers...);
                auto end = chain_iter<Containers...>(containers..., true);
                return iterator_range<chain_iter<Containers...>>(begin, end);
            }
        }
        #endif // CHAIN_HPP
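    For context, here is a hypothetical usage sketch of the chain() helper above; it assumes iterator_range exposes begin()/end() so it works with a range-based for loop, and that the header is saved as "chain.hpp":

        #include <iostream>
        #include <list>
        #include <vector>
        #include "chain.hpp"

        int main() {
            std::vector<int> a{1, 2};
            std::list<int> b{3, 4};
            // Iterates a, then seamlessly continues into b: prints 1 2 3 4
            for (auto x : iter::chain(a, b)) {
                std::cout << x << ' ';
            }
        }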

    Read the article

  • Construction Paper, Legos, and Architectural Modeling

    I can remember as a kid playing with construction paper and Legos to explore my imagination. Through my exploration I was able to build airplanes, footballs, guns, and more out of paper. Additionally, I could create entire cities, robots, or anything else I could imagine out of Legos. These toys, I now realize, were in fact tools that gave me an opportunity to explore my ideas in the physical world through the use of modeling. My imagination was allowed to run wild as I, unknowingly at the time, made design decisions that directly affected the models I was building from the raw materials.

    To prove my point further, I can remember building a paper airplane that seemed to go nowhere when I tried to throw it. So I decided to attach a paper clip to the plane before I threw it the next time, to test my concept that by adding more weight the plane would fly better and for longer distances. The paper airplane allowed me to model my design decision through the creation of an artifact: a paper airplane carrying extra weight through the incorporation of the paper clip into the design. Also, I remember using Legos to build all sorts of creations, and these creations became artifacts of my imagination. As I further and further defined my Lego creations through the process of playing, I was able to create elaborate artifacts of my imagination. These artifacts represented design decisions I had made in the evolution of my creation through my child-like design process.

    In some form or fashion, the artifacts I created as a kid are very similar to the artifacts that I create when I model a software architectural concept or a software design, in that the process of making decisions is directly translated into a tangible model in the form of an architectural model. Architectural models have been defined as artifacts that depict design decisions of a system's architecture. The act of creating architectural models is the act of architectural modeling. Furthermore, architectural modeling is the process of creating a physical model based on architectural concepts and documenting these design decisions.

    In the process of creating models, the standard notation used is architectural modeling notation. This notation is the primary method of capturing the essence of design decisions regarding architecture. Modeling notations can vary based on the need and intent of a project; typically they range from natural language to a diagram-based notation. Currently, the Unified Modeling Language (UML) is the industry standard in terms of architectural modeling notation, because it allows architectures to be defined through a series of boxes, lines, arrows, and other basic symbols that encapsulate design decisions into virtual components, connectors, configurations, and interfaces. Furthermore, UML allows for an additional breakdown of models through the use of natural language to explain each section of the model in plain English.

    One of the major factors in architectural modeling is to define what is to be modeled. As a basic rule of thumb, I tend to model architecture based on the complexity of systems or sub-systems of the architecture. Another key factor is the level of detail that is actually needed for a model. For example, if I am modeling a system for a CEO to view, then the low-level details will be omitted.
In comparison, if I was modeling a system for another engineer to actually implement I would include as much detailed information as I could to help the engineer implement my design.

    Read the article

  • Managing common code on Windows 7 (.NET) and Windows 8 (WinRT)

    - by ryanabr
    Recent announcements regarding Windows Phone 8, and the fact that it will have WinRT behind it, might make some of this less painful, but I discovered that the "XmlDocument" object is in a new location in WinRT and is almost the same as its brother in .NET:

    System.Xml.XmlDocument (.NET)
    Windows.Data.Xml.Dom.XmlDocument (WinRT)

    The problem I am trying to solve is how to work with both types in code that performs the same task on both the Windows Phone 7 and Windows 8 platforms. The first thing I did was define my own XmlNode and XmlNodeList classes that wrap the actual Microsoft objects, so that by using the "#if" compiler directive the calling code can easily work with either the WinRT version of the type or the .NET version.

        public class XmlNode
        {
        #if WIN8
            public Windows.Data.Xml.Dom.IXmlNode Node { get; set; }
            public XmlNode(Windows.Data.Xml.Dom.IXmlNode xmlNode)
            {
                Node = xmlNode;
            }
        #endif
        #if !WIN8
            public System.Xml.XmlNode Node { get; set; }
            public XmlNode(System.Xml.XmlNode xmlNode)
            {
                Node = xmlNode;
            }
        #endif
        }

        public class XmlNodeList
        {
        #if WIN8
            public Windows.Data.Xml.Dom.XmlNodeList List { get; set; }
            public int Count { get { return (int)List.Count; } }
            public XmlNodeList(Windows.Data.Xml.Dom.XmlNodeList list)
            {
                List = list;
            }
        #endif
        #if !WIN8
            public System.Xml.XmlNodeList List { get; set; }
            public int Count { get { return List.Count; } }
            public XmlNodeList(System.Xml.XmlNodeList list)
            {
                List = list;
            }
        #endif
        }

    From there I can use my XmlNode and XmlNodeList in the calling code without having to clutter the code with all of the additional #if switches. The challenge after this was that the code that worked directly with the XmlDocument object needed to be separate on both platforms, since the method for populating the XmlDocument object is completely different on each. To solve this issue, I made partial classes: one partial class for .NET and one for WinRT. Both projects have links to the partial class that contains the code that is the same for the majority of the class, and each platform project contains the partial class with the code that is unique to its version of the XmlDocument.

    The files with the little arrow in the lower left corner denote 'linked files'; they are shared in multiple projects but only exist in one location in source control. You can see that the _Win7 partial class is included directly in the project since it includes code that is only for the .NET platform, whereas its cousin the _Win8 class (not pictured above) has all of the code specific to the Win8 platform. In the _Win7 partial class is this code:

        public partial class WUndergroundViewModel
        {
            public static WUndergroundData GetWeatherData(double lat, double lng)
            {
                WUndergroundData data = new WUndergroundData();
                System.Net.WebClient c = new System.Net.WebClient();
                string req = "http://api.wunderground.com/api/xxx/yesterday/conditions/forecast/q/[LAT],[LNG].xml";
                req = req.Replace("[LAT]", lat.ToString());
                req = req.Replace("[LNG]", lng.ToString());
                XmlDocument doc = new XmlDocument();
                doc.Load(c.OpenRead(req));
                foreach (XmlNode item in doc.SelectNodes("/response/features/feature"))
                {
                    switch (item.Node.InnerText)
                    {
                        case "yesterday":
                            ParseForecast(
                                new FishingControls.XmlNodeList(doc.SelectNodes("/response/forecast/txt_forecast/forecastdays/forecastday")),
                                new FishingControls.XmlNodeList(doc.SelectNodes("/response/forecast/simpleforecast/forecastdays/forecastday")),
                                data);
                            break;
                        case "conditions":
                            ParseCurrent(new FishingControls.XmlNode(doc.SelectSingleNode("/response/current_observation")), data);
                            break;
                        case "forecast":
                            ParseYesterday(new FishingControls.XmlNodeList(doc.SelectNodes("/response/history/observations/observation")), data);
                            break;
                    }
                }
                return data;
            }
        }

    In the _Win8 partial class is this code:

        public partial class WUndergroundViewModel
        {
            public async static Task<WUndergroundData> GetWeatherData(double lat, double lng)
            {
                WUndergroundData data = new WUndergroundData();
                HttpClient c = new HttpClient();
                string req = "http://api.wunderground.com/api/xxxx/yesterday/conditions/forecast/q/[LAT],[LNG].xml";
                req = req.Replace("[LAT]", lat.ToString());
                req = req.Replace("[LNG]", lng.ToString());
                HttpResponseMessage msg = await c.GetAsync(req);
                string stream = await msg.Content.ReadAsStringAsync();
                XmlDocument doc = new XmlDocument();
                doc.LoadXml(stream, null);
                foreach (IXmlNode item in doc.SelectNodes("/response/features/feature"))
                {
                    switch (item.InnerText)
                    {
                        case "yesterday":
                            ParseForecast(
                                new FishingControls.XmlNodeList(doc.SelectNodes("/response/forecast/txt_forecast/forecastdays/forecastday")),
                                new FishingControls.XmlNodeList(doc.SelectNodes("/response/forecast/simpleforecast/forecastdays/forecastday")),
                                data);
                            break;
                        case "conditions":
                            ParseCurrent(new FishingControls.XmlNode(doc.SelectSingleNode("/response/current_observation")), data);
                            break;
                        case "forecast":
                            ParseYesterday(new FishingControls.XmlNodeList(doc.SelectNodes("/response/history/observations/observation")), data);
                            break;
                    }
                }
                return data;
            }
        }

    Summary: this method allows me to have common 'business' code for both platforms that is pretty clean, and I manage the technology differences separately. Thank you tostringtheory for your suggestion, I was considering that approach.

    Read the article

  • BizTalk 2009 - The Community ODBC Adapter: Schema Generation

    - by Stuart Brierley
    Having previously detailed the installation of the Community ODBC Adapter for BizTalk 2009, the next thing I will be looking at is the generation of schemas using this ODBC adapter.

    Within your BizTalk 2009 project, right click the project and select Add Generated Items. In the resultant window choose Add Adapter Metadata and click Add to open the Add Adapter Wizard. Check that the BizTalk Server and Database names are correct, select the ODBC adapter and click Next.

    You must now set the connection string. To start with, choose Set, then New DSN (data source name). You now need to define the Data Source you will be connecting to. On the User DSN tab select Add, then the driver you want to use. In this case I am going to use the MySQL ODBC Driver. A User DSN will only be visible on the current machine with you as a user.

    * Although I initially set up a User DSN and this was fine for creating schemas with, I later realised that you actually need a System DSN, as the BizTalk host service needs this to be able to connect to the database on a receive or send port.

    You will then be asked to set up the MySQL ODBC Data Source. In my case this is a local database making use of named pipes, so I had to make sure that I ticked the "Force use of named pipes" check box and removed the "# The Pipe the MySQL Server will use socket=mysql" line from the mysql.ini; with this line in place the connection would fail, as there is no apparent way to specify the pipe name in the ODBC driver configuration.

    This will then update the User DSN tab with the new Data Source. Make sure that you select it and press OK. Select it again in the Choose Data Source window and press OK. On the ODBC transport window select Next.

    You will now be presented with the Schema Information window, where you must supply the namespace, type, and root element names for your schema. Next choose the type of statement that you will be using to create your schema - in this case I am using a stored procedure.

    * I later discovered that this option is fine for MySQL stored procedures without input parameters, but failed for MySQL stored procedures with input parameters. (I will be posting on the way to handle input parameters soon.)

    Next you will need to specify the name of the stored procedure. In this case I have a simple stored procedure to return all the data held by my TestTable in MySQL:

    Select * from TestTable;

    The table itself has three columns: Name, Sex and Married. Selecting Finish should now hopefully create your schemas based on the input and output from your stored procedure. In my case I have:

    An empty schema for the request; after all, I have no parameters for the stored procedure.
    A response schema comprised of a Table Record with Name, Sex and Married children.

    Next I will be looking at the use of the ODBC adapter with:

    Receive ports
    Send ports
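    For reference, a minimal sketch of such a stored procedure in MySQL could look like this (the procedure name GetTestTable is hypothetical; only its body appears above):

        DELIMITER //
        CREATE PROCEDURE GetTestTable()
        BEGIN
            SELECT * FROM TestTable;
        END //
        DELIMITER ;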

    Read the article

  • Working with Analytic Workspace Manager (AWM) - Part 8 Cube Metadata Analysis

    - by Mohan Ramanuja
    CUBE SIZE

    select dbal.owner||'.'||substr(dbal.table_name,4) awname,
        sum(dbas.bytes)/1024/1024 as mb, dbas.tablespace_name
    from dba_lobs dbal, dba_segments dbas
    where dbal.column_name = 'AWLOB' and dbal.segment_name = dbas.segment_name
    group by dbal.owner, dbal.table_name, dbas.tablespace_name
    order by dbal.owner, dbal.table_name

    SESSION RESOURCES

    select vses.username||':'||vsst.sid username, vstt.name, max(vsst.value) value
    from v$sesstat vsst, v$statname vstt, v$session vses
    where vstt.statistic# = vsst.statistic# and vsst.sid = vses.sid
    and VSES.USERNAME LIKE ('ATTRIBDW_OWN')
    and vstt.name in ('session pga memory', 'session pga memory max', 'session uga memory',
        'session uga memory max', 'session cursor cache count', 'session cursor cache hits',
        'session stored procedure space', 'opened cursors current', 'opened cursors cumulative')
    and vses.username is not null
    group by vsst.sid, vses.username, vstt.name
    order by vsst.sid, vses.username, vstt.name

    OLAP PGA USE

    select 'OLAP Pages Occupying: '||
        round((((select sum(nvl(pool_size,1)) from v$aw_calc)) /
        (select value from v$pgastat where name = 'total PGA inuse')),2)*100||'%' info
    from dual
    union
    select 'Total PGA Inuse Size: '||value/1024||' KB' info
    from v$pgastat where name = 'total PGA inuse'
    union
    select 'Total OLAP Page Size: '||round(sum(nvl(pool_size,1))/1024,0)||' KB' info
    from v$aw_calc
    order by info desc

    OLAP PGA USAGE PER USER

    select vs.username, vs.sid,
        round(pga_used_mem/1024/1024,2)||' MB' pga_used,
        round(pga_max_mem/1024/1024,2)||' MB' pga_max,
        round(pool_size/1024/1024,2)||' MB' olap_pp,
        round(100*(pool_hits-pool_misses)/pool_hits,2)||'%' olap_ratio
    from v$process vp, v$session vs, v$aw_calc va
    where session_id = vs.sid and addr = paddr

    CUBE LOADING SCRIPT

    REM The 'set define off' statement is needed only if running this script through SQLPlus.
    REM If you are using another tool to run this script, the line below may be commented out.
    set define off
    BEGIN
      DBMS_CUBE.BUILD(
        'VALIDATE ATTRIBDW_OWN.CURRENCY USING (LOAD NO SYNCH, COMPILE SORT),
         ATTRIBDW_OWN.ACCOUNT USING (LOAD NO SYNCH, COMPILE SORT),
         ATTRIBDW_OWN.DATEDIM USING (LOAD NO SYNCH, COMPILE SORT),
         ATTRIBDW_OWN.CUSIP USING (LOAD NO SYNCH, COMPILE SORT),
         ATTRIBDW_OWN.ACCOUNTRETURN',
        'CCCCC', -- refresh method
        false,   -- refresh after errors
        0,       -- parallelism
        true,    -- atomic refresh
        true,    -- automatic order
        false);  -- add dimensions
    END;
    /
    BEGIN
      DBMS_CUBE.BUILD(
        'ATTRIBDW_OWN.CURRENCY USING (LOAD NO SYNCH, COMPILE SORT),
         ATTRIBDW_OWN.ACCOUNT USING (LOAD NO SYNCH, COMPILE SORT),
         ATTRIBDW_OWN.DATEDIM USING (LOAD NO SYNCH, COMPILE SORT),
         ATTRIBDW_OWN.CUSIP USING (LOAD NO SYNCH, COMPILE SORT),
         ATTRIBDW_OWN.ACCOUNTRETURN',
        'CCCCC', -- refresh method
        false,   -- refresh after errors
        0,       -- parallelism
        true,    -- atomic refresh
        true,    -- automatic order
        false);  -- add dimensions
    END;
    /

    VISUALIZATION OBJECT - AW$ATTRIBDW_OWN

    CREATE TABLE "ATTRIBDW_OWN"."AW$ATTRIBDW_OWN"
        (
            "PS#"    NUMBER(10,0),
            "GEN#"   NUMBER(10,0),
            "EXTNUM" NUMBER(8,0),
            "AWLOB"  BLOB,
            "OBJNAME"  VARCHAR2(256 BYTE),
            "PARTNAME" VARCHAR2(256 BYTE)
        )
        PCTFREE 10 PCTUSED 40 INITRANS 4 MAXTRANS 255 STORAGE
        (
            BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT
        )
        TABLESPACE "ATTRIBDW_DATA" LOB
        (
            "AWLOB"
        )
        STORE AS SECUREFILE
        (
            TABLESPACE "ATTRIBDW_DATA"
DISABLE STORAGE IN ROW CHUNK 8192 RETENTION MIN 1 CACHE NOCOMPRESS KEEP_DUPLICATES STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)        )        PARTITION BY RANGE        (            "GEN#"        )        SUBPARTITION BY HASH        (            "PS#",            "EXTNUM"        )        SUBPARTITIONS 8        (            PARTITION "PTN1" VALUES LESS THAN (1) PCTFREE 10 PCTUSED 40 INITRANS 4 MAXTRANS 255 STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ATTRIBDW_DATA" LOB ("AWLOB") STORE AS SECUREFILE ( TABLESPACE "ATTRIBDW_DATA" DISABLE STORAGE IN ROW CHUNK 8192 RETENTION MIN 1 CACHE READS LOGGING NOCOMPRESS KEEP_DUPLICATES STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)) ( SUBPARTITION "SYS_SUBP661" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP662" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP663" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP664" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP665" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION            "SYS_SUBP666" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP667" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP668" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" ) ,            PARTITION "PTNN" VALUES LESS THAN (MAXVALUE) PCTFREE 10 PCTUSED 40 INITRANS 4 MAXTRANS 255 STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ATTRIBDW_DATA" LOB ("AWLOB") STORE AS SECUREFILE ( TABLESPACE "ATTRIBDW_DATA" DISABLE STORAGE IN ROW CHUNK 8192 RETENTION MIN 1 CACHE NOCOMPRESS KEEP_DUPLICATES STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)) ( SUBPARTITION "SYS_SUBP669" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP670" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP671" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP672" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP673" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION            "SYS_SUBP674" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP675" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP676" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" )        ) ;CREATE UNIQUE INDEX "ATTRIBDW_OWN"."ATTRIBDW_OWN_I$" ON "ATTRIBDW_OWN"."AW$ATTRIBDW_OWN"    (        "PS#", "GEN#", "EXTNUM"    )    PCTFREE 10 INITRANS 4 MAXTRANS 255 COMPUTE STATISTICS STORAGE    (        INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT    )    TABLESPACE "ATTRIBDW_DATA" ;CREATE UNIQUE INDEX "ATTRIBDW_OWN"."SYS_IL0000406980C00004$$" ON "ATTRIBDW_OWN"."AW$ATTRIBDW_OWN"    (        PCTFREE 10 INITRANS 1 
MAXTRANS 255 STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ATTRIBDW_DATA" LOCAL (PARTITION "SYS_IL_P711" PCTFREE 10 INITRANS 1 MAXTRANS 255 STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) ( SUBPARTITION "SYS_IL_SUBP695" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP696" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP697" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP698" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP699" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP700" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP701" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP702" TABLESPACE "ATTRIBDW_DATA" ) , PARTITION "SYS_IL_P712" PCTFREE 10 INITRANS 1 MAXTRANS 255 STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) ( SUBPARTITION "SYS_IL_SUBP703" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP704" TABLESPACE        "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP705" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP706" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP707" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP708" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP709" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP710" TABLESPACE "ATTRIBDW_DATA" ) ) PARALLEL (DEGREE 0 INSTANCES 0) ; CUBE BUILD LOG  CREATE TABLE "ATTRIBDW_OWN"."CUBE_BUILD_LOG"        (            "BUILD_ID"          NUMBER,            "SLAVE_NUMBER"      NUMBER,            "STATUS"            VARCHAR2(10 BYTE),            "COMMAND"           VARCHAR2(25 BYTE),            "BUILD_OBJECT"      VARCHAR2(30 BYTE),            "BUILD_OBJECT_TYPE" VARCHAR2(10 BYTE),            "OUTPUT" CLOB,            "AW"            VARCHAR2(30 BYTE),            "OWNER"         VARCHAR2(30 BYTE),            "PARTITION"     VARCHAR2(50 BYTE),            "SCHEDULER_JOB" VARCHAR2(100 BYTE),            "TIME" TIMESTAMP (6)WITH TIME ZONE,        "BUILD_SCRIPT" CLOB,        "BUILD_TYPE"            VARCHAR2(22 BYTE),        "COMMAND_DEPTH"         NUMBER(2,0),        "BUILD_SUB_OBJECT"      VARCHAR2(30 BYTE),        "REFRESH_METHOD"        VARCHAR2(1 BYTE),        "SEQ_NUMBER"            NUMBER,        "COMMAND_NUMBER"        NUMBER,        "IN_BRANCH"             NUMBER(1,0),        "COMMAND_STATUS_NUMBER" NUMBER,        "BUILD_NAME"            VARCHAR2(100 BYTE)        )        SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE        (            INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT        )        TABLESPACE "ATTRIBDW_DATA" LOB        (            "OUTPUT"        )        STORE AS BASICFILE        (            TABLESPACE "ATTRIBDW_DATA" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION NOCACHE LOGGING STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)        )        LOB        (            "BUILD_SCRIPT"        )        STORE AS BASICFILE        (            TABLESPACE "ATTRIBDW_DATA" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION NOCACHE LOGGING STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)        ) ;CREATE UNIQUE INDEX 
"ATTRIBDW_OWN"."SYS_IL0000407294C00013$$" ON "ATTRIBDW_OWN"."CUBE_BUILD_LOG"    (        PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ATTRIBDW_DATA" PARALLEL (DEGREE 0 INSTANCES 0) ;CREATE UNIQUE INDEX "ATTRIBDW_OWN"."SYS_IL0000407294C00007$$" ON "ATTRIBDW_OWN"."CUBE_BUILD_LOG" ( PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ATTRIBDW_DATA" PARALLEL (DEGREE 0 INSTANCES 0) ; CUBE DIMENSION COMPILE  CREATE TABLE "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"        (            "ID"               NUMBER,            "SEQ_NUMBER"       NUMBER,            "ERROR#"           NUMBER(8,0) NOT NULL ENABLE,            "ERROR_MESSAGE"    VARCHAR2(2000 BYTE),            "DIMENSION"        VARCHAR2(100 BYTE),            "DIMENSION_MEMBER" VARCHAR2(100 BYTE),            "MEMBER_ANCESTOR"  VARCHAR2(100 BYTE),            "HIERARCHY1"       VARCHAR2(100 BYTE),            "HIERARCHY2"       VARCHAR2(100 BYTE),            "ERROR_CONTEXT" CLOB        )        SEGMENT CREATION DEFERRED PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING TABLESPACE "ATTRIBDW_DATA" LOB        (            "ERROR_CONTEXT"        )        STORE AS BASICFILE        (            TABLESPACE "ATTRIBDW_DATA" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION NOCACHE LOGGING        ) ;COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."ID"IS    'Current operation ID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."SEQ_NUMBER"IS    'Cube build log sequence number';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."ERROR#"IS    'Error number being reported';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."ERROR_MESSAGE"IS    'Error text being reported';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."DIMENSION"IS    'Name of dimension being compiled';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."DIMENSION_MEMBER"IS    'Problem dimension member';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."MEMBER_ANCESTOR"IS    'Problem dimension member''s parent';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."HIERARCHY1"IS    'First hierarchy involved in error';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."HIERARCHY2"IS    'Second hierarchy involved in error';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."ERROR_CONTEXT"IS    'Extra information for error';    COMMENT ON TABLE "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"IS    'Cube dimension compile log';CREATE UNIQUE INDEX "ATTRIBDW_OWN"."SYS_IL0000407307C00010$$" ON "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"    (        PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE( INITIAL 1048576 NEXT 1048576 MAXEXTENTS 2147483645) TABLESPACE "ATTRIBDW_DATA" PARALLEL (DEGREE 0 INSTANCES 0) ; CUBE OPERATING LOG  CREATE TABLE "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"        (            "INST_ID"    NUMBER NOT NULL ENABLE,            "SID"        NUMBER NOT NULL ENABLE,            "SERIAL#"    NUMBER NOT NULL ENABLE,            "USER#"      NUMBER NOT NULL ENABLE,            "SQL_ID"     VARCHAR2(13 BYTE),            "JOB"        NUMBER,            "ID"         NUMBER,            "PARENT_ID"  NUMBER,            "SEQ_NUMBER" NUMBER,       
     "TIME" TIMESTAMP (6)WITH TIME ZONE NOT NULL ENABLE,        "LOG_LEVEL"    NUMBER(4,0) NOT NULL ENABLE,        "DEPTH"        NUMBER(4,0),        "OPERATION"    VARCHAR2(15 BYTE) NOT NULL ENABLE,        "SUBOPERATION" VARCHAR2(20 BYTE),        "STATUS"       VARCHAR2(10 BYTE) NOT NULL ENABLE,        "NAME"         VARCHAR2(20 BYTE) NOT NULL ENABLE,        "VALUE"        VARCHAR2(4000 BYTE),        "DETAILS" CLOB        )        SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE        (            INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT        )        TABLESPACE "ATTRIBDW_DATA" LOB        (            "DETAILS"        )        STORE AS BASICFILE        (            TABLESPACE "ATTRIBDW_DATA" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION NOCACHE LOGGING STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)        ) ;COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."INST_ID"IS    'Instance ID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."SID"IS    'Session ID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."SERIAL#"IS    'Session serial#';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."USER#"IS    'User ID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."SQL_ID"IS    'Executing SQL statement ID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."JOB"IS    'Identifier of job';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."ID"IS    'Current operation ID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."PARENT_ID"IS    'Parent operation ID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."SEQ_NUMBER"IS    'Cube build log sequence number';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."TIME"IS    'Time of record';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."LOG_LEVEL"IS    'Verbosity level of record';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."DEPTH"IS    'Nesting depth of record';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."OPERATION"IS    'Current operation';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."SUBOPERATION"IS    'Current suboperation';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."STATUS"IS    'Status of current operation';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."NAME"IS    'Name of record';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."VALUE"IS    'Value of record';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."DETAILS"IS    'Extra information for record';    COMMENT ON TABLE "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"IS    'Cube operations log';CREATE UNIQUE INDEX "ATTRIBDW_OWN"."SYS_IL0000407301C00018$$" ON "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"    (        PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ATTRIBDW_DATA" PARALLEL (DEGREE 0 INSTANCES 0) ; CUBE REJECTED RECORDS CREATE TABLE "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"        (            "ID"            NUMBER,            "SEQ_NUMBER"    NUMBER,            "ERROR#"        NUMBER(8,0) NOT NULL ENABLE,            
"ERROR_MESSAGE" VARCHAR2(2000 BYTE),            "RECORD#"       NUMBER(38,0),            "SOURCE_ROW" ROWID,            "REJECTED_RECORD" CLOB        )        SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE        (            INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT        )        TABLESPACE "ATTRIBDW_DATA" LOB        (            "REJECTED_RECORD"        )        STORE AS BASICFILE        (            TABLESPACE "ATTRIBDW_DATA" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION NOCACHE LOGGING STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)        ) ;COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."ID"IS    'Current operation ID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."SEQ_NUMBER"IS    'Cube build log sequence number';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."ERROR#"IS    'Error number being reported';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."ERROR_MESSAGE"IS    'Error text being reported';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."RECORD#"IS    'Rejected record number';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."SOURCE_ROW"IS    'Rejected record''s ROWID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."REJECTED_RECORD"IS    'Rejected record copy';    COMMENT ON TABLE "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"IS    'Cube rejected records log';CREATE UNIQUE INDEX "ATTRIBDW_OWN"."SYS_IL0000407304C00007$$" ON "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"    (        PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ATTRIBDW_DATA" PARALLEL (DEGREE 0 INSTANCES 0) ;

    Read the article

  • Getting Help with 'SEPA' Questions

    - by MargaretW
    What is 'SEPA'?

    The Single Euro Payments Area (SEPA) is a self-regulatory initiative for the European banking industry championed by the European Commission (EC) and the European Central Bank (ECB). The aim of the SEPA initiative is to improve the efficiency of cross-border payments and the economies of scale by developing common standards, procedures, and infrastructure. The SEPA territory currently consists of 33 European countries -- the 28 EU states, together with Iceland, Liechtenstein, Monaco, Norway and Switzerland. Part of that infrastructure includes two new SEPA instruments that were introduced in 2008:

    SEPA Credit Transfer (a Payables transaction in Oracle EBS)
    SEPA Core Direct Debit (a Receivables transaction in Oracle EBS)

    A SEPA Credit Transfer (SCT) is an outgoing payment instrument for the execution of credit transfers in Euro between customer payment accounts located in SEPA. SEPA Credit Transfers are executed on behalf of an Originator holding a payment account with an Originator Bank in favor of a Beneficiary holding a payment account at a Beneficiary Bank. In R12 of Oracle applications, the current SEPA credit transfer implementation is based on Version 5 of the "SEPA Credit Transfer Scheme Customer-To-Bank Implementation Guidelines" and the "SEPA Credit Transfer Scheme Rulebook" issued by the European Payments Council (EPC). These guidelines define the rules to be applied to the UNIFI (ISO20022) XML message standards for the implementation of SEPA Credit Transfers in the customer-to-bank space. This format is compliant with SEPA Credit Transfer version 6.

    A SEPA Core Direct Debit (SDD) is an incoming payment instrument used for making domestic and cross-border payments within the 33 countries of SEPA, wherein the debtor (payer) authorizes the creditor (payee) to collect the payment from his bank account. The payment can be a fixed amount like a mortgage payment, or variable amounts such as those of invoices. The "SEPA Core Direct Debit" scheme replaces various country-specific direct debit schemes currently prevailing within the SEPA zone. SDD is based on the ISO20022 XML messaging standards, version 5.0 of the "SEPA Core Direct Debit Scheme Rulebook", and the "SEPA Direct Debit Core Scheme Customer-to-Bank Implementation Guidelines". This format is also compliant with SEPA Core Direct Debit version 6.

    EU Regulation #260/2012 established the technical and business requirements for both instruments in euro. The regulation is referred to as the "SEPA end-date regulation", and also defines the deadlines for the migration to the new SEPA instruments:

    Euro Member States: February 1, 2014
    Non-Euro Member States: October 31, 2016
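    For a sense of what these ISO20022 messages look like, here is a heavily abbreviated, hypothetical skeleton of a SEPA Credit Transfer initiation message (pain.001); all values are illustrative and this is not taken from the Oracle documentation:

        <?xml version="1.0" encoding="UTF-8"?>
        <Document xmlns="urn:iso:std:iso:20022:tech:xsd:pain.001.001.03">
          <CstmrCdtTrfInitn>
            <GrpHdr>
              <MsgId>MSG-0001</MsgId>
              <CreDtTm>2014-01-15T09:30:00</CreDtTm>
              <NbOfTxs>1</NbOfTxs>
            </GrpHdr>
            <!-- PmtInf blocks carrying debtor/creditor IBANs, BICs and amounts follow -->
          </CstmrCdtTrfInitn>
        </Document>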
    Oracle and SEPA

    Within the Oracle E-Business Suite of applications, Oracle Payables (AP), Oracle Receivables (AR), and Oracle Payments (IBY) provide SEPA transaction capabilities for the following releases, as noted:

    Release 11.5.10.x - AP & AR
    Release 12.0.x - AP & AR & IBY
    Release 12.1.x - AP & AR & IBY
    Release 12.2.x - AP & AR & IBY

    Resources

    To assist our customers in migrating, using, and troubleshooting SEPA functionality, a number of resource documents related to SEPA are available on My Oracle Support (MOS), including:

    R11i: AP: White Paper - SEPA Credit Transfer V5 support in Oracle Payables, Doc ID 1404743.1
    R11i: AR: White Paper - SEPA Core Direct Debit v5.0 support in Oracle Receivables, Doc ID 1410159.1
    R12: IBY: White Paper - SEPA Credit Transfer v5 support in Oracle Payments, Doc ID 1404007.1
    R12: IBY: White Paper - SEPA Core Direct Debit v5 support in Oracle Payments, Doc ID 1420049.1
    R11i/R12: AP/AR/IBY: Get Help Setting Up, Using, and Troubleshooting SEPA Payments in Oracle, Doc ID 1594441.2
    R11i/R12: Single European Payments Area (SEPA) - UPDATES, Doc ID 1541718.1
    R11i/R12: FAQs for Single European Payments Area (SEPA), Doc ID 791226.1

    Read the article

  • SPTI problem with Mode Select

    - by Bob Murphy
    I'm running into a problem in which an attempt to do a "Mode Select" SCSI command using SPTI is returning an error status of 0x02 ("Check Condition"), and I hope someone here might have some tips or suggestions. The code in question is intended to work with a custom SCSI device. I wrote the original support for it using ASPI under WinXP, and am converting it to work with SPTI under 64-bit Windows 7. Here's the problematic code - and what's happening is, sptwb.spt.ScsiStatus is 2, which is a "Check Condition" error. Unfortunately, the device in question doesn't return useful information when you do a "Request Sense" after this problem occurs, so that's no help.

        void MSSModeSelect(const ModeSelectRequestPacket& inRequest, StatusResponsePacket& outResponse)
        {
            IPC_LOG("MSSModeSelect(): PathID=%d, TargetID=%d, LUN=%d",
                inRequest.m_Device.m_PathId, inRequest.m_Device.m_TargetId, inRequest.m_Device.m_Lun);
            int adapterIndex = inRequest.m_Device.m_PathId;
            HANDLE adapterHandle = prvOpenScsiAdapter(inRequest.m_Device.m_PathId);
            if (adapterHandle == INVALID_HANDLE_VALUE)
            {
                outResponse.m_Status = eScsiAdapterErr;
                return;
            }
            SCSI_PASS_THROUGH_WITH_BUFFERS sptwb;
            memset(&sptwb, 0, sizeof(sptwb));
        #define MODESELECT_BUF_SIZE 32
            sptwb.spt.Length = sizeof(SCSI_PASS_THROUGH);
            sptwb.spt.PathId = inRequest.m_Device.m_PathId;
            sptwb.spt.TargetId = inRequest.m_Device.m_TargetId;
            sptwb.spt.Lun = inRequest.m_Device.m_Lun;
            sptwb.spt.CdbLength = CDB6GENERIC_LENGTH;
            sptwb.spt.SenseInfoLength = 0;
            sptwb.spt.DataIn = SCSI_IOCTL_DATA_IN;
            sptwb.spt.DataTransferLength = MODESELECT_BUF_SIZE;
            sptwb.spt.TimeOutValue = 2;
            sptwb.spt.DataBufferOffset = offsetof(SCSI_PASS_THROUGH_WITH_BUFFERS, ucDataBuf);
            sptwb.spt.Cdb[0] = SCSIOP_MODE_SELECT;
            sptwb.spt.Cdb[4] = MODESELECT_BUF_SIZE;
            DWORD length = offsetof(SCSI_PASS_THROUGH_WITH_BUFFERS, ucDataBuf) + sptwb.spt.DataTransferLength;
            memset(sptwb.ucDataBuf, 0, sizeof(sptwb.ucDataBuf));
            sptwb.ucDataBuf[2] = 0x10;
            sptwb.ucDataBuf[16] = (BYTE)0x01;
            ULONG bytesReturned = 0;
            BOOL okay = DeviceIoControl(adapterHandle, IOCTL_SCSI_PASS_THROUGH,
                &sptwb, sizeof(SCSI_PASS_THROUGH), &sptwb, length, &bytesReturned, FALSE);
            DWORD gle = GetLastError();
            IPC_LOG("  DeviceIoControl() %s", okay ? "worked" : "failed");
            if (okay)
            {
                outResponse.m_Status = (sptwb.spt.ScsiStatus == 0) ? eOk : ePrinterStatusErr;
            }
            else
            {
                outResponse.m_Status = eScsiPermissionsErr;
            }
            CloseHandle(adapterHandle);
        }

    A few more remarks, for what it's worth:

    This is derived from some old ASPI code that does the "Mode Select" flawlessly.

    This routine opens \\.\SCSI<whatever> at the beginning, via prvOpenScsiAdapter(), and closes the handle at the end. All the other routines for dealing with the device do the same thing, including the routine to do "Reserve Unit". Is this a good idea under SPTI, or should the call to "Reserve Unit" leave the handle open, so this routine and others in the sequence can use the same handle?

    This uses IOCTL_SCSI_PASS_THROUGH. Should "Mode Select" use IOCTL_SCSI_PASS_THROUGH_DIRECT instead?

    Thanks in advance - any help will be greatly appreciated.

    Read the article

  • News From EAP Testing

    - by Fatherjack
    There is a phrase that goes something like “Watch the pennies and the pounds/dollars will take care of themselves”, meaning that if you pay attention to the small things then the larger things are going to fare well too. I am lucky enough to be a Friend of Red Gate, and once in a while I get told about new features in their tools and have a test copy of the software to trial. I got one of those emails a week or so ago, and I have been exploring the SQL Prompt 6 EAP since then.

    One really useful feature of long standing in SQL Prompt is the idea of a code snippet that is automatically pasted into the SSMS editor when you type a few key letters. For example, I can type "ssf" and then press the Tab key and the text is expanded to SELECT * FROM. There are lots of these combinations and it is possible to create your own really easily. To create your own, you use the Snippet Manager interface to define the shortcut letters and the code that you want to have put in their place.

    Let's look at an example. Say I am writing a blog about something and want to have the demo code create a temporary table. The first time you run the code everything is fine: a lovely set of dates fills the results grid. But run it a second time and the CREATE statement fails, because we didn't destroy the temporary table and it finds the table already exists. No matter, I have a snippet created that takes care of this.

    Nothing too technical here, but you will see that in the Code section there is $CURSOR$. This isn't a TSQL keyword but a marker for SQL Prompt to place the cursor in that position when the code is pasted into the SSMS editor. I just place my cursor above the CREATE statement and type "ifobj" - the shortcut for my code to DROP the temporary table, which has been defined in the Snippet Manager - and I am right away ready to type the name of the offending table. Pretty neat, and it's been very useful in saving me lots of time over many years.

    The news for SQL Prompt 6 is that Red Gate have added a new snippet command of $PASTE$. Let's alter our snippet to use it and try it out. Once again, we will type "ifobj" in the SSMS editor, but first of all highlight the name of the table #TestTable and copy it to your clipboard. Now type "ifobj" and press Tab... Wherever the string $PASTE$ is placed in the snippet, the contents of your clipboard are merged into the pasted TSQL. This means I don't need to type the table name into the code snippet; it's already there and I am seeing a fully functioning piece of TSQL ready to run. This makes it even easier to write TSQL quickly and consistently.

    Attention to detail like this from Red Gate means that their developer tools stay on track to keep winning awards year after year and help take the hard work out of writing neat, accurate TSQL. If you want to try out SQL Prompt, all the details are at http://www.red-gate.com/products/sql-development/sql-prompt/.
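    Since the original post's screenshots are not reproduced here, the following is a hedged reconstruction of the pieces involved. #TestTable and the "ifobj" shortcut come from the post; the exact demo code is an assumption:

        -- A sketch of the cleanup the "ifobj" snippet produces for #TestTable
        IF OBJECT_ID('tempdb..#TestTable') IS NOT NULL
            DROP TABLE #TestTable;

    And a sketch of what the snippet body itself might look like using the new $PASTE$ command, which merges the clipboard contents wherever the marker appears:

        IF OBJECT_ID('tempdb..$PASTE$') IS NOT NULL
            DROP TABLE $PASTE$;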

    Read the article

  • Workarounds for supporting MVVM in the Silverlight ContextMenu service

    - by cibrax
    As I discussed in my last post, some of the Silverlight controls do not support MVVM quite well out of the box without specific customizations. The ContextMenu is another control that requires customizations for enabling data binding on the menu options. There are a few things that you might want to expose as a view model for a menu item, such as the text, the associated icon, or the command that needs to be executed. That view model should look like this:

        public class MenuItemModel
        {
            public string Name { get; set; }
            public ICommand Command { get; set; }
            public Image Icon { get; set; }
            public object CommandParameter { get; set; }
        }

    This is how you can modify the built-in control to support data binding on the model above:

        public class CustomContextMenu : ContextMenu
        {
            protected override DependencyObject GetContainerForItemOverride()
            {
                CustomMenuItem item = new CustomMenuItem();
                Binding commandBinding = new Binding("Command");
                item.SetBinding(CustomMenuItem.CommandProperty, commandBinding);

                Binding commandParameter = new Binding("CommandParameter");
                item.SetBinding(CustomMenuItem.CommandParameterProperty, commandParameter);

                return item;
            }
        }

        public class CustomMenuItem : MenuItem
        {
            protected override DependencyObject GetContainerForItemOverride()
            {
                CustomMenuItem item = new CustomMenuItem();
                Binding commandBinding = new Binding("Command");
                item.SetBinding(CustomMenuItem.CommandProperty, commandBinding);
                return item;
            }
        }

    The change is very similar to the one I made in the TreeView for manually data binding some of the menu item properties to the model. Once you have applied that change in the control, you can define it in your XAML like this:

        <toolkit:ContextMenuService.ContextMenu>
            <e:CustomContextMenu ItemsSource="{Binding MenuItems}">
                <e:CustomContextMenu.ItemTemplate>
                    <DataTemplate>
                        <StackPanel Orientation="Horizontal">
                            <ContentPresenter Margin="0 0 4 0" Content="{Binding Icon}" />
                            <TextBlock Margin="0" Text="{Binding Name, Mode=OneWay}" FontSize="12"/>
                        </StackPanel>
                    </DataTemplate>
                </e:CustomContextMenu.ItemTemplate>
            </e:CustomContextMenu>
        </toolkit:ContextMenuService.ContextMenu>

    The property MenuItems associated with "ItemsSource" in the parent model just returns a list of supported options (menu items) in the context menu:

        this.menuItems = new MenuItemModel[]
        {
            new MenuItemModel
            {
                Name = "My Command",
                Command = new RelayCommand(OnCommandClick),
                Icon = ImageLoader.GetIcon("command.png")
            }
        };

    The only problem I found so far with this approach is that the context menu service does not support a HierarchicalDataTemplate in case you want to have a hierarchy in the context menu (menu item -> sub menu items), but I guess we can live without that.
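    The RelayCommand helper referenced above is a common MVVM utility that this post assumes but does not show. A minimal sketch (ignoring CanExecute logic) could look like this:

        using System;
        using System.Windows.Input;

        // Minimal ICommand wrapper around a delegate; a sketch, not this post's actual class.
        public class RelayCommand : ICommand
        {
            private readonly Action execute;

            public RelayCommand(Action execute)
            {
                this.execute = execute;
            }

            public event EventHandler CanExecuteChanged;

            public bool CanExecute(object parameter) { return true; }

            public void Execute(object parameter) { execute(); }
        }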

    Read the article

  • C# 5 Async, Part 3: Preparing Existing code For Await

    - by Reed
    While the Visual Studio Async CTP provides a fantastic model for asynchronous programming, it requires code to be implemented in terms of Task and Task<T>.  The CTP adds support for Task-based asynchrony to the .NET Framework methods, and promises to have these implemented directly in the framework in the future.  However, existing code outside the framework will need to be converted to using the Task class prior to being usable via the CTP. Wrapping existing asynchronous code into a Task or Task<T> is, thankfully, fairly straightforward.  There are two main approaches to this. Code written using the Asynchronous Programming Model (APM) is very easy to convert to using Task<T>.  The TaskFactory class provides the tools to directly convert APM code into a method returning a Task<T>.  This is done via the FromAsync method.  This method takes the BeginOperation and EndOperation methods, as well as any parameters and state objects as arguments, and returns a Task<T> directly. For example, we could easily convert the WebRequest BeginGetResponse and EndGetResponse methods into a method which returns a Task<WebResponse> via: Task<WebResponse> task = Task.Factory .FromAsync<WebResponse>( request.BeginGetResponse, request.EndGetResponse, null); Event-based Asynchronous Pattern (EAP) code can also be wrapped into a Task<T>, though this requires a bit more effort than the one line of code above.  This is handled via the TaskCompletionSource<T> class.  MSDN provides a detailed example of using this to wrap an EAP operation into a method returning Task<T>.  It demonstrates handling cancellation and exception handling as well as the basic operation of the asynchronous method itself. The basic form of this operation is typically: Task<YourResult> GetResultAsync() { var tcs = new TaskCompletionSource<YourResult>(); // Handle the event, and setup the task results... this.GetResultCompleted += (o,e) => { if (e.Error != null) tcs.TrySetException(e.Error); else if (e.Cancelled) tcs.TrySetCanceled(); else tcs.TrySetResult(e.Result); }; // Call the asynchronous method this.GetResult(); // Return the task from the TaskCompletionSource return tcs.Task; } We can easily use these methods to wrap our own code into a method that returns a Task<T>.  Existing libraries which cannot be edited can be extended via extension methods.  The CTP uses this technique to add appropriate methods throughout the framework. The suggested naming for these methods is “Task<YourResult> YourClass.YourOperationAsync(…)”.  However, this naming often conflicts with the default naming of the EAP.  If this is the case, the CTP has standardized on using “Task<YourResult> YourClass.YourOperationTaskAsync(…)”. Once we’ve wrapped all of our existing code into operations that return Task<T>, we can begin investigating how the Async CTP can be used with our own code.
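    As a concrete illustration of extending a library you cannot edit, here is a sketch of mine (not from the post) that wraps WebClient's EAP pair, DownloadStringAsync and DownloadStringCompleted, into an extension method following the *TaskAsync naming convention described above:

        using System;
        using System.Net;
        using System.Threading.Tasks;

        public static class WebClientExtensions
        {
            // Wraps the EAP operation into a Task<string>.
            public static Task<string> DownloadStringTaskAsync(this WebClient client, Uri address)
            {
                var tcs = new TaskCompletionSource<string>();

                DownloadStringCompletedEventHandler handler = null;
                handler = (o, e) =>
                {
                    client.DownloadStringCompleted -= handler; // unhook once finished
                    if (e.Error != null) tcs.TrySetException(e.Error);
                    else if (e.Cancelled) tcs.TrySetCanceled();
                    else tcs.TrySetResult(e.Result);
                };

                client.DownloadStringCompleted += handler;
                client.DownloadStringAsync(address);
                return tcs.Task;
            }
        }

    Unhooking the handler inside the callback keeps repeated calls from stacking up subscriptions on the same WebClient instance.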

    Read the article

  • Approach for packing 2D shapes while minimizing total enclosing area

    - by Dennis
    Not sure on my tags for this question, but in short... I need to solve a problem of packing industrial parts into crates while minimizing total containing area. These parts are motors, or pumps, or custom-made components, and they have quite unusual shapes. For some, it may be possible to assume that a part === rectangular cuboid, but some are not so simple, i.e. they assume a shape more like that of a hammer or the letter T. With those (assuming a 2D shape), by alternating the direction of top & bottom, one can pack more objects into the same space than if all tops were in the same direction. (A crude ASCII sketch of "T"-shaped parts illustrated this in the original post but was flattened in extraction; the point is that two T shapes nested head-to-toe occupy less enclosing area than two stacked the same way up.) Right now we are solving the problem by something like this: (1) using CAD software, make actual models of how things fit in crate boxes; (2) make estimates of actual crate dimensions & write them into an Excel file. (1) is a crazy amount of work, and as a result we have just a limited number of possible entries in (2), the Excel file. The good thing is that programming this is relatively easy. Given a combination of products to go into crates, we do a lookup, and if an entry exists in the Excel file (or database), we bring it out. If it doesn't, we say "sorry, no data!". I don't necessarily want to go full force on making up some crazy algorithm that, given a geometrical part description, can align, rotate, and figure out the best part packing into a crate, given its shape, but maybe I do... Question Well, here is my question: assuming that I can represent my parts as 2D (to be determined how), and that some parts look like the letter T and some look like rectangles, which algorithm can I use to get a good estimate of the dimensions of the encompassing area, while ensuring that the parts are packed into the minimal possible area, to minimize crating/shipping costs? Are there approximation algorithms? Seeing how this can get complex, is there an existing library I could use? My thought / Approach My naive approach would be to define a way to describe the position of parts, place the first part, and compute the total enclosing area & dimensions. Then place the 2nd part in 0-degree orientation, repeat; place it at 180-degree orientation, repeat (for my case I don't think 90-degree rotations will be meaningful due to the long lengths of parts). Proceed by brute force, "tacking on" other parts to the enclosing area until all parts are processed. I may have to shift some parts a tad (as with the nested T shapes mentioned above). This adds a layer of 2D complexity rather than 1D. I am not sure how to approach this. One idea I have is genetic algorithms, but I think those will take up too much processing power and time. I will need to look out for shape collisions, as well as adding extra padding space, since we are talking about real parts with irregularities rather than perfect imaginary blocks. I'm afraid this can get geometrically messy fairly fast, and I'd rather keep things simple, if I can. But what if the best (practical) solution is to pack things into different crate boxes rather than just one? This can get a bit more tricky. There is a human element involved as well, i.e. like parts can go into the same box and are thus a constraint to be considered. Some parts that are not the same are sometimes grouped together for shipping and can be considered as a common grouped item. Sometimes customers want things shipped their way, which adds a human element to the constraints, so there will have to be some customization.
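    For a feel of what a cheap approximation might look like, here is a minimal sketch (mine, not from the question) of the classic first-fit-decreasing-height "shelf" heuristic applied to the parts' bounding rectangles. It ignores rotation and irregular outlines, so it only gives a quick upper bound on the enclosing area for a fixed crate width:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Part
        {
            public double Width;
            public double Height;
        }

        static class ShelfPacker
        {
            // Packs bounding rectangles onto horizontal "shelves" of a strip with a
            // fixed width, tallest parts first, and returns the enclosing area used.
            public static double PackedArea(IEnumerable<Part> parts, double stripWidth)
            {
                var shelfWidths = new List<double>();   // used width per shelf
                var shelfHeights = new List<double>();  // shelf height = its tallest part

                foreach (var p in parts.OrderByDescending(q => q.Height))
                {
                    int shelf = -1;
                    for (int i = 0; i < shelfWidths.Count; i++)
                    {
                        if (shelfWidths[i] + p.Width <= stripWidth) { shelf = i; break; }
                    }

                    if (shelf >= 0)
                    {
                        shelfWidths[shelf] += p.Width;  // fits on an existing shelf
                    }
                    else
                    {
                        shelfWidths.Add(p.Width);       // open a new shelf; since parts
                        shelfHeights.Add(p.Height);     // arrive sorted, first is tallest
                    }
                }

                return stripWidth * shelfHeights.Sum();
            }
        }

    Running this for a few candidate strip widths and keeping the smallest area is a quick first estimate before anything fancier (skyline packing, no-fit polygons, or genetic search) is attempted.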

    Read the article

  • Find Knowledge Quickly

    - by Get Proactive Customer Adoption Team
    Get to relevant knowledge on the Oracle products you use in a few quick steps! Customers tell us that the volume of search results returned can make it difficult to find the information they need, especially when similar Oracle products exist. These simple tips show you how to filter, browse, search, and refine your results to get relevant answers faster. Filter first: PowerView is your best friend PowerView is an often-ignored feature of My Oracle Support that enables you to control the information displayed on the Dashboard, the Knowledge tab and regions, and the Service Request tab based on one or more parameters. You can define a PowerView to limit information based on product, product line, support ID, platform, hostname, system name and others. Using PowerView allows you to restrict: Your search results to the filters you have set The product list when selecting your products in Search & Browse and when creating service requests   The PowerView menu is at the top of My Oracle Support, near the title. You turn PowerView on by clicking the PowerView is Off button. When PowerView is on and filters are active, clicking the button again will toggle PowerView off. Click the arrow to the right to create new filters, edit filters, remove a filter, or choose from the list of previously created filters. You can create a PowerView in 3 simple steps! Turn PowerView on and select New from the PowerView menu. Select your filter from the Select Filter Type dropdown list and make selections from the other two menus. Hint: While there are many filter options, selecting your product line or your list of products will provide you with an effective filter. Click the plus sign (+) to add more filters. Click the minus sign (-) to remove a filter. Click Create to save and activate the filter(s). You’ll notice that PowerView is On displays along with the active filters. For more information about the PowerView capabilities, click the Learn more about PowerView… menu item or view a short video. Browse & Refine: Access the Best Match Fast for Your Product and Task In the Knowledge Browse region of the Knowledge or Dashboard tabs, pick your product, pick your task, and select a version, if applicable. A best match document – a collection of knowledge articles and resources specific to your selections – may display, offering you a one-stop shop. The best match document, called an “information center,” is an aggregate of dynamically updated links to information pertinent to the product, task, and version (if applicable) you chose. These documents are refreshed every 24 hours to ensure that you have the most current information at your fingertips. Note: Not all products have “information centers.” If no information center appears as a best match, click Search to see a list of search results. From the information center, you can access topics from a product overview to security information, as shown in the left menu. Just want to search? That’s easy too! Again, pick your product, pick your task, select a version, if applicable, enter a keyword term, and click Search. Hint: In this example, you’ll notice that PowerView is on and set to PeopleSoft Enterprise. When PowerView is on and you select a product from the Knowledge Base product list, the listed products are limited to the active PowerView filter. (Products you’ve previously picked are also listed at the top of the dropdown list.) Your search results are displayed based on the parameters you entered. It’s that simple! 
Related Information: My Oracle Support - User Resource Center [ID 873313.1] My Oracle Support Community For more tips on using My Oracle Support, check out these short video training modules. My Oracle Support Speed Video Training [ID 603505.1]

    Read the article

  • problem on my site [closed]

    - by ali
    Errors were found while checking this document as XHTML. How and where can they be repaired? Please help. Hello, I have a problem on my site that was discovered by a program (A1 Website Analyzer). Can you fix my site? I really do not know about the programming language, so if you can, please email me, and thank you. Please see this link: http://jigsaw.w3.org/css-validator/validator?uri=http://www.moneybillion.com/#errors Validation Output: 19 Errors Line 193, Column 261: there is no attribute "data-count" …ass="twitter-share-button" data-count="horizontal" data-via="elibeto1199"Twee… You have used the attribute named above in your document, but the document type you are using does not support that attribute for this element. This error is often caused by incorrect use of the "Strict" document type with a document that uses frames (e.g. you must use the "Transitional" document type to get the "target" attribute), or by using vendor proprietary extensions such as "marginheight" (this is usually fixed by using CSS to achieve the desired effect instead). This error may also result if the element itself is not supported in the document type you are using, as an undefined element will have no supported attributes; in this case, see the element-undefined error message for further information. How to fix: check the spelling and case of the element and attribute (remember XHTML is all lower-case), and/or check that they are both allowed in the chosen document type, and/or use CSS instead of this attribute. If you received this error when using the element to incorporate flash media in a Web page, see the FAQ item on valid flash. Line 193, Column 283: there is no attribute "data-via" …ton" data-count="horizontal" data-via="elibeto1199"Tweet (same explanation and fix as above). Line 193, Column 393: required attribute "type" not specified …ript" src="//platform.twitter.com/widgets.js"var facebook = { The attribute given above is required for an element that you've used, but you have omitted it. For instance, in most HTML and XHTML document types the "type" attribute is required on the "script" element and the "alt" attribute is required for the "img" element. Typical values for type are type="text/css" for style elements and type="text/javascript" for script elements. Line 194, Column 23: element "data:post.url" undefined url : "", You have used the element named above in your document, but the document type you are using does not define an element of that name. This error is often caused by: incorrect use of the "Strict" document type with a document that uses frames (e.g. you must use the "Frameset" document type to get the "frameset" element), by using vendor proprietary extensions such as "spacer" or "marquee" (this is usually fixed by using CSS to achieve the desired effect instead), or by using upper-case tags in XHTML (in XHTML attributes and elements must be all lower-case). Line 197, Column 62: required attribute "type" not specified src="http://orkut-share.googlecode.com/svn/trunk/facebook.js"
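    For the two 'required attribute "type"' errors, the fix is mechanical; a sketch of the corrected markup (using the URLs quoted in the validator output) would be:

        <!-- XHTML requires the type attribute on script elements -->
        <script type="text/javascript" src="//platform.twitter.com/widgets.js"></script>
        <script type="text/javascript" src="http://orkut-share.googlecode.com/svn/trunk/facebook.js"></script>

    The data-count and data-via errors are a different matter: data-* attributes are an HTML5 feature, so an XHTML doctype will always flag them. Either move the page to the HTML5 doctype or drop those attributes.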

    Read the article

  • Checking form input data on submit with pure PHP [migrated]

    - by Leron
    I have some experience with PHP, but I have never even tried to do this with pure PHP. Now a friend of mine asked me to help him with this task, so I sat down and wrote some code. What I'm asking for is opinions on whether this is the right way to do it when you want to use only PHP, and whether there is anything I can change to make the code better. Besides that, I think the code is working, at least with the few tests I made with it. Here it is: <?php session_start(); // define variables and initialize with empty values $name = $address = $email = ""; $nameErr = $addrErr = $emailErr = ""; $_SESSION["name"] = $_SESSION["address"] = $_SESSION["email"] = ""; $_SESSION["first_page"] = false; if ($_SERVER["REQUEST_METHOD"] == "POST") { if (empty($_POST["name"])) { $nameErr = "Missing"; } else { $_SESSION["name"] = $_POST["name"]; $name = $_POST["name"]; } if (empty($_POST["address"])) { $addrErr = "Missing"; } else { $_SESSION["address"] = $_POST["address"]; $address = $_POST["address"]; } if (empty($_POST["email"])) { $emailErr = "Missing"; } else { $_SESSION["email"] = $_POST["email"]; $email = $_POST["email"]; } } if ($_SESSION["name"] != "" && $_SESSION["address"] != "" && $_SESSION["email"] != "") { $_SESSION["first_page"] = true; header('Location: http://localhost/formProcessing2.php'); exit; // stop rendering the form once the redirect header is sent //echo $_SESSION["name"]. " " .$_SESSION["address"]. " " .$_SESSION["email"]; } ?> <!DOCTYPE html> <html> <head> <style> .error { color: #FF0000; } </style> </head> <body> <form method="POST" action="<?php echo htmlspecialchars($_SERVER["PHP_SELF"]);?>"> Name <input type="text" name="name" value="<?php echo htmlspecialchars($name);?>"> <span class="error"><?php echo $nameErr;?></span> <br /> Address <input type="text" name="address" value="<?php echo htmlspecialchars($address);?>"> <span class="error"><?php echo $addrErr;?></span> <br /> Email <input type="text" name="email" value="<?php echo htmlspecialchars($email);?>"> <span class="error"><?php echo $emailErr;?></span> <br /> <input type="submit" name="submit" value="Submit"> </form> </body> </html>
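    One improvement worth considering (my suggestion, not part of the original code): validate the email's format as well as its presence, using PHP's built-in filter:

        if (empty($_POST["email"])) {
            $emailErr = "Missing";
        } elseif (!filter_var($_POST["email"], FILTER_VALIDATE_EMAIL)) {
            $emailErr = "Invalid format";   // present but not a plausible address
        } else {
            $_SESSION["email"] = $_POST["email"];
            $email = $_POST["email"];
        }

    The same pattern extends to the other fields, for example trimming whitespace before the empty() checks so that a value of only spaces does not pass.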

    Read the article

  • Drawing random smooth lines contained in a square [migrated]

    - by Doug Mercer
    I'm trying to write a MATLAB function that creates random, smooth trajectories in a square of finite side length. Here is my current attempt at such a procedure: function [] = drawroutes( SideLength, v, t) %DRAWROUTES Summary of this function goes here % Detailed explanation goes here %Some parameters intended to help keep the particles in the box RandAccel=.01; ConservAccel=0; speedlimit=.1; G=10^(-8); % %Initialize Matrices Ax=zeros(v,10*t); Ay=Ax; vx=Ax; vy=Ax; x=Ax; y=Ax; sx=zeros(v,1); sy=zeros(v,1); % %Define initial position in square x(:,1)=SideLength*.15*ones(v,1)+(SideLength*.7)*rand(v,1); y(:,1)=SideLength*.15*ones(v,1)+(SideLength*.7)*rand(v,1); % for i=2:10*t %Measure minimum particle distance component-wise from boundary %for each vehicle BorderGravX=[abs(SideLength*ones(v,1)-x(:,i-1)),abs(x(:,i-1))]'; BorderGravY=[abs(SideLength*ones(v,1)-y(:,i-1)),abs(y(:,i-1))]'; rx=min(BorderGravX)'; ry=min(BorderGravY)'; % %Set the sign of the repulsive force, based on the previous position for k=1:v if x(k,i-1)<.5*SideLength sx(k)=1; else sx(k)=-1; end if y(k,i-1)<.5*SideLength sy(k)=1; else sy(k)=-1; end end % %Calculate acceleration w/ random "nudge" and repulsive force Ax(:,i)=ConservAccel*Ax(:,i-1)+RandAccel*(rand(v,1)-.5*ones(v,1))+sx*G./rx.^2; Ay(:,i)=ConservAccel*Ay(:,i-1)+RandAccel*(rand(v,1)-.5*ones(v,1))+sy*G./ry.^2; % %Ad hoc method of trying to slow down particles from jumping outside of %the feasible region for h=1:v if abs(vx(h,i-1)+Ax(h,i))<speedlimit vx(h,i)=vx(h,i-1)+Ax(h,i); elseif (vx(h,i-1)+Ax(h,i))<-speedlimit vx(h,i)=-speedlimit; else vx(h,i)=speedlimit; end end for h=1:v if abs(vy(h,i-1)+Ay(h,i))<speedlimit vy(h,i)=vy(h,i-1)+Ay(h,i); elseif (vy(h,i-1)+Ay(h,i))<-speedlimit vy(h,i)=-speedlimit; else vy(h,i)=speedlimit; end end % %Update position with a trapezoidal step in both coordinates x(:,i)=x(:,i-1)+(vx(:,i-1)+vx(:,i))/2; y(:,i)=y(:,i-1)+(vy(:,i-1)+vy(:,i))/2; % end %Plot position clf; hold on; axis([-100,SideLength+100,-100,SideLength+100]); cc=hsv(v); for j=1:v plot(x(j,1),y(j,1),'ko') plot(x(j,:),y(j,:),'color',cc(j,:)) end hold off; % end My original plan was to place particles within a square and move them around by allowing their acceleration in the x and y directions to be governed by a uniformly distributed random variable. To keep the particles within the square, I tried to create a repulsive force that would push the particles away from the boundaries of the square. In practice, the particles tend to leave the desired "feasible" region after a relatively small number of time steps (say, 1000). I'd love to hear your suggestions on either modifying my existing code or considering the problem from another perspective. When reading the code, please don't feel the need to get hung up on any of the ad hoc parameters at the very beginning of the script. They seem to help, but I don't believe any besides the "G" constant should truly be necessary to make this system work. Here is an example of the current output: Many of the vehicles have found their way outside of the desired square region, [0,400] X [0,400].

    Read the article

  • Consolidation in a Database Cloud

    - by B R Clouse
    Consolidation of multiple databases onto a shared infrastructure is the next step after Standardization.  The potential consolidation density is a function of the extent to which the infrastructure is shared.  The three models provide increasing degrees of sharing: Server: each database is deployed in a dedicated VM. Hardware is shared, but most of the software infrastructure is not. Standardization is often applied incompletely since operating environments can be moved as-is onto the shared platform. The potential for VM sprawl is an additional downside. Database: multiple database instances are deployed on a shared software / hardware infrastructure. This model is very efficient and easily implemented with the features in the Oracle Database and supporting products. Many customers have moved to this model and achieved significant, measurable benefits. Schema: multiple schemas are deployed within a single database instance. The most efficient model, it places constraints on the environment. Usually this model will be implemented only by customers deploying their own applications.  (Note that a single deployment can combine Database and Schema consolidations.) Customer value: lower costs, better system utilization In this phase of the maturity model, under-utilized hardware can be used to host more workloads, or retired and those workloads migrated to consolidation platforms. Customers benefit from higher utilization of the hardware resources, resulting in reduced data center floor space, and lower power and cooling costs. And, the OpEx savings from Standardization are multiplied, since there are fewer physical components (both hardware and software) to manage. Customer value: higher productivity The OpEx benefits from Standardization are compounded since not only are there fewer types of things to manage, now there are fewer entities to manage. In this phase, customers discover that their IT staff has time to move away from "day-to-day" tasks and start investing in higher value activities. Database users benefit from consolidating onto shared infrastructures by relieving themselves of the requirement to maintain their own dedicated servers. Also, if the shared infrastructure offers capabilities such as High Availability / Disaster Recovery, which are often beyond the budget and skillset of a standalone database environment, then moving to the consolidation platform can provide access to those capabilities, resulting in less downtime. Capabilities / Characteristics In this phase, customers will typically deploy fixed-size clusters and consolidate on a cluster until that cluster is deemed "full," at which point a new cluster is built. Customers will define one or a few cluster architectures that are used wherever possible; occasionally there may be deployments which must be handled as exceptions. The "full" policy may be based on number of databases deployed on the cluster, or observed peak workload, etc. IT will own the provisioning of new databases on a cluster, making the decision of when and where to place new workloads. Resources may be managed dynamically, e.g., as a priority workload increases, it may be given more CPU and memory to handle the spike. Users will be charged at a fixed, relatively coarse level; or in some cases, no charging will be applied. Activities / Tasks Oracle offers several tools to plan a successful consolidation. Real Application Testing (RAT) has a feature to help plan and validate database consolidations. 
Enterprise Manager 12c's Cloud Management Pack for Database includes a planning module. Looking ahead, customers should start planning for the Services phase by defining the Service Catalog that will be made available for database services.

    Read the article

  • Rotating 2D Object

    - by Vico Pelaez
    Well, I am trying to learn OpenGL and want to make a triangle move one unit (0.1) every time I press one of the keyboard arrows. However, I want the triangle to turn first, pointing in the direction it will move. So if my triangle is pointing up and I press right, it should point right first and then move one unit along the x axis. I have implemented the code to move the object one unit in any direction; however, I cannot get it to turn to point in the direction it is going. The initial position of the triangle is pointing up. #define LENGTH 0.05 float posX = -0.5, posY = -0.5, posZ = 0; float rotate = 0; // heading in degrees; 0 = pointing up float inX = 0.0, inY = 0.0, inZ = 1.0; // rotation axis: 2D rotation is about the z axis void rect(){ glMatrixMode(GL_PROJECTION); glLoadIdentity(); glPushMatrix(); glTranslatef(posX,posY,posZ); glRotatef(rotate, inX, inY, inZ); glBegin(GL_TRIANGLES); glColor3f(0.0, 0.0, 1.0); glVertex2f(-LENGTH,-LENGTH); glVertex2f(LENGTH-LENGTH, LENGTH); glVertex2f(LENGTH, -LENGTH); glEnd(); glPopMatrix(); } void display(){ //Clear Window glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); rect(); glFlush(); } void init(){ glClearColor(0.0, 0.0, 0.0, 0.0); glColor3f(1.0, 1.0, 1.0); } float move_unit = 0.01; bool change = false; void keyboardown(int key, int x, int y) { switch (key){ case GLUT_KEY_UP: if(rotate == 0) posY += move_unit; else rotate = 0; // turn first, move on the next press break; case GLUT_KEY_RIGHT: if(rotate == -90) posX += move_unit; else rotate = -90; break; case GLUT_KEY_LEFT: if(rotate == 90) posX -= move_unit; else rotate = 90; break; case GLUT_KEY_DOWN: if(rotate == 180) posY -= move_unit; else rotate = 180; break; case 27: // Escape button exit(0); break; default: break; } glutPostRedisplay(); } int main(int argc, char** argv){ glutInit(&argc, argv); glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB); glutInitWindowSize(500,500); glutInitWindowPosition(0, 0); glutCreateWindow("Triangle turn"); glutSpecialFunc(keyboardown); glutDisplayFunc(display); init(); glutMainLoop(); return 0; }

    Read the article

  • Asynchrony in C# 5 (Part II)

    - by javarg
    This article is a continuation of the series on asynchronous features included in the new Async CTP preview for the next versions of C# and VB. Check out Part I for more information. So, let’s continue with TPL Dataflow: Asynchronous functions TPL Dataflow Task based asynchronous Pattern Part II: TPL Dataflow Definition (quoting the Async CTP documentation): “TPL Dataflow (TDF) is a new .NET library for building concurrent applications. It promotes actor/agent-oriented designs through primitives for in-process message passing, dataflow, and pipelining. TDF builds upon the APIs and scheduling infrastructure provided by the Task Parallel Library (TPL) in .NET 4, and integrates with the language support for asynchrony provided by C#, Visual Basic, and F#.” This means: data manipulation processed asynchronously. “TPL Dataflow is focused on providing building blocks for message passing and parallelizing CPU- and I/O-intensive applications”. Data manipulation is another hot area when designing asynchronous and parallel applications: how do you sync data access in a parallel environment? How do you avoid concurrency issues? How do you notify when data is available? How do you control how much data is waiting to be consumed? etc.  Dataflow Blocks TDF provides data and action processing blocks. Imagine having preconfigured data processing pipelines to choose from, depending on the type of behavior you want. The most basic block is the BufferBlock<T>, which provides storage for some kind of data (instances of <T>). So, let’s review the data processing blocks available. Blocks are categorized into three groups: Buffering Blocks Executor Blocks Joining Blocks Think of them as electronic circuitry components :) 1. BufferBlock<T>: it is a FIFO (First In, First Out) queue. You can Post data to it and then Receive it synchronously or asynchronously. It synchronizes data consumption for only one receiver at a time (you can have many receivers but only one will actually process it). 2. BroadcastBlock<T>: the same FIFO queue for messages (instances of <T>), but it links the receiving event to all consumers (it makes the data available for consumption to any number of consumers). The developer can provide a function to make a copy of the data if necessary. 3. WriteOnceBlock<T>: it stores only one value, and once it’s been set, it can never be replaced or overwritten (immutable after being set). As with BroadcastBlock<T>, all consumers can obtain a copy of the value. 4. ActionBlock<TInput>: this executor block allows us to define an operation to be executed when posting data to the queue. Thus, we must pass in a delegate/lambda when creating the block. Posting data will result in an execution of the delegate for each piece of data in the queue. You can also specify how many parallel executions to allow (degree of parallelism). 5. TransformBlock<TInput, TOutput>: this is an executor block designed to transform each input; that is why it defines an output parameter. It ensures messages are processed and delivered in order. 6. TransformManyBlock<TInput, TOutput>: similar to TransformBlock but produces one or more outputs from each input. 7. BatchBlock<T>: combines N single items into one batch item (it buffers and batches inputs). 8. JoinBlock<T1, T2, …>: it generates tuples from all inputs (it aggregates inputs). Inputs could be of any type you want (T1, T2, etc.). 9. BatchJoinBlock<T1, T2, …>: aggregates tuples of collections. 
It generates collections for each type of input and then creates a tuple to contain each collection (Tuple<IList<T1>, IList<T2>>). Next time I will show some examples of usage for each TDF block. * Images taken from Microsoft’s Async CTP documentation.
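Ahead of that next post, here is a tiny usage sketch of my own (based on the released library's API surface; the CTP builds may differ slightly in option names) that wires a TransformBlock into an ActionBlock to form a two-stage pipeline:

    using System;
    using System.Threading.Tasks.Dataflow;

    class PipelineDemo
    {
        static void Main()
        {
            // Stage 1: transform each input string.
            var toUpper = new TransformBlock<string, string>(s => s.ToUpperInvariant());
            // Stage 2: consume the transformed output.
            var print = new ActionBlock<string>(s => Console.WriteLine(s));

            // Link the stages; completion flows downstream automatically.
            toUpper.LinkTo(print, new DataflowLinkOptions { PropagateCompletion = true });

            toUpper.Post("hello");
            toUpper.Post("dataflow");
            toUpper.Complete();        // signal no more input

            print.Completion.Wait();   // block until the pipeline drains
        }
    }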

    Read the article

  • Bowing to User Experience

    As a consumer of geeky news it is hard to check my Google Reader without running into two or three posts about Apple's iPad, and in particular the changes to the developer guidelines which seemingly restrict developers to using Apple's Xcode tool and Objective-C language for iPad apps. One of the alternatives to Objective-C affected is MonoTouch, an option with some appeal to me as it is based on the Mono implementation of C#. Seemingly restricted is the key phrase here; as far as I can tell, no official announcement has been made about its fate. For more details around MonoTouch for iPhone OS, check out Miguel de Icaza's post: http://tirania.org/blog/archive/2010/Apr-28.html. These restrictions have provoked some outrage, as the perception is that Apple is arrogantly restricting developers' freedom to create applications as they choose, and perhaps unwittingly shortchanging iPhone/iPad users who won't benefit from these now never-to-be-made great applications. Apple's response has mostly been to say they are concentrating on providing a certain user experience to their customers, and to do this, they insist everyone uses the tools they approve. Which isn't a surprising line of reasoning, given Apple already restricts the hardware used and the content of the apps. The vogue term for this approach is curated, as in a benevolent museum director selecting only the finest artifacts for display, or a wise gardener arranging the plants in a garden just so. If this is what a curated experience is like, it is hard to argue that consumers are not responding. My iPhone is probably the most satisfying piece of technology I own. Coming from the Razr, it really was a revolution in how the form factor, interface and user experience all tied together. While the curated approach reinvented the smart phone genre, it is easy to forget that this is not a new approach for Apple. Macbooks and Macs are Apple hardware that run Apple software. And they've been successful, but not quite in the same way as the iPhone or iPad (based on early indications). Why not? Well, a curated approach can only be wildly successful if the curator a) makes the right choices and b) offers choices that no one else has. Although its advantages are eroding, the iPhone was different from other phones: a unique, focused, touch-centric experience. The iPad is an attempt to define another category of computing. Macs and Macbooks are great devices, but are not fundamentally a different user experience than a PC; you still have windows, file folders, mouse and keyboard, and similar applications. So the big question for Apple is: can they hold on to their market advantage, continue to innovate in user experience, and stay on top? Or are they going to be like Xerox, where the rest of the world says thank you for the windows metaphor, now let me implement that better? It will be exciting to watch, with Android already a viable competitor and Microsoft readying Windows Phone 7. And to close the loop back to the restrictions on developing for iPhone OS: at this point the main target appears to be Adobe and Adobe Flash. Apple's calculation is that a) they don't need those developers or b) the developers they want will learn Apple's stuff anyway. My guess is that they are correct; as much as I like the idea of developers having more options, I am not going to buy a competitor's product to spite Apple unless that product is just as usable. For a non-technical consumer, I don't know that this conversation even factors into the buying decision. 
If it did, we'd be talking about how Microsoft is trying to retake a slice of market share from the behemoth that is Linux.

    Read the article

  • Optimal Data Structure for our own API

    - by vermiculus
    I'm in the early stages of writing an Emacs major mode for the Stack Exchange network; if you use Emacs regularly, this will benefit you in the end. In order to minimize the number of calls made to Stack Exchange's API (capped at 10000 per IP per day) and to just be a generally responsible citizen, I want to cache the information I receive from the network and store it in memory, waiting to be accessed again. I'm really stuck as to what data structure to store this information in. Obviously, it is going to be a list. However, as with any data structure, the choice must be determined by what data is being stored and how it will be accessed. I would like to be able to store all of this information in a single symbol such as stack-api/cache. So, without further ado, stack-api/cache is a list of conses keyed by last update: `(<csite> <csite> <csite>) where <csite> would be (1362501715 . <site>) At this point, all we've done is define a simple association list. Of course, we must go deeper. Each <site> is a list of the API parameter (unique) followed by a list of questions: `("codereview" <cquestion> <cquestion> <cquestion>) Each <cquestion> is, you guessed it, a cons of a question with its last update time: `(1362501715 . <question>) or `(1362501720 . <question>) <question> is a cons of a question structure and a list of answers (again, consed with their last update time): `(<question-structure> <canswer> <canswer> <canswer>) where each <canswer> is `(1362501715 . <answer-structure>) This data structure is likely most accurately described as a tree, but I don't know if there's a better way to do this considering the language, Emacs Lisp (which isn't all that different from the Lisp you know and love at all). The explicit conses are likely unnecessary, but they help my brain wrap around it better. I'm pretty sure a <csite>, for example, would just turn into (<epoch-time> <api-param> <cquestion> <cquestion> ...) Concerns: Does storing data in a potentially huge structure like this have any performance trade-offs for the system? I would like to avoid storing extraneous data, but I've done what I could, and I don't think the dataset is that large in the first place (for normal use) since it's all just human-readable text in reasonable proportion. (I'm planning on culling old data using the times at the head of the list; each node inherits its last-update time from its children and so on down the tree. To what extent this cull should take place, I'm not sure.) Does storing data like this have any performance trade-offs for that which must use it? That is, will set and retrieve operations suffer from the size of the list? Do you have any other suggestions as to what a better structure might look like?
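    To make the access pattern concrete, here is a small sketch (mine, not from the question) of a read against the layout described above, assuming stack-api/cache holds the alist of <csite> entries:

        ;; Look a <site> up by its API parameter, ignoring the freshness
        ;; timestamps; returns nil when the site is not cached yet.
        (defun stack-api/cache-get-site (api-param)
          "Return the <site> list for API-PARAM from `stack-api/cache', or nil."
          (catch 'found
            (dolist (csite stack-api/cache)
              (let ((site (cdr csite)))        ; drop the last-update time
                (when (equal (car site) api-param)
                  (throw 'found site))))))

    A linear walk like this is fine at the scale described (human-readable text for a handful of sites); if lookups ever dominate, the same shape maps naturally onto a hash table keyed by API parameter.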

    Read the article

  • MySQL Connector/Net 6.6.2 has been released

    - by fernando
    MySQL Connector/Net 6.6.2, a new version of the all-managed .NET driver for MySQL, has been released.  This is the first of two beta releases intended to introduce users to the new features in the release.  This release is feature complete and should be stable enough for users to understand the new features and how we expect them to work.  As is the case with all non-GA releases, it should not be used in any production environment.  It is appropriate for use with MySQL server versions 5.0-5.6. It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point; if you can't find this version on some mirror, please try again later or choose another download site). The 6.6 version of MySQL Connector/Net brings the following new features:   * Stored routine debugging   * Entity Framework 4.3 Code First support   * Pluggable authentication (third parties can now plug new authentication mechanisms into the driver)   * Full Visual Studio 2012 support: everything from Server Explorer to IntelliSense & the stored routine debugger. Stored Procedure Debugging ------------------------------------------- We are very excited to introduce stored procedure debugging in our Visual Studio integration.  It works in a very intuitive manner: simply click 'Debug Routine' from Server Explorer. You can debug stored routines, functions & triggers. Some of the new features in this release include:   * Besides normal breakpoints, you can define conditional & pass-count breakpoints.   * The debugger editor now shows colorizing.   * You can now change the values of locals in a function scope (previously this caused a deadlock due to functions executing within their own transaction).   * You can now also debug triggers for 'replace' SQL statements.   * In general, anything related to locals, watches, breakpoints, stepping & the call stack should work in a similar way to C#'s Visual Studio debugger. Some limitations remain, due to the current debugger architecture:   * Some MySQL functions cannot be debugged currently (get_lock, release_lock, begin, commit, rollback, set transaction level).   * Only one debug session may be active on a given server. The debugger is feature complete at this point. We look forward to your feedback. Documentation ------------------------------------- The documentation is still being developed and will be readily available soon (before Beta 2).  You can view current Connector/Net documentation at http://dev.mysql.com/doc/refman/5.5/en/connector-net.html You can find our team blog at http://blogs.oracle.com/MySQLOnWindows. You can also post questions on our forums at http://forums.mysql.com/. Enjoy and thanks for the support! 

    Read the article

  • Why is x=x++ undefined?

    - by ugoren
    It's undefined because it modifies x twice between sequence points. The standard says it's undefined, therefore it's undefined. That much I know. But why? My understanding is that forbidding this allows compilers to optimize better. This could have made sense when C was invented, but now seems like a weak argument. If we were to reinvent C today, would we do it this way, or can it be done better? Or maybe there's a deeper problem that makes it hard to define consistent rules for such expressions, so it's best to forbid them? So suppose we were to reinvent C today. I'd like to suggest simple rules for expressions such as x=x++, which seem to me to work better than the existing rules. I'd like to get your opinion on the suggested rules compared to the existing ones, or other suggestions. Suggested Rules: Between sequence points, order of evaluation is unspecified. Side effects take place immediately. There's no undefined behavior involved. Expressions evaluate to this value or that, but surely won't format your hard disk (strangely, I've never seen an implementation where x=x++ formats the hard disk). Example Expressions x=x++ - Well defined, doesn't change x. First, x is incremented (immediately when x++ is evaluated), then its old value is stored in x. x++ + ++x - Increments x twice, evaluates to 2*x+2. Though either side may be evaluated first, the result is either x + (x+2) (left side first) or (x+1) + (x+1) (right side first). x = x + (x=3) - Unspecified; x is set to either x+3 or 6. If the left operand is evaluated first, the result is x+3. It's also possible that x=3 is evaluated first, so it's 3+3. In either case, the x=3 assignment happens immediately when x=3 is evaluated, so the value stored is overwritten by the other assignment. x+=(x=3) - Well defined, sets x to 6. You could argue that this is just shorthand for the expression above. But I'd say that += must be executed after x=3, and not in two parts (read x, evaluate x=3, add and store new value). What's the Advantage? Some comments raised this good point. It's not that I'm after the pleasure of using x=x++ in my code. It's a strange and misleading expression. What I want is to be able to understand complicated expressions. Normally, a complicated expression is no more than the sum of its parts. If you understand the parts and the operators combining them, you can understand the whole. C's current behavior seems to deviate from this principle. One assignment plus another assignment suddenly doesn't make two assignments. Today, when I look at x=x++, I can't say what it does. With my suggested rules, I can, by simply examining its components and their relations.
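    A side note that may sharpen the comparison (my addition, not the author's): C# already defines these expressions with rules very close to the suggested ones, evaluating operands left to right with side effects applied immediately, so the examples above can actually be run:

        using System;

        class Program
        {
            static void Main()
            {
                int x = 5;
                x = x++;              // x++ yields 5 and bumps x to 6; the assignment then stores 5
                Console.WriteLine(x); // prints 5: x is unchanged, as the suggested rules predict

                x = 5;
                int y = x++ + ++x;    // left to right: x++ yields 5 (x -> 6), ++x yields 7 (x -> 7)
                Console.WriteLine(y); // prints 12, i.e. 2*x+2 for x = 5
            }
        }

    So at least one mainstream C-family language has adopted essentially these semantics without apparent optimization trouble.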

    Read the article

< Previous Page | 171 172 173 174 175 176 177 178 179 180 181 182  | Next Page >