Search Results

Search found 3118 results on 125 pages for 'fragment caching'.

  • Design Pattern for Complex Data Modeling

    - by Aaron Hayman
    I'm developing a program that has a SQL database as a backing store. As a very broad description, the program lets a user generate records in any number of user-defined tables and make connections between them. The specs:

    - Any record generated must be able to be connected to any other record in any other user table (excluding itself...the record, not the table). These "connections" are directional, and the list of connections a record has is user-ordered. Moreover, a record must "know" of connections made from it to others as well as connections made to it from others. The connections are kind of the point of this program, so there is a strong possibility that the number of connections made is very high, especially if the user is using the software as intended.
    - A record's field can also include aggregate information from its connections (average, sum, etc.) that must be updated when a connected record changes.
    - To conserve memory, only relevant information can be loaded at any one time (I can't load the entire database into memory at startup and go from there).
    - I cannot assume the backing store is local. Right now it is, but eventually this program will include syncing to a remote DB.
    - Neither the user tables, the connections, nor the records are known at design time, as they are all user-generated.

    I've spent a lot of time trying to figure out how to design the backing store and the object model to best fit these specs. In my first design attempt, I had one object managing all of a table's records and connections. I attempted this first because it kept the memory footprint smaller (records and connections were simple dicts), but maintaining aggregate and link information between tables became onerous (i.e., a huge spaghettified mess). Tracing dependencies using this method became almost impossible. Instead, I've settled on a distributed graph model where each record and connection is "aware" of what's around it by managing its own data and connections to other records. Doing this increases my memory footprint, but it also let me create a faulting system so connections/records aren't loaded into memory until they're needed. It's also much easier to code: tracing dependencies, eliminating cyclic recursive updates, etc.

    My biggest problem is storing/loading the connections. I'm not happy with any of my current solutions/ideas, so I wanted to ask and see if anybody else has ideas on how this should be structured. Connections are fairly simple. They contain: fromRecordID, fromTableID, fromRecordOrder, toRecordID, toTableID, toRecordOrder. Here's what I've come up with so far:

    1. Store all the connections in one big table. If I do this, either I load all connections at once (one big DB call) or make a call every time a user table is loaded. The big issue here: the connections table has the potential to be huge, and I'm afraid it would slow things down.
    2. Store all the outgoing connections for each user table in a separate table. This is probably the worst idea I've had. Now my connections are spread out over multiple tables (one for each user table), which means I have to make a separate DB call to each table (or make a huge join) just to find all the incoming connections for a particular user table. I've avoided making "one big ass table", but I'm not sure the cost is worth it.
    3. Store all outgoing AND incoming connections for each user table in a separate table (using a flag to distinguish incoming from outgoing). This is the idea I'm leaning towards, but it essentially doubles the total DB storage for the connections (each connection is stored in two tables). It also means I have to keep the connection information in sync in both places. That is obviously not ideal, but it does mean that when I load a user table, I only need to load one connection table to have all the information I need.

    Option 3 also presents a separate problem: connection object creation. Since each user table has a list of all its connections, there are two opportunities for a connection object to be made. However, connection objects (designed to facilitate communication between records) should only be created once. This means I'll have to devise a common caching/factory object to make sure only one connection object is made per connection. Does anybody have ideas for a better way to do this? Once I've committed to a particular design pattern I'm pretty much stuck with it, so I want to make sure I've come up with the best one possible.
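    A minimal sketch of the single-creation requirement from option 3, as an identity-map style factory. All names here (Connection, ConnectionKey) are hypothetical illustrations, not from the original post; the only point is that both tables' load paths resolve a connection's endpoints to the same in-memory object:

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        // Hypothetical key: the fields that uniquely identify a connection
        // (Java 16+ record used for brevity).
        record ConnectionKey(long fromTableId, long fromRecordId,
                             long toTableId, long toRecordId) {}

        // Hypothetical connection object; real fields would match the
        // from/to schema described above.
        class Connection {
            final ConnectionKey key;
            Connection(ConnectionKey key) { this.key = key; }
        }

        // Identity-map factory: whichever table's load path asks first,
        // both receive the same instance for a given key.
        class ConnectionFactory {
            private final Map<ConnectionKey, Connection> cache = new ConcurrentHashMap<>();

            Connection get(ConnectionKey key) {
                // computeIfAbsent guarantees a single instance per key,
                // even when two table loads race on the same connection.
                return cache.computeIfAbsent(key, Connection::new);
            }
        }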

  • WIF, ADFS 2 and WCF – Part 5: Service Client (more Flexibility with WSTrustChannelFactory)

    - by Your DisplayName here!
    See the previous posts first. WIF includes an API to manually request tokens from a token service. This gives you more control over the request and more flexibility, since you can use your own token caching scheme instead of being bound to the channel object lifetime. The API is straightforward: you first request a token from the STS and then use that token to create a channel to the relying party service. I'd recommend using the WS-Trust bindings that ship with WIF to talk to ADFS 2 – they are pre-configured to match the binding configuration of the ADFS 2 endpoints. The following code requests a token for a WCF service from ADFS 2:

        private static SecurityToken GetToken()
        {
            // Windows authentication over transport security
            var factory = new WSTrustChannelFactory(
                new WindowsWSTrustBinding(SecurityMode.Transport),
                stsEndpoint);
            factory.TrustVersion = TrustVersion.WSTrust13;

            var rst = new RequestSecurityToken
            {
                RequestType = RequestTypes.Issue,
                AppliesTo = new EndpointAddress(svcEndpoint),
                KeyType = KeyTypes.Symmetric
            };

            var channel = factory.CreateChannel();
            return channel.Issue(rst);
        }

    Afterwards, the returned token can be used to create a channel to the service. Again, WIF has some helper methods that make this very easy:

        private static void CallService(SecurityToken token)
        {
            // create binding and turn off sessions
            var binding = new WS2007FederationHttpBinding(
                WSFederationHttpSecurityMode.TransportWithMessageCredential);
            binding.Security.Message.EstablishSecurityContext = false;

            // create factory and enable WIF plumbing
            var factory = new ChannelFactory<IService>(binding, new EndpointAddress(svcEndpoint));
            factory.ConfigureChannelFactory<IService>();

            // turn off CardSpace - we already have the token
            factory.Credentials.SupportInteractive = false;

            var channel = factory.CreateChannelWithIssuedToken<IService>(token);

            channel.GetClaims().ForEach(c =>
                Console.WriteLine("{0}\n {1}\n  {2} ({3})\n",
                    c.ClaimType,
                    c.Value,
                    c.Issuer,
                    c.OriginalIssuer));
        }

    Why is this approach more flexible? Well – some don't like the configuration voodoo. That's a valid reason for using the manual approach. You also get more control over the token request itself, since you have full control over the RST message that gets sent to the STS. One common parameter you may want to set yourself is the appliesTo value. When you use the automatic token support in the WCF federation binding, the appliesTo is always the physical service address. This means in turn that this address will be used as the audience URI value in the SAML token. That in turn means that when you have an application consisting of multiple services, you always have to configure all physical endpoint URLs in ADFS 2 and in the WIF configuration of the service(s). Having control over the appliesTo allows you to use more symbolic realm names, e.g. the base address or a completely logical name. Since the URL is never de-referenced, you have some degree of freedom here. In the next post we will look at the code needed to request multiple tokens in a call chain. This is a common scenario when you first have to acquire a token from an identity provider and then send it on to a federation gateway or Resource STS. Stay tuned.

  • ROracle support for TimesTen In-Memory Database

    - by Sam Drake
    Today's guest post comes from Jason Feldhaus, a Consulting Member of Technical Staff in the TimesTen Database organization at Oracle. He shares with us a sample session using ROracle with the TimesTen In-Memory Database.

    Beginning in version 1.1-4, ROracle includes support for the Oracle TimesTen In-Memory Database, version 11.2.2. TimesTen is a relational database providing very fast response time and high throughput through its memory-centric architecture. TimesTen is designed for low-latency, high-volume data, event, and transaction management. A TimesTen database resides entirely in memory, so no disk I/O is required for transactions and query operations. TimesTen is used in applications requiring very fast and predictable response time, such as real-time financial services trading applications and large web applications. TimesTen can be used as the database of record or as a relational cache database to Oracle Database.

    ROracle provides an interface between R and the database, providing the rich functionality of the R statistical programming environment using the SQL query language. ROracle uses the OCI libraries to handle database connections, providing much better performance than standard ODBC. The latest ROracle enhancements include:

    - Support for Oracle TimesTen In-Memory Database
    - Support for date-time using R's POSIXct/POSIXlt data types
    - RAW, BLOB and BFILE data type support
    - Option to specify the number of rows per fetch operation
    - Option to prefetch LOB data
    - Break support using Ctrl-C
    - Statement caching support

    TimesTen 11.2.2 contains enhanced support for analytics workloads and complex queries:

    - Analytic functions: AVG, SUM, COUNT, MAX, MIN, DENSE_RANK, RANK, ROW_NUMBER, FIRST_VALUE and LAST_VALUE
    - Analytic clauses: OVER PARTITION BY and OVER ORDER BY
    - Multidimensional grouping operators: grouping clauses GROUP BY CUBE, GROUP BY ROLLUP, GROUP BY GROUPING SETS, and grouping functions GROUP, GROUPING_ID, GROUP_ID
    - WITH clause, which allows repeated references to a named subquery block
    - Aggregate expressions over DISTINCT expressions
    - General expressions that return a character string in the source or a pattern within the LIKE predicate
    - Ability to order nulls first or last in a sort result (NULLS FIRST or NULLS LAST in the ORDER BY clause)

    Note: some functionality is only available with Oracle Exalytics; refer to the TimesTen product licensing document for details.

    Connecting to TimesTen is easy with ROracle. Simply install and load the ROracle package and load the driver:

        > install.packages("ROracle")
        > library(ROracle)
        Loading required package: DBI
        > drv <- dbDriver("Oracle")

    Once the ROracle package is installed, create a database connection object and connect to a TimesTen direct driver DSN as the OS user:

        > conn <- dbConnect(drv, username ="", password="",
                            dbname = "localhost/SampleDb_1122:timesten_direct")

    You have the option to report the server type - Oracle or TimesTen?

        > print (paste ("Server type =", dbGetInfo (conn)$serverType))
        [1] "Server type = TimesTen IMDB"

    To create tables in the database using R data frame objects, use the function dbWriteTable. In the following example we write the built-in iris data frame to TimesTen. The iris data set is a small example data set containing 150 rows and 5 columns. We include it here not to highlight performance, but so users can easily run this example in their R session.

        > dbWriteTable (conn, "IRIS", iris, overwrite=TRUE, ora.number=FALSE)
        [1] TRUE

    Verify that the newly created IRIS table is available in the database. To list the available tables and table columns, use dbListTables and dbListFields, respectively:

        > dbListTables (conn)
        [1] "IRIS"
        > dbListFields (conn, "IRIS")
        [1] "SEPAL.LENGTH" "SEPAL.WIDTH" "PETAL.LENGTH" "PETAL.WIDTH" "SPECIES"

    To retrieve a summary of the data from the database, we need to save the results to a local object. The following call saves the results of the query as a local R object, iris.summary. The ROracle function dbGetQuery is used to execute an arbitrary SQL statement against the database. When connected to TimesTen, the SQL statement is processed completely within main memory for the fastest response time:

        > iris.summary <- dbGetQuery(conn, 'SELECT SPECIES,
            AVG ("SEPAL.LENGTH") AS AVG_SLENGTH,
            AVG ("SEPAL.WIDTH")  AS AVG_SWIDTH,
            AVG ("PETAL.LENGTH") AS AVG_PLENGTH,
            AVG ("PETAL.WIDTH")  AS AVG_PWIDTH
            FROM IRIS GROUP BY ROLLUP (SPECIES)')
        > iris.summary
             SPECIES AVG_SLENGTH AVG_SWIDTH AVG_PLENGTH AVG_PWIDTH
        1     setosa    5.006000   3.428000       1.462   0.246000
        2 versicolor    5.936000   2.770000       4.260   1.326000
        3  virginica    6.588000   2.974000       5.552   2.026000
        4       <NA>    5.843333   3.057333       3.758   1.199333

    Finally, disconnect from the TimesTen database:

        > dbCommit (conn)
        [1] TRUE
        > dbDisconnect (conn)
        [1] TRUE

    We encourage you to download Oracle software for evaluation from the Oracle Technology Network. See these links for our software: TimesTen In-Memory Database, ROracle. As always, we welcome comments and questions on the TimesTen and Oracle R technical forums.
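    For readers coming from Java rather than R, the same rollup query can be run over TimesTen's JDBC driver. This is an illustrative sketch only, not from the original post; the driver class name and direct-connection URL format are assumptions to verify against the TimesTen documentation for your installation:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class IrisSummary {
            public static void main(String[] args) throws Exception {
                // Assumed driver class and URL format -- check both against
                // your TimesTen install's docs.
                Class.forName("com.timesten.jdbc.TimesTenDriver");
                try (Connection conn = DriverManager.getConnection(
                         "jdbc:timesten:direct:dsn=SampleDb_1122");
                     Statement stmt = conn.createStatement();
                     // Same ROLLUP query as the R session above; the column
                     // names are quoted because dbWriteTable created them
                     // with embedded dots.
                     ResultSet rs = stmt.executeQuery(
                         "SELECT SPECIES, AVG (\"SEPAL.LENGTH\") AS AVG_SLENGTH " +
                         "FROM IRIS GROUP BY ROLLUP (SPECIES)")) {
                    while (rs.next()) {
                        // The rollup's grand-total row has a NULL SPECIES.
                        System.out.printf("%s %f%n",
                            rs.getString("SPECIES"), rs.getDouble("AVG_SLENGTH"));
                    }
                }
            }
        }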

  • How-to populate different select list content per table row

    - by frank.nimphius
    A frequent requirement posted on the OTN forum is to render the cells of a table column using instances of af:selectOneChoice, with each af:selectOneChoice instance showing different list values. To implement this use case, the select list of the table column is populated dynamically from a managed bean for each row. The table's currently rendered row object is accessible in the managed bean using the #{row} expression, where "row" is the value added to the table's var property.

        <af:table var="row">
          ...
          <af:column ...>
            <af:selectOneChoice ...>
              <f:selectItems value="#{browseBean.items}"/>
            </af:selectOneChoice>
          </af:column>
        </af:table>

    The browseBean managed bean referenced in the code snippet above has a setItems and getItems method defined that is accessible from EL using the #{browseBean.items} expression. When the table renders, the var property variable - the #{row} reference - is filled with the data object displayed in the currently rendered table row. The managed bean getItems method returns a List<SelectItem>, which is the model format expected by the f:selectItems tag to populate the af:selectOneChoice list.

        public void setItems(ArrayList<SelectItem> items) {}

        //this method is executed for each table row
        public ArrayList<SelectItem> getItems() {
          FacesContext fctx = FacesContext.getCurrentInstance();
          ELContext elctx = fctx.getELContext();
          ExpressionFactory efactory =
              fctx.getApplication().getExpressionFactory();
          ValueExpression ve =
              efactory.createValueExpression(elctx, "#{row}", Object.class);
          Row rw = (Row) ve.getValue(elctx);

          //use one of the row attributes to determine which list to query and
          //show in the current af:selectOneChoice list
          // ...
          ArrayList<SelectItem> alsi = new ArrayList<SelectItem>();
          for( ... ){
            SelectItem item = new SelectItem();
            item.setLabel(...);
            item.setValue(...);
            alsi.add(item);
          }
          return alsi;
        }

    For better performance, the ADF Faces table stamps its data rows. Stamping means that the cell renderer component - af:selectOneChoice in this example - is instantiated once for the column and then repeatedly used to display the cell data of individual table rows. This, however, means that you cannot refresh a single select one choice component in a table to change its list values; instead, the whole table needs to be refreshed, re-running the managed bean list query. Be aware that having individual list values per table row is an expensive operation that should be used only on small tables, for business services with low-latency data fetching (e.g. ADF Business Components and EJB), and with server-side caching strategies for the queried data (e.g. storing queried list data in a managed bean in session scope).
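    To make the closing advice concrete, here is a minimal sketch of the session-scope caching idea. The discriminator key and the queryItemsFor method are hypothetical stand-ins for whatever row attribute and lookup the application actually uses; only the caching shape is the point:

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;
        import javax.faces.model.SelectItem;

        // Hypothetical session-scoped bean: each distinct list is queried
        // once per key, then served from memory for later rows and requests.
        public class ListCacheBean {
            private final Map<String, List<SelectItem>> cache =
                new HashMap<String, List<SelectItem>>();

            public List<SelectItem> getItemsFor(String rowKey) {
                List<SelectItem> items = cache.get(rowKey);
                if (items == null) {
                    items = queryItemsFor(rowKey); // the expensive lookup
                    cache.put(rowKey, items);
                }
                return items;
            }

            // Placeholder for the real query against ADF BC, EJB, etc.
            private List<SelectItem> queryItemsFor(String rowKey) {
                return new ArrayList<SelectItem>();
            }
        }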

  • Thread.Interrupt Is Evil

    - by Alois Kraus
    Recently I found an interesting issue with Thread.Interrupt during application shutdown. An application was crashing about once a week and we didn't really have a clue what the issue was. Since it happened so rarely, it was left as-is until we got some memory dumps during the crash. A memory dump usually means WinDbg, which I really like to use (I know I am one of the very few fans of it). After a quick analysis I found that the main thread had already exited and the thread with the crash was stuck in a Monitor.Wait. Strange indeed. Running the application a few thousand times under the debugger would probably not have shown me the reason, so I decided to do what I call constructive debugging: I created a simple console application project and tried to simulate the exact circumstances of the crash from the information I had via the memory dump and source code reading. The thread that was crashing was actually MS code from an old version of the Microsoft Caching Application Block. From reading the code I could conclude that the main thread called the Dispose method on the CacheManager class, which called Thread.Interrupt on the cache scavenger thread, which was just waiting for work to do. My first version of the repro looked like this:

        static void Main(string[] args)
        {
            Thread t = new Thread(ThreadFunc) { IsBackground = true, Name = "Test Thread" };
            t.Start();
            Console.WriteLine("Interrupt Thread");
            t.Interrupt();
        }

        static void ThreadFunc()
        {
            while (true)
            {
                object value = Dequeue(); // block until unblocked or woken via ThreadInterruptedException
            }
        }

        static object WaitObject = new object();

        static object Dequeue()
        {
            object lret = "got value";
            try
            {
                lock (WaitObject) { }
            }
            catch (ThreadInterruptedException)
            {
                Console.WriteLine("Got ThreadInterruptedException");
                lret = null;
            }
            return lret;
        }

    I start a background thread, call Thread.Interrupt on it, and then directly let the application terminate. The thread in the meantime does plenty of Monitor.Enter/Leave calls to simulate work. This first version did not crash, so I needed to dig deeper. From the memory dump I knew that the finalizer thread was running just some critical finalizers, which were closing file handles. OK, let's add some long-running finalizers to the sample:

        class FinalizableObject : CriticalFinalizerObject
        {
            ~FinalizableObject()
            {
                Console.WriteLine("Hi we are waiting to finalize now and block the finalizer thread for 5s.");
                Thread.Sleep(5000);
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                FinalizableObject fin = new FinalizableObject();
                Thread t = new Thread(ThreadFunc) { IsBackground = true, Name = "Test Thread" };
                t.Start();
                Console.WriteLine("Interrupt Thread");
                t.Interrupt();
                GC.KeepAlive(fin); // prevent finalizing it too early
                // After leaving main the other thread is woken up via Thread.Abort
                // while we are finalizing. This causes a stack overflow in the CLR
                // ThreadAbortException handling at this time.
            }
        }

    With this changed Main method and a blocking critical finalizer, I got my crash just like the real application. The funny thing is that this is actually a CLR bug. When the Main method is left, the CLR suspends all threads except the finalizer thread and declares all objects as garbage. After the normal finalizers are called, the critical finalizers are executed to e.g. free OS handles (usually). Remember that I called Thread.Interrupt as one of the last things in the Main method. The Interrupt method is actually asynchronous: it wakes a thread up and throws a ThreadInterruptedException only once, unlike Thread.Abort, which rethrows its exception whenever an exception handling clause is left. It seems the CLR does not expect a frozen thread to wake up again while the critical finalizers are executed. While trying to raise the ThreadInterruptedException, the CLR goes down with a stack overflow. Oops, not so nice. Why nobody has noticed this for years is my next question. As it turns out, this error only happens on the CLR for .NET 4.0 (x86 and x64); it does not show up in earlier or later versions of the CLR. I have reported this issue on Connect, but so far it has not been confirmed as a CLR bug. I would be surprised, though, if my console application were to blame for a stack overflow in my test thread in a Monitor.Wait call.

    What is the moral of this story? Thread.Abort is evil, but Thread.Interrupt is too. It is so evil that even the CLR of .NET 4.0 contains a race condition during CLR shutdown. When the CLR gurus can get it wrong, the chances are high that you will get it wrong too when you use these constructs. If you do not believe me, see what Patrick Smacchia blogs about Thread.Abort and List.Sort. Not only the CLR creators can get it wrong; the BCL writers sometimes have a hard time with correct exception handling as well. If you tell me that you use Thread.Abort frequently and have never had problems with it, I suspect you have not looked deeply enough into your application to find such sporadic errors.
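    As a hedged aside (my illustration, not from the post): the one-shot wake-up semantics described above have a direct JVM analogue. Java's Thread.interrupt() likewise delivers InterruptedException once to a blocked thread and clears the interrupt flag, rather than re-raising on every handler exit:

        public class InterruptOnce {
            public static void main(String[] args) throws InterruptedException {
                final Object lock = new Object();
                Thread t = new Thread(() -> {
                    synchronized (lock) {
                        try {
                            lock.wait(); // blocks like Monitor.Wait in the post
                        } catch (InterruptedException e) {
                            // Delivered exactly once; the interrupt flag is
                            // cleared, so leaving this handler does not re-raise.
                            System.out.println("woken once, flag now: "
                                + Thread.currentThread().isInterrupted());
                        }
                    }
                });
                t.start();
                Thread.sleep(100); // crude: give the thread time to reach wait()
                t.interrupt();     // asynchronous one-shot wake-up
                t.join();
            }
        }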

  • Understanding the 'High Performance' meaning in Extreme Transaction Processing

    - by kyap
    Despite my previous blog entries on SOA/BPM and Identity Management, the domain I am most passionate about is definitely Extreme Transaction Processing, commonly called XTP. I came across XTP back in 2007 while I was still an FMW Product Manager in EMEA. At that time Oracle acquired a company called Tangosol, which owned a unique product called Coherence that we renamed to Oracle Coherence. Beside this innovative renaming of the product, to be honest, I didn't know much about it, except that it was a "distributed in-memory cache for Extreme Transaction Processing"... still not very helpful.

    In general, when people don't fully understand a technology or a concept, they tend to find some shortcut, correct or not, to justify their lack of understanding... and of course I was part of this category of individuals. And the shortcut was "Oracle Coherence Cache helps to improve performance". An excellent marketing slogan... but still not very meaningful. By chance I was able to get away quickly from that group in July 2007 at Thames Valley Park (UK), after I attended one of the most interesting workshops of my 10-year career at Oracle, delivered by Brian Oliver.

    The biggest mistake I made was to assume that performance improvement with Coherence was about response time. That seemed legitimate at the time, because after all, caches help reduce latency on cached data access, and hence reduce response time. But like all caches, you need to define caching and expiration policies, think about the cache-miss strategy, and most of the time you have to partially rewrite your application to work with the cache. As a result, the expected benefit vanishes... so, not very useful then?

    The key mistake was my perception, or obsession, about how performance improvement should be driven, and I strongly believe this is still a common problem for most developers. We all know that the performance of a system is generally presented as Capacity (or Throughput), with the two important dimensions Speed (response time) and Volume (load):

        Capacity (TPS) = Volume (T) / Speed (S)

    To increase the Capacity, we can either reduce the Speed (in terms of response time) or increase the Volume. However, we tend to focus only on reducing the Speed dimension, perhaps because it is more concrete and tangible to measure, and nicer to present to our management because there is a direct impact on the end-user experience. On the other hand, we assume the Volume can be addressed by the underlying hardware or software stack, so if we need more capacity (scale out), we just add more hardware or software. Unfortunately, reality proves that IT is never as ideal as we assume...

    The challenge with the Speed-improvement approach is that it is generally difficult and costly to make things that are already fast... faster. And adding Coherence will not necessarily help either. Even if we manage to do so, the Capacity cannot increase forever, because the Speed can be influenced by the Volume. Every system shows the same performance curve: in all traditional systems, the increase of Volume (transactions) will also increase the Speed (response time) at some point. The reason is simple: most of the time the application logic was not designed to scale. As an example, if you have a while-loop in your application, it is natural to expect that parsing 200 entries will require double the execution time of 100 entries. If you need to "speed up" the execution, you can only upgrade your hardware (scale up) with a faster CPU and/or network to reduce network latency. That is technically limited and economically inefficient.

    And this is exactly where XTP and Coherence kick in. The primary objective of XTP is designing applications that can scale out to increase the Volume, by applying coding techniques that keep the execution time as constant as possible, independently of the amount of runtime data being manipulated. It is actually not just about having an application that runs as fast as possible, but about having a much more predictable system, with constant response time that scales linearly, so we can easily increase throughput by adding more hardware in parallel. It is in general combined with the Low Latency Programming model, where we try to optimize network usage as much as possible, either from the programmatic angle (fewer network hops to complete a task) and/or from the hardware angle (faster network equipment). In this picture, Oracle Coherence can be considered a software-level XTP enabler, via the distributed cache, because it can guarantee:

    - constant data-object access time, independently of the number of objects and the Coherence cluster size
    - data-object distribution by affinity, for in-memory data grouping
    - in-place data processing, for parallel execution

    To summarize, Oracle Coherence is indeed useful for improving your application performance, just not in the way we commonly think. It's not about the Speed itself, but about the overall Capacity under extreme load while keeping the Speed consistent. In the future I will keep adding new blog entries around this topic, with some sample code and experience sharing that I have captured over the last few years. In the meanwhile, if you want to know more about how Oracle Coherence works, I strongly suggest you start by checking how our worldwide customers are using Oracle Coherence, and then start playing with the product through our tutorial. Have fun!

  • Web Services Example - Part 2: Programmatic

    - by Denis T
    In this edition of the ADF Mobile blog we'll tackle part 2 of our web service examples. In this posting we'll take a look at using a SOAP web service, but calling it programmatically in code and parsing the return into a bean.

    Getting the sample code: Just click here to download a zip of the entire project. You can unzip it and load it into JDeveloper and deploy it either to iOS or Android. Please follow the previous blog posts if you need help getting JDeveloper or ADF Mobile installed. Note: this is a different workspace than WS-Part1.

    Defining our Web Service: Just like our first installment, we are using the same public weather forecast web service provided free by CDYNE Corporation. Sometimes this service goes down, so please ensure you know it's up before reporting that this example isn't working. We're going to concentrate on the same two web service methods, GetCityForecastByZIP and GetWeatherInformation.

    Defining the Application: The application setup is identical to the Weather1 version. There are some improvements to the data that is displayed as part of this example, though. Now we are able to show the associated image along with each forecast line when using the Forecast By Zip feature. We've also added the temperature Hi/Low values into the UI.

    Summary of fundamental changes in this application: The most fundamental change is that we're binding the UI to the Bean Data Controls instead of directly to the Web Service Data Controls. This gives us much more flexibility to control the shape of the data and allows us to cache the data outside of the web service. This way, if your application is, say, offline, your bean could still populate with data from a local cache and still show some UI, as opposed to completely failing because you don't have any connectivity. In general we promote this type of programming technique with ADF Mobile to insulate your application from any issues with network connectivity.

    What's different with this example? We have set up the Web Service DC the same way, but now we have managed beans to process the data. The following classes define the "model" of our application: CityInformation-CityForecast-Forecast and WeatherInformation-WeatherDescription. We use WeatherBean for UI interaction with the model layer. If you look through this example, we don't really do that much with the Java code except use it to grab the image URL from the weather description. In a more realistic example, you might be using some JDBC classes to persist the data to a local database. To have a good architecture, it is always wise to keep your model and UI layers separate. This gets muddied if you start to use bindings on a page invoked from Java code and this Java code starts to become your "model" layer. Since bindings are page-specific, your model layer starts to become entwined with your UI. Not good! To help with this, we've added some utility functions that let you invoke DC methods without having a binding, and thus execute methods from your "model" layer without requiring a binding in your page definition. We do this with the invokeDataControlMethod of the AdfmfJavaUtilities class. An example of this method call is available in line 95 of WeatherInformation.java and line 93 of CityInformation.java.

    What's a GenericType? Because Web Service Data Controls (and also URL Data Controls, aka REST) use generic name/value pairs to define their structure and don't have strongly typed objects, these are actually stored internally as GenericType objects. The GenericType class is simply a property map of name/value pairs that can be hierarchical. There are methods like getAttribute where you supply the index of the attribute or its string property name. Why is this important to know? Because invokeDataControlMethod returns GenericType objects, and developers either need to parse these GenericType objects themselves or use one of our helper functions.

    GenericTypeBeanSerializationHelper: This class does exactly what its name implies. It's a helper class for developers to aid in serialization of GenericTypes to/from Java objects. This is extremely handy if you have a large GenericType object with many attributes (or you're just lazy like me!) and you just want to parse it out into a real Java object you can use more easily. Here you would use the fromGenericType method. This method takes the class of the Java object you wish to return and the GenericType as parameters. The method then parses through each attribute in the GenericType and uses reflection to set that same attribute in the Java class, and then returns a new object of the class you specified. This is obviously very handy to avoid a lot of shuffling code between GenericType and your own Java classes. The reverse method, toGenericType, is also available when you want to go the other way. In this case you supply the string that represents the package location in the DataControl definition (example: "MyDC.myParams.MyCollection") and then pass in the Java object holding the data, and a GenericType is returned to you. Again, it will use reflection to calculate the attributes that match between the Java class and the GenericType and call the getters/setters on those.

    Issues and possible improvements: In the next installment we'll show you how to make your web service calls asynchronously, so your UI fills dynamically when the service call returns, while in the meantime you show the data you already have locally in your bean, fed from some local cache. This gives your users instant delivery of some data while you fetch other data in the background.
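    A minimal sketch of how the helper described above is typically used. The method names and parameter shapes come from the post's own description, but the package imports and exact signatures are assumptions to verify against the ADF Mobile Javadoc, and the CityForecast stub stands in for the sample's bean class:

        // Assumed package paths -- verify against your ADF Mobile release.
        import oracle.adfmf.framework.api.GenericTypeBeanSerializationHelper;
        import oracle.adfmf.util.GenericType;

        // Stub standing in for the sample's CityForecast bean; the real bean
        // has getters/setters whose names match the DC attributes.
        class CityForecast { }

        public class GenericTypeMapping {
            // Forward direction: reflectively copy matching attributes from
            // the GenericType (e.g. returned by invokeDataControlMethod)
            // into a plain bean.
            public static CityForecast toBean(GenericType result) {
                return GenericTypeBeanSerializationHelper.fromGenericType(
                    CityForecast.class, result);
            }

            // Reverse direction: rebuild a GenericType from a bean, addressed
            // by its location in the DataControl definition (example path
            // taken from the post).
            public static GenericType toGeneric(CityForecast forecast) {
                return GenericTypeBeanSerializationHelper.toGenericType(
                    "MyDC.myParams.MyCollection", forecast);
            }
        }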

  • Proxied calls not working as expected

    - by AndyH
    I have been modifying an application to have a cleaner client/server split to allow for load splitting, resource sharing, etc. Everything is written to an interface, so it was easy to add a remoting layer to the interface using a proxy. Everything worked fine. The next phase was to add a caching layer to the interface, and again this worked fine and speed improved, but not as much as I would have expected. On inspection it became very clear what was going on. I feel sure this behavior has been seen many times before and there is probably a design pattern to solve the problem, but it eludes me and I'm not even sure how to describe it. It is easiest explained with an example. Let's imagine the interface is:

        interface IMyCode {
            List<IThing> getLots(List<String> ids);
            IThing getOne(String id);
        }

    The getLots() method calls getOne() and fills up the list before returning. The interface is implemented at the client, which is proxied to a remoting client, which then calls the remoting server, which in turn calls the implementation at the server. At the client and the server layers there is also a cache. So we have:

        Client interface -> Client cache -> Remote client -> Remote server -> Server cache -> Server interface

    If we call getOne("A") at the client interface, the call is passed to the client cache, which faults. This then calls the remote client, which passes the call to the remote server. This then calls the server cache, which also faults, and so the call is eventually passed to the server interface, which actually gets the IThing. In turn the server cache is filled, and finally the client cache also. If getOne("A") is called again at the client interface, the client cache has the data and it gets returned immediately. If a second client called getOne("B"), it would fill the server cache with "B" as well as its own client cache. Then, when the first client calls getOne("B"), the client cache faults but the server cache has the data. This is all as one would expect and works well.

    Now let's call getLots(["C", "D"]). This works as you would expect by calling getOne() twice, but there is a subtlety here. The call to getLots() cannot directly make use of the cache. Therefore the sequence is to call the client interface, which in turn calls the remote client, then the remote server, and eventually the server interface. This then calls getOne() to fill the list before returning. The problem is that the getOne() calls are being satisfied at the server, when ideally they should be satisfied at the client. If you imagine that the client/server link is really slow, it becomes clear why the client call is more efficient than the server call once the client cache has the data. This example is contrived to illustrate the point. The more general problem is that you cannot just keep adding proxied layers to an interface and expect it to work as you would imagine. As soon as the call goes 'through' the proxy, any subsequent calls are on the proxied side rather than the 'self' side. Have I failed to learn, or not learned something correctly? All this is implemented in Java and I haven't used EJBs.

    It seems the example may be confusing. The problem is nothing to do with cache efficiency. It is more to do with an illusion created by the use of proxies, or AOP techniques in general. When you have an object whose class implements an interface, there is an assumption that a call on that object might make further calls on that same object. For example:

        public String getInternalString() {
            return InetAddress.getLocalHost().toString();
        }

        public String getString() {
            return getInternalString();
        }

    If you get an object and call getString(), the result depends on where the code is running. If you add a remoting proxy to the class, then the result could be different for calls to getString() and getInternalString() on the same object. This is because the initial call gets 'deproxied' before the actual method is called. I find this not only confusing, but I wonder how I can control this behavior, especially as the proxy may be used by a third party. The concept is fine, but the practice is certainly not what I expected. Have I missed the point somewhere?
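    A minimal, self-contained sketch (my construction, not the asker's code) of the self-invocation effect using java.lang.reflect.Proxy: the outer call is intercepted, but the nested call made inside the implementation never goes back through the proxy.

        import java.lang.reflect.InvocationHandler;
        import java.lang.reflect.Proxy;

        interface Greeter {
            String inner();
            String outer();
        }

        class GreeterImpl implements Greeter {
            public String inner() { return "inner"; }
            // Self-invocation: this call dispatches on 'this', not the proxy.
            public String outer() { return "outer->" + inner(); }
        }

        public class SelfInvocationDemo {
            public static void main(String[] args) {
                Greeter target = new GreeterImpl();
                InvocationHandler handler = (proxy, method, a) -> {
                    System.out.println("intercepted: " + method.getName());
                    return method.invoke(target, a);
                };
                Greeter proxied = (Greeter) Proxy.newProxyInstance(
                    Greeter.class.getClassLoader(),
                    new Class<?>[] { Greeter.class }, handler);

                // Prints "intercepted: outer" only -- the nested inner() call
                // runs on the raw target, exactly the 'deproxied' behavior
                // described above.
                proxied.outer();
            }
        }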

  • OpenVPN - client-to-client traffic working in one direction but not the other

    - by Pawz
    I have the following VPN configuration:

        +------------+              +------------+              +------------+
        |  outpost   |--------------|    kino    |--------------|  guchuko   |
        +------------+              +------------+              +------------+
        OS: FreeBSD 6.2             OS: Gentoo 2.6.32           OS: Gentoo 2.6.33.3
        Keyname: client3            Keyname: server             Keyname: client1
        eth0: 10.0.1.254            eth0: 203.x.x.x             eth0: 192.168.0.6
        tun0: 192.168.150.18        tun0: 192.168.150.1         tun0: 192.168.150.10
        P-t-P: 192.168.150.17       P-t-P: 192.168.150.2        P-t-P: 192.168.150.9

    Kino is the server and has client-to-client enabled. I am using "fragment 1400" and "mssfix" on all three machines. An mtu-test on both connections is successful. All three machines have IP forwarding enabled, via this on the Gentoo boxes:

        net.ipv4.conf.all.forwarding = 1

    And this on the FreeBSD box:

        net.inet.ip.forwarding: 1

    In the server's "ccd" directory are the following files:

        client1: iroute 192.168.0.0 255.255.255.0
        client3: iroute 10.0.1.0 255.255.255.0

    The server config has these routes configured:

        push "route 192.168.0.0 255.255.255.0"
        push "route 10.0.1.0 255.255.255.0"
        route 192.168.0.0 255.255.255.0
        route 10.0.1.0 255.255.255.0

    Kino's routing table looks like this:

        192.168.150.0   192.168.150.2   255.255.255.0    UG   0 0 0 tun0
        10.0.1.0        192.168.150.2   255.255.255.0    UG   0 0 0 tun0
        192.168.0.0     192.168.150.2   255.255.255.0    UG   0 0 0 tun0
        192.168.150.2   0.0.0.0         255.255.255.255  UH   0 0 0 tun0

    Outpost's like this:

        192.168.150     192.168.150.17  UGS  0  17  tun0
        192.168.0       192.168.150.17  UGS  0   2  tun0
        192.168.150.17  192.168.150.18  UH   3   0  tun0

    And Guchuko's like this:

        192.168.150.0   192.168.150.9   255.255.255.0    UG   0 0 0 tun0
        10.0.1.0        192.168.150.9   255.255.255.0    UG   0 0 0 tun0
        192.168.150.9   0.0.0.0         255.255.255.255  UH   0 0 0 tun0

    Now, the tests. Pings from Guchuko to Outpost's LAN IP work OK, as does the reverse - pings from Outpost to Guchuko's LAN IP. However... pings from Outpost to a machine on Guchuko's LAN work fine:

        .(( root@outpost )). (( 06:39 PM ))
        :: ~ :: # ping 192.168.0.3
        PING 192.168.0.3 (192.168.0.3): 56 data bytes
        64 bytes from 192.168.0.3: icmp_seq=0 ttl=63 time=462.641 ms
        64 bytes from 192.168.0.3: icmp_seq=1 ttl=63 time=557.909 ms

    But a ping from Guchuko to a machine on Outpost's LAN does not:

        .(( root@guchuko )). (( 06:43 PM ))
        :: ~ :: # ping 10.0.1.253
        PING 10.0.1.253 (10.0.1.253) 56(84) bytes of data.
        --- 10.0.1.253 ping statistics ---
        3 packets transmitted, 0 received, 100% packet loss, time 2000ms

    Guchuko's tcpdump of tun0 shows:

        18:46:27.716931 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 1, length 64
        18:46:28.716715 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 2, length 64
        18:46:29.716714 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 3, length 64

    Outpost's tcpdump on tun0 shows:

        18:44:00.333341 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 3, length 64
        18:44:01.334073 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 4, length 64
        18:44:02.331849 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 5, length 64

    So Outpost is receiving the ICMP requests destined for the machine on its subnet, but appears not to be forwarding them. Outpost has gateway_enable="YES" in its rc.conf, which correctly sets net.inet.ip.forwarding to 1, as mentioned earlier. As far as I know, that's all that's required to make a FreeBSD box forward packets between interfaces. Is there something else I could be forgetting? FWIW, pinging 10.0.1.253 from Kino has the same result - the traffic does not get forwarded.

    UPDATE: I've found that I can only ping certain IPs on Guchuko's LAN from Outpost. From Outpost I can ping 192.168.0.3 and 192.168.0.2, but 192.168.99 and 192.168.0.4 are unreachable. The same tcpdump behavior can be seen. I think this means the problem can't be due to IP forwarding or routing, because Outpost can reach SOME hosts on Guchuko's LAN but not others and, likewise, Guchuko can reach two hosts on Outpost's LAN but not others. This baffles me.

  • Notes from AT&T ARO Session at Oredev 2013

    - by Geertjan
    The mobile internet is 12 times bigger than the internet was 12 years ago: explosive growth, faster networks, and more powerful devices. 85% of users prefer mobile apps, while 56% have problems. Almost 60% want less than 2-second mobile app startup. An app with a poor mobile experience results in not buying stuff, going to a competitor, not liking your company. Battery life matters. A bad mobile app is worse than no app at all, because it turns people away from the brand. Apps didn't exist 10 years ago; 72 billion dollars a year in 2013, 151 billion in 2017.

    Testing performance. Mobile is different from a regular app; you need to fix issues before customers discover them. ARO is a free and open-source AT&T tool for identifying mobile app performance problems. Mobile data is different -- the radio resource control (RRC) state machine. The radio goes from idle to continuous reception, which drains battery while sending data and receiving packets; after packets come through, the radio stays on for a while (tail time), and after 10 seconds of no data the radio goes off. For example YouTube: 10 to 15 seconds of tail time after every connection can be a huge drain on the battery; app traffic drives the RRC state.

    Goal: balance fast network connectivity against battery usage. ARO is free and open source, tests any platform, and has won awards. How do I test my app? pcap or tcpdump the network. Native collector: Android and iOS; a rooted Android device is needed. Test the app on the phone, watch background data, idle traffic from ads and analytics. Apps are graded against 25 best practices. You see all the processes, all network traffic mapped to processes, and stats about the trace; you can look at just your app and exclude Facebook etc. Many tests conducted, e.g. file download, HTML (wrapped applications, e.g. Cordova).

    Best practices:
    - Make stuff smaller. GZIP: smaller files download faster; best for files larger than 800 bytes. Minification -- remove tabs and comments; the browser doesn't need them, just give the processor what it needs, separate the wheat from the chaff. Images -- make images smaller: a 1024x1024 image for a checkmark? Squish it and make it 33% smaller; ARO records the screen, so it could probably be 9 times smaller.
    - Download less stuff. 17% of HTTP content on mobile is duplicate data because of poor caching; reloading from cache is 75% to 99% faster than downloading again -- up to 75% possible savings, which means the app starts up faster because it uses the cache, and everyone wants the app starting up in 2 seconds.
    - Make fewer HTTP requests. Inline and combine CSS and JS when possible to reduce the number of requests; sprite images that are used often.
    - Fewer connections. Faster and uses less battery. For example: instead of downloading an image every 60 seconds, downloading an ad every 60 seconds, and sending analytics every 60 seconds, use a transaction manager and download everything at once, reducing the time connected to the network by 40%. Also, 80% of applications do NOT close connections when they are finished: e.g. download a picture, and 10 seconds later the radio turns off; if you do not explicitly close, eventually the server closes -- 38% more tail time, and 40% less energy if you close the connection right away (see the sketch after these notes). Background data traffic is 27% of data and 55% of network time; this kills the battery.
    - Look at redirection. It adds 200 to 600 ms per connection; use the waterfall diagram of all the requests -- e.g. xyz.com redirects to www.xyz.com redirects to xyz.mobi redirects to www.xyz.com. Minimize redirects, but redirects as such are fine.
    - HTML best practices. Order matters, and watch out for hidden code: JS downloading blocks rendering, so always put CSS before JS or load JS asynchronously; CSS 'display:none' hides images from the user, but the browser still downloads them, which adds latency to the application.
    - Some apps turn on GPS for no reason. Tell the network when you're done -- though maybe some other app is using the radio at the same time.

    It's all about knowing the best practices: everyone wins with ARO (carriers, e.g. AT&T; developers; customers). Faster apps, better battery usage, less network traffic, better app reviews, happier customers. The MBTA app was referenced as an example. ARO is free and open source and can test all platforms.
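    A small sketch of the connection-hygiene points above (my example, not from the talk): request compressed content and release the connection as soon as the body is consumed, instead of leaving the radio in tail time waiting for a server-side timeout.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.util.zip.GZIPInputStream;

        public class PoliteFetch {
            public static String fetch(String url) throws Exception {
                HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
                conn.setRequestProperty("Accept-Encoding", "gzip"); // "make stuff smaller"
                try (BufferedReader in = new BufferedReader(new InputStreamReader(
                        "gzip".equals(conn.getContentEncoding())
                            ? new GZIPInputStream(conn.getInputStream())
                            : conn.getInputStream()))) {
                    StringBuilder body = new StringBuilder();
                    String line;
                    while ((line = in.readLine()) != null) body.append(line).append('\n');
                    return body.toString();
                } finally {
                    // "Close connections when finished": releases the underlying
                    // socket instead of waiting for the server to time out.
                    conn.disconnect();
                }
            }
        }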

  • Using xsl:variable in an xsl:for-each select statement

    - by Nefariousity
    I'm trying to iterate through an XML document using xsl:for-each, but I need the select="" expression to be dynamic, so I'm using a variable as the source. Here's what I've tried:

        ...
        <xsl:template name="SetDataPath">
          <xsl:param name="Type" />
          <xsl:variable name="Path_1">/Rating/Path1/*</xsl:variable>
          <xsl:variable name="Path_2">/Rating/Path2/*</xsl:variable>
          <xsl:if test="$Type='1'">
            <xsl:value-of select="$Path_1"/>
          </xsl:if>
          <xsl:if test="$Type='2'">
            <xsl:value-of select="$Path_2"/>
          </xsl:if>
        </xsl:template>
        ...
        <!-- Set Data Path according to Type -->
        <xsl:variable name="DataPath">
          <xsl:call-template name="SetDataPath">
            <xsl:with-param name="Type" select="/Rating/Type" />
          </xsl:call-template>
        </xsl:variable>
        ...
        <xsl:for-each select="$DataPath">
        ...

    The for-each threw an error stating: "XslTransformException - To use a result tree fragment in a path expression, first convert it to a node-set using the msxsl:node-set() function." When I use the msxsl:node-set() function, though, my results are blank. I'm aware that I'm setting $DataPath to a string, but shouldn't the node-set() function be creating a node set from it? Am I missing something? When I don't use a variable:

        <xsl:for-each select="/Rating/Path1/*">

    I get the proper results. Here's the XML data file I'm using:

        <Rating>
          <Type>1</Type>
          <Path1>
            <sarah>
              <dob>1-3-86</dob>
              <user>Sarah</user>
            </sarah>
            <joe>
              <dob>11-12-85</dob>
              <user>Joe</user>
            </joe>
          </Path1>
          <Path2>
            <jeff>
              <dob>11-3-84</dob>
              <user>Jeff</user>
            </jeff>
            <shawn>
              <dob>3-5-81</dob>
              <user>Shawn</user>
            </shawn>
          </Path2>
        </Rating>

    My question is simple: how do you run a for-each over 2 different paths?

  • I am trying to deploy my first rails app using Capistrano and am getting an error.

    - by Andrew Bucknell
    My deployment of a Rails app with Capistrano is failing, and I'm hoping someone can provide me with pointers to troubleshoot. The following is the command output:

        andrew@melb-web:~/projects/rails/guestbook2$ cap deploy:setup
          * executing `deploy:setup'
          * executing "mkdir -p /var/www/dev/guestbook2 /var/www/dev/guestbook2/releases /var/www/dev/guestbook2/shared /var/www/dev/guestbook2/shared/system /var/www/dev/guestbook2/shared/log /var/www/dev/guestbook2/shared/pids && chmod g+w /var/www/dev/guestbook2 /var/www/dev/guestbook2/releases /var/www/dev/guestbook2/shared /var/www/dev/guestbook2/shared/system /var/www/dev/guestbook2/shared/log /var/www/dev/guestbook2/shared/pids"
            servers: ["dev.andrewbucknell.com"]
        Enter passphrase for /home/andrew/.ssh/id_dsa:
        Enter passphrase for /home/andrew/.ssh/id_dsa:
            [dev.andrewbucknell.com] executing command
            command finished

        andrew@melb-web:~/projects/rails/guestbook2$ cap deploy:check
          * executing `deploy:check'
          * executing "test -d /var/www/dev/guestbook2/releases"
            servers: ["dev.andrewbucknell.com"]
        Enter passphrase for /home/andrew/.ssh/id_dsa:
            [dev.andrewbucknell.com] executing command
            command finished
          * executing "test -w /var/www/dev/guestbook2"
            servers: ["dev.andrewbucknell.com"]
            [dev.andrewbucknell.com] executing command
            command finished
          * executing "test -w /var/www/dev/guestbook2/releases"
            servers: ["dev.andrewbucknell.com"]
            [dev.andrewbucknell.com] executing command
            command finished
          * executing "which git"
            servers: ["dev.andrewbucknell.com"]
            [dev.andrewbucknell.com] executing command
            command finished
          * executing "test -w /var/www/dev/guestbook2/shared"
            servers: ["dev.andrewbucknell.com"]
            [dev.andrewbucknell.com] executing command
            command finished
        You appear to have all necessary dependencies installed

        andrew@melb-web:~/projects/rails/guestbook2$ cap deploy:migrations
          * executing `deploy:migrations'
          * executing `deploy:update_code'
            updating the cached checkout on all servers
            executing locally: "git ls-remote [email protected]:/home/andrew/git/guestbook2.git master"
        Enter passphrase for key '/home/andrew/.ssh/id_dsa':
          * executing "if [ -d /var/www/dev/guestbook2/shared/cached-copy ]; then cd /var/www/dev/guestbook2/shared/cached-copy && git fetch origin && git reset --hard 369c5e04aaf83ad77efbfba0141001ac90915029 && git clean -d -x -f; else git clone [email protected]:/home/andrew/git/guestbook2.git /var/www/dev/guestbook2/shared/cached-copy && cd /var/www/dev/guestbook2/shared/cached-copy && git checkout -b deploy 369c5e04aaf83ad77efbfba0141001ac90915029; fi"
            servers: ["dev.andrewbucknell.com"]
        Enter passphrase for /home/andrew/.ssh/id_dsa:
            [dev.andrewbucknell.com] executing command
         ** [dev.andrewbucknell.com :: err] Permission denied, please try again.
         ** Permission denied, please try again.
         ** Permission denied (publickey,password).
         ** [dev.andrewbucknell.com :: err] fatal: The remote end hung up unexpectedly
         ** [dev.andrewbucknell.com :: out] Initialized empty Git repository in /var/www/dev/guestbook2/shared/cached-copy/.git/
            command finished
        failed: "sh -c 'if [ -d /var/www/dev/guestbook2/shared/cached-copy ]; then cd /var/www/dev/guestbook2/shared/cached-copy && git fetch origin && git reset --hard 369c5e04aaf83ad77efbfba0141001ac90915029 && git clean -d -x -f; else git clone [email protected]:/home/andrew/git/guestbook2.git /var/www/dev/guestbook2/shared/cached-copy && cd /var/www/dev/guestbook2/shared/cached-copy && git checkout -b deploy 369c5e04aaf83ad77efbfba0141001ac90915029; fi'" on dev.andrewbucknell.com
        andrew@melb-web:~/projects/rails/guestbook2$

    The following fragment is from cap -d deploy:migrations:

        Preparing to execute command: "find /var/www/dev/guestbook2/releases/20100305124415/public/images /var/www/dev/guestbook2/releases/20100305124415/public/stylesheets /var/www/dev/guestbook2/releases/20100305124415/public/javascripts -exec touch -t 201003051244.22 {} ';'; true"
        Execute ([Yes], No, Abort) ? |y| yes
          * executing `deploy:migrate'
          * executing "ls -x /var/www/dev/guestbook2/releases"
        Preparing to execute command: "ls -x /var/www/dev/guestbook2/releases"
        Execute ([Yes], No, Abort) ? |y| yes
        /usr/lib/ruby/gems/1.8/gems/capistrano-2.5.17/lib/capistrano/recipes/deploy.rb:55:in `join': can't convert nil into String (TypeError)
            from /usr/lib/ruby/gems/1.8/gems/capistrano-2.5.17/lib/capistrano/recipes/deploy.rb:55:in `load'

    Read the article

  • Custom ListView entry works in JB but not in Gingerbread

    - by Andy
    I have a ListFragment with a custom ArrayAdapter where I am overriding getView() to provide a custom View for the list item.

        private class DirListAdaptor extends ArrayAdapter<DirItem> {
            @Override
            public View getView(int position, View convertView, ViewGroup parent) {
                View aView = convertView;
                if (aView == null) {
                    LayoutInflater vi = (LayoutInflater) getContext().getSystemService(Context.LAYOUT_INFLATER_SERVICE);
                    // TODO: can we not access textViewResourceId?
                    aView = vi.inflate(R.layout.dir_list_entry, parent, false);
                }
                etc...

    Here is the dir_list_entry.xml:

        <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:layout_width="match_parent"
            android:layout_height="?android:attr/listPreferredItemHeight"
            android:paddingLeft="?android:attr/listPreferredItemPaddingLeft"
            android:paddingRight="?android:attr/listPreferredItemPaddingRight">

            <ImageView
                android:id="@+id/dir_list_icon"
                android:layout_width="wrap_content"
                android:layout_height="match_parent"
                android:layout_alignParentTop="true"
                android:layout_alignParentBottom="true"
                android:layout_marginRight="6dp"
                android:src="@drawable/ic_launcher" />

            <TextView
                android:id="@+id/dir_list_details"
                android:textAppearance="?android:attr/textAppearanceListItem"
                android:layout_width="match_parent"
                android:layout_height="wrap_content"
                android:layout_toRightOf="@id/dir_list_icon"
                android:layout_alignParentBottom="true"
                android:layout_alignParentRight="true"
                android:singleLine="true"
                android:ellipsize="marquee"
                android:textSize="12sp"
                android:text="Details" />

            <TextView
                android:id="@+id/dir_list_filename"
                android:textAppearance="?android:attr/textAppearanceListItem"
                android:layout_width="match_parent"
                android:layout_height="wrap_content"
                android:layout_toRightOf="@id/dir_list_icon"
                android:layout_alignParentRight="true"
                android:layout_alignParentTop="true"
                android:layout_above="@id/dir_list_details"
                android:layout_alignWithParentIfMissing="true"
                android:gravity="center_vertical"
                android:textSize="14sp"
                android:text="Filename" />
        </RelativeLayout>

    The bizarre thing is this works fine on the Android 4.1 emulator, but I get the following error on Android 2.3:

        10-01 15:07:59.594: ERROR/AndroidRuntime(1003): FATAL EXCEPTION: main
        android.view.InflateException: Binary XML file line #1: Error inflating class android.widget.RelativeLayout
            at android.view.LayoutInflater.createView(LayoutInflater.java:518)
            at com.android.internal.policy.impl.PhoneLayoutInflater.onCreateView(PhoneLayoutInflater.java:56)
            at android.view.LayoutInflater.createViewFromTag(LayoutInflater.java:568)
            at android.view.LayoutInflater.inflate(LayoutInflater.java:386)
            at android.view.LayoutInflater.inflate(LayoutInflater.java:320)
            at com.eveps.evepsdroid.ui.PhotoBrowserListFragment$DirListAdaptor.getView(PhotoBrowserListFragment.java:104)
            at android.widget.AbsListView.obtainView(AbsListView.java:1430)
            at android.widget.ListView.makeAndAddView(ListView.java:1745)
            at android.widget.ListView.fillDown(ListView.java:670)
            at android.widget.ListView.fillFromTop(ListView.java:727)
            at android.widget.ListView.layoutChildren(ListView.java:1598)
            at android.widget.AbsListView.onLayout(AbsListView.java:1260)
            at android.view.View.layout(View.java:7175)
            at android.widget.FrameLayout.onLayout(FrameLayout.java:338)
            at android.view.View.layout(View.java:7175)
            at android.widget.FrameLayout.onLayout(FrameLayout.java:338)
            at android.view.View.layout(View.java:7175)
            at android.widget.FrameLayout.onLayout(FrameLayout.java:338)
            at android.view.View.layout(View.java:7175)
            at android.widget.FrameLayout.onLayout(FrameLayout.java:338)
            at android.view.View.layout(View.java:7175)
            at android.widget.LinearLayout.setChildFrame(LinearLayout.java:1254)
            at android.widget.LinearLayout.layoutHorizontal(LinearLayout.java:1243)
            at android.widget.LinearLayout.onLayout(LinearLayout.java:1049)
            at android.view.View.layout(View.java:7175)
            at android.widget.FrameLayout.onLayout(FrameLayout.java:338)
            at android.view.View.layout(View.java:7175)
            at android.widget.LinearLayout.setChildFrame(LinearLayout.java:1254)
            at android.widget.LinearLayout.layoutVertical(LinearLayout.java:1130)
            at android.widget.LinearLayout.onLayout(LinearLayout.java:1047)
            at android.view.View.layout(View.java:7175)
            at android.widget.FrameLayout.onLayout(FrameLayout.java:338)
            at android.view.View.layout(View.java:7175)
            at android.view.ViewRoot.performTraversals(ViewRoot.java:1140)
            at android.view.ViewRoot.handleMessage(ViewRoot.java:1859)
            at android.os.Handler.dispatchMessage(Handler.java:99)
            at android.os.Looper.loop(Looper.java:123)
            at android.app.ActivityThread.main(ActivityThread.java:3683)
            at java.lang.reflect.Method.invokeNative(Native Method)
            at java.lang.reflect.Method.invoke(Method.java:507)
            at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:839)
            at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:597)
            at dalvik.system.NativeStart.main(Native Method)
        Caused by: java.lang.reflect.InvocationTargetException
            at java.lang.reflect.Constructor.constructNative(Native Method)
            at java.lang.reflect.Constructor.newInstance(Constructor.java:415)
            at android.view.LayoutInflater.createView(LayoutInflater.java:505)
            ... 42 more
        Caused by: java.lang.UnsupportedOperationException: Can't convert to dimension: type=0x2
            at android.content.res.TypedArray.getDimensionPixelSize(TypedArray.java:463)
            at android.view.View.<init>(View.java:1957)
            at android.view.View.<init>(View.java:1899)
            at android.view.ViewGroup.<init>(ViewGroup.java:286)
            at android.widget.RelativeLayout.<init>(RelativeLayout.java:173)
            ... 45 more

    I'm using the Android Support Library for fragment support, obviously. It seems to be a problem inflating the custom list view entry, something to do with a dimension - but why does it work on Jelly Bean? Has something changed in this area?
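
    A plausible reading of the trace: "Can't convert to dimension: type=0x2" means a padding attribute resolved to an attribute *reference* rather than an actual dimension. ?android:attr/listPreferredItemPaddingLeft and listPreferredItemPaddingRight only exist from API 14, and ?android:attr/textAppearanceListItem from API 11, so Gingerbread cannot resolve them and inflating the RelativeLayout fails. A sketch of a version-safe root element follows; the 8dp values are placeholders, not taken from the question:

        <!-- res/layout/dir_list_entry.xml -- sketch: literal padding instead of
             the API 11/14-only theme attributes; alternatively keep the theme
             attributes only in a res/layout-v14 (or res/values-v14) override. -->
        <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:layout_width="match_parent"
            android:layout_height="?android:attr/listPreferredItemHeight"
            android:paddingLeft="8dp"
            android:paddingRight="8dp">
            <!-- child views unchanged, minus the textAppearanceListItem lines -->
        </RelativeLayout>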

    Read the article

  • Calling the same xsl:template for different node names of the same complex type

    - by CraftyFella
    Hi, I'm trying to keep my XSL DRY, and as a result I wanted to call the same template for 2 sections of an XML document which happen to be the same complex type (ContactDetails and AltContactDetails). Given the following XML:

        <?xml version="1.0" encoding="UTF-8"?>
        <RootNode>
            <Name>Bob</Name>
            <ContactDetails>
                <Address>
                    <Line1>1 High Street</Line1>
                    <Town>TownName</Town>
                    <Postcode>AB1 1CD</Postcode>
                </Address>
                <Email>[email protected]</Email>
            </ContactDetails>
            <AltContactDetails>
                <Address>
                    <Line1>3 Market Square</Line1>
                    <Town>TownName</Town>
                    <Postcode>EF2 2GH</Postcode>
                </Address>
                <Email>[email protected]</Email>
            </AltContactDetails>
        </RootNode>

    I wrote an XSL stylesheet as follows:

        <?xml version="1.0" encoding="UTF-8"?>
        <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
            <xsl:template match="/">
                <PersonsName>
                    <xsl:value-of select="RootNode/Name"/>
                </PersonsName>
                <xsl:call-template name="ContactDetails">
                    <xsl:with-param name="data"><xsl:value-of select="RootNode/ContactDetails"/></xsl:with-param>
                    <xsl:with-param name="elementName"><xsl:value-of select="'FirstAddress'"/></xsl:with-param>
                </xsl:call-template>
                <xsl:call-template name="ContactDetails">
                    <xsl:with-param name="data"><xsl:value-of select="RootNode/AltContactDetails"/></xsl:with-param>
                    <xsl:with-param name="elementName"><xsl:value-of select="'SecondAddress'"/></xsl:with-param>
                </xsl:call-template>
            </xsl:template>

            <xsl:template name="ContactDetails">
                <xsl:param name="data"></xsl:param>
                <xsl:param name="elementName"></xsl:param>
                <xsl:element name="{$elementName}">
                    <FirstLine>
                        <xsl:value-of select="$data/Address/Line1"/>
                    </FirstLine>
                    <Town>
                        <xsl:value-of select="$data/Address/Town"/>
                    </Town>
                    <PostalCode>
                        <xsl:value-of select="$data/Address/Postcode"/>
                    </PostalCode>
                </xsl:element>
            </xsl:template>
        </xsl:stylesheet>

    When I try to run the stylesheet, it complains that: "To use a result tree fragment in a path expression, either use exsl:node-set() or specify version 1.1". I don't want to go to version 1.1, so does anyone know how to get exsl:node-set() working for the above example? Or if someone knows of a better way to apply the same template to 2 different sections, that would also really help me out. Thanks, Dave
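
    The error most likely comes from how the data param is built: putting <xsl:value-of> inside <xsl:with-param> produces a result tree fragment (in practice, just the node's text), which XSLT 1.0 will not let you navigate with $data/Address/Line1. Passing the node itself through the select attribute keeps it a node-set and needs no exsl:node-set() at all. A sketch, untested against the full stylesheet:

        <!-- sketch: pass the node-set directly instead of stringifying it -->
        <xsl:call-template name="ContactDetails">
            <xsl:with-param name="data" select="RootNode/ContactDetails"/>
            <xsl:with-param name="elementName" select="'FirstAddress'"/>
        </xsl:call-template>
        <xsl:call-template name="ContactDetails">
            <xsl:with-param name="data" select="RootNode/AltContactDetails"/>
            <xsl:with-param name="elementName" select="'SecondAddress'"/>
        </xsl:call-template>

    The arguably more idiomatic XSLT route is a single <xsl:template match="ContactDetails | AltContactDetails"> driven by <xsl:apply-templates>, deriving the output element name from the matched node, but the select= change above is the minimal fix.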

    Read the article

  • JSF command button attribute is transferred incorrectly

    - by Oleg Rybak
    I have the following code in a JSF page, backed by a JSF managed bean:

        <h:dataTable value="#{poolBean.pools}" var="item">
            <h:column>
                <f:facet name="header">
                    <h:outputLabel value="Id"/>
                </f:facet>
                <h:outputText value="#{item.id}"/>
            </h:column>
            <h:column>
                <f:facet name="header">
                    <h:outputLabel value="Start Range"/>
                </f:facet>
                <h:inputText value="#{item.startRange}" required="true"/>
            </h:column>
            <h:column>
                <f:facet name="header">
                    <h:outputText value="End Range"/>
                </f:facet>
                <h:inputText value="#{item.endRange}" required="true"/>
            </h:column>
            <h:column>
                <f:facet name="header">
                    <h:outputText value="Pool type"/>
                </f:facet>
                <h:selectOneMenu value="#{item.poolType}" required="true">
                    <f:selectItems value="#{poolBean.poolTypesMenu}"/>
                </h:selectOneMenu>
            </h:column>
            <h:column>
                <f:facet name="header"/>
                <h:commandButton id="ModifyPool" actionListener="#{poolBean.updatePool}" image="img/update.gif" title="Modify Pool">
                    <f:attribute name="pool" value="#{item}"/>
                </h:commandButton>
            </h:column>
        </h:dataTable>

    This code fragment is dedicated to editing some collection of items. Each row of the table contains an "edit" button that submits the changed values of the row to the server; it carries the item itself as an attribute. The submit is performed by calling the actionListener method in the backing managed bean. This code runs correctly on Glassfish v2.1, but when the server was updated to Glassfish v2.1.1, the attribute stopped being passed correctly. Instead of passing the edited item (when we change the values in the table row, we are actually changing the underlying object's fields), the source item is submitted to the server, i.e. the item that was previously given to the page; all the changes made on the page are discarded. I tried updating the JSF version from 1.2_02 to 1.2_14 (we are using the JSF RI), but it had no effect. Has anyone come across the same problem? Any help and suggestions will be appreciated.
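
    A workaround worth trying, rather than a confirmed fix: instead of depending on when the "pool" attribute's #{item} expression gets evaluated (presumably what changed between the two Glassfish releases), resolve the current row from the enclosing UIData at event time. A sketch, assuming a Pool row class along the lines implied by the page:

        // Sketch of poolBean.updatePool -- "Pool" is the assumed row type.
        import javax.faces.component.UIComponent;
        import javax.faces.component.UIData;
        import javax.faces.event.ActionEvent;

        public void updatePool(ActionEvent event) {
            // Walk up from the clicked button to the h:dataTable it lives in.
            UIComponent c = event.getComponent();
            while (c != null && !(c instanceof UIData)) {
                c = c.getParent();
            }
            if (c != null) {
                // getRowData() returns the item for the clicked row; by the
                // invoke-application phase the submitted edits have already
                // been applied to it.
                Pool pool = (Pool) ((UIData) c).getRowData();
                // ... persist the changes ...
            }
        }

    JSF 1.2's f:setPropertyActionListener is another route to the same end that avoids f:attribute entirely.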

    Read the article

  • My Google Maps Android app keeps crashing

    - by Manny264
    I have followed about two tutorials from Vogella and some other tutorial that looked similar... very similar... but to no avail. I load the app on my Nexus 7 and it just crashes on launch with "Unfortunately MyMapView has stopped working". This is the manifest:

        <uses-sdk
            android:minSdkVersion="17"
            android:targetSdkVersion="17" />

        <uses-feature android:glEsVersion="0x00020000" android:required="true"/>

        <uses-permission android:name="android.permission.INTERNET" />
        <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
        <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
        <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
        <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
        <uses-permission android:name="com.google.android.providers.gsf.permission.READ_GSERVICES"/>

        <application
            android:allowBackup="true"
            android:icon="@drawable/ic_launcher"
            android:label="@string/app_name"
            android:theme="@style/AppTheme" >
            <uses-library android:name="com.google.android.maps" />
            <activity
                android:name="com.macmozart.mymapview.MainActivity"
                android:label="@string/app_name" >
                <intent-filter>
                    <action android:name="android.intent.action.MAIN" />
                    <category android:name="android.intent.category.LAUNCHER" />
                </intent-filter>
            </activity>
            <meta-data
                android:name="com.google.android.maps.v2.API_KEY"
                android:value="AIzaSyBZ1Bt7rjB863Jy-B05zls6k8XZsBGQ6-4" />
        </application>

    Followed by my main layout:

        <fragment
            android:id="@+id/map"
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            class="com.google.android.gms.maps.MapFragment" />

    And finally my Java class:

        package com.macmozart.mymapview;

        import android.app.Activity;
        import android.os.Bundle;
        import com.google.android.gms.maps.CameraUpdateFactory;
        import com.google.android.gms.maps.GoogleMap;
        import com.google.android.gms.maps.MapFragment;
        import com.google.android.gms.maps.model.BitmapDescriptorFactory;
        import com.google.android.gms.maps.model.LatLng;
        import com.google.android.gms.maps.model.Marker;
        import com.google.android.gms.maps.model.MarkerOptions;
        import com.google.android.maps.*;

        public class MainActivity extends Activity {
            static final LatLng HAMBURG = new LatLng(53.558, 9.927);
            static final LatLng KIEL = new LatLng(53.551, 9.993);
            private GoogleMap map;

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.activity_main);
                map = ((MapFragment) getFragmentManager().findFragmentById(R.id.map)).getMap();
                if (map != null) {
                    Marker hamburg = map.addMarker(new MarkerOptions()
                            .position(HAMBURG).title("Hamburg"));
                    Marker kiel = map.addMarker(new MarkerOptions()
                            .position(KIEL)
                            .title("Kiel")
                            .snippet("Kiel is cool")
                            .icon(BitmapDescriptorFactory.fromResource(R.drawable.ic_launcher)));
                    map.moveCamera(CameraUpdateFactory.newLatLngZoom(HAMBURG, 15));
                    // Zoom in, animating the camera.
                    map.animateCamera(CameraUpdateFactory.zoomTo(10), 2000, null);
                }
            }
        }

    Any idea what I'm doing wrong? I really need this to work.
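
    Without the logcat output any diagnosis is a guess, but two things in the manifest stand out. First, <uses-library android:name="com.google.android.maps"/> is the Maps *v1* runtime, which a Maps v2 (Play Services) app does not need and which stops the app from starting on devices that lack that legacy library. Second, Maps v2 needs the google-play-services library *project* referenced (not just its jar) so its resources resolve. A sketch of the application element with only the assumed-relevant pieces; "your-api-key" is a placeholder:

        <!-- Sketch: drop the Maps v1 uses-library line entirely; keep the v2
             API key, and (for newer Play Services releases) declare the
             services version. -->
        <application ... >
            <meta-data
                android:name="com.google.android.gms.version"
                android:value="@integer/google_play_services_version" />
            <meta-data
                android:name="com.google.android.maps.v2.API_KEY"
                android:value="your-api-key" />
            ...
        </application>

    The import com.google.android.maps.* in MainActivity is also the v1 API and can simply be removed.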

    Read the article

  • Cross-Browser Issue

    - by dazedandconfused
    My background is in WinForms programming and I'm trying to branch out a bit. I'm finding cross-browser issues a frustrating barrier in general, but have a specific one that I just can't seem to work through. I want to display an image and place a semi-transparent bar across the top and bottom. This isn't my ultimate goal, of course, but it demonstrates the problem I'm having in a relatively short code fragment, so let's go with it. The sample code below displays as intended in Chrome, Safari, and Firefox. In IE8, the bar at the bottom doesn't appear at all. I've researched it for hours but just can't seem to come up with the solution. I'm sure this is some dumb rookie mistake, but you've got to start somewhere. Code snippet:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <title></title>
            <script type="text/javascript" language="javascript">
            </script>
            <style type="text/css">
                .workarea {
                    position: relative;
                    border: 1px solid black;
                    background-color: #ccc;
                    overflow: hidden;
                    cursor: move;
                    -moz-user-focus: normal;
                    -moz-user-select: none;
                    unselectable: on;
                }
                .semitransparent {
                    filter: alpha(opacity=70);
                    -moz-opacity: 0.7;
                    -khtml-opacity: 0.7;
                    opacity: 0.7;
                    background-color: Gray;
                }
            </style>
        </head>
        <body style="width: 800px; height: 600px;">
            <div id="workArea" class="workarea" style="width: 800px; height: 350px; left: 100px; top: 50px; background-color: White; border: 1px solid black;">
                <img alt="" src="images/TestImage.jpg" style="left: 0px; top: 0px; border: none; z-index: 1;" />
                <div id="topBar" class="semitransparent" style="position: absolute; width: 800px; height: 75px; left: 0px; top: 0px; min-height: 75px; border: none; z-index: 2;" />
                <div id="bottomBar" class="semitransparent" style="position: absolute; width: 800px; height: 75px; left: 0px; top: 275px; min-height: 75px; border: none; z-index: 2;" />
            </div>
        </body>
        </html>
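
    A strong candidate for the rookie mistake, with the usual hedge: a page served as text/html is parsed as HTML even under an XHTML doctype, and HTML has no self-closing <div />. Browsers read each bar as an *opening* tag, so #bottomBar ends up nested inside #topBar; IE8's filter-based opacity then appears to clip the nested child to the 75px-tall parent, which would hide a bar positioned at top: 275px. Closing the divs explicitly is the first thing to try:

        <!-- sketch: explicit closing tags instead of <div ... /> -->
        <div id="topBar" class="semitransparent"
             style="position: absolute; width: 800px; height: 75px; left: 0px; top: 0px; min-height: 75px; border: none; z-index: 2;"></div>
        <div id="bottomBar" class="semitransparent"
             style="position: absolute; width: 800px; height: 75px; left: 0px; top: 275px; min-height: 75px; border: none; z-index: 2;"></div>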

    Read the article

  • Maven SCM plugin deleting output folder on every execution

    - by Udo Fholl
    Hi all, I need to download from 2 different SVN locations into the same output directory, so I configured 2 different executions. But every time a checkout executes, it deletes the output directory, which also deletes the already-downloaded projects. Here is a sample of my pom.xml:

        <profiles>
            <profile>
                <id>checkout</id>
                <activation>
                    <property>
                        <name>checkout</name>
                        <value>true</value>
                    </property>
                </activation>
                <build>
                    <plugins>
                        <plugin>
                            <groupId>org.apache.maven.plugins</groupId>
                            <artifactId>maven-scm-plugin</artifactId>
                            <version>1.3</version>
                            <configuration>
                                <username>${svn.username}</username>
                                <password>${svn.pass}</password>
                                <checkoutDirectory>${path}</checkoutDirectory>
                                <skipCheckoutIfExists />
                            </configuration>
                            <executions>
                                <execution>
                                    <id>checkout_a</id>
                                    <configuration>
                                        <connectionUrl>scm:svn:https://host_n/folder</connectionUrl>
                                        <checkoutDirectory>${path}</checkoutDirectory>
                                    </configuration>
                                    <phase>process-resources</phase>
                                    <goals>
                                        <goal>checkout</goal>
                                    </goals>
                                </execution>
                                <execution>
                                    <id>checkout_b</id>
                                    <configuration>
                                        <connectionUrl>scm:svn:https://host_l/anotherfolder</connectionUrl>
                                        <checkoutDirectory>${path}</checkoutDirectory>
                                    </configuration>
                                    <phase>process-resources</phase>
                                    <goals>
                                        <goal>checkout</goal>
                                    </goals>
                                </execution>
                            </executions>
                        </plugin>
                    </plugins>
                </build>
            </profile>
        </profiles>

    Is there any way to prevent the executions from deleting the folder ${path}? Thank you. PS: I can't format the pom.xml fragment correctly, sorry!
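
    As far as I can tell this is the plugin's intended behaviour rather than a bug: the checkout goal clears its checkoutDirectory before checking out, so two executions pointed at the same ${path} will always erase each other. One way out, sketched below, is to give each execution its own subdirectory under ${path}; the folder names here are placeholders:

        <!-- sketch: per-execution checkout directories -->
        <execution>
            <id>checkout_a</id>
            <configuration>
                <connectionUrl>scm:svn:https://host_n/folder</connectionUrl>
                <checkoutDirectory>${path}/checkout_a</checkoutDirectory>
            </configuration>
            <phase>process-resources</phase>
            <goals>
                <goal>checkout</goal>
            </goals>
        </execution>
        <!-- checkout_b likewise, with ${path}/checkout_b -->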

    Read the article

  • OpenGL 3D texture issue

    - by user1478217
    Hi, I'm trying to use a 3D texture in OpenGL to implement volume rendering. Each voxel has an RGBA colour value and is currently rendered as a screen-facing quad (for testing purposes). I just can't seem to get the sampler to give me a colour value in the shader; the quads always end up black. When I change the shader to generate a colour (based on xyz coords), then it works fine. I'm loading the texture with the following code:

        glGenTextures(1, &tex3D);
        glBindTexture(GL_TEXTURE_3D, tex3D);

        unsigned int colours[8];
        colours[0] = Colour::AsBytes<unsigned int>(Colour::Blue);
        colours[1] = Colour::AsBytes<unsigned int>(Colour::Red);
        colours[2] = Colour::AsBytes<unsigned int>(Colour::Green);
        colours[3] = Colour::AsBytes<unsigned int>(Colour::Magenta);
        colours[4] = Colour::AsBytes<unsigned int>(Colour::Cyan);
        colours[5] = Colour::AsBytes<unsigned int>(Colour::Yellow);
        colours[6] = Colour::AsBytes<unsigned int>(Colour::White);
        colours[7] = Colour::AsBytes<unsigned int>(Colour::Black);

        glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA, 2, 2, 2, 0, GL_RGBA, GL_UNSIGNED_BYTE, colours);

    The colours array contains the correct data, i.e. the first four bytes have values 0, 0, 255, 255 for blue. Before rendering I bind the texture to the 2nd texture unit like so:

        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_3D, tex3D);

    And render with the following code:

        shaders["DVR"]->Use();
        shaders["DVR"]->Uniforms["volTex"].SetValue(1);
        shaders["DVR"]->Uniforms["World"].SetValue(Mat4(vl_one));
        shaders["DVR"]->Uniforms["viewProj"].SetValue(cam->GetViewTransform() * cam->GetProjectionMatrix());
        QuadDrawer::DrawQuads(8);

    I have used these classes for setting shader params before and they work fine. The QuadDrawer draws eight instanced quads. The vertex shader code looks like this:

        #version 330
        layout(location = 0) in vec2 position;
        layout(location = 1) in vec2 texCoord;

        uniform sampler3D volTex;
        ivec3 size = ivec3(2, 2, 2);
        uniform mat4 World;
        uniform mat4 viewProj;

        smooth out vec4 colour;

        void main()
        {
            vec3 texCoord3D;
            int num = gl_InstanceID;
            texCoord3D.x = num % size.x;
            texCoord3D.y = (num / size.x) % size.y;
            texCoord3D.z = (num / (size.x * size.y));
            texCoord3D /= size;
            texCoord3D *= 2.0;
            texCoord3D -= 1.0;
            colour = texture(volTex, texCoord3D);
            //colour = vec4(texCoord3D, 1.0);
            gl_Position = viewProj * World * vec4(texCoord3D, 1.0) + (vec4(position.x, position.y, 0.0, 0.0) * 0.05);
        }

    Uncommenting the line where I set the colour value equal to the texcoord works fine and makes the quads coloured. The fragment shader is simply:

        #version 330
        smooth in vec4 colour;
        out vec4 outColour;
        void main()
        {
            outColour = colour;
        }

    So my question is: what am I doing wrong? Why is the sampler not getting any colour values from the 3D texture?

    [EDIT] Figured it out but can't self-answer (new user). As soon as I posted this I figured it out, so I'll put the answer up to help anyone else (it's not specifically a 3D texture issue, and I've also fallen afoul of it before, d'oh!). I didn't generate mipmaps for the texture, and the default magnification/minification filters weren't set to either GL_LINEAR or GL_NEAREST. Boom! No textures. The same thing happens with 2D textures.
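
    For anyone hitting the same black-texture symptom, a sketch of the fix the asker describes: the default GL_TEXTURE_MIN_FILTER is GL_NEAREST_MIPMAP_LINEAR, so a texture uploaded without a mipmap chain is "incomplete" and samples as black in the shader. Setting non-mipmapped filters right after the upload (or calling glGenerateMipmap) makes it complete:

        // sketch -- place directly after the glTexImage3D(...) call
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);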

    Read the article

  • jQuery tabs error when used with jTemplates

    - by tessa
    I am using jTemplates to format data returned from a JSON query. I am trying to transform the div with the id "fundingDialogTabs" into jQuery tabs after the template is processed. It renders the tab buttons, but both the fragment1 and fragment2 divs are showing at the same time, and I get the error "jQuery UI Tabs: Mismatching fragment identifier" when clicking on the fragment2 tab. I tested the jQuery tabs code outside of the template and it works fine. Here is the template (saved in a .tpl file):

        {#template MAIN}
        <div style="width:500px">
            <table border="0" cellpadding="0" cellspacing="0" id="fundingDialogTitle">
                <tr>
                    <td class="fundingDialogTitle">Funding Breakout</td>
                    <td style="text-align:right"><img src="../../images/fscaClose.gif" onclick="CloseFundingDialog()" style="cursor:hand; width:25px; height:25px;"></td>
                </tr>
            </table>
        </div>
        <div style="padding:10px 10px 10px 10px; width:500px">
            <div id="fundingDialogTabs">
                <ul>
                    <li><a href="#fragment1"><span>Source</span></a></li>
                    <li><a href="#fragment2"><span>Line Item</span></a></li>
                </ul>
                <div id="fragment1">
                    <table border="0" cellpadding="0" cellspacing="0" id="fundingDialog">
                        <tr>
                            <th>Funding Source</th>
                            <th>Amount</th>
                        </tr>
                        {#foreach $T.d as fundingList}
                        {#include ROW root=$T.fundingList}
                        {#/for}
                    </table>
                </div>
                <div id="fragment2">
                    Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreet dolore magna aliquam erat volutpat. Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreet dolore magna aliquam erat volutpat.
                </div>
            </div>
        </div>
        {#/template MAIN}

        {#template ROW}
        <tr>
            <td>{$T.SourceName}</td>
            <td>{$T.Amount}</td>
        </tr>
        {#/template ROW}

    Here are the JSON and processTemplate methods:

        function GetFundingDialog(id) {
            $.ajax({
                type: "POST",
                url: "../../WebService/Workplan.asmx/GetFundingList",
                data: "{}",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function(msg) {
                    ApplyTemplate(msg, id);
                },
                error: function(result) {
                    ShowError(result.responseText);
                }
            });
        }

        function ApplyTemplate(msg, id) {
            var fundingDialog = $("div[id='divFundingList']");
            if (fundingDialog.length > 0) {
                fundingDialog.setTemplateURL('../../usercontrols/Workplan/FundingList.tpl');
                fundingDialog.processTemplate(msg);
                fundingDialog[0].style.display = "block";
                var src = $("img[id='openFundingList_" + id + "']");
                if (src.length > 0) {
                    var srcPosition = findPos(src[0]);
                    fundingDialog[0].style.top = parseInt(srcPosition[1] + 25);
                }
            }
            $("#fundingDialogTabs").tabs();
        }
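
    One guess, offered with low confidence because this error depends on jQuery UI internals of that vintage: Tabs keeps widget state in $.data() and resolves each anchor's fragment against the page URL, so if ApplyTemplate runs more than once, the template rebuild swaps the markup out from under a still-live tabs widget; stale state over fresh DOM is one known way to end up with "Mismatching fragment identifier" and both panels visible. A defensive teardown before reprocessing is cheap to try - a sketch:

        // sketch: destroy the widget bound to the *old* markup before the
        // template replaces it, then tabify only the freshly inserted DOM
        function ApplyTemplate(msg, id) {
            var oldTabs = $("#fundingDialogTabs");
            if (oldTabs.length && oldTabs.data("tabs")) {
                oldTabs.tabs("destroy");
            }
            // ... setTemplateURL / processTemplate / repositioning as before ...
            $("#fundingDialogTabs").tabs();
        }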

    Read the article

  • STL find performs better than hand-crafted loop

    - by dusha
    Hello all, I have a question. Given the following C++ code fragment:

        #include <boost/progress.hpp>
        #include <vector>
        #include <algorithm>
        #include <numeric>
        #include <iostream>

        struct incrementor
        {
            incrementor() : curr_() {}
            unsigned int operator()() { return curr_++; }
        private:
            unsigned int curr_;
        };

        template<class Vec>
        char const* value_found(Vec const& v, typename Vec::const_iterator i)
        {
            return i == v.end() ? "no" : "yes";
        }

        template<class Vec>
        typename Vec::const_iterator find1(Vec const& v, typename Vec::value_type val)
        {
            return find(v.begin(), v.end(), val);
        }

        template<class Vec>
        typename Vec::const_iterator find2(Vec const& v, typename Vec::value_type val)
        {
            for(typename Vec::const_iterator i = v.begin(), end = v.end(); i < end; ++i)
                if(*i == val)
                    return i;
            return v.end();
        }

        int main()
        {
            using namespace std;
            typedef vector<unsigned int>::const_iterator iter;
            vector<unsigned int> vec;
            vec.reserve(10000000);

            boost::progress_timer pt;
            generate_n(back_inserter(vec), vec.capacity(), incrementor());
            // added this line, to avoid any doubt that the compiler is able
            // to guess the data is sorted
            random_shuffle(vec.begin(), vec.end());
            cout << "value generation required: " << pt.elapsed() << endl;

            double d;
            pt.restart();
            iter found = find1(vec, vec.capacity());
            d = pt.elapsed();
            cout << "first search required: " << d << endl;
            cout << "first search found value: " << value_found(vec, found) << endl;

            pt.restart();
            found = find2(vec, vec.capacity());
            d = pt.elapsed();
            cout << "second search required: " << d << endl;
            cout << "second search found value: " << value_found(vec, found) << endl;

            return 0;
        }

    On my machine (Intel i7, Windows Vista) STL find (called via find1) runs about 10 times faster than the hand-crafted loop (called via find2). I first thought that Visual C++ performs some kind of vectorization (I may be mistaken here), but as far as I can see the assembly does not look as though it uses vectorization. Why is the STL loop faster? The hand-crafted loop is identical to the loop in the body of STL find. I was asked to post the program's output. Without shuffle:

        value generation required: 0.078
        first search required: 0.008
        first search found value: no
        second search required: 0.098
        second search found value: no

    With shuffle (caching effects):

        value generation required: 1.454
        first search required: 0.009
        first search found value: no
        second search required: 0.044
        second search found value: no

    Many thanks, dusha. P.S. I return the iterator and write out the result (found or not) because I would like to prevent the compiler optimization of deciding that the loop is not required at all. The searched value is obviously not in the vector.
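
    One hedged explanation that fits these numbers: Visual C++ of that era enables checked iterators (_SECURE_SCL=1) even in release builds, so every dereference and increment of a vector iterator in the hand-written loop pays a range check, while the standard algorithms validate the range once and then iterate on unchecked internal pointers. A quick way to test the hypothesis without touching compiler flags is a third variant over raw pointers:

        // sketch: same search, but over raw pointers, bypassing iterator checks
        template<class Vec>
        typename Vec::const_iterator find3(Vec const& v, typename Vec::value_type val)
        {
            if (v.empty())
                return v.end();
            typename Vec::value_type const* first = &v[0];
            typename Vec::value_type const* last  = first + v.size();
            for (typename Vec::value_type const* p = first; p != last; ++p)
                if (*p == val)
                    return v.begin() + (p - first);
            return v.end();
        }

    If find3 times like find1, checked-iterator overhead is the answer; if it still times like find2, the difference lies elsewhere (for instance in how the optimizer treats the i < end comparison versus find's i != end).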

    Read the article

  • LINQ to XML problems using XElement's Elements(XName) method

    - by Dorian McHensie
    Hello everyone. I have a problem using LINQ to XML. It's simple code. I have this XML:

        <?xml version="1.0" encoding="utf-8" ?>
        <data xmlns="http://www.xxx.com"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://www.xxx.com/directory file.xsd">
            <contact>
                <name>aaa</name>
                <email>[email protected]</email>
                <birthdate>2002-09-22</birthdate>
                <telephone>000:000000</telephone>
                <description>Description for this contact</description>
            </contact>
            <contact>
                <name>sss</name>
                <email>[email protected]</email>
                <birthdate>2002-09-22</birthdate>
                <telephone>000:000000</telephone>
                <description>Description for this contact</description>
            </contact>
            <contact>
                <name>bbb</name>
                <email>[email protected]</email>
                <birthdate>2002-09-22</birthdate>
                <telephone>000:000000</telephone>
                <description>Description for this contact</description>
            </contact>
            <contact>
                <name>ccc</name>
                <email>[email protected]</email>
                <birthdate>2002-09-22</birthdate>
                <telephone>000:000000</telephone>
                <description>Description for this contact</description>
            </contact>
        </data>

    I want to get every contact, mapping it onto a Contact object. To do this I use this fragment of code:

        XDocument XDoc = XDocument.Load(System.Web.HttpRuntime.AppDomainAppPath + this.filesource);
        XElement XRoot = XDoc.Root;
        //XElement XEl = XElement.Load(this.filesource);
        var results = from e in XRoot.Elements("contact")
                      select new Contact((string)e.Element("name"),
                                         (string)e.Element("email"),
                                         "1-1-1", null, null);
        List<Contact> cntcts = new List<Contact>();
        foreach (Contact cntct in results)
        {
            cntcts.Add(cntct);
        }
        Contact[] c = cntcts.ToArray();
        // Encapsulating element
        Elements<Contact> final = new Elements<Contact>(c);

    Never mind all that; focus on this. When I get the root node, everything is all right - I get it correctly. When I use the select directive, I try to get every node by saying from e in XRoot.Elements("contact"). Here's the problem: if I use from e in XRoot.Elements(), I get all the contact nodes, but if I use from e in XRoot.Elements("contact"), I get nothing - an empty set. OK, you tell me: use the other one. Fine, let's use from e in XRoot.Elements(); I get all the nodes anyway. But here comes the other problem: when I try to access <name>, <email>, etc. with .Element("name"), that does not work either! It seems that it does not match the name I pass - but how is that possible? I know that the Elements() function takes, overloaded, one argument that is an XName which is mapped onto a string. Please consider that the code I wrote comes from an example; it should work. Please, somebody help me. Thanks in advance.
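
    The symptom has a well-known cause, and here it fits exactly: the document declares a default namespace (xmlns="http://www.xxx.com"), so every element in it lives in that namespace. Elements("contact") asks for a contact in *no* namespace and matches nothing, while the parameterless Elements() ignores names and returns everything. Qualifying the names fixes both lookups - a sketch against the asker's own Contact constructor:

        // sketch: namespace-qualified LINQ to XML query
        XNamespace ns = "http://www.xxx.com";
        var results = from e in XRoot.Elements(ns + "contact")
                      select new Contact((string)e.Element(ns + "name"),
                                         (string)e.Element(ns + "email"),
                                         "1-1-1", null, null);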

    Read the article

  • Why are some pictures crooked after using my function?

    - by Miko Kronn
    struct BitmapDataAccessor
        {
            private readonly byte[] data;
            private readonly int[] rowStarts;
            public readonly int Height;
            public readonly int Width;

            public BitmapDataAccessor(byte[] data, int width, int height)
            {
                this.data = data;
                this.Height = height;
                this.Width = width;
                rowStarts = new int[height];
                for (int y = 0; y < Height; y++)
                    rowStarts[y] = y * width;
            }

            // Maybe use an enum with Red = 0, Green = 1, and Blue = 2 members?
            public byte this[int x, int y, int color]
            {
                get { return data[(rowStarts[y] + x) * 3 + color]; }
                set { data[(rowStarts[y] + x) * 3 + color] = value; }
            }

            public byte[] Data
            {
                get { return data; }
            }
        }

        public static byte[, ,] Bitmap2Byte(Bitmap obraz)
        {
            int h = obraz.Height;
            int w = obraz.Width;
            byte[, ,] wynik = new byte[w, h, 3];
            BitmapData bd = obraz.LockBits(new Rectangle(0, 0, w, h), ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
            int bytes = Math.Abs(bd.Stride) * h;
            byte[] rgbValues = new byte[bytes];
            IntPtr ptr = bd.Scan0;
            System.Runtime.InteropServices.Marshal.Copy(ptr, rgbValues, 0, bytes);
            BitmapDataAccessor bda = new BitmapDataAccessor(rgbValues, w, h);
            for (int i = 0; i < h; i++)
            {
                for (int j = 0; j < w; j++)
                {
                    wynik[j, i, 0] = bda[j, i, 2];
                    wynik[j, i, 1] = bda[j, i, 1];
                    wynik[j, i, 2] = bda[j, i, 0];
                }
            }
            obraz.UnlockBits(bd);
            return wynik;
        }

        public static Bitmap Byte2Bitmap(byte[, ,] tablica)
        {
            if (tablica.GetLength(2) != 3)
            {
                throw new NieprawidlowyWymiarTablicyException();
            }
            int w = tablica.GetLength(0);
            int h = tablica.GetLength(1);
            Bitmap obraz = new Bitmap(w, h, PixelFormat.Format24bppRgb);
            for (int i = 0; i < w; i++)
            {
                for (int j = 0; j < h; j++)
                {
                    Color kol = Color.FromArgb(tablica[i, j, 0], tablica[i, j, 1], tablica[i, j, 2]);
                    obraz.SetPixel(i, j, kol);
                }
            }
            return obraz;
        }

    Now, if I do:

        private void btnLoad_Click(object sender, EventArgs e)
        {
            if (dgOpenFile.ShowDialog() == DialogResult.OK)
            {
                try
                {
                    Bitmap img = new Bitmap(dgOpenFile.FileName);
                    byte[, ,] tab = Grafika.Bitmap2Byte(img);
                    picture.Image = Grafika.Byte2Bitmap(tab);
                    picture.Size = img.Size;
                }
                catch (Exception ex)
                {
                    MessageBox.Show(ex.Message);
                }
            }
        }

    most pictures are handled correctly, but some are not. (An example picture that fails, and a fragment of the skewed result it produces, were attached as images to the original post.) Why is that?
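
    A likely explanation, hedged only because the failing images aren't visible here: GDI+ pads every scanline of a 24bpp bitmap to a multiple of 4 bytes, so whenever width * 3 is not divisible by 4, the rows in the copied buffer are longer than BitmapDataAccessor assumes, and each successive row gets shifted a little further - exactly the "crooked" look that hits only *some* image widths. A sketch of a stride-aware accessor:

        // sketch: index rows by the bitmap's real stride, not width * 3
        public BitmapDataAccessor(byte[] data, int width, int height, int stride)
        {
            this.data = data;
            this.Height = height;
            this.Width = width;
            rowStarts = new int[height];
            for (int y = 0; y < height; y++)
                rowStarts[y] = y * stride;   // stride includes the row padding
        }

        public byte this[int x, int y, int color]
        {
            get { return data[rowStarts[y] + x * 3 + color]; }
            set { data[rowStarts[y] + x * 3 + color] = value; }
        }

    with Bitmap2Byte constructing it as new BitmapDataAccessor(rgbValues, w, h, Math.Abs(bd.Stride)).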

    Read the article

  • How can I clear content without getting the dreaded "stop running this script?" dialog?

    - by Cheeso
    I have a div that holds a div, like this:

        <div id='reportHolder' class='column'>
            <div id='report'>
            </div>
        </div>

    Within the inner div, I add a bunch (7-12) of pairs of a and div elements, like this:

        <h4><a>Heading1</a></h4>
        <div> ...content here....</div>

    The total size of the content is maybe 200k. Each div just contains a fragment of HTML. Within it, there are numerous <span> elements containing other HTML elements, and they nest, to maybe 5-8 levels deep. Nothing really extraordinary, I don't think. After I add all the content, I then create an accordion, like this:

        $('#report').accordion({collapsible:true, active:false});

    This all works fine. The problem is, when I try to clear or remove the report div, it takes a looooooong time, and I get 3 or 4 popups asking "Do you want to stop running this script?" I have tried several ways.

    Option 1:

        $('#report').accordion('destroy');
        $('#report').remove();
        $("#reportHolder").html("<div id='report'> </div>");

    Option 2:

        $('#report').accordion('destroy');
        $('#report').html('');
        $("#reportHolder").html("<div id='report'> </div>");

    Option 3:

        $('#report').accordion('destroy');
        $("#reportHolder").html("<div id='report'> </div>");

    After getting a suggestion in the comments, I also tried option 4:

        $('#report').accordion('destroy');
        $('#report').empty();
        $("#reportHolder").html("<div id='report'> </div>");

    No matter what, it hangs for a long while. The call to accordion('destroy') seems not to be the source of the delay; it's the erasure of the HTML content within the report div. This is jQuery 1.3.2, and it happens on FF3.5 as well as IE8. Questions:

    1. What is taking so long?
    2. How can I remove content more quickly?

    Addendum: I broke into the debugger in FF during "option 4", and the stack trace I see is:

        data()
        trigger()
        triggerHandler()
        add()
        each()
        each()
        add()
        empty()
        each()
        each()
        (?)()       // <<-- this is the call to empty()
        ResetUi()   // <<-- my code, onclick

    I don't understand why add() is in the stack; I am removing content, not adding it. I'm afraid that in the context of the remove-all, jQuery does something naive, like grabbing the HTML content, doing a text replace to remove one HTML element, then calling .add() to put back what remains. Is there a way to tell jQuery to NOT propagate events when removing HTML content from the DOM?
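
    A hedged answer to "what is taking so long": jQuery's .empty(), .remove(), and .html() all walk every descendant node to unbind event handlers and purge .data() entries; the data()/trigger() frames in the stack trace are that cleanup, and the .add() there is jQuery assembling the set of descendants plus the element itself, not adding content. With a 200k subtree, that walk dominates. If nothing inside #report holds jQuery data or handlers that need cleanup, bypassing jQuery for the teardown is dramatically faster, at the cost of potential leaks in old IE if something does:

        // sketch: skip jQuery's per-node cleanup and replace the subtree directly
        $('#report').accordion('destroy');
        document.getElementById('reportHolder').innerHTML =
            "<div id='report'> </div>";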

    Read the article

< Previous Page | 111 112 113 114 115 116 117 118 119 120 121 122  | Next Page >