Search Results


  • Hybrid IT or Cloud Initiative – a Perfect Enterprise Architecture Maturation Opportunity

    - by Ted McLaughlan
    All too often in the growth and maturation of Enterprise Architecture initiatives, the effort stalls or is delayed due to lack of “applied traction”. By this, I mean the EA activities - whether targeted towards compliance, risk mitigation or value opportunity propositions – may not be attached to measurable, active, visible projects that could advance and prove the value of EA. EA doesn’t work by itself, in a vacuum, without collaborative engagement and a means of proving usefulness. A critical vehicle to this proof is successful orchestration and use of assets and investment resources to meet a high-profile business objective – i.e. a successful project.

    More and more organizations are now exploring and considering some degree of IT outsourcing, buying and using external services and solutions to deliver their IT and business requirements – vs. building and operating in-house, in their own data centers. The rapid growth and success of “Cloud” services makes some decisions easier and some IT projects more successful, while dramatically lowering IT risks and enabling rapid growth. This is particularly true for “Software as a Service” (SaaS) applications, which essentially are complete web applications hosted and delivered over the Internet. Whether SaaS solutions – or any kind of cloud solution - are actually, ultimately the most cost-effective approach truly depends on the organization’s business and IT investment strategy.

    This leads us to Enterprise Architecture, the connectivity between business strategy and investment objectives, and the capabilities purchased or created to meet them. If an EA framework already exists, the approach to selecting a cloud-based solution and integrating it with internal IT systems (i.e. a “Hybrid IT” solution) is well-served by leveraging EA methods. If an EA framework doesn’t exist, or is simply not mature enough to address complex, integrated IT objectives – a hybrid IT/cloud initiative is the perfect project to advance and prove the value of EA.

    Why is this? For starters, the success of any complex IT integration project - spanning multiple systems, contracts and organizations, public and private – depends on active collaboration and coordination among the project stakeholders. For a hybrid IT initiative, inclusive of one or more cloud services providers, the IT services, business workflow and data governance challenges alone can be extremely complex, requiring many diverse layers of organizational expertise and authority. Establishing subject matter expertise, authorities and strategic guidance across all the disciplines involved in a hybrid-IT or hybrid-cloud system requires top-level, comprehensive experience and collaborative leadership.

    Tools and practices reflecting industry expertise and EA alignment can also be very helpful – such as Oracle’s “Cloud Candidate Selection Tool”. Using tools like this, and facilitating this critical collaboration by leading, organizing and coordinating the input and expertise into a shared, referenceable, reusable set of authority models and practices – this is where EA shines, and where Enterprise Architects can be most valuable. The “enterprise”, in this case, becomes something greater than the core organization – it includes internal systems, public cloud services, 3rd-party IT platforms and datacenters, distributed users and devices; a whole greater than the sum of its parts.

    Through facilitated project collaboration, leading to identification or creation of solid governance models and processes, a durable and useful Enterprise Architecture framework will usually emerge by itself, if not actually identified and managed as such. The transition from planning collaboration to actual coordination, where the program plan, schedule and resources become synchronized and aligned to other investments in the organization portfolio, is where EA methods and artifacts appear and become most useful. The actual scope and use of these artifacts, in the context of this project, can then set the stage for the most desirable, helpful and pragmatic form of the now-maturing EA framework and community of practice.

    Considering or starting a hybrid-IT or hybrid-cloud initiative? Running into some complex relationship challenges? This is the perfect time to take advantage of your new, growing or possibly latent Enterprise Architecture practice.

    Read the article

  • Coherence Data Guarantees for Data Reads - Basic Terminology

    - by jpurdy
    When integrating Coherence into applications, each application has its own set of requirements with respect to data integrity guarantees. Developers often describe these requirements using expressions like "avoiding dirty reads" or "making sure that updates are transactional", but we often find that even in a small group of people, there may be a wide range of opinions as to what these terms mean. This may simply be due to a lack of familiarity, but given that Coherence sits at an intersection of several (mostly) unrelated fields, it may be a matter of conflicting vocabularies (e.g. "consistency" is similar but different in transaction processing versus multi-threaded programming).

    Since almost all data read consistency issues are related to the concept of concurrency, it is helpful to start with a definition of that, or rather what it means for two operations to be concurrent. Rather than implying that they occur "at the same time", concurrency is a slightly weaker statement -- it simply means that it can't be proven that one event precedes (or follows) the other. As an example, in a Coherence application, if two client members mutate two different cache entries sitting on two different cache servers at roughly the same time, it is likely that one update will precede the other by a significant amount of time (say 0.1ms). However, since there is no guarantee that all four members have their clocks perfectly synchronized, and there is no way to precisely measure the time it takes to send a given message between any two members (that have differing clocks), we consider these to be concurrent operations since we cannot (easily) prove otherwise.

    So this leads to a question that we hear quite frequently: "Are the contents of the near cache always synchronized with the underlying distributed cache?". It's easy to see that if an update on a cache server results in a message being sent to each near cache, and then that near cache being updated, there is a window where the contents are different. However, this is irrelevant, since even if the application reads directly from the distributed cache, another thread may update the cache before the read is returned to the application. Even if no other member modifies a cache entry prior to the local near cache entry being updated (and subsequently read), the purpose of reading a cache entry is to do something with the result, usually either displaying it for consumption by a human, or updating the entry based on the current state of the entry. In the former case, it's clear that if the data is updated faster than a human can perceive, then there is no problem (and in many cases this can be relaxed even further). In the latter case, the application must assume that the value might potentially be updated before it has a chance to update it. This is almost always the case with read-only caches, and the solution is the traditional optimistic transaction pattern, which requires the application to explicitly state what assumptions it made about the old value of the cache entry. If the application doesn't want to bother stating those assumptions, it is free to lock the cache entry prior to reading it, ensuring that no other threads will mutate the entry -- a pessimistic approach.

    The optimistic approach relies on what is sometimes called a "fuzzy read". In other words, the application assumes that the read should be correct, but it also acknowledges that it might not be. (I use the qualifier "sometimes" because in some writings, "fuzzy read" indicates the situation where the application actually sees an original value and then later sees an updated value within the same transaction -- however, both definitions are roughly equivalent from an application design perspective.) If the read is not correct, it is called a "stale read". Going back to the definition of concurrency, it may seem difficult to precisely define a stale read, but the practical way of detecting a stale read is that it will cause the encompassing transaction to roll back if it tries to update that value.

    The pessimistic approach relies on a "coherent read": a guarantee that the value returned is not only the same as the primary copy of that value, but also that it will remain that way. In most cases this can be used interchangeably with "repeatable read" (though that term has additional implications when used in the context of a database system).

    In none of the cases above is it possible for the application to perform a "dirty read". A dirty read occurs when the application reads a piece of data that was never committed. In practice, the only way this can occur is with multi-phase updates such as transactions, where a value may be temporarily updated but then withdrawn when a transaction is rolled back. If another thread sees that value prior to the rollback, it is a dirty read. If an application uses optimistic transactions, dirty reads will merely result in a lack of forward progress (this is actually one of the main risks of dirty reads -- they can be chained and potentially cause cascading rollbacks).

    The concepts of dirty reads, fuzzy reads, stale reads and coherent reads are able to describe the vast majority of requirements that we see in the field. However, the important thing is to define the terms used to define requirements. A quick web search for each of the terms in this article will show multiple meanings, so I've selected what are generally the most common variations, but it never hurts to state each definition explicitly if they are critical to the success of a project (many applications have sufficiently loose requirements that precise terminology can be avoided).
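    To make the two patterns concrete, here is a minimal Java sketch against a Coherence NamedCache. This is not from the original post: the counter cache and key are hypothetical, and only NamedCache's lock()/unlock() (inherited from ConcurrentMap) and the standard Map operations are assumed.

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;

        public final class ReadPatterns {
            // Pessimistic: lock before reading, so the value cannot change between
            // the read and the update; this is a "coherent read".
            public static void pessimisticIncrement(Object key) {
                NamedCache cache = CacheFactory.getCache("counters"); // hypothetical cache
                cache.lock(key, -1); // -1 waits indefinitely for the lock
                try {
                    Integer value = (Integer) cache.get(key);
                    cache.put(key, value + 1);
                } finally {
                    cache.unlock(key);
                }
            }

            // Optimistic: read without locking (a "fuzzy read"), then state the
            // assumption about the old value when committing; a stale read shows
            // up as a failed validation, and we retry.
            public static void optimisticIncrement(Object key) {
                NamedCache cache = CacheFactory.getCache("counters");
                while (true) {
                    Integer before = (Integer) cache.get(key);
                    cache.lock(key, -1);
                    try {
                        if (before.equals(cache.get(key))) { // does the assumption still hold?
                            cache.put(key, before + 1);
                            return;
                        }
                        // Stale read detected: another thread changed the entry
                        // between our read and our commit, so loop and re-read.
                    } finally {
                        cache.unlock(key);
                    }
                }
            }
        }

    Note that even the optimistic variant briefly locks at commit time to validate its assumption; the difference is that the read itself is unguarded, so readers never block each other.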

    Read the article

  • R12.0 Cash Management Consolidated Patch Collection (CPC) And R12.1 Cash Management Recommended Patch Collection (RPC)

    - by user793553
    If you have Oracle E-Business Suite's Cash Management (CE) application installed, you'll want to be sure to install the latest CPC (Consolidated Patch Collection) if you are using a R12.0 version of the apps, or the latest RPC (Recommended Patch Collection) for the R12.1 version of the apps. These collections give you all the fixes currently available for known issues in the specified versions of the application, including all of the latest Root Cause Analysis Fixes (RCAs)!

    What is an "RPC" (for R12.1 users)? Since the release of 12.1, a number of recommended patches for Oracle Cash Management have been made available as standalone patches to help address important business process issues. Adoption of these patches was highly recommended at the time, but not always implemented, so to further facilitate adoption, Oracle consolidated them into product-specific Recommended Patch Collections (RPCs). They were created by Oracle Development with the following goals in mind:

    - Stability: To address data integrity issues that have been identified by Oracle Development and Oracle Software Support as having the potential to interfere with the normal completion of important business processes (such as period close).
    - Root Cause Fixes (RCAs): To make available root cause fixes for known data integrity issues.
    - Compact: To keep the file footprint as small as possible to help facilitate the install process and minimize testing.
    - Granular: To compile the collection of patches based on functional areas, allowing a customer to apply multiple RPCs at once, or in phases (based on individual needs and goals).

    Where to start: ALL R12 Cash Management users (R12.0 and R12.1) should start with the following note on My Oracle Support (MOS): Doc ID 1367845.1, "R12: Cash Management Recommended Patch Collections". It's a great place for important implementation information about both sets of critical patch collections!

    For R12.1.x users: R12.1 users should also take a look at the documents below for even more information about the RPC for the R12.1.x versions of the Cash Management application, and other related available RPCs:

    - Doc ID 1489997.1: Master Troubleshooting Guide for CE: Reconciliation & Clearing [VIDEO]
    - Doc ID 954704.1: EBS: R12.1 Oracle Financials Recommended Patch Collections (RPCs)
    - Doc ID 1316506.1: R12: Oracle CE: Upgrading from R11i to R12.1: Latest Recommended Patches

    Patch Wizard Utility: While a patch may contain several hundred files, the impact on your system may actually be minimal. Patches contain hard prerequisites that are intended to make a patch work on a very low code baseline. The Patch Wizard Utility will give you a detailed analysis of the patch’s impact on your instance BEFORE it’s applied, so you’ll know exactly what to expect from the application. Please refer to Doc ID 976188.1 for more information on this important utility.

    Read the article

  • World Backup Day

    - by red(at)work
    Here at Red Gate Towers, the SQL Backup development team have been hunkered down in their shed for the last few months, with the toolbox, blowtorch and chamois leather out, upgrading SQL Backup. When we started, autumn leaves were falling. Now we're about to finish, spring flowers are budding. If not quite a gleaming new machine, at the very least a familiar, reliable engine with some shiny new bits on it will trundle magnificently out of the workshop.

    One of the interesting things I've noticed about working on software development teams is that the team is together for so long 'implementing' stuff - designing, coding, testing, fixing bugs and so on - that you occasionally forget why you're doing what you're doing. Doubt creeps in. It feels like a long time since we launched this project in a fanfare of optimism and enthusiasm, and all that clarity of purpose and mission "yee-haw" has dissipated with the daily pressures of development. Every now and again, we look up from our bunker and notice all those thousands of users out there, with their different configurations and working practices, each with their own set of problems and requirements, and we ask ourselves "does anyone care about what we're doing?" Has the world moved on while we've been busy? Could we have been doing something more useful with the time and talent of all these excellent people we've assembled? In truth, you can research and test and validate all you like, but you never really know if you've done the right thing (or at least, something valuable for some users) until you release. All projects suffer this insecurity. If they don't, maybe you're not worrying enough about what you're building. The two enemies of software development are certainty and complacency. Oh, and of course, rival teams with Nerf guns.

    The goal of SQL Backup 7 is to make it so easy to schedule regular restores of your backups that you have no excuse not to. Why schedule a restore? Because your data is not as good as your last backup - it's only as good as your last successful restore. If you're not checking your backups by restoring them and running an integrity check on the database, you're only doing half the job. It seems that most DBAs know that this is best practice, but it can be tricky and time-consuming to set up, so it's one of those tasks that can get forgotten in the midst of all the other demands on their time. Sometimes, they're just too busy firefighting. But what if it was simple to do? That was our inspiration for SQL Backup 7.

    So it was heartening to read Brent Ozar's blog post the other day about World Backup Day. To be honest, I'd never heard of World Backup Day (Talk Like a Pirate Day, yes, but not this one); however, its emphasis on not just backing up your data but checking the validity of those backups was exactly the same message we had in mind when building SQL Backup 7. It's printed on a piece of A3 above our planning board: "Make backup verification so easy to do that no DBA has an excuse for not doing it". It's the missing piece that completes the puzzle. Simple idea, great concept, useful feature, but, as it turned out, far from straightforward to implement.

    The problem is the future. As Marty McFly discovered over the course of three movies, the future is uncertain and hard to predict - so when you are scheduling a restore to take place an hour, day, week or month after the backup, there are all kinds of questions that you wouldn't normally have to consider. Where will this backup live? Will it even exist at the time? Will it be split into multiple files? What will the file names be? Will it be encrypted? What files should it be restored to? SQL Backup needs to know what to expect at the time the restore job is actually run.

    Of course, a DBA will know the answer to all these questions, but to deliver the whole point of version 7, we wanted to make it easy for them to input that information into SQL Backup. We think we've done that. When you create your scheduled backup job, there is now an option to create a "reminder" to follow it up with a scheduled restore to verify the resulting backups. Actually, it's much more than a reminder, as it stores all the relevant data so you can click it and pre-populate the wizard with all the right settings to set up your verification restores. Simple. But, what do you think? We'd love you to try it.

    Post by Brian Harris

    Read the article

  • Collaborative Whiteboard using WebSocket in GlassFish 4 - Text/JSON and Binary/ArrayBuffer Data Transfer (TOTD #189)

    - by arungupta
    This blog has published a few entries on using the JSR 356 Reference Implementation (Tyrus) as it is integrated in GlassFish 4 promoted builds:

    - TOTD #183: Getting Started with WebSocket in GlassFish
    - TOTD #184: Logging WebSocket Frames using Chrome Developer Tools, Net-internals and Wireshark
    - TOTD #185: Processing Text and Binary (Blob, ArrayBuffer, ArrayBufferView) Payload in WebSocket
    - TOTD #186: Custom Text and Binary Payloads using WebSocket

    One of the typical use cases for WebSocket is online collaborative games. This Tip Of The Day (TOTD) explains a sample that can be used to build such games easily. The application is a collaborative whiteboard where different shapes can be drawn in multiple colors. The shapes drawn on one browser are automatically drawn on all other peer browsers that are connected to the same endpoint. The shape, color, and coordinates of the image are transferred using a JSON structure. A browser may opt out of sharing the figures. Alternatively, any browser can send a snapshot of its existing whiteboard to all other browsers.

    Take a look at this video to understand how the application works and the underlying code. The complete sample code can be downloaded here. The code behind the application is also explained below.

    The web page (index.jsp) has an HTML5 Canvas as shown:

        <canvas id="myCanvas" width="150" height="150" style="border:1px solid #000000;"></canvas>

    And some radio buttons to choose the color and shape. By default, the shape, color, and coordinates of any figure drawn on the canvas are put in a JSON structure and sent as a message to the WebSocket endpoint. The JSON structure looks like:

        {
            "shape": "square",
            "color": "#FF0000",
            "coords": {
                "x": 31.59999942779541,
                "y": 49.91999053955078
            }
        }

    The endpoint definition looks like:

        @WebSocketEndpoint(value = "websocket",
                           encoders = {FigureDecoderEncoder.class},
                           decoders = {FigureDecoderEncoder.class})
        public class Whiteboard {

    As you can see, the endpoint has a decoder and encoder registered that decode JSON to a Figure (a POJO class) and vice versa, respectively. The decode method looks like:

        public Figure decode(String string) throws DecodeException {
            try {
                JSONObject jsonObject = new JSONObject(string);
                return new Figure(jsonObject);
            } catch (JSONException ex) {
                throw new DecodeException("Error parsing JSON", ex.getMessage(), ex.fillInStackTrace());
            }
        }

    And the encode method looks like:

        public String encode(Figure figure) throws EncodeException {
            return figure.getJson().toString();
        }

    FigureDecoderEncoder implements both decoder and encoder functionality, but that's purely for convenience; the recommended design pattern is to keep them in separate classes. In certain cases, you may even need only one of them.

    On the client side, the Canvas is initialized as:

        var canvas = document.getElementById("myCanvas");
        var context = canvas.getContext("2d");
        canvas.addEventListener("click", defineImage, false);

    The defineImage method constructs the JSON structure as shown above and sends it to the endpoint using websocket.send(). An instant snapshot of the canvas is sent using binary transfer with WebSocket. The WebSocket is initialized as:

        var wsUri = "ws://localhost:8080/whiteboard/websocket";
        var websocket = new WebSocket(wsUri);
        websocket.binaryType = "arraybuffer";

    The important part is to set the binaryType property of WebSocket to arraybuffer. This ensures that any binary transfers using WebSocket are done using ArrayBuffer, as the default type seems to be Blob. The actual binary data transfer is done using the following:

        var image = context.getImageData(0, 0, canvas.width, canvas.height);
        var buffer = new ArrayBuffer(image.data.length);
        var bytes = new Uint8Array(buffer);
        for (var i = 0; i < bytes.length; i++) {
            bytes[i] = image.data[i];
        }
        websocket.send(bytes);

    This comprehensive sample shows the following features of the JSR 356 API:

    - Annotation-driven endpoints
    - Send/receive text and binary payload in WebSocket
    - Encoders/decoders for custom text payload

    In addition, it also shows how images can be captured and drawn using HTML5 Canvas in a JSP.

    How could this be turned into an online game? Imagine drawing a Tic-tac-toe board on the canvas with two players playing and others watching. Then you can build access rights and controls within the application itself. Instead of sending a snapshot of the canvas on demand, a new peer joining the game could automatically be sent the current state as well. Do you want to build this game? I built a similar game a few years ago. Does somebody want to rewrite the game using the WebSocket API? :-)

    Many thanks to Jitu and Akshay for helping through the WebSocket internals! Here are some references for you:

    - JSR 356: Java API for WebSocket - Specification (Early Draft) and Implementation (already integrated in GlassFish 4 promoted builds)

    Subsequent blogs will discuss the following topics (not necessarily in that order):

    - Error handling
    - Interface-driven WebSocket endpoint
    - Java client API
    - Client and Server configuration
    - Security
    - Subprotocols
    - Extensions
    - Other topics from the API
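    One piece the walkthrough above leaves out is the server-side message handler itself. As a rough sketch, the broadcast step could look like the following; note that this is written against the final JSR 356 API names (@ServerEndpoint, @OnMessage, Session.getOpenSessions()) rather than the early-draft annotations shown above, so the details may differ from the downloadable sample:

        import java.io.IOException;
        import javax.websocket.EncodeException;
        import javax.websocket.OnMessage;
        import javax.websocket.Session;
        import javax.websocket.server.ServerEndpoint;

        @ServerEndpoint(value = "/websocket",
                        encoders = {FigureDecoderEncoder.class},
                        decoders = {FigureDecoderEncoder.class})
        public class Whiteboard {

            // The registered decoder has already turned the incoming JSON text
            // into a Figure; re-broadcast it to every connected peer so all
            // whiteboards stay in sync.
            @OnMessage
            public void figureReceived(Figure figure, Session session)
                    throws IOException, EncodeException {
                for (Session peer : session.getOpenSessions()) {
                    if (peer.isOpen()) {
                        peer.getBasicRemote().sendObject(figure); // runs the encoder
                    }
                }
            }
        }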

    Read the article

  • Efficient Way to Draw Grids in XNA

    - by sm81095
    So I am working on a game right now, using MonoGame as my framework, and it has come time to render my world. My world is made up of a grid (think Terraria, but top-down instead of from the side), and it has multiple layers of grids in a single world. Knowing how inefficient it is to call SpriteBatch.Draw() a lot of times, I tried to implement a system where a tile would only be drawn if it wasn't hidden by the layers above it. The problem is, I'm getting worse performance by checking if it's hidden than when I just let everything draw even if it's not visible. So my question is: how do I efficiently check if a tile is hidden to cut down on the Draw() calls?

    Here is my draw code for a single layer, drawing floors and then the tiles (which act like walls):

        public void Draw(GameTime gameTime)
        {
            int drawAmt = 0;
            int width = Tile.TILE_DIM;
            int startX = (int)_parent.XOffset;
            int startY = (int)_parent.YOffset;

            //Gets the starting tiles and the dimensions to draw tiles, so only onscreen
            //tiles are drawn, allowing for the drawing of large worlds
            int tileDrawWidth = ((CIGame.Instance.Graphics.PreferredBackBufferWidth / width) + 4);
            int tileDrawHeight = ((CIGame.Instance.Graphics.PreferredBackBufferHeight / width) + 4);
            int tileStartX = (int)MathHelper.Clamp((-startX / width) - 2, 0, this.Width);
            int tileStartY = (int)MathHelper.Clamp((-startY / width) - 2, 0, this.Height);

            #region Draw Floors and Tiles
            CIGame.Instance.GraphicsDevice.SetRenderTarget(_worldTarget);
            CIGame.Instance.GraphicsDevice.Clear(Color.Black);
            CIGame.Instance.SpriteBatch.Begin();

            //Draw floors
            for (int x = tileStartX; x < (int)MathHelper.Clamp(tileStartX + tileDrawWidth, 0, this.Width); x++)
            {
                for (int y = tileStartY; y < (int)MathHelper.Clamp(tileStartY + tileDrawHeight, 0, this.Height); y++)
                {
                    //Check if this tile is hidden by the layers above it
                    bool visible = true;
                    for (int i = this.LayerNumber; i <= _parent.ActiveLayer; i++)
                    {
                        if (this.LayerNumber != (_parent.Layers - 1) &&
                            (_parent.GetTileAt(x, y, i + 1).Opacity >= 1.0f ||
                             _parent.GetFloorAt(x, y, i + 1).Opacity >= 1.0f))
                        {
                            visible = false;
                            break;
                        }
                    }

                    //Only draw if visible under the tile above it
                    if (visible && this.GetTileAt(x, y).Opacity < 1.0f)
                    {
                        Texture2D tex = WorldTextureManager.GetFloorTexture((Floor)_floors[x, y]);
                        Rectangle source = WorldTextureManager.GetSourceForIndex(((Floor)_floors[x, y]).GetTextureIndexFromSurroundings(x, y, this), tex);
                        Rectangle draw = new Rectangle(startX + x * width, startY + y * width, width, width);
                        CIGame.Instance.SpriteBatch.Draw(tex, draw, source, Color.White * ((Floor)_floors[x, y]).Opacity);
                        drawAmt++;
                    }
                }
            }

            //Draw tiles
            for (int x = tileStartX; x < (int)MathHelper.Clamp(tileStartX + tileDrawWidth, 0, this.Width); x++)
            {
                for (int y = tileStartY; y < (int)MathHelper.Clamp(tileStartY + tileDrawHeight, 0, this.Height); y++)
                {
                    //Check if this tile is hidden by the layers above it
                    bool visible = true;
                    for (int i = this.LayerNumber; i <= _parent.ActiveLayer; i++)
                    {
                        if (this.LayerNumber != (_parent.Layers - 1) &&
                            (_parent.GetTileAt(x, y, i + 1).Opacity >= 1.0f ||
                             _parent.GetFloorAt(x, y, i + 1).Opacity >= 1.0f))
                        {
                            visible = false;
                            break;
                        }
                    }

                    if (visible)
                    {
                        Texture2D tex = WorldTextureManager.GetTileTexture((Tile)_tiles[x, y]);
                        Rectangle source = WorldTextureManager.GetSourceForIndex(((Tile)_tiles[x, y]).GetTextureIndexFromSurroundings(x, y, this), tex);
                        Rectangle draw = new Rectangle(startX + x * width, startY + y * width, width, width);
                        CIGame.Instance.SpriteBatch.Draw(tex, draw, source, Color.White * ((Tile)_tiles[x, y]).Opacity);
                        drawAmt++;
                    }
                }
            }

            CIGame.Instance.SpriteBatch.End();
            Console.WriteLine(drawAmt);
            CIGame.Instance.GraphicsDevice.SetRenderTarget(null); //TODO: Change to new rendertarget instead of null
            #endregion
        }

    So I was wondering: is this an efficient approach that I'm just going about wrongly, or is there a different, more efficient way to check whether the tiles are hidden?

    EDIT: For an example of how much it affects performance: using a world with three layers, allowing everything to draw no matter what gives me 60 FPS, but checking if it's visible against all of the layers above it gives me only 20 FPS, while checking only the layer immediately above it gives me a fluctuating frame rate between 30 and 40 FPS.

    Read the article

  • Configuring MySQL Cluster Data Nodes

    - by Mat Keep
    In my previous blog post, I discussed the enhanced performance and scalability delivered by extensions to the multi-threaded data nodes in MySQL Cluster 7.2. In this post, I’ll share best practices on the configuration of data nodes to achieve optimum performance on the latest generations of multi-core, multi-thread CPU designs.

    Configuring the Data Nodes

    The configuration of data node threads can be managed in two ways via the config.ini file:

    - Simply set MaxNoOfExecutionThreads to the appropriate number of threads to be run in the data node, based on the number of threads presented by the processors used in the host or VM.
    - Use the new ThreadConfig variable that enables users to configure both the number of each thread type to use and also which CPUs to bind them to.

    The flexible configuration afforded by the multi-threaded data node enhancements means that it is possible to optimise data nodes to use anything from a single CPU/thread up to a 48 CPU/thread server. Co-locating the MySQL Server with a single data node can fully utilize servers with 64-80 CPU/threads. It is also possible to co-locate multiple data nodes per server, but this is now only required for very large servers with 4+ CPU sockets of dense multi-core processors.

    24 Threads and Beyond!

    An example of how to make best use of a 24 CPU/thread server box is to configure the following (a concrete config fragment for this layout is sketched at the end of this post):

    - 8 ldm threads
    - 4 tc threads
    - 3 recv threads
    - 3 send threads
    - 1 rep thread for asynchronous replication

    Each of those threads should be bound to a CPU. It is possible to bind the main thread (schema management domain) and the IO threads to the same CPU in most installations. In the configuration above, we have bound threads to 20 different CPUs. We should also protect these 20 CPUs from interrupts by using the IRQBALANCE_BANNED_CPUS configuration variable in /etc/sysconfig/irqbalance and setting it to 0x0FFFFF. The reason for doing this is that MySQL Cluster generates a lot of interrupt and OS kernel processing, so it is recommended to separate activity across CPUs to ensure conflicts with the MySQL Cluster threads are eliminated.

    When booting a Linux kernel, it is also possible to provide the option isolcpus=0-19 in grub.conf. The result is that the Linux scheduler won't use these CPUs for any task; only by using CPU affinity syscalls can a process be made to run on those CPUs. By using this approach, together with binding MySQL Cluster threads to specific CPUs and banning IRQ processing on those CPUs, a very stable performance environment is created for a MySQL Cluster data node.

    On a 32 CPU/thread server:

    - Increase the number of ldm threads to 12.
    - Increase tc threads to 6.
    - Provide 2 more CPUs for the OS and interrupts.
    - The number of send and receive threads should, in most cases, still be sufficient.

    On a 40 CPU/thread server, increase ldm threads to 16, tc threads to 8, and increment send and receive threads to 4.

    On a 48 CPU/thread server it is possible to optimize further by using:

    - 12 tc threads
    - 2 more CPUs for the OS and interrupts
    - Avoiding the IO threads and main thread on the same CPU
    - Adding 1 more receive thread

    Summary

    As both this and the previous post seek to demonstrate, the multi-threaded data node extensions not only serve to increase performance of MySQL Cluster, they also enable users to achieve significantly improved levels of utilization from current and future generations of massively multi-core, multi-thread processor designs. A big thanks to Mikael Ronstrom, Senior MySQL Architect at Oracle, for his work in developing these enhancements and best practices.

    You can download MySQL Cluster 7.2 today and try out all of these enhancements. The Getting Started guides are an invaluable aid to quickly building a Proof of Concept. Don’t forget to check out the MySQL Cluster 7.2 New Features whitepaper to discover everything that is new in the latest GA release.
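    As referenced above, here is what the 24 CPU/thread layout could look like as concrete configuration. This is a sketch only: the parameter names come from the MySQL Cluster documentation, but the exact CPU numbers are hypothetical and depend on how the OS enumerates cores, so verify against the docs for your version before using it.

        [ndbd default]
        # 8 ldm, 4 tc, 3 recv, 3 send and 1 rep thread, each bound to its own CPU;
        # main and io share CPU 19, leaving CPUs 20-23 for the OS and interrupts.
        ThreadConfig=ldm={count=8,cpubind=0,1,2,3,4,5,6,7},tc={count=4,cpubind=8,9,10,11},recv={count=3,cpubind=12,13,14},send={count=3,cpubind=15,16,17},rep={count=1,cpubind=18},main={count=1,cpubind=19},io={count=1,cpubind=19}

        # /etc/sysconfig/irqbalance: shield the 20 bound CPUs from interrupts
        IRQBALANCE_BANNED_CPUS=0x0FFFFF

        # grub.conf kernel line: keep the Linux scheduler off the bound CPUs
        isolcpus=0-19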

    Read the article

  • Mounting ddrescue image after recovery (in over my head)

    - by BorgDomination
    I'm having problems mounting the recovery image. I've tried to mount the image multiple ways:

        quark@DS9 ~ $ sudo mount -t ext4 /media/jump1/1recover/sdb1.img /mnt
        mount: wrong fs type, bad option, bad superblock on /dev/loop0,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so

        quark@DS9 ~ $ sudo mount -r -o loop /media/jump1/1recover/sdb1.img recover
        mount: you must specify the filesystem type

        quark@DS9 ~ $ sudo mount /media/jump1/1recover/sdb1.img mnt
        mount: you must specify the filesystem type

    It doesn't even give me detailed information on the file I just made; Nautilus says it's 160 GB:

        quark@DS9 ~ $ file /media/jump1/1recover/sdb1.img
        /media/jump1/1recover/sdb1.img: data

        quark@DS9 ~ $ mmls /media/jump1/1recover/sdb1.img
        Cannot determine partition type

    I'm not sure what I'm doing wrong or if I started this process incorrectly from the beginning. I've outlined what I've done so far below. I'm clueless; I'd appreciate it if someone had some input for me.

    What I have done from the beginning

    My laptop has two hard drives. One has the dual-boot Win7 / Linux Mint system files. The secondary one contained my /home folder. The laptop was jarred and the /home disk was broken. I tried a LiveCD recovery; it failed, and wouldn't even load a Live session with the disk installed. So I turned to ddrescue.

        quark@DS9 ~ $ sudo fdisk -l

        Disk /dev/sda: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0009fc18

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048   112642047    56320000    7  HPFS/NTFS/exFAT
        /dev/sda2       138033152   312580095    87273472   83  Linux
        /dev/sda3       112644094   138033151    12694529    5  Extended
        /dev/sda5       112644096   132173823     9764864   83  Linux
        /dev/sda6       132175872   138033151     2928640   82  Linux swap / Solaris

        Partition table entries are not in disk order

        Disk /dev/sdb: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0002a8ea

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *          63   312576704   156288321   83  Linux

        Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xed6d054b

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1              63  1953520064   976760001    7  HPFS/NTFS/exFAT

    - sda: 160 GB internal, holds all system files and all computer functions.
    - sdb: 160 GB internal, BROKEN, contains about 140 GB of data I'd like to recover.
    - sdc: 1 TB external, contains the recovery image. The only place that has space to do all this.

    From this site, https://apps.education.ucsb.edu/wiki/Ddrescue , I used this script to create an image of the broken hard drive. I changed the destination to the external USB drive:

        #!/bin/sh
        prt=sdb1
        src=/dev/$prt
        dst=/media/jump1/1recover/$prt.img
        log=$dst.log
        sudo time ddrescue --no-split $src $dst $log
        sudo time ddrescue --direct --max-retries=3 $src $dst $log
        sudo time ddrescue --direct --retrim --max-retries=3 $src $dst $log

    Everything looked like it came off without a hitch:

        quark@DS9 ~ $ sudo bash recover1
        Press Ctrl-C to interrupt
        Initial status (read from logfile)
        rescued:         0 B,  errsize:       0 B,  errors:       0
        Current status
        rescued:    160039 MB,  errsize:    4096 B,  current rate:    35588 B/s
           ipos:      3584 B,   errors:         1,   average rate:   22859 kB/s
           opos:      3584 B,   time from last successful read:        0 s
        Finished
        12.78user 1060.42system 1:56:41elapsed 15%CPU (0avgtext+0avgdata 4944maxresident)k
        312580958inputs+0outputs (1major+601minor)pagefaults 0swaps

        Press Ctrl-C to interrupt
        Initial status (read from logfile)
        rescued:    160039 MB,  errsize:    4096 B,  errors:       1
        Current status
        rescued:    160039 MB,  errsize:    1024 B,  current rate:        0 B/s
           ipos:      1536 B,   errors:         1,   average rate:       13 B/s
           opos:      1536 B,   time from last successful read:      1.3 m
        Finished
        0.00user 0.00system 3:43.95elapsed 0%CPU (0avgtext+0avgdata 4944maxresident)k
        238inputs+0outputs (3major+374minor)pagefaults 0swaps

        Press Ctrl-C to interrupt
        Initial status (read from logfile)
        rescued:    160039 MB,  errsize:    1024 B,  errors:       1
        Current status
        rescued:    160039 MB,  errsize:    1024 B,  current rate:        0 B/s
           ipos:      1536 B,   errors:         1,   average rate:        0 B/s
           opos:      1536 B,   time from last successful read:      3.7 m
        Finished
        0.00user 0.00system 3:43.56elapsed 0%CPU (0avgtext+0avgdata 4944maxresident)k
        8inputs+0outputs (0major+376minor)pagefaults 0swaps

    It looks like, from where I'm standing, it worked perfectly. Here's the log:

        # Rescue Logfile. Created by GNU ddrescue version 1.14
        # Command line: ddrescue --direct --retrim --max-retries=3 /dev/sdb1 /media/jump1/1recover/sdb1.img /media/jump1/1recover/sdb1.img.log
        # current_pos  current_status
        0x00000600     +
        #      pos        size  status
        0x00000000  0x00000400  +
        0x00000400  0x00000400  -
        0x00000800  0x254314FC00  +

    I'm not sure how to proceed. Does this mean all of my data is lost???????? Appreciate ANY input!

    Read the article

  • Copies of GameScene created when called additional times

    - by Orin MacGregor
    I have a game with a level select managed by a SceneManager, which basically just uses ReplaceScene. The first time I load a level, everything works fine. On subsequent calls - for example, completing the level and continuing to the next - things blow up. The level loads fine, but when I try to pan the map or try to move the player, the game crashes. Debugging through, I found that there are multiple occurrences of self and related children like player and mapLayer.

    As a test, I put this code in my ccTouchesBegan:

        NSLog(@"test %i", [self retainCount]);

    The first time a level is loaded, it gives:

        test 2

    The second time I load a level it gives:

        test 2
        test 1

    as in it spits out both values by looping through twice, not just appending an output to the last. It continues with this pattern for each subsequent load, so the third time will give 2 1 1.

    Particular code that causes the game to crash involves calling _tileMap.tileSize, because there is a second GameScene with a tileMap that was supposedly destroyed, so it has tileSize and mapSize of 0. I noticed dealloc doesn't really ever get called, so I tried to manage some things with onExit:

        -(void) onExit
        {
            [self unscheduleAllSelectors];
            [_player stopAllActions]; //stop any animations just in case. normally handled in ccTouchesEnded
            [self removeAllChildrenWithCleanup:YES];
        }

    I never replace the GameScene while I'm in a GameScene; if the level is completed it goes to a GameOver scene, or I use a back button that goes to the LevelSelect scene.

    This is [the relevant parts of] my init, in case something like the adding of children matters:

        -(id) init
        {
            if ((self = [super init])) {
                _mapLayer = [CCLayer node];

                //load data for level
                GameData *gameData = [GameDataParser loadData];
                int selectedChapter = gameData.selectedChapter;
                int selectedLevel = gameData.selectedLevel;
                Levels *chapterLevels = [LevelParser loadLevelsForChapter:selectedChapter];

                //loop until we get selected level, then do stuff
                for (Level *level in chapterLevels.levels) {
                    if (level.number == selectedLevel) {
                        //load the level map
                        _tileMap = [CCTMXTiledMap tiledMapWithTMXFile:level.file];
                    }
                }

                _background = [_tileMap layerNamed:@"Background"];
                _foreground = [_tileMap layerNamed:@"Foreground"];
                _meta = [_tileMap layerNamed:@"Meta"];
                _meta.visible = NO;

                //initialize Spawn Point object and place player there
                CCTMXObjectGroup *objects = [_tileMap objectGroupNamed:@"Objects"];
                NSAssert(objects != nil, @"'Objects' object group not found");
                NSMutableDictionary *spawnPoint = [objects objectNamed:@"SpawnPoint"];
                NSAssert(spawnPoint != nil, @"SpawnPoint object not found");
                int x = [[spawnPoint valueForKey:@"x"] intValue] / retinaScaling;
                int y = [[spawnPoint valueForKey:@"y"] intValue] / retinaScaling;

                //setup animations
                [[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"MouseRightAnim_24x21.plist"];
                CCSpriteBatchNode *spriteSheet = [CCSpriteBatchNode batchNodeWithFile:@"MouseRightAnim_24x21.png"];
                [_mapLayer addChild:spriteSheet z:1];
                NSMutableArray *rightAnimFrames = [NSMutableArray array];
                for (int i = 1; i <= 3; ++i) {
                    [rightAnimFrames addObject:
                        [[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:
                            [NSString stringWithFormat:@"MouseRight%d_24x21.png", i]]];
                }
                CCAnimation *rightAnim = [CCAnimation animationWithSpriteFrames:rightAnimFrames delay:0.1f];
                self.player = [CCSprite spriteWithSpriteFrameName:@"MouseRight2_24x21.png"];
                _player.position = ccp(x, y);
                self.rightAction = [CCRepeatForever actionWithAction:[CCAnimate actionWithAnimation:rightAnim]];
                rightAnim.restoreOriginalFrame = NO;
                [spriteSheet addChild:_player];

                //get map size in pixels
                mapHeight = _tileMap.contentSize.height;
                mapWidth = _tileMap.contentSize.width;

                //setup defaults
                //this value works well for the calculation later, trial and error really
                distance = 150;
                lastGoodDistance = 150;
                mapScale = 1;
                [self setViewpointCenter:_player.position];

                [_mapLayer addChild:_tileMap];
                [self addChild:_mapLayer z:-1];

                self.isTouchEnabled = YES;
            }
            return self;
        }

    And here's the SceneManager code for replacing scenes:

        +(void) goGameScene
        {
            CCLayer *gameLayer = [GameScene node];
            [SceneManager go:gameLayer:[GameHUD node]];
        }

        //this is what every call looks like besides the GameScene one above
        +(void) goLevelSelect
        {
            [SceneManager go:[LevelSelect node]:nil];
        }

        +(void) go:(CCLayer *)layer: (CCLayer *)hudLayer
        {
            CCDirector *director = [CCDirector sharedDirector];
            CCScene *newScene = [SceneManager wrap:layer:hudLayer];
            if ([director runningScene]) {
                [director replaceScene:newScene];
            } else {
                [director runWithScene:newScene];
            }
        }

        +(CCScene *) wrap:(CCLayer *)layer: (CCLayer *)hudLayer
        {
            CCScene *newScene = [CCScene node];
            [newScene addChild:layer];
            if (hudLayer != nil) {
                [newScene addChild:hudLayer z:1];
            }
            return newScene;
        }

    Any ideas why I'm getting these fatal artifacts? I'm hoping this isn't considered too localized, since it basically combines three tutorials that anyone could end up following (Ray Wenderlich Animations, Tim Roadley Scene Manager, Pan and Zoom with Tiled Maps).

    Read the article

  • Data Source Security Part 4

    - by Steve Felts
    So far, I have covered the Client Identity and Oracle Proxy Session features, with WLS or database credentials. This article will cover one more feature, Identity-based pooling. Then there is one more topic to cover - how these options play with transactions.

    Identity-based Connection Pooling

    An identity-based pool creates a heterogeneous pool of connections. This allows applications to use a JDBC connection with a specific DBMS credential by pooling physical connections with different DBMS credentials. The DBMS credential is based on either the WebLogic user mapped to a database user or the database user directly, based on the "use database credentials" setting as described earlier. Using this feature with "use database credentials" enabled seems to be what is proposed in the JDBC standard: basically a heterogeneous pool with users specified by getConnection(user, password).

    The allocation of connections is more complex if the Enable Identity Based Connection Pooling attribute is enabled on the data source. When an application requests a database connection, the WebLogic Server instance selects an existing physical connection or creates a new physical connection with the requested DBMS identity. The following steps describe how heterogeneous connections are created:

    1. At connection pool initialization, the physical JDBC connections based on the configured or default "initial capacity" are created with the configured default DBMS credential of the data source.
    2. An application tries to get a connection from a data source.
    3a. If "use database credentials" is not enabled, the user specified in getConnection is mapped to a DBMS credential, as described earlier. If the credential map doesn't have a matching user, the default DBMS credential is used from the data source descriptor.
    3b. If "use database credentials" is enabled, the user and password specified in getConnection are used directly.
    4. The connection pool is searched for a connection with a matching DBMS credential.
    5. If a match is found, the connection is reserved and returned to the application.
    6. If no match is found, a connection is created or reused based on the maximum capacity of the pool:
       - If the maximum capacity has not been reached, a new connection is created with the DBMS credential, reserved, and returned to the application.
       - If the pool has reached maximum capacity, a physical connection is selected from the pool based on the least recently used (LRU) algorithm and destroyed. A new connection is created with the DBMS credential, reserved, and returned to the application.

    It should be clear that finding a matching connection is more expensive than in a homogeneous pool, and destroying a connection and getting a new one is very expensive. If you can use a normal homogeneous pool or one of the light-weight options (client identity or an Oracle proxy connection), those should be used instead of identity-based pooling.

    Regardless of how physical connections are created, each physical connection in the pool has its own DBMS credential information maintained by the pool. Once a physical connection is reserved by the pool, it does not change its DBMS credential, even if the current thread changes its WebLogic user credential and continues to use the same connection.
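    From the application's point of view, the heterogeneous pool is reached through the standard DataSource API. Here is a minimal sketch; the JNDI name jdbc/myDS and the two users are hypothetical, and only javax.sql.DataSource.getConnection(user, password) is assumed:

        import java.sql.Connection;
        import javax.naming.InitialContext;
        import javax.sql.DataSource;

        public class IdentityPoolClient {
            public static void main(String[] args) throws Exception {
                DataSource ds = (DataSource) new InitialContext().lookup("jdbc/myDS");

                // With "use database credentials" enabled these are database users;
                // otherwise they are WebLogic users run through the credential map.
                try (Connection asScott = ds.getConnection("scott", "tiger");
                     Connection asAdam = ds.getConnection("adam", "adampwd")) {
                    // Each reserved physical connection keeps the DBMS identity it
                    // was created with, so these may be two different physical
                    // connections even though they come from the same pool.
                }
            }
        }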
    To configure this feature, select Enable Identity Based Connection Pooling. See "Enable identity-based connection pooling for a JDBC data source" in the Oracle WebLogic Server Administration Console Help: http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/EnableIdentityBasedConnectionPooling.html

    You must make the following changes to use Logging Last Resource (LLR) transaction optimization with identity-based pooling, to get around the problem that multiple users will be accessing the associated transaction table:

    - You must configure a custom schema for LLR using a fully qualified LLR table name. All LLR connections will then use the named schema rather than the default schema when accessing the LLR transaction table.
    - Use database-specific administration tools to grant permission to access the named LLR table to all users that could access this table via a global transaction. By default, the LLR table is created during boot by the user configured for the connection in the data source. In most cases, the database will only allow access to this user and not allow access to mapped users.

    Connections within Transactions

    Now that we have covered the behavior of all of these various options, it's time to discuss the exception to all of the rules. When you get a connection within a transaction, it is associated with the transaction context on a particular WLS instance.

    When getting a connection with a data source configured with non-XA LLR or 1PC (using the JTS driver) with global transactions, the first connection obtained within the transaction is returned on subsequent connection requests, regardless of the values of username/password specified and independent of the associated proxy user session, if any. The connection must be shared among all users of the connection when using LLR or 1PC.

    For XA data sources, the first connection obtained within the global transaction is returned on subsequent connection requests within the application server, regardless of the values of username/password specified and independent of the associated proxy user session, if any. The connection must be shared among all users of the connection within a global transaction within the application server/JVM.

    Read the article

  • Using XA Transactions in Coherence-based Applications

    - by jpurdy
    While the costs of XA transactions are well known (e.g. increased data contention, higher latency, significant disk I/O for logging, availability challenges, etc.), in many cases they are the most attractive option for coordinating logical transactions across multiple resources. There are a few common approaches when integrating Coherence into applications via the use of an application server's transaction manager:

    - Use of Coherence as a read-only cache, applying transactions to the underlying database (or any system of record) instead of the cache.
    - Use of the TransactionMap interface via the included resource adapter.
    - Use of the new ACID transaction framework, introduced in Coherence 3.6.

    Each of these may have significant drawbacks for certain workloads.

    Using Coherence as a read-only cache is the simplest option. In this approach, the application is responsible for managing both the database and the cache (either within the business logic or via application server hooks). This approach also tends to provide limited benefit for many workloads, particularly those workloads that either have queries (given the complexity of maintaining a fully cached data set in Coherence) or are not read-heavy (where the cost of managing the cache may outweigh the benefits of reading from it). All updates are made synchronously to the database, leaving it as both a source of latency as well as a potential bottleneck. This approach also prevents addressing "hot data" problems (when certain objects are updated by many concurrent transactions), since most database servers offer no facilities for explicitly controlling concurrent updates. Finally, this option tends to be a better fit for key-based access (rather than filter-based access such as queries), since this makes it easier to aggressively invalidate cache entries without worrying about when they will be reloaded. The advantage of this approach is that it allows strong data consistency, as long as optimistic concurrency control is used to ensure that database updates are applied correctly regardless of whether the cache contains stale (or even dirty) data. Another benefit of this approach is that it avoids the limitations of Coherence's write-through caching implementation.

    TransactionMap is generally used when Coherence acts as the system of record. TransactionMap is not generally compatible with write-through caching, so it will usually be used either to manage a standalone cache or when the cache is backed by a database via write-behind caching. TransactionMap has some restrictions that may limit its utility, the most significant being:

    - The lock-based concurrency model is relatively inefficient and may introduce significant latency and contention. As an example, in a typical configuration, a transaction that updates 20 cache entries will require roughly 40ms just for lock management (assuming all locks are granted immediately, and excluding validation and writing, which will require a similar amount of time). This may be partially mitigated by denormalizing (e.g. combining a parent object and its set of child objects into a single cache entry), at the cost of increasing false contention (e.g. transactions will conflict even when updating different child objects).
    - If the client (application server JVM) fails during the commit phase, locks will be released immediately, and the transaction may be partially committed. In practice, this is usually not as bad as it may sound, since the commit phase is usually very short (all locks having been previously acquired). Note that this vulnerability does not exist when a single NamedCache is used and all updates are confined to a single partition (generally implying the use of partition affinity).
    - The unconventional TransactionMap API is cumbersome but manageable. Only a few methods are transactional, primarily get(), put() and remove().

    The ACID transaction framework (accessed via the Connection class) provides atomicity guarantees by implementing the NamedCache interface, maintaining its own cache data and transaction logs inside a set of private partitioned caches. This feature may be used as either a local transactional resource or as a logging XA resource. However, a lack of database integration precludes the use of this functionality for most applications. A side effect of this is that this feature has not seen significant adoption, meaning that any use of it is subject to the usual headaches associated with being an early adopter (a greater chance of bugs and a greater risk of hitting an unoptimized code path). As a result, for the moment, we generally recommend against using this feature.

    In summary, it is possible to use Coherence in XA-oriented applications, and several customers are doing this successfully, but it is not a core usage model for the product, so care should be taken before committing to this path. For most applications, the most robust solution is normally to use Coherence as a read-only cache of the underlying data resources, even if this prevents taking advantage of certain product features.
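    For reference, the TransactionMap usage pattern discussed above looks roughly like the following. This is a sketch based on the Coherence 3.x local-transaction API; the cache name and account keys are hypothetical, and the isolation and concurrency settings are illustrative rather than prescriptive:

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.util.TransactionMap;

        public class TransferExample {
            public static void transfer(Object fromKey, Object toKey, int amount) {
                NamedCache cache = CacheFactory.getCache("accounts");
                TransactionMap tx = CacheFactory.getLocalTransaction(cache);
                tx.setTransactionIsolation(TransactionMap.TRANSACTION_REPEATABLE_GET);
                tx.setConcurrency(TransactionMap.CONCUR_PESSIMISTIC);

                tx.begin();
                try {
                    // Only a few methods are transactional, primarily get(), put()
                    // and remove().
                    Integer from = (Integer) tx.get(fromKey);
                    Integer to = (Integer) tx.get(toKey);
                    tx.put(fromKey, from - amount);
                    tx.put(toKey, to + amount);

                    tx.prepare(); // acquire locks and validate
                    tx.commit();  // apply the changes to the underlying cache
                } catch (RuntimeException e) {
                    tx.rollback();
                    throw e;
                }
            }
        }

    The short prepare/commit window at the end is exactly the vulnerability described above: a client failure before commit() leaves the transaction partially applied unless all updates are confined to a single partition.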

    Read the article

  • Using Subjects to Deploy Queries Dynamically

    - by Roman Schindlauer
    In the previous blog posting, we showed how to construct and deploy query fragments to a StreamInsight server, and how to re-use them later. In today’s posting we’ll integrate this pattern into a method of dynamically composing a new query with an existing one. The construct that enables this scenario in StreamInsight V2.1 is a Subject. A Subject lets me create a junction element in an existing query that I can tap into while the query is running. To set this up as an end-to-end example, let’s first define a stream simulator as our data source: var generator = myApp.DefineObservable(     (TimeSpan t) => Observable.Interval(t).Select(_ => new SourcePayload())); This ‘generator’ produces a new instance of SourcePayload with a period of t (system time) as an IObservable. SourcePayload happens to have a property of type double as its payload data. Let’s also define a sink for our example—an IObserver of double values that writes to the console: var console = myApp.DefineObserver(     (string label) => Observer.Create<double>(e => Console.WriteLine("{0}: {1}", label, e)))     .Deploy("ConsoleSink"); The observer takes a string as parameter which is used as a label on the console, so that we can distinguish the output of different sink instances. Note that we also deploy this observer, so that we can retrieve it later from the server from a different process. Remember how we defined the aggregation as an IQStreamable function in the previous article? We will use that as well: var avg = myApp     .DefineStreamable((IQStreamable<SourcePayload> s, TimeSpan w) =>         from win in s.TumblingWindow(w)         select win.Avg(e => e.Value))     .Deploy("AverageQuery"); Then we define the Subject, which acts as an observable sequence as well as an observer. Thus, we can feed a single source into the Subject and have multiple consumers—that can come and go at runtime—on the other side: var subject = myApp.CreateSubject("Subject", () => new Subject<SourcePayload>()); Subject are always deployed automatically. Their name is used to retrieve them from a (potentially) different process (see below). Note that the Subject as we defined it here doesn’t know anything about temporal streams. It is merely a sequence of SourcePayloads, without any notion of StreamInsight point events or CTIs. So in order to compose a temporal query on top of the Subject, we need to 'promote' the sequence of SourcePayloads into an IQStreamable of point events, including CTIs: var stream = subject.ToPointStreamable(     e => PointEvent.CreateInsert<SourcePayload>(e.Timestamp, e),     AdvanceTimeSettings.StrictlyIncreasingStartTime); In a later posting we will show how to use Subjects that have more awareness of time and can be used as a junction between QStreamables instead of IQbservables. Having turned the Subject into a temporal stream, we can now define the aggregate on this stream. We will use the IQStreamable entity avg that we defined above: var longAverages = avg(stream, TimeSpan.FromSeconds(5)); In order to run the query, we need to bind it to a sink, and bind the subject to the source: var standardQuery = longAverages     .Bind(console("5sec average"))     .With(generator(TimeSpan.FromMilliseconds(300)).Bind(subject)); Lastly, we start the process: standardQuery.Run("StandardProcess"); Now we have a simple query running end-to-end, producing results. 
    What follows next is the crucial part of tapping into the Subject and adding another query that runs in parallel, using the same query definition (the “AverageQuery”) but with a different window length. We are assuming that we connected to the same StreamInsight server from a different process or even client, and thus have to retrieve the previously deployed entities through their names:

        // simulate the addition of a 'fast' query from a separate server connection,
        // by retrieving the aggregation query fragment
        // (instead of simply using the 'avg' object)
        var averageQuery = myApp
            .GetStreamable<IQStreamable<SourcePayload>, TimeSpan, double>("AverageQuery");

        // retrieve the input sequence as a subject
        var inputSequence = myApp
            .GetSubject<SourcePayload, SourcePayload>("Subject");

        // retrieve the registered sink
        var sink = myApp.GetObserver<string, double>("ConsoleSink");

        // turn the sequence into a temporal stream
        var stream2 = inputSequence.ToPointStreamable(
            e => PointEvent.CreateInsert<SourcePayload>(e.Timestamp, e),
            AdvanceTimeSettings.StrictlyIncreasingStartTime);

        // apply the query, now with a different window length
        var shortAverages = averageQuery(stream2, TimeSpan.FromSeconds(1));

        // bind new sink to query and run it
        var fastQuery = shortAverages
            .Bind(sink("1sec average"))
            .Run("FastProcess");

    The attached solution demonstrates the sample end-to-end. Regards, The StreamInsight Team
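    Since Run hands back a process handle here (an IDisposable in the 2.1 object model, as in the release announcement below), one possible follow-up, sketched under that assumption, is detaching the 'fast' consumer again at runtime. The Subject decouples the producer from its consumers, so the standard query is unaffected:

        // Sketch only: keep the handle that Run returns...
        IDisposable fastProcess = shortAverages
            .Bind(sink("1sec average"))
            .Run("FastProcess");

        // ...and dispose it later to stop "FastProcess".
        // "StandardProcess" keeps running against the same Subject.
        fastProcess.Dispose();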

    Read the article

  • StreamInsight 2.1 Released

    - by Roman Schindlauer
    The wait is over—we are pleased to announce the release of StreamInsight 2.1. Since the release of version 1.2, we have listened to your feedback and suggestions and, based on that, have come up with a whole new set of features. Here are some of the highlights:

    A New Programming Model – A clearer and more consistent object model, eliminating the need for complex input and output adapters (though they are still completely supported). This new model allows you to provision, name, and manage data sources and sinks in the StreamInsight server.
    Tight Integration with the Reactive Framework (Rx) – You can write reactive queries hosted inside StreamInsight as well as compose temporal queries on reactive objects.
    High Availability – Checkpointing over temporal streams and multiple processes with shared computation.

    Here is how simple coding can be with the 2.1 Programming Model:

        class Program
        {
            static void Main(string[] args)
            {
                using (Server server = Server.Create("Default"))
                {
                    // Create an app
                    Application app = server.CreateApplication("app");

                    // Define a simple observable which generates an integer every second
                    var source = app.DefineObservable(() =>
                        Observable.Interval(TimeSpan.FromSeconds(1)));

                    // Define a sink.
                    var sink = app.DefineObserver(() =>
                        Observer.Create<long>(x => Console.WriteLine(x)));

                    // Define a query to filter the events
                    var query = from e in source
                                where e % 2 == 0
                                select e;

                    // Bind the query to the sink and create a runnable process
                    using (IDisposable proc = query.Bind(sink).Run("MyProcess"))
                    {
                        Console.WriteLine("Press a key to dispose the process...");
                        Console.ReadKey();
                    }
                }
            }
        }

    That’s how easily you can define a source and a sink, compose a query, and run it. Note that we did not replace the existing APIs; they co-exist with the new surface. Stay tuned: you will see a series of articles coming out over the next few weeks about the new features and how to use them. Come and grab it from our download center page and let us know what you think! You can find the updated MSDN documentation here, and we would appreciate it if you could provide feedback on the docs as well—best via email to [email protected]. Moreover, we updated our samples to demonstrate the new programming surface. Regards, The StreamInsight Team

    Read the article

  • Why is my animation getting aborted?

    - by Homer_Simpson
    I have a class named Animation which handles my animations. The animation class can be called from multiple other classes. For example, the class Player.cs can call the animation class like this: Animation Playeranimation; Playeranimation = new Animation(TimeSpan.FromSeconds(2.5f), 80, 40, Animation.Sequences.forwards, 0, 5, false, true); //updating the animation public void Update(GameTime gametime) { Playeranimation.Update(gametime); } //drawing the animation public void Draw(SpriteBatch batch) { playeranimation.Draw(batch, PlayerAnimationSpritesheet, PosX, PosY, 0, SpriteEffects.None); } The class Lion.cs can call the animation class with the same code, only the animation parameters are changing because it's another animation that should be played: Animation Lionanimation; Lionanimation = new Animation(TimeSpan.FromSeconds(2.5f), 100, 60, Animation.Sequences.forwards, 0, 8, false, true); Other classes can call the animation class with the same code like the Player class. But sometimes I have some trouble with the animations. If an animation is running and then shortly afterwards another class calls the animation class too, the second animation starts but the first animation is getting aborted. In this case, the first animation couldn't run until it's end because another class started a new instance of the animation class. Why is an animation sometimes getting aborted when another animation starts? How can I solve this problem? My animation class: public class Animation { private int _animIndex, framewidth, frameheight, start, end; private TimeSpan PassedTime; private List<Rectangle> SourceRects = new List<Rectangle>(); private TimeSpan Duration; private Sequences Sequence; public bool Remove; private bool DeleteAfterOneIteration; public enum Sequences { forwards, backwards, forwards_backwards, backwards_forwards } private void forwards() { for (int i = start; i < end; i++) SourceRects.Add(new Rectangle(i * framewidth, 0, framewidth, frameheight)); } private void backwards() { for (int i = start; i < end; i++) SourceRects.Add(new Rectangle((end - 1 - i) * framewidth, 0, framewidth, frameheight)); } private void forwards_backwards() { for (int i = start; i < end - 1; i++) SourceRects.Add(new Rectangle(i * framewidth, 0, framewidth, frameheight)); for (int i = start; i < end; i++) SourceRects.Add(new Rectangle((end - 1 - i) * framewidth, 0, framewidth, frameheight)); } private void backwards_forwards() { for (int i = start; i < end - 1; i++) SourceRects.Add(new Rectangle((end - 1 - i) * framewidth, 0, framewidth, frameheight)); for (int i = start; i < end; i++) SourceRects.Add(new Rectangle(i * framewidth, 0, framewidth, frameheight)); } public Animation(TimeSpan duration, int frame_width, int frame_height, Sequences sequences, int start_interval, int end_interval, bool remove, bool deleteafteroneiteration) { Remove = remove; DeleteAfterOneIteration = deleteafteroneiteration; framewidth = frame_width; frameheight = frame_height; start = start_interval; end = end_interval; switch (sequences) { case Sequences.forwards: { forwards(); break; } case Sequences.backwards: { backwards(); break; } case Sequences.forwards_backwards: { forwards_backwards(); break; } case Sequences.backwards_forwards: { backwards_forwards(); break; } } Duration = duration; Sequence = sequences; } public void Update(GameTime dt) { PassedTime += dt.ElapsedGameTime; if (PassedTime > Duration) { PassedTime -= Duration; } var percent = PassedTime.TotalSeconds / Duration.TotalSeconds; if (DeleteAfterOneIteration == 
true) { if (_animIndex >= SourceRects.Count) Remove = true; _animIndex = (int)Math.Round(percent * (SourceRects.Count)); } else { _animIndex = (int)Math.Round(percent * (SourceRects.Count - 1)); } } public void Draw(SpriteBatch batch, Texture2D Textures, float PositionX, float PositionY, float Rotation, SpriteEffects Flip) { if (DeleteAfterOneIteration == true) { if (_animIndex >= SourceRects.Count) return; } batch.Draw(Textures, new Rectangle((int)PositionX, (int)PositionY, framewidth, frameheight), SourceRects[_animIndex], Color.White, Rotation, new Vector2(framewidth / 2.0f, frameheight / 2.0f), Flip, 0f); } }
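    One possible cause, offered purely as a hypothesis since it is not visible in the excerpt above: if the callers end up sharing a single Animation reference (for example through a static field or a common animation manager), constructing the "second" animation replaces the first one mid-flight. A minimal sketch of the per-owner pattern that avoids this, reusing the Animation class from the question (GameTime comes from the XNA framework):

        using System;
        using Microsoft.Xna.Framework;

        // Sketch only: each entity owns its Animation instance for its whole lifetime,
        // so starting an animation elsewhere cannot abort this one.
        public class Lion
        {
            private readonly Animation _animation =
                new Animation(TimeSpan.FromSeconds(2.5f), 100, 60,
                              Animation.Sequences.forwards, 0, 8, false, true);

            public void Update(GameTime gameTime)
            {
                _animation.Update(gameTime); // advances only this entity's animation
            }
        }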

    Read the article

  • Web optimization

    - by hmloo
    1. CSS Optimization

    Organize your CSS code. Good CSS organization helps with the future maintainability of the site; it helps you and your team members understand the CSS more quickly and jump to specific styles.

    Structure your CSS code. For a small project, you can break your CSS into separate blocks according to the structure of the page or the page content (e.g. Header, Main Content, Footer).

    Structure your CSS files. For a large project, you may feel you have too much CSS code in one place, so it's best to split your CSS into several files and use a master style sheet to import them. This solution not only organizes the style structure but also reduces server requests.

        /*-------------- Master style sheet --------------*/
        @import "Reset.css";
        @import "Structure.css";
        @import "Typography.css";
        @import "Forms.css";

    Create an index for your CSS. Another important practice is to create an index at the beginning of your CSS file; an index helps you quickly understand the whole CSS structure.

        /*----------------------------------------
        1. Header
        2. Navigation
        3. Main Content
        4. Sidebar
        5. Footer
        ------------------------------------------*/

    Write efficient CSS selectors. Keep in mind that browsers match CSS selectors from right to left, and that the order of efficiency for selectors is:
    1. id (#myid)
    2. class (.myclass)
    3. tag (div, h1, p)
    4. adjacent sibling (h1 + p)
    5. child (ul > li)
    6. descendant (li a)
    7. universal (*)
    8. attribute (a[rel="external"])
    9. pseudo-class and pseudo-element (a:hover, li:first)

    The rightmost selector is called the "key selector", so when you write your CSS code you should choose the most efficient key selector you can. Here are some best practices:

    Don't tag-qualify. Never do this: div#myid div.myclass .myclass#myid. IDs are unique, and classes are more specific than tags, so neither needs a tag qualifier; adding one only makes the selector less efficient.

    Avoid overqualifying selectors. For example, #nav a is more efficient than ul#nav li a.

    Don't repeat declarations. Example: body {font-size:12px;} h1 {font-size:12px;font-weight:bold;}. Since h1 already inherits font-size from body, you don't need to repeat the attribute.

    Use 0 instead of 0px. Always use #selector { margin: 0; }. There's no need to include the px after 0, and removing all those superfluous px can reduce the size of your CSS file.

    Group declarations. Example: h1 { font-size: 16pt; } h1 { color: #fff; } h1 { font-family: Arial, sans-serif; }. It's much better to combine them: h1 { font-size: 16pt; color: #fff; font-family: Arial, sans-serif; }

    Group selectors. Example: h1 { color: #fff; font-family: Arial, sans-serif; } h2 { color: #fff; font-family: Arial, sans-serif; }. It would be much better set up as: h1, h2 { color: #fff; font-family: Arial, sans-serif; }

    Group attributes. Example: h1 { color: #fff; font-family: Arial, sans-serif; } h2 { color: #fff; font-family: Arial, sans-serif; font-size: 16pt; }. You can set different rules for specific elements after setting a rule for the group: h1, h2 { color: #fff; font-family: Arial, sans-serif; } h2 { font-size: 16pt; }

    Use shorthand properties. Example: #selector { margin-top: 8px; margin-right: 4px; margin-bottom: 8px; margin-left: 4px; }. Better: #selector { margin: 8px 4px 8px 4px; }. Best: #selector { margin: 8px 4px; }. (The original post includes a diagram illustrating how shorthand declarations are interpreted depending on how many values are specified for the margin and padding properties.)
    Instead of using:

        #selector { background-image: url("logo.png"); background-position: top left; background-repeat: no-repeat; }

    use:

        #selector { background: url(logo.png) no-repeat top left; }

    2. Image Optimization

    Image Optimizer. Image Optimizer is a free Visual Studio 2010 extension that optimizes PNG, GIF and JPG file sizes without quality loss. It uses SmushIt and PunyPNG for the optimization. Just right-click any folder or image in Solution Explorer and choose "optimize images"; it will automatically optimize all PNG, GIF and JPEG files in that folder.

    CSS Image Sprites. CSS image sprites are a way to combine a collection of images into a single image and then use the CSS background-position property to shift the visible area to show the required image. Many images can take a long time to load and generate multiple server requests, so an image sprite can reduce the number of server requests and improve site performance. You can use many online tools to generate your image sprite and its CSS, and you can also try the Sprite and Image Optimization framework released by the ASP.NET team.

    Read the article

  • retrieve data based on date range using mysql, php [on hold]

    - by preethi
    I am working in WPF, where I have two datepickers. When I try to retrieve information for a date range, it displays only one record on all dates (the same record displayed multiple times; e.g., with the date range 01/10/2013 - 3/10/2013, where I have 3 different records, one on each day, my output is the first record displayed 3 times with the same date and time). function cpWhitelistStats() { $startDate = $_POST['startDate']; $startDateTime = "$startDate 00:00:00"; $endDate = $_POST['endDate']; $endDateTime = "$endDate 23:59:59"; $cpId = $_POST['id']; $cpName = etCommonCpNameById($cpId); print "<h2 style=\"text-align: center;\">Permitted Vehicle Summary</h2>"; print "<h2 style=\"text-align: center;\">for $cpName</h2>"; $tmpDate = explode("/", $startDate); $startYear = $tmpDate[2]; $startMonth= $tmpDate[1]; $startDay = $tmpDate[0]; $tmpDate = explode("/", $endDate); $endYear = $tmpDate[2]; $endMonth= $tmpDate[1]; $endDay = $tmpDate[0]; $startDateTime = "$startYear-$startMonth-$startDay 00:00:00"; $endDateTime = "$endYear-$endMonth-$endDay 23:59:59"; $custId = $_SESSION['customerID']; $realCustomerId = $_SESSION['realCustomerId']; $maxVal = 0; if ($custId != "") { $conn = &newEtConn($custId); // Get the whitelist plates $staticWhitelistArray = etCommonMkWhitelist($conn, $cpId); array_shift($staticWhitelistArray); $startLoopDate = strtotime($startDateTime); $endLoopDate = strtotime($endDateTime); $oneDay = 60 * 60 * 24; // Get the entries $plateList = array_keys($staticWhitelistArray); $plate_lookup = implode('","', $plateList); $sql = "SELECT plate, entry_datetime, exit_datetime FROM stats WHERE plate IN (\"$plate_lookup\") AND entry_datetime > \"$startDateTime\" AND entry_datetime < \"$endDateTime\" AND carpark_id=\"$cpId\" "; $result = $conn->Execute($sql); if (!$result) { print $conn->ErrorMsg(); exit; } $rows = $result->fields; if ($rows != "") { unset($myArray); foreach($result as $values) { $plate = $values['plate']; $new_platelist[] = $plate; $inDateTime = $values['entry_datetime']; $outDateTime = $values['exit_datetime']; $tmp = explode(' ', $inDateTime); $inDate = $tmp[0]; $in_ts = strtotime($inDateTime); $out_ts = strtotime($outDateTime); $duration = $out_ts - $in_ts; $dur_array = intToDateArray($duration); $dur_string = ''; if ($dur_array['days'] > 0) { $dur_string .= $dur_array['days'] . ' days '; } if ($dur_array['hours'] > 0) { $dur_string .= $dur_array['hours'] . ' hours '; } if ($dur_array['mins'] > 0) { $dur_string .= $dur_array['mins'] . ' minutes '; } if ($dur_array['secs'] > 0) { $dur_string .= $dur_array['secs'] . 
' secs '; } $myArray[$plate][] = array($inDateTime, $outDateTime, $inDate, $dur_string); } } while ($startLoopDate < $endLoopDate) { $dayString = strftime("%a, %d %B %Y", $startLoopDate); $dayCheck = strftime("%Y-%m-%d", $startLoopDate); print "<h2>$dayString</h2>"; print "<table width=\"100%\">"; print " <tr>"; print " <th>VRM</th>"; print " <th>Permit Group</th>"; print " <th>Entry Time</th>"; print " <th>Exit Time</th>"; print " <th>Duration</th>"; print " </tr>"; foreach($new_platelist as $wlPlate) { if ($myArray[$wlPlate][0][2] == $dayCheck) { print "<tr>"; print "<td>$wlPlate</td>"; if (isset($myArray[$wlPlate])) { print "<td>".$staticWhitelistArray[$wlPlate]['groupname']."</td>"; print "<td>".$myArray[$wlPlate][0][0]."</td>"; print "<td>".$myArray[$wlPlate][0][1]."</td>"; print "<td>".$myArray[$wlPlate][0][3]."</td>"; } else { print "<td>Vehicle Not Seen</td>"; print "<td>Vehicle Not Seen</td>"; print "<td>Vehicle Not Seen</td>"; } print "</tr>"; } } print "</table>"; $startLoopDate = $startLoopDate + $oneDay; } } }

    Read the article

  • WPF TreeView MouseDown

    - by imekon
    I've got something like this in a TreeView: <DataTemplate x:Key="myTemplate"> <StackPanel MouseDown="OnItemMouseDown"> ... </StackPanel> </DataTemplate> Using this I get the mouse down events if I click on items in the stack panel. However... there seems to be another item behind the stack panel that is the TreeViewItem - it's very hard to hit, but not impossible, and that's when the problems start to occur. I had a go at handling PreviewMouseDown on TreeViewItem, however that seems to require e.Handled = false otherwise standard tree view behaviour stops working. Ok, Here's the source code... MainWindow.xaml <Window x:Class="WPFMultiSelectTree.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:local="clr-namespace:WPFMultiSelectTree" Title="Multiple Selection Tree" Height="300" Width="300"> <Window.Resources> <!-- Declare the classes that convert bool to Visibility --> <local:VisibilityConverter x:Key="visibilityConverter"/> <local:VisibilityInverter x:Key="visibilityInverter"/> <!-- Set the style for any tree view item --> <Style TargetType="TreeViewItem"> <Style.Triggers> <DataTrigger Binding="{Binding Selected}" Value="True"> <Setter Property="Background" Value="DarkBlue"/> <Setter Property="Foreground" Value="White"/> </DataTrigger> </Style.Triggers> <EventSetter Event="PreviewMouseDown" Handler="OnTreePreviewMouseDown"/> </Style> <!-- Declare a hierarchical data template for the tree view items --> <HierarchicalDataTemplate x:Key="RecursiveTemplate" ItemsSource="{Binding Children}"> <StackPanel Margin="2" Orientation="Horizontal" MouseDown="OnTreeMouseDown"> <Ellipse Width="12" Height="12" Fill="Green"/> <TextBlock Margin="2" Text="{Binding Name}" Visibility="{Binding Editing, Converter={StaticResource visibilityInverter}}"/> <TextBox Margin="2" Text="{Binding Name}" KeyDown="OnTextBoxKeyDown" IsVisibleChanged="OnTextBoxIsVisibleChanged" Visibility="{Binding Editing, Converter={StaticResource visibilityConverter}}"/> <TextBlock Margin="2" Text="{Binding Index, StringFormat=({0})}"/> </StackPanel> </HierarchicalDataTemplate> <!-- Declare a simple template for a list box --> <DataTemplate x:Key="ListTemplate"> <TextBlock Text="{Binding Name}"/> </DataTemplate> </Window.Resources> <Grid> <!-- Declare the rows in this grid --> <Grid.RowDefinitions> <RowDefinition Height="Auto"/> <RowDefinition/> <RowDefinition Height="Auto"/> <RowDefinition/> </Grid.RowDefinitions> <!-- The first header --> <TextBlock Grid.Row="0" Margin="5" Background="PowderBlue">Multiple selection tree view</TextBlock> <!-- The tree view --> <TreeView Name="m_tree" Margin="2" Grid.Row="1" ItemsSource="{Binding Children}" ItemTemplate="{StaticResource RecursiveTemplate}"/> <!-- The second header --> <TextBlock Grid.Row="2" Margin="5" Background="PowderBlue">The currently selected items in the tree</TextBlock> <!-- The list box --> <ListBox Name="m_list" Margin="2" Grid.Row="3" ItemsSource="{Binding .}" ItemTemplate="{StaticResource ListTemplate}"/> </Grid> </Window> MainWindow.xaml.cs /// <summary> /// Interaction logic for MainWindow.xaml /// </summary> public partial class MainWindow : Window { private Container m_root; private Container m_first; private ObservableCollection<Container> m_selection; private string m_current; /// <summary> /// Constructor /// </summary> public MainWindow() { InitializeComponent(); m_selection = new ObservableCollection<Container>(); m_root = new Container("root"); for (int parents = 0; parents < 
50; parents++) { Container parent = new Container(String.Format("parent{0}", parents + 1)); for (int children = 0; children < 1000; children++) { parent.Add(new Container(String.Format("child{0}", children + 1))); } m_root.Add(parent); } m_tree.DataContext = m_root; m_list.DataContext = m_selection; m_first = null; } /// <summary> /// Has the shift key been pressed? /// </summary> private bool ShiftPressed { get { return Keyboard.IsKeyDown(Key.LeftShift) || Keyboard.IsKeyDown(Key.RightShift); } } /// <summary> /// Has the control key been pressed? /// </summary> private bool CtrlPressed { get { return Keyboard.IsKeyDown(Key.LeftCtrl) || Keyboard.IsKeyDown(Key.RightCtrl); } } /// <summary> /// Clear down the selection list /// </summary> private void DeselectAndClear() { foreach(Container container in m_selection) { container.Selected = false; } m_selection.Clear(); } /// <summary> /// Add the container to the list (if not already present), /// mark as selected /// </summary> /// <param name="container"></param> private void AddToSelection(Container container) { if (container == null) { return; } foreach (Container child in m_selection) { if (child == container) { return; } } container.Selected = true; m_selection.Add(container); } /// <summary> /// Remove container from list, mark as not selected /// </summary> /// <param name="container"></param> private void RemoveFromSelection(Container container) { m_selection.Remove(container); container.Selected = false; } /// <summary> /// Process single click on a tree item /// /// Normally just select an item /// /// SHIFT-Click extends selection /// CTRL-Click toggles a selection /// </summary> /// <param name="sender"></param> private void OnTreeSingleClick(object sender) { FrameworkElement element = sender as FrameworkElement; if (element != null) { Container container = element.DataContext as Container; if (container != null) { if (CtrlPressed) { if (container.Selected) { RemoveFromSelection(container); } else { AddToSelection(container); } } else if (ShiftPressed) { if (container.Parent == m_first.Parent) { if (container.Index < m_first.Index) { Container item = container; for (int i = container.Index; i < m_first.Index; i++) { AddToSelection(item); item = item.Next; if (item == null) { break; } } } else if (container.Index > m_first.Index) { Container item = m_first; for (int i = m_first.Index; i <= container.Index; i++) { AddToSelection(item); item = item.Next; if (item == null) { break; } } } } } else { DeselectAndClear(); m_first = container; AddToSelection(container); } } } } /// <summary> /// Process double click on tree item /// </summary> /// <param name="sender"></param> private void OnTreeDoubleClick(object sender) { FrameworkElement element = sender as FrameworkElement; if (element != null) { Container container = element.DataContext as Container; if (container != null) { container.Editing = true; m_current = container.Name; } } } /// <summary> /// Clicked on the stack panel in the tree view /// /// Double left click: /// /// Switch to editing mode (flips visibility of textblock and textbox) /// </summary> /// <param name="sender"></param> /// <param name="e"></param> private void OnTreeMouseDown(object sender, MouseButtonEventArgs e) { Debug.WriteLine("StackPanel mouse down"); switch(e.ChangedButton) { case MouseButton.Left: switch (e.ClickCount) { case 2: OnTreeDoubleClick(sender); e.Handled = true; break; } break; } } /// <summary> /// Clicked on tree view item in tree /// </summary> /// <param name="sender"></param> /// <param 
name="e"></param> private void OnTreePreviewMouseDown(object sender, MouseButtonEventArgs e) { Debug.WriteLine("TreeViewItem preview mouse down"); switch (e.ChangedButton) { case MouseButton.Left: switch (e.ClickCount) { case 1: { // We've had a single click on a tree view item // Unfortunately this is the WHOLE tree item, including the +/- // symbol to the left. The tree doesn't do a selection, so we // have to filter this out... MouseDevice device = e.Device as MouseDevice; Debug.WriteLine(String.Format("Tree item clicked on: {0}", device.DirectlyOver.GetType().ToString())); // This is bad. The whole point of WPF is for the code // not to know what the UI has - yet here we are testing for // it as a workaround. Sigh... if (device.DirectlyOver.GetType() != typeof(Path)) { OnTreeSingleClick(sender); } // Cannot say handled - if we do it stops the tree working! //e.Handled = true; } break; } break; } } /// <summary> /// Key press in text box /// /// Return key finishes editing /// Escape key finishes editing, restores original value (this doesn't work!) /// </summary> /// <param name="sender"></param> /// <param name="e"></param> private void OnTextBoxKeyDown(object sender, KeyEventArgs e) { switch(e.Key) { case Key.Return: { TextBox box = sender as TextBox; if (box != null) { Container container = box.DataContext as Container; if (container != null) { container.Editing = false; e.Handled = true; } } } break; case Key.Escape: { TextBox box = sender as TextBox; if (box != null) { Container container = box.DataContext as Container; if (container != null) { container.Editing = false; container.Name = m_current; e.Handled = true; } } } break; } } /// <summary> /// When text box becomes visible, grab focus and select all text in it. /// </summary> /// <param name="sender"></param> /// <param name="e"></param> private void OnTextBoxIsVisibleChanged(object sender, DependencyPropertyChangedEventArgs e) { bool visible = (bool)e.NewValue; if (visible) { TextBox box = sender as TextBox; if (box != null) { box.Focus(); box.SelectAll(); } } } } Here's the Container class public class Container : INotifyPropertyChanged { private string m_name; private ObservableCollection<Container> m_children; private Container m_parent; private bool m_selected; private bool m_editing; /// <summary> /// Constructor /// </summary> /// <param name="name">name of object</param> public Container(string name) { m_name = name; m_children = new ObservableCollection<Container>(); m_parent = null; m_selected = false; m_editing = false; } /// <summary> /// Name of object /// </summary> public string Name { get { return m_name; } set { if (m_name != value) { m_name = value; OnPropertyChanged("Name"); } } } /// <summary> /// Index of object in parent's children /// /// If there's no parent, the index is -1 /// </summary> public int Index { get { if (m_parent != null) { return m_parent.Children.IndexOf(this); } return -1; } } /// <summary> /// Get the next item, assuming this is parented /// /// Returns null if end of list reached, or no parent /// </summary> public Container Next { get { if (m_parent != null) { int index = Index + 1; if (index < m_parent.Children.Count) { return m_parent.Children[index]; } } return null; } } /// <summary> /// List of children /// </summary> public ObservableCollection<Container> Children { get { return m_children; } } /// <summary> /// Selected status /// </summary> public bool Selected { get { return m_selected; } set { if (m_selected != value) { m_selected = value; OnPropertyChanged("Selected"); 
} } } /// <summary> /// Editing status /// </summary> public bool Editing { get { return m_editing; } set { if (m_editing != value) { m_editing = value; OnPropertyChanged("Editing"); } } } /// <summary> /// Parent of this object /// </summary> public Container Parent { get { return m_parent; } set { m_parent = value; } } /// <summary> /// WPF Property Changed event /// </summary> public event PropertyChangedEventHandler PropertyChanged; /// <summary> /// Handler to inform WPF that a property has changed /// </summary> /// <param name="name"></param> private void OnPropertyChanged(string name) { if (PropertyChanged != null) { PropertyChanged(this, new PropertyChangedEventArgs(name)); } } /// <summary> /// Add a child to this container /// </summary> /// <param name="child"></param> public void Add(Container child) { m_children.Add(child); child.m_parent = this; } /// <summary> /// Remove a child from this container /// </summary> /// <param name="child"></param> public void Remove(Container child) { m_children.Remove(child); child.m_parent = null; } } The two classes VisibilityConverter and VisibilityInverter are implementations of IValueConverter that translates bool to Visibility. They make sure the TextBlock is displayed when not editing, and the TextBox is displayed when editing.
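    Regarding the workaround flagged above as "This is bad": a possible alternative, offered as a sketch rather than a verified fix, is to test e.OriginalSource and walk up the visual tree, so the type check is contained in a single helper inside MainWindow (Path is System.Windows.Shapes.Path, VisualTreeHelper is System.Windows.Media):

        // Sketch: was the click on the +/- expander glyph (observed to be a Path)?
        private static bool IsOnExpanderGlyph(object originalSource)
        {
            DependencyObject current = originalSource as DependencyObject;
            while (current is Visual)
            {
                if (current is Path) return true;          // hit the expander glyph
                if (current is TreeViewItem) return false; // reached the item container
                current = VisualTreeHelper.GetParent(current);
            }
            return false;
        }

        // Then, in OnTreePreviewMouseDown:
        //     if (!IsOnExpanderGlyph(e.OriginalSource)) { OnTreeSingleClick(sender); }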

    Read the article

  • [OpenGL ES - Android] Better way to generate tiles

    - by Inoe
    Hi! I'll start by saying that I'm REALLY new to OpenGL ES (I started yesterday =), but I do have some Java and other language experience. I've looked at a lot of tutorials, of course including NeHe's, and my work is mainly based on those. As a test, I started creating a "tile generator" in order to create a small Zelda-like game (just moving a dude in a textured square would be awesome :p). So far, I have achieved a working tile generator: I define a char map[][] array to store which tile goes where: private char[][] map = { {0, 0, 20, 11, 11, 11, 11, 4, 0, 0}, {0, 20, 16, 12, 12, 12, 12, 7, 4, 0}, {20, 16, 17, 13, 13, 13, 13, 9, 7, 4}, {21, 24, 18, 14, 14, 14, 14, 8, 5, 1}, {21, 22, 25, 15, 15, 15, 15, 6, 2, 1}, {21, 22, 23, 0, 0, 0, 0, 3, 2, 1}, {21, 22, 23, 0, 0, 0, 0, 3, 2, 1}, {26, 0, 0, 0, 0, 0, 0, 3, 2, 1}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 1}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 1} }; It's working, but I'm not happy with it; I'm sure there is a better way to do these things: 1) Loading textures: I create an ugly-looking array containing the tiles I want to use on that map: private int[] textures = { R.drawable.herbe, //0 R.drawable.murdroite_haut, //1 R.drawable.murdroite_milieu, //2 R.drawable.murdroite_bas, //3 R.drawable.angledroitehaut_haut, //4 R.drawable.angledroitehaut_milieu, //5 }; (I cut this on purpose; I currently load 27 tiles.) All of these are stored in the drawable folder; each one is a 16*16 tile. I then use this array to generate the textures and store them in a HashMap for later use: int[] tmp_tex = new int[textures.length]; gl.glGenTextures(textures.length, tmp_tex, 0); texturesgen = tmp_tex; //Store the generated names in texturesgen for(int i=0; i < textures.length; i++) { //Bitmap bmp = BitmapFactory.decodeResource(context.getResources(), textures[i]); InputStream is = context.getResources().openRawResource(textures[i]); Bitmap bitmap = null; try { //BitmapFactory is an Android graphics utility for images bitmap = BitmapFactory.decodeStream(is); } finally { //Always clear and close try { is.close(); is = null; } catch (IOException e) { } } // Get a new texture name // Load it up this.textureMap.put(new Integer(textures[i]),new Integer(i)); int tex = tmp_tex[i]; gl.glBindTexture(GL10.GL_TEXTURE_2D, tex); //Create Nearest Filtered Texture gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR); //Different possible texture parameters, e.g. GL10.GL_CLAMP_TO_EDGE gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT); gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT); //Use the Android GLUtils to specify a two-dimensional texture image from our bitmap GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0); bitmap.recycle(); } I'm quite sure there is a better way to handle that... I just wasn't able to figure it out. If someone has an idea, I'm all ears. 
    2) Drawing the tiles: What I did was create a single square and a single texture map: /** The initial vertex definition */ private float vertices[] = { -1.0f, -1.0f, 0.0f, //Bottom Left 1.0f, -1.0f, 0.0f, //Bottom Right -1.0f, 1.0f, 0.0f, //Top Left 1.0f, 1.0f, 0.0f //Top Right }; private float texture[] = { //Mapping coordinates for the vertices 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f }; Then, in my draw function, I loop through the map to define the texture to use (after pointing to and enabling the buffers): for(int y = 0; y < Y; y++){ for(int x = 0; x < X; x++){ tile = map[y][x]; try { //Get the texture from the HashMap int textureid = ((Integer) this.textureMap.get(new Integer(textures[tile]))).intValue(); gl.glBindTexture(GL10.GL_TEXTURE_2D, this.texturesgen[textureid]); } catch(Exception e) { return; } //Draw the vertices as triangle strip gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3); gl.glTranslatef(2.0f, 0.0f, 0.0f); //A square takes 2x so I move +2x before drawing the next tile } gl.glTranslatef(-(float)(2*X), -2.0f, 0.0f); //Go back to the beginning of the map X-wise and move 2y down before drawing the next line } This works great, but I really think that on a 1000*1000 or bigger map it will lag like hell (as a reminder, this is a typical Zelda world map: http://vgmaps.com/Atlas/SuperNES/LegendOfZelda-ALinkToThePast-LightWorld.png ). I've read things about Vertex Buffer Objects and display lists, but I couldn't find a good tutorial, and nobody seems to agree on which one is best / has the better support (the T1 and Nexus One are ages apart). I think that's it. I've put up a lot of code, but I think it helps. Thanks in advance!

    Read the article

  • Django CMS - not able to upload images through cmsplugin_filer_image

    - by Luke
    I have a problem with a local installation of django cms 2.3.3: I've installed it through pip, in a separate virtualenv. Next I followed the tutorial for the settings.py configuration and started the server. Then in the admin I created a page (home) and tried to add an image in the placeholder through cmsplugin_filer_image, but the upload doesn't seem to work. Here's my settings.py: # Django settings for cms1 project. # -*- coding: utf-8 -*- import os gettext = lambda s: s PROJECT_PATH = os.path.abspath(os.path.dirname(__file__)) DEBUG = True TEMPLATE_DEBUG = DEBUG ADMINS = ( # ('Your Name', '[email protected]'), ) MANAGERS = ADMINS DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql_psycopg2', # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'. 'NAME': 'cms1', # Or path to database file if using sqlite3. 'USER': 'cms', # Not used with sqlite3. 'PASSWORD': 'cms', # Not used with sqlite3. 'HOST': '', # Set to empty string for localhost. Not used with sqlite3. 'PORT': '', # Set to empty string for default. Not used with sqlite3. } } # Local time zone for this installation. Choices can be found here: # http://en.wikipedia.org/wiki/List_of_tz_zones_by_name # although not all choices may be available on all operating systems. # In a Windows environment this must be set to your system time zone. TIME_ZONE = 'Europe/Rome' # Language code for this installation. All choices can be found here: # http://www.i18nguy.com/unicode/language-identifiers.html LANGUAGE_CODE = 'it-it' SITE_ID = 1 # If you set this to False, Django will make some optimizations so as not # to load the internationalization machinery. USE_I18N = True # If you set this to False, Django will not format dates, numbers and # calendars according to the current locale. USE_L10N = True # If you set this to False, Django will not use timezone-aware datetimes. USE_TZ = True # Absolute filesystem path to the directory that will hold user-uploaded files. # Example: "/home/media/media.lawrence.com/media/" MEDIA_ROOT = os.path.join(PROJECT_PATH, "media") # URL that handles the media served from MEDIA_ROOT. Make sure to use a # trailing slash. # Examples: "http://media.lawrence.com/media/", "http://example.com/media/" MEDIA_URL = '/media/' # Absolute path to the directory static files should be collected to. # Don't put anything in this directory yourself; store your static files # in apps' "static/" subdirectories and in STATICFILES_DIRS. # Example: "/home/media/media.lawrence.com/static/" STATIC_ROOT = os.path.join(PROJECT_PATH, "static") STATIC_URL = "/static/" # Additional locations of static files STATICFILES_DIRS = ( os.path.join(PROJECT_PATH, "static_auto"), # Put strings here, like "/home/html/static" or "C:/www/django/static". # Always use forward slashes, even on Windows. # Don't forget to use absolute paths, not relative paths. ) # List of finder classes that know how to find static files in # various locations. STATICFILES_FINDERS = ( 'django.contrib.staticfiles.finders.FileSystemFinder', 'django.contrib.staticfiles.finders.AppDirectoriesFinder', # 'django.contrib.staticfiles.finders.DefaultStorageFinder', ) # Make this unique, and don't share it with anybody. SECRET_KEY = '^c2q3d8w)f#gk%5i)(#i*lwt%lm-!2=(*1d!1cf+rg&-hqi_9u' # List of callables that know how to import templates from various sources. 
TEMPLATE_LOADERS = ( 'django.template.loaders.filesystem.Loader', 'django.template.loaders.app_directories.Loader', # 'django.template.loaders.eggs.Loader', ) MIDDLEWARE_CLASSES = ( 'django.middleware.common.CommonMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'cms.middleware.multilingual.MultilingualURLMiddleware', 'cms.middleware.page.CurrentPageMiddleware', 'cms.middleware.user.CurrentUserMiddleware', 'cms.middleware.toolbar.ToolbarMiddleware', # Uncomment the next line for simple clickjacking protection: # 'django.middleware.clickjacking.XFrameOptionsMiddleware', ) ROOT_URLCONF = 'cms1.urls' # Python dotted path to the WSGI application used by Django's runserver. WSGI_APPLICATION = 'cms1.wsgi.application' TEMPLATE_DIRS = ( os.path.join(PROJECT_PATH, "templates"), # Put strings here, like "/home/html/django_templates" or "C:/www/django/templates". # Always use forward slashes, even on Windows. # Don't forget to use absolute paths, not relative paths. ) CMS_TEMPLATES = ( ('template_1.html', 'Template One'), ('template_2.html', 'Template Two'), ) TEMPLATE_CONTEXT_PROCESSORS = ( 'django.contrib.auth.context_processors.auth', 'django.core.context_processors.i18n', 'django.core.context_processors.request', 'django.core.context_processors.media', 'django.core.context_processors.static', 'cms.context_processors.media', 'sekizai.context_processors.sekizai', ) LANGUAGES = [ ('it', 'Italiano'), ('en', 'English'), ] INSTALLED_APPS = ( 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', 'django.contrib.messages', 'django.contrib.staticfiles', 'cms', #django CMS itself 'mptt', #utilities for implementing a modified pre-order traversal tree 'menus', #helper for model independent hierarchical website navigation 'south', #intelligent schema and data migrations 'sekizai', #for javascript and css management #'cms.plugins.file', 'cms.plugins.flash', 'cms.plugins.googlemap', 'cms.plugins.link', #'cms.plugins.picture', 'cms.plugins.snippet', 'cms.plugins.teaser', 'cms.plugins.text', #'cms.plugins.video', 'cms.plugins.twitter', 'filer', 'cmsplugin_filer_file', 'cmsplugin_filer_folder', 'cmsplugin_filer_image', 'cmsplugin_filer_teaser', 'cmsplugin_filer_video', 'easy_thumbnails', 'PIL', # Uncomment the next line to enable the admin: 'django.contrib.admin', # Uncomment the next line to enable admin documentation: # 'django.contrib.admindocs', ) # A sample logging configuration. The only tangible logging # performed by this configuration is to send an email to # the site admins on every HTTP 500 error when DEBUG=False. # See http://docs.djangoproject.com/en/dev/topics/logging for # more details on how to customize your logging configuration. 
    LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'filters': { 'require_debug_false': { '()': 'django.utils.log.RequireDebugFalse' } }, 'handlers': { 'mail_admins': { 'level': 'ERROR', 'filters': ['require_debug_false'], 'class': 'django.utils.log.AdminEmailHandler' } }, 'loggers': { 'django.request': { 'handlers': ['mail_admins'], 'level': 'ERROR', 'propagate': True, }, } }

    When I try to upload an image, the clipboard section doesn't show a thumbnail, just an 'undefined' message. And this is the runserver console while trying to upload:

        [20/Oct/2012 15:15:56] "POST /admin/filer/clipboard/operations/upload/?qqfile=29708_1306856312320_7706073_n.jpg HTTP/1.1" 500 248133
        [20/Oct/2012 15:15:56] "GET /it/admin/filer/folder/unfiled_images/undefined HTTP/1.1" 301 0
        [20/Oct/2012 15:15:56] "GET /it/admin/filer/folder/unfiled_images/undefined/ HTTP/1.1" 404 1739

    Also, this is the project filesystem:

        cms1
        +-- cms1
        |   +-- __init__.py
        |   +-- __init__.pyc
        |   +-- media
        |   |   +-- filer_public
        |   |       +-- 2012
        |   |           +-- 10
        |   |               +-- 20
        |   |                   +-- 29708_1306856312320_7706073_n_1.jpg
        |   |                   +-- 29708_1306856312320_7706073_n_2.jpg
        |   |                   +-- 29708_1306856312320_7706073_n_3.jpg
        |   |                   +-- 29708_1306856312320_7706073_n_4.jpg
        |   |                   +-- 29708_1306856312320_7706073_n_5.jpg
        |   |                   +-- 29708_1306856312320_7706073_n_6.jpg
        |   |                   +-- 29708_1306856312320_7706073_n_7.jpg
        |   |                   +-- 29708_1306856312320_7706073_n.jpg
        |   |                   +-- torrent-client-macosx.jpg
        |   +-- settings.py
        |   +-- settings.pyc
        |   +-- static
        |   +-- static_auto
        |   +-- static_manual
        |   +-- templates
        |   |   +-- base.html
        |   |   +-- template_1.html
        |   |   +-- template_2.html
        |   +-- urls.py
        |   +-- urls.pyc
        |   +-- wsgi.py
        |   +-- wsgi.pyc
        +-- manage.py

    So files are uploaded, but they are not accessible to the CMS. There's a similar question here, but it doesn't help me much. Any help on this issue would be greatly appreciated. Thanks, luke

    Read the article

  • Django doesn't refresh my request object when reloading the current page.

    - by Boris Rusev
    I have a Django web site which I want to be viewable in different languages. Until this morning everything was working fine. Here is the deal. I go to, say, my About Us page and it is in English. Below it there is the change-language button, and when I press it everything "magically" translates to Bulgarian, just the way I want it. On the other hand, I have a JS menu from which the user is able to browse through the products. I click on 'T-Shirt', then a sub-menu opens below the previously pressed item, containing different categories - Men, Women, Children. The link guides me to a page where the exact clothes I have requested are listed. BUT... When I try to change the language THEN, nothing happens. I go to the About Us page, change the language from there, return to the clothes catalog, and the language is changed... I will now paste some code. This is my change button code: function changeLanguage() { if (getCookie('language') == 'EN') { setCookie("language", 'BG'); } else { setCookie("language", 'EN'); } window.location.reload(); } These are my URL patterns: urlpatterns = patterns('', # Example: # (r'^enter_clothing/', include('enter_clothing.foo.urls')), # Uncomment the admin/doc line below and add 'django.contrib.admindocs' # to INSTALLED_APPS to enable admin documentation: # (r'^admin/doc/', include('django.contrib.admindocs.urls')), # Uncomment the next line to enable the admin: (r'^site_media/(?P<path>.*)$', 'django.views.static.serve', {'document_root': '/home/boris/Projects/enter_clothing/templates/media', 'show_indexes': True}), (r'^$', 'enter_clothing.clothes_app.views.index'), (r'^home', 'enter_clothing.clothes_app.views.home'), (r'^products', 'enter_clothing.clothes_app.views.products'), (r'^orders', 'enter_clothing.clothes_app.views.orders'), (r'^aboutUs', 'enter_clothing.clothes_app.views.aboutUs'), (r'^contactUs', 'enter_clothing.clothes_app.views.contactUs'), (r'^admin/', include(admin.site.urls)), (r'^(\w+)/(\w+)/page=(\d+)', 'enter_clothing.clothes_app.views.displayClothes'), ) My About Us page: @base def aboutUs(request): return """<b>%s</b>""" % getTranslation("About Us Text", request.COOKIES['language']) The @base method: def base(myfunc): def inner_func(*args, **kwargs): try: args[0].COOKIES['language'] except: args[0].COOKIES['language'] = 'BG' resetGlobalVariables() initCollections(args[0]) categoriesByCollection = dict((collection, getCategoriesFromCollection(args[0], collection)) for collection in collections) if args[0].COOKIES['language'] == 'BG': for k, v in categoriesByCollection.iteritems(): categoriesByCollection[k] = reduce(lambda a,b: a+b, map(lambda x: """<li><a href="/%s/%s/page=1">%s</a></li>""" % (translateCategory(args[0], x), translateCollection(args[0], k), str(x)), v), "") else: for k, v in categoriesByCollection.iteritems(): categoriesByCollection[k] = reduce(lambda a,b: a+b, map(lambda x: """<li><a href="/%s/%s/page=1">%s</a></li>""" % (str(x), str(k), str(x)), v), "") contents = myfunc(*args, **kwargs) return render_to_response('index.html', {'title': title, 'categoriesByCollection': categoriesByCollection.iteritems(), 'keys': enumerate(keys), 'values': enumerate(values), 'contents': contents, 'btnHome':getTranslation("Home Button", args[0].COOKIES['language']), 'btnProducts':getTranslation("Products Button", args[0].COOKIES['language']), 'btnOrders':getTranslation("Orders Button", args[0].COOKIES['language']), 'btnAboutUs':getTranslation("About Us Button", args[0].COOKIES['language']), 'btnContacts':getTranslation("Contact Us Button", 
args[0].COOKIES['language']), 'btnChangeLanguage':getTranslation("Button Change Language", args[0].COOKIES['language'])}) return inner_func And the catalog page: @base def displayClothes(request, category, collection, page): clothesToDisplay = getClothesFromCollectionAndCategory(request, category, collection) contents = "" pageCount = len(clothesToDisplay) / ( rowCount * columnCount) + 1 matrixSize = rowCount * columnCount currentPage = str(page).replace("page=", "") currentPage = int(currentPage) - 1 #raise Exception(request) # this is for the clothes layout for x in range(currentPage * matrixSize, matrixSize * (currentPage + 1)): if x < len(clothesToDisplay): if request.COOKIES['language'] == 'EN': contents += """<div class="clothes">%s</div>""" % clothesToDisplay[x].getEnglishHTML() else: contents += """<div class="clothes">%s</div>""" % clothesToDisplay[x].getBulgarianHTML() if (x + 1) % columnCount == 0: contents += """<div class="clear"></div>""" contents += """<div class="clear"></div>""" # this is for the page links if pageCount > 1: for x in range(0, pageCount): if x == currentPage: contents += """<a href="/%s/%s/page=%s"><span style="font-size: 20pt; color: black;">%s</span></a>""" % (category, collection, x + 1, x + 1) else: contents += """<a href="/%s/%s/page=%s"><span style="font-size: 20pt; color: blue;">%s</span></a>""" % (category, collection, x + 1, x + 1) return """%s""" % (contents) Let me explain that you needn't be alarmed by the large quantities of code I have posted. You don't have to understand it or even look at all of it. I've published it just in case because I really can't understand the origins of the bug. Now this is how I have narrowed the problem. I am debuging with "raise Exception(request)" every time I want to know what's inside my request object. When I place this in my aboutUs method, the language cookie value changes every time I press the language button. But NOT when I am in the displayClothes method. There the language stays the same. Also I tried putting the exception line in the beginning of the @base method. It turns out the situation there is exactly the same. When I am in my About Us page and click on the button, the language in my request object changes, but when I press the button while in the catalog page it remains unchanged. That is all I could find, and I have no idea as to how Django distinguishes my pages and in what way. P.S. The JavaScript I think works perfectly, I have tested it in multiple ways. Thank you, I hope some of you will read this enormous post, and don't hesitate to ask for more code excerpts.

    Read the article

  • LINQ 4 XML - What is the proper way to query deep in the tree structure?

    - by Keith Barrows
    I have an XML structure that is 4 deep: <?xml version="1.0" encoding="utf-8"?> <EmailRuleList xmlns:xsd="EmailRules.xsd"> <TargetPST name="Tech Communities"> <Parse emailAsList="true" useJustDomain="false" fromAddress="false" toAddress="true"> <EmailRule address="@aspadvice.com" folder="Lists, ASP" saveAttachments="false" /> <EmailRule address="@sqladvice.com" folder="Lists, SQL" saveAttachments="false" /> <EmailRule address="@xmladvice.com" folder="Lists, XML" saveAttachments="false" /> </Parse> <Parse emailAsList="false" useJustDomain="false" fromAddress="false" toAddress="true"> <EmailRule address="[email protected]" folder="Special Interest Groups|Northern Colorado Architects Group" saveAttachments="false" /> <EmailRule address="[email protected]" folder="Support|SpamBayes" saveAttachments="false" /> </Parse> <Parse emailAsList="false" useJustDomain="false" fromAddress="true" toAddress="false"> <EmailRule address="[email protected]" folder="Support|GoDaddy" saveAttachments="false" /> <EmailRule address="[email protected]" folder="Support|No-IP.com" saveAttachments="false" /> <EmailRule address="[email protected]" folder="Discussions|Orchard Project" saveAttachments="false" /> </Parse> <Parse emailAsList="false" useJustDomain="true" fromAddress="true" toAddress="false"> <EmailRule address="@agilejournal.com" folder="Newsletters|Agile Journal" saveAttachments="false"/> <EmailRule address="@axosoft.ccsend.com" folder="Newsletters|Axosoft Newsletter" saveAttachments="false"/> <EmailRule address="@axosoft.com" folder="Newsletters|Axosoft Newsletter" saveAttachments="false"/> <EmailRule address="@cmcrossroads.com" folder="Newsletters|CM Crossroads" saveAttachments="false" /> <EmailRule address="@urbancode.com" folder="Newsletters|Urbancode" saveAttachments="false" /> <EmailRule address="@urbancode.ccsend.com" folder="Newsletters|Urbancode" saveAttachments="false" /> <EmailRule address="@Infragistics.com" folder="Newsletters|Infragistics" saveAttachments="false" /> <EmailRule address="@zdnet.online.com" folder="Newsletters|ZDNet Tech Update Today" saveAttachments="false" /> <EmailRule address="@sqlservercentral.com" folder="Newsletters|SQLServerCentral.com" saveAttachments="false" /> <EmailRule address="@simple-talk.com" folder="Newsletters|Simple-Talk Newsletter" saveAttachments="false" /> </Parse> </TargetPST> <TargetPST name="[Sharpen the Saw]"> <Parse emailAsList="false" useJustDomain="false" fromAddress="false" toAddress="true"> <EmailRule address="[email protected]" folder="Head Geek|Job Alerts" saveAttachments="false" /> <EmailRule address="[email protected]" folder="Social|LinkedIn USMC" saveAttachments="false"/> </Parse> <Parse emailAsList="false" useJustDomain="false" fromAddress="true" toAddress="false"> <EmailRule address="[email protected]" folder="Head Geek|Job Alerts" saveAttachments="false" /> <EmailRule address="[email protected]" folder="Head Geek|Job Alerts" saveAttachments="false" /> <EmailRule address="[email protected]" folder="Social|Cruise Critic" saveAttachments="false"/> </Parse> <Parse emailAsList="false" useJustDomain="true" fromAddress="true" toAddress="false"> <EmailRule address="@moody.edu" folder="Social|5 Love Languages" saveAttachments="false" /> <EmailRule address="@postmaster.twitter.com" folder="Social|Twitter" saveAttachments="false"/> <EmailRule address="@diabetes.org" folder="Physical|American Diabetes Association" saveAttachments="false"/> <EmailRule address="@membership.webshots.com" folder="Social|Webshots" saveAttachments="false"/> </Parse> 
    </TargetPST> </EmailRuleList> Now, I have both a FromAddress and a ToAddress that are parsed from an incoming email. I would like to do a LINQ query against a class set that was deserialized from this XML. For instance:

    ToAddress = [email protected]
    FromAddress = [email protected]

    Query:
    1. Get EmailRule.Include(Parse).Include(TargetPST) where address == ToAddress AND Parse.ToAddress==true AND Parse.useJustDomain==false
    2. Get EmailRule.Include(Parse).Include(TargetPST) where address == [ToAddress Domain Only] AND Parse.ToAddress==true AND Parse.useJustDomain==true
    3. Get EmailRule.Include(Parse).Include(TargetPST) where address == FromAddress AND Parse.FromAddress==true AND Parse.useJustDomain==false
    4. Get EmailRule.Include(Parse).Include(TargetPST) where address == [FromAddress Domain Only] AND Parse.FromAddress==true AND Parse.useJustDomain==true

    I am having a hard time figuring this LINQ query out. I can, of course, loop over all the bits in the XML like so (includes deserialization into objects): XmlSerializer s = new XmlSerializer(typeof(EmailRuleList)); TextReader r = new StreamReader(path); _emailRuleList = (EmailRuleList)s.Deserialize(r); TargetPST[] PSTList = _emailRuleList.Items; foreach (TargetPST targetPST in PSTList) { olRoot = GetRootFolder(targetPST.name); if (olRoot != null) { Parse[] ParseList = targetPST.Items; foreach (Parse parseRules in ParseList) { EmailRule[] EmailRuleList = parseRules.Items; foreach (EmailRule targetFolders in EmailRuleList) { } } } } However, this means going through all these loops for each and every address. It makes more sense to me to query against the objects. Any tips appreciated!
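    A sketch of one way to express the first of those four lookups with LINQ to Objects, flattening TargetPST -> Parse -> EmailRule while keeping the ancestors in scope. It assumes the serializer-generated classes expose the XML attributes as like-named properties (toAddress, useJustDomain, address, ...); adjust the names to whatever EmailRules.xsd actually generates:

        // Requires System and System.Linq. "incomingToAddress" is the string
        // parsed from the incoming email.
        var toAddressMatches =
            from pst in _emailRuleList.Items
            from parse in pst.Items
            where parse.toAddress && !parse.useJustDomain
            from rule in parse.Items
            where string.Equals(rule.address, incomingToAddress, StringComparison.OrdinalIgnoreCase)
            select new { TargetPST = pst, Parse = parse, Rule = rule };

        foreach (var match in toAddressMatches)
        {
            // match.TargetPST.name, match.Rule.folder, etc. are all available here.
        }

        // The other three lookups have the same shape: swap the toAddress/fromAddress
        // flags, flip useJustDomain, and compare rule.address against just the
        // "@domain" part for the domain-only variants.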

    Read the article

  • Scala n00b: Critique my code

    - by Peter
G'day everyone, I'm a Scala n00b (but am experienced with other languages) and am learning the language as I find time - very much enjoying it so far! Usually when learning a new language the first thing I do is implement Conway's Game of Life, since it's just complex enough to give a good sense of the language, but small enough in scope to whip up in a couple of hours (most of which is spent wrestling with syntax). Anyhoo, having gone through this exercise with Scala I was hoping the Scala gurus out there might take a look at the code I've ended up with and provide feedback on it. I'm after anything - algorithmic improvements (particularly concurrent solutions!), stylistic improvements, alternative APIs or language constructs, disgust at the length of my function names - whatever feedback you've got, I'm keen to hear it! You should be able to run the following script via "scala GameOfLife.scala" - by default it will run a 20x20 board with a single glider on it - please feel free to experiment.

    // CONWAY'S GAME OF LIFE (SCALA)

    abstract class GameOfLifeBoard(val aliveCells : Set[Tuple2[Int, Int]]) {
      // Executes a "time tick" - returns a new board containing the next generation
      def tick : GameOfLifeBoard

      // Is the board empty?
      def empty : Boolean = aliveCells.size == 0

      // Is the given cell alive?
      protected def alive(cell : Tuple2[Int, Int]) : Boolean = aliveCells contains cell

      // Is the given cell dead?
      protected def dead(cell : Tuple2[Int, Int]) : Boolean = !alive(cell)
    }

    class InfiniteGameOfLifeBoard(aliveCells : Set[Tuple2[Int, Int]])
          extends GameOfLifeBoard(aliveCells) {
      // Executes a "time tick" - returns a new board containing the next generation
      override def tick : GameOfLifeBoard = new InfiniteGameOfLifeBoard(nextGeneration)

      // The next generation of this board
      protected def nextGeneration : Set[Tuple2[Int, Int]] =
        aliveCells flatMap neighbours filter shouldCellLiveInNextGeneration

      // Should the given cell live in the next generation?
      protected def shouldCellLiveInNextGeneration(cell : Tuple2[Int, Int]) : Boolean =
        (alive(cell) && (numberOfAliveNeighbours(cell) == 2 || numberOfAliveNeighbours(cell) == 3)) ||
        (dead(cell) && numberOfAliveNeighbours(cell) == 3)

      // The number of alive neighbours for the given cell
      protected def numberOfAliveNeighbours(cell : Tuple2[Int, Int]) : Int =
        aliveNeighbours(cell).size

      // Returns the alive neighbours for the given cell
      protected def aliveNeighbours(cell : Tuple2[Int, Int]) : Set[Tuple2[Int, Int]] =
        aliveCells intersect neighbours(cell)

      // Returns all neighbours (whether dead or alive) for the given cell
      protected def neighbours(cell : Tuple2[Int, Int]) : Set[Tuple2[Int, Int]] =
        Set((cell._1-1, cell._2-1), (cell._1, cell._2-1), (cell._1+1, cell._2-1),
            (cell._1-1, cell._2),                         (cell._1+1, cell._2),
            (cell._1-1, cell._2+1), (cell._1, cell._2+1), (cell._1+1, cell._2+1))

      // Information on where the currently live cells are
      protected def xVals  = aliveCells map { cell => cell._1 }
      protected def xMin   = (xVals reduceLeft (_ min _)) - 1
      protected def xMax   = (xVals reduceLeft (_ max _)) + 1
      protected def xRange = xMin until xMax + 1
      protected def yVals  = aliveCells map { cell => cell._2 }
      protected def yMin   = (yVals reduceLeft (_ min _)) - 1
      protected def yMax   = (yVals reduceLeft (_ max _)) + 1
      protected def yRange = yMin until yMax + 1

      // Returns a simple graphical representation of this board
      override def toString : String = {
        var result = ""
        for (y <- yRange) {
          for (x <- xRange) {
            if (alive((x, y))) result += "# " else result += ". "
          }
          result += "\n"
        }
        result
      }

      // Equality stuff
      override def equals(other : Any) : Boolean = {
        other match {
          case that : InfiniteGameOfLifeBoard =>
            (that canEqual this) && that.aliveCells == this.aliveCells
          case _ => false
        }
      }

      def canEqual(other : Any) : Boolean = other.isInstanceOf[InfiniteGameOfLifeBoard]

      override def hashCode = aliveCells.hashCode
    }

    class FiniteGameOfLifeBoard(val boardWidth : Int,
                                val boardHeight : Int,
                                aliveCells : Set[Tuple2[Int, Int]])
          extends InfiniteGameOfLifeBoard(aliveCells) {
      override def tick : GameOfLifeBoard =
        new FiniteGameOfLifeBoard(boardWidth, boardHeight, nextGeneration)

      // Determines the coordinates of all of the neighbours of the given cell,
      // clipped to the bounds of the board
      override protected def neighbours(cell : Tuple2[Int, Int]) : Set[Tuple2[Int, Int]] =
        super.neighbours(cell) filter { cell =>
          cell._1 >= 0 && cell._1 < boardWidth &&
          cell._2 >= 0 && cell._2 < boardHeight
        }

      // Information on where the currently live cells are
      override protected def xRange = 0 until boardWidth
      override protected def yRange = 0 until boardHeight

      // Equality stuff
      override def equals(other : Any) : Boolean = {
        other match {
          case that : FiniteGameOfLifeBoard =>
            (that canEqual this) &&
            that.boardWidth == this.boardWidth &&
            that.boardHeight == this.boardHeight &&
            that.aliveCells == this.aliveCells
          case _ => false
        }
      }

      override def canEqual(other : Any) : Boolean = other.isInstanceOf[FiniteGameOfLifeBoard]

      override def hashCode : Int = {
        41 * (
          41 * (
            41 + super.hashCode
          ) + boardHeight.hashCode
        ) + boardWidth.hashCode
      }
    }

    class GameOfLife(initialBoard : GameOfLifeBoard) {
      // Run the game of life until the board is empty or the exact same board is seen twice
      // Important note: this method does NOT necessarily terminate!!
      def go : Unit = {
        var currentBoard   = initialBoard
        var previousBoards = List[GameOfLifeBoard]()

        while (!currentBoard.empty && !(previousBoards contains currentBoard)) {
          print(27.toChar + "[2J")  // ANSI: clear screen
          print(27.toChar + "[;H")  // ANSI: move cursor to top left corner of screen
          println(currentBoard.toString)
          Thread.sleep(75)
          // Warning: unbounded list concatenation can result in OutOfMemoryExceptions
          // ####TODO: replace with LRU bounded list
          previousBoards = List(currentBoard) ::: previousBoards
          currentBoard   = currentBoard.tick
        }

        // Print the final board
        print(27.toChar + "[2J")  // ANSI: clear screen
        print(27.toChar + "[;H")  // ANSI: move cursor to top left corner of screen
        println(currentBoard.toString)
      }
    }

    // Script starts here
    val simple = Set((1,1))
    val square = Set((4,4), (4,5), (5,4), (5,5))
    val glider = Set((2,1), (3,2), (1,3), (2,3), (3,3))

    val initialBoard = glider

    (new GameOfLife(new FiniteGameOfLifeBoard(20, 20, initialBoard))).go
    //(new GameOfLife(new InfiniteGameOfLifeBoard(initialBoard))).go

    // COPYRIGHT PETER MONKS 2010

Thanks! Peter

    Read the article

  • Sensible Way to Pass Web Data in XML to a SQL Server Database

    - by Emtucifor
After exploring several different ways to pass web data to a database for update purposes, I'm wondering if XML might be a good strategy. The database is currently SQL 2000. In a few months it will move to SQL 2005 and I will be able to change things if needed, but I need a SQL 2000 solution now.

First of all, the database in question uses the EAV model. I know that this kind of database is generally highly frowned on, so for the purposes of this question, please just accept that this is not going to change. The current update method has the web server inserting values (that have all been converted first to their correct underlying types, then to sql_variant) into a temp table. A stored procedure is then run which expects the temp table to exist, and it takes care of updating, inserting, or deleting things as needed. So far, only a single element has needed to be updated at a time. But now there is a requirement to be able to edit multiple elements at once, and also to support hierarchical elements, each of which can have its own list of attributes.

Here's some example XML I hand-typed to demonstrate what I'm thinking of. Note that in this database the Entity is Element, and an ID of 0 signifies "create", i.e. an insert of a new item.

    <Elements>
      <Element ID="1234">
        <Attr ID="221">Value</Attr>
        <Attr ID="225">287</Attr>
        <Attr ID="234">
          <Element ID="99825">
            <Attr ID="7">Value1</Attr>
            <Attr ID="8">Value2</Attr>
            <Attr ID="9" Action="delete" />
          </Element>
          <Element ID="99826" Action="delete" />
          <Element ID="0" Type="24">
            <Attr ID="7">Value4</Attr>
            <Attr ID="8">Value5</Attr>
            <Attr ID="9">Value6</Attr>
          </Element>
          <Element ID="0" Type="24">
            <Attr ID="7">Value7</Attr>
            <Attr ID="8">Value8</Attr>
            <Attr ID="9">Value9</Attr>
          </Element>
        </Attr>
        <Rel ID="3827" Action="delete" />
        <Rel ID="2284" Role="parent">
          <Element ID="3827" />
          <Element ID="3829" />
          <Attr ID="665">1</Attr>
        </Rel>
        <Rel ID="0" Type="23" Role="child">
          <Element ID="3830" />
          <Attr ID="67" />
        </Rel>
      </Element>
      <Element ID="0" Type="87">
        <Attr ID="221">Value</Attr>
        <Attr ID="225">569</Attr>
        <Attr ID="234">
          <Element ID="0" Type="24">
            <Attr ID="7">Value10</Attr>
            <Attr ID="8">Value11</Attr>
            <Attr ID="9">Value12</Attr>
          </Element>
        </Attr>
      </Element>
      <Element ID="1235" Action="delete" />
    </Elements>

Some Attributes are straight value types, such as AttrID 221. But AttrID 234 is a special "multi-value" type that can have a list of elements underneath it, and each one can have one or more values. Types only need to be present when a new item is created, since the ElementID fully implies the type if the element already exists. I'll probably support only passing in changed items (as detected by JavaScript). And there may be an Action="delete" on Attr elements as well, since NULLs are treated as "unselected" - sometimes it's very important to know whether a Yes/No question has intentionally been answered No or whether no one's bothered to say Yes yet.

There is also a different kind of data, a Relationship. At this time, those are updated through individual AJAX calls as things are edited in the UI, but I'd like to include them so that changes to relationships can be canceled (right now, once you change one, it's done). So those are really elements too, but they are called Rel instead of Element. Relationships are implemented as ElementID1 and ElementID2, so the RelID 2284 in the XML above is in the database as:

    ElementID   2284
    ElementID1  1234
    ElementID2  3827

Having multiple children in one relationship isn't currently supported, but it would be nice later.
Does this strategy and the example XML make sense? Is there a more sensible way? I'm just looking for some broad critique to help save me from going down a bad path. Any aspect that you'd like to comment on would be helpful. The web language happens to be Classic ASP, but that could change to ASP.Net at some point. A persistence engine like Linq or nHibernate is probably not acceptable right now - I just want to get this already-working application enhanced without a huge amount of development time. I'll choose the answer that shows experience and has a balance of good warnings about what not to do, confirmations of what I'm planning to do, and recommendations about something else to do. I'll make it as objective as possible.

P.S. I'd like to handle Unicode characters as well as very long strings (10k+).

UPDATE

I have had this working for some time, and I used the ADO Recordset Save-To-Stream trick to make creating the XML really easy. The result seems to be fairly fast, though if speed ever becomes a problem I may revisit this. In the meantime, my code handles any number of elements and attributes on the page at once, including updating, deleting, and creating new items all in one go. I settled on a scheme like so for all my elements:

Existing data elements. Example: input name e12345_a678 (element 12345, attribute 678); the input value is the value of the attribute.

New elements. JavaScript copies a hidden template of the set of HTML elements needed for the type into the correct location on the page, increments a counter to get a new ID for this item, and prepends the number to the names of the form items.

    var newid = 0;

    function metadataAdd(reference, nameid, value) {
        var t = document.createElement('input');
        t.setAttribute('name', nameid);
        t.setAttribute('id', nameid);
        t.setAttribute('type', 'hidden');
        t.setAttribute('value', value);
        reference.appendChild(t);
    }

    function multiAdd(target, parentelementid, attrid, elementtypeid) {
        var proto = document.getElementById('a' + attrid + '_proto');
        var instance = document.createElement('p');
        target.parentNode.parentNode.insertBefore(instance, target.parentNode);
        var thisid = ++newid;
        instance.innerHTML = proto.innerHTML.replace(/{prefix}/g, 'n' + thisid + '_');
        instance.id = 'n' + thisid;
        instance.className += ' new';
        metadataAdd(instance, 'n' + thisid + '_p', parentelementid);
        metadataAdd(instance, 'n' + thisid + '_c', attrid);
        metadataAdd(instance, 'n' + thisid + '_t', elementtypeid);
        return false;
    }

Example: template input name _a678 becomes n1_a678 (a new element, the first one on the page, attribute 678). All attributes of this new element are tagged with the same prefix of n1; the next new item will be n2, and so on. Some hidden form inputs are created:

    n1_t - value is the element type of the element to be created
    n1_p - value is the parent id of the element (if it is a relationship)
    n1_c - value is the child id of the element (if it is a relationship)

Deleting elements. A hidden input is created in the form e12345_t with its value set to 0, and the existing controls displaying that attribute's values are disabled so they are not included in the form post. So "set type to 0" is treated as delete.

With this scheme, every item on the page has a unique name and can be distinguished properly, and every action can be represented properly.
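To make the naming scheme concrete, here is one possible form post covering a single edit plus a new element created through the multiAdd path above. The specific values are hypothetical, invented purely for illustration; the key shapes and which argument lands in which hidden input follow directly from the JavaScript:

    e12345_a678 = Blue      ' existing element 12345, attribute 678, edited value
    n1_t        = 24        ' elementtypeid passed to multiAdd
    n1_p        = 1234      ' parentelementid passed to multiAdd
    n1_c        = 234       ' attrid passed to multiAdd
    n1_a7       = Value1    ' attribute 7 on the new element, from the copied template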
When the form is posted, here's a sample of building one of the two recordsets used (Classic ASP code):

    Set Data = Server.CreateObject("ADODB.Recordset")
    Data.Fields.Append "ElementID", adInteger, 4, adFldKeyColumn
    Data.Fields.Append "AttrID", adInteger, 4, adFldKeyColumn
    Data.Fields.Append "Value", adLongVarWChar, 2147483647, adFldIsNullable Or adFldMayBeNull
    Data.CursorLocation = adUseClient
    Data.CursorType = adOpenDynamic
    Data.Open

This is the recordset for values; the other is for the elements themselves. I step through the posted form, and for the element recordset I use a Scripting.Dictionary populated with instances of a custom Class that has the properties I need, so that I can add the values piecemeal, since they don't always come in order. New elements are added with negative IDs to distinguish them from regular elements (rather than requiring a separate column to indicate whether a row is new or addresses an existing element). I use a regular expression to tear apart the form keys:

    "^(e|n)([0-9]{1,10})_(a|p|t|c)([0-9]{0,10})$"

For example, the key e12345_a678 yields the submatches e, 12345, a, and 678 - the new/existing marker, the element ID, the kind of input, and the attribute ID. Then, adding an attribute looks like this:

    Data.AddNew
    ElementID.Value = DataID
    AttrID.Value = Integerize(Matches(0).SubMatches(3))
    AttrValue.Value = Request.Form(Key)
    Data.Update

ElementID, AttrID, and AttrValue are references to the fields of the recordset; this is hugely faster than looking up Data.Fields("ElementID").Value each time. I loop through the Dictionary of element updates and ignore any that don't have all the proper information, adding the good ones to the recordset. Then I call my data-updating stored procedure like so:

    Set Cmd = Server.CreateObject("ADODB.Command")
    With Cmd
        Set .ActiveConnection = MyDBConn
        .CommandType = adCmdStoredProc
        .CommandText = "DataPost"
        .Prepared = False
        .Parameters.Append .CreateParameter("@ElementMetadata", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Element))
        .Parameters.Append .CreateParameter("@ElementData", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Data))
    End With
    Result.Open Cmd ' previously created recordset object with options set

Here's the function that does the XML conversion:

    Private Function XMLFromRecordset(Recordset)
        Dim Stream
        Set Stream = Server.CreateObject("ADODB.Stream")
        Stream.Open
        Recordset.Save Stream, adPersistXML
        Stream.Position = 0
        XMLFromRecordset = Stream.ReadText
    End Function

Just in case the web page needs to know, the SP returns a recordset of any new elements, showing their page value and their created value (so I can see that n1 is now e12346, for example). Here are some key snippets from the stored procedure.
Note this is SQL 2000 for now, though I'll be able to switch to 2005 soon:

    CREATE PROCEDURE [dbo].[DataPost]
        @ElementMetaData ntext,
        @ElementData ntext
    AS
    DECLARE @hdoc int

    --- snip ---

    EXEC sp_xml_preparedocument @hdoc OUTPUT, @ElementMetaData,
        '<xml xmlns:s="uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882" xmlns:dt="uuid:C2F41010-65B3-11d1-A29F-00AA00C14882" xmlns:rs="urn:schemas-microsoft-com:rowset" xmlns:z="#RowsetSchema" />'

    INSERT #ElementMetadata (ElementID, ElementTypeID, ElementID1, ElementID2)
    SELECT *
    FROM OPENXML(@hdoc, '/xml/rs:data/rs:insert/z:row', 0)
    WITH (
        ElementID int,
        ElementTypeID int,
        ElementID1 int,
        ElementID2 int
    )
    ORDER BY ElementID -- orders negative items (new elements) first so they begin counting at 1 for later ID calculation

    EXEC sp_xml_removedocument @hdoc

    --- snip ---

    UPDATE E
    SET E.ElementTypeID = M.ElementTypeID
    FROM Element E
    INNER JOIN #ElementMetadata M ON E.ElementID = M.ElementID
    WHERE E.ElementID >= 1 AND M.ElementTypeID >= 1

The following query correlates the negative new element IDs with the newly inserted ones:

    UPDATE #ElementMetadata -- Correlate the new ElementIDs with the input rows
    SET NewElementID = Scope_Identity() - @@RowCount + DataID
    WHERE ElementID < 0

Other set-based queries do all the remaining work: validating that the attributes are allowed and of the correct data type, and inserting, updating, and deleting elements and attributes. I hope this brief run-down is useful to others some day! Converting ADO Recordsets to an XML stream was a huge winner for me, as it saved all sorts of time and came with a namespace and schema already defined that made the results come out correctly. Using a flatter XML format with two inputs was also much easier than sticking to some ideal about having everything in a single XML stream.
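To see the arithmetic in that correlation, take some hypothetical numbers (invented here, not taken from the post): suppose the preceding, snipped INSERT added exactly three new rows to Element, leaving the identity counter at 1003. Scope_Identity() then returns 1003 and @@RowCount is 3, so the metadata row with DataID 1 is assigned 1003 - 3 + 1 = 1001, DataID 2 gets 1002, and DataID 3 gets 1003. This only works because the ORDER BY ElementID above numbers the new elements consecutively from 1 and because the multi-row insert receives a contiguous block of identity values.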

    Read the article


< Previous Page | 901 902 903 904 905 906 907 908 909 910 911 912  | Next Page >