Search Results



  • Game development: “Play Now” via website vs. download & install

    - by Inside
    Heyo, I've spent some time looking over the various threads here on gamedev and also on the regular stackoverflow, and while I saw a lot of posts and threads regarding various engines that could be used in game development, I haven't seen much discussion regarding the various platforms that they can be used on. In particular, I'm talking about browser games vs. desktop games.

    I want to develop a simple 3D networked multiplayer game - roughly on the graphics level of Paper Mario, with roughly the same level of interaction as a hack & slash action/adventure game - and I'm having a hard time deciding what platform I want to target with it. I have some experience with using C++/Ogre3D and Python/Panda3D (and also some synchronized/networked programming), but I'm wondering if it's worth it to spend the extra time to learn another language and another engine/toolkit just so that the game can be played in a browser window (I'm looking at jMonkeyEngine right now).

    For simple & short games the newgrounds approach (go to the site, click "play now", instant gratification) seems to work well. What about for more complex games? Is there a point where the complexity of a game is enough for people to say "ok, I'm going to download and play that"? Is it worth it to go with engines that are less mature, have less documentation, have fewer features, and have smaller communities* just so that a (possibly?) larger audience can be reached? Does it make sense to even go with a web environment for the kind of game that I want to make? Does anyone have any experiences with decisions like this? Thanks!

    (* With the exception of flash-based engines, it seems like most of the other approaches have these downsides when compared to what is available for desktop-based environments. I'd go with flash, but I'm worried that flash's 3D capabilities aren't mature enough right now to do what I want easily. There's also Unity3D, but I'm not sure how I feel about that at all. It seems highly polished, but requires a plugin to be downloaded for the game to be played -- at that rate I might as well have players download my game.)

    Read the article

  • The Application Architecture Domain

    - by Michael Glas
    I have been spending a lot of time thinking about Application Architecture in the context of EA. More specifically, as an Enterprise Architect, what do I need to consider when looking at/defining/designing the Application Architecture Domain?

    There are several definitions of Application Architecture. TOGAF says "The objective here [in Application Architecture] is to define the major kinds of application system necessary to process the data and support the business". FEA says the Application Architecture "Defines the applications needed to manage the data and support the business functions". I agree with these definitions. They reflect what the Application Architecture domain does. However, they need to be decomposed to be practical.

    I find it useful to define a set of views into the Application Architecture domain. These views reflect what an EA needs to consider when working with/in the Applications Architecture domain. These viewpoints are, at a high level:

    Capability View: This view reflects how applications align with business capabilities. It is a superset of the following views when viewed in aggregate. By looking at the Application Architecture domain in terms of the business capabilities it supports, you get a good perspective on how those applications are directly supporting the business.

    Technology View: The technology view reflects the underlying technology that makes up the applications. Based on the number of rationalization activities I have seen (more specifically application rationalization), the phrase "complexity equals cost" drives the importance of the technology view, especially when attempting to reduce that complexity through standardization-type activities. Some of the technology components to be considered are:
    - Software: The application itself as well as the software the application relies on to function (web servers, application servers).
    - Infrastructure: The underlying hardware and network components required by the application and supporting application software.
    - Development: How the application is created and maintained. This encompasses development components that are part of the application itself (i.e. customizable functions), as well as bolt-on development through web services, APIs, etc. The maintenance process itself also falls under this view.
    - Integration: The interfaces that the application provides for integration, as well as the integrations to other applications and data sources the application requires to function.
    - Type: Reflects the kind of application (mash-up, 3-tiered, etc.). (Note: functional types [CRM, HCM, etc.] are reflected under the capability view.)

    Organization View: Organizations are comprised of people, and those people use applications to do their jobs. Trying to define the application architecture domain without taking the organization that will use/fund/change it into consideration is like trying to design a car without thinking about who will drive it (i.e. you may end up building a Formula 1 car for a family of 5 that is really looking for a minivan). This view reflects the people aspect of the application. It includes:
    - Ownership: Who 'owns' the application? This will usually reflect primary funding and utilization, but not always.
    - Funding: Who funds both the acquisition/creation as well as the ongoing maintenance (funding to create/change/operate)?
    - Change: Who can/does request changes to the application, and what process do they follow?
    - Utilization: Who uses the application, how often do they use it, and how do they use it?
    - Support: Which organization is responsible for the ongoing support of the application?

    Information View: Whether or not you subscribe to the view that "information drives the enterprise", it is a fact that information is critical. The management, creation, and organization of that information are primary functions of enterprise applications. This view reflects how the applications are tied to information (or at a higher level, how the Application Architecture domain relates to the Information Architecture domain). It includes:
    - Access: The application is the mechanism by which end users access information. This could be through a primary application (i.e. a CRM application), or through an information-access type of application (a BI application as an example).
    - Creation: Applications create data in order to provide information to end users (i.e. an application creates an order to be used by an end user as part of the fulfillment process).
    - Consumption: Describes the data required by applications to function (i.e. a product id is required by a purchasing application to create an order).

    Application Service View: Organizations today are striving to be more agile. As an EA, I need to provide an architecture that supports this agility. One of the primary ways to achieve the required agility in the application architecture domain is through the use of 'services' (think SOA, web services, etc.). Whether it is through building applications from the ground up utilizing services, service-enabling an existing application, or buying applications that are already 'service enabled', compartmentalizing application functions for re-use helps enable flexibility in the use of those applications in support of the required business agility. The application service view consists of:
    - Services: Here, I refer to the generic definition of a service: "a set of related software functionalities that can be reused for different purposes, together with the policies that should control its usage".
    - Functions: The activities within an application that are not available/applicable for re-use. This view is helpful when identifying duplicated functions between applications that are not service enabled.

    Delivery Model View: It is hard to talk about EA today without hearing the terms 'cloud' or shared services. Organizations are looking at the ways their applications are delivered for several reasons: to reduce cost (both CAPEX and OPEX), to improve agility (time to market as an example), etc. From an EA perspective, where/how an application is deployed has impacts on the overall enterprise architecture. From integration concerns to SLA requirements to security and compliance issues, the Enterprise Architect needs to factor in how applications are delivered when designing the Enterprise Architecture. This view reflects how applications are delivered to end users, and consists of different types of delivery mechanisms/deployment options for applications:
    - Traditional: Reflects non-cloud delivery options. The most prevalent consists of an application running on dedicated hardware (usually specific to an environment) for a single consumer.
    - Private Cloud: The application runs on infrastructure provisioned for exclusive use by a single organization comprising multiple consumers.
    - Public Cloud: The application runs on infrastructure provisioned for open use by the general public.
    - Hybrid: The application is deployed on two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability.

    While by no means comprehensive, I find that applying these views to the application domain gives a good understanding of what an EA needs to consider when effecting changes to the Application Architecture domain.

    Finally, the application architecture domain is one of several architecture domains that an EA must consider when developing an overall Enterprise Architecture. The Oracle Enterprise Architecture Framework defines four primary domains: Business Architecture, Application Architecture, Information Architecture, and Technology Architecture. Each domain links to the others either directly or indirectly at some point. Oracle links them at a high level as follows: Business Capabilities and/or Business Processes (Business Architecture) link to the Applications that enable the capability/process (Applications Architecture - COTS, Custom), which link to the Information Assets managed/maintained by the Applications (Information Architecture), which link to the technology infrastructure upon which all this runs (Technology Architecture - integration, security, BI/DW, DB infrastructure, deployment model). There are, however, times when the EA needs to narrow focus to a particular domain for some period of time. These views help me to do just that.

    Read the article

  • Metro: Dynamically Switching Templates with a WinJS ListView

    - by Stephen.Walther
    Imagine that you want to display a list of products using the WinJS ListView control. Imagine, furthermore, that you want to use different templates to display different products. In particular, when a product is on sale, you want to display the product using a special “On Sale” template. In this blog entry, I explain how you can switch templates dynamically when displaying items with a ListView control. In other words, you learn how to use more than one template when displaying items with a ListView control.

    Creating the Data Source

    Let’s start by creating the data source for the ListView. Nothing special here – our data source is a list of products. Two of the products, Oranges and Apples, are on sale.

        (function () {
            "use strict";

            var products = new WinJS.Binding.List([
                { name: "Milk", price: 2.44 },
                { name: "Oranges", price: 1.99, onSale: true },
                { name: "Wine", price: 8.55 },
                { name: "Apples", price: 2.44, onSale: true },
                { name: "Steak", price: 1.99 },
                { name: "Eggs", price: 2.44 },
                { name: "Mushrooms", price: 1.99 },
                { name: "Yogurt", price: 2.44 },
                { name: "Soup", price: 1.99 },
                { name: "Cereal", price: 2.44 },
                { name: "Pepsi", price: 1.99 }
            ]);

            WinJS.Namespace.define("ListViewDemos", {
                products: products
            });
        })();

    The file above is saved with the name products.js and referenced by the default.html page described below.

    Declaring the Templates and ListView Control

    Next, we need to declare the ListView control and the two Template controls which we will use to display template items. The markup below appears in the default.html file:

        <!-- Templates -->
        <div id="productItemTemplate" data-win-control="WinJS.Binding.Template">
            <div class="product">
                <span data-win-bind="innerText:name"></span>
                <span data-win-bind="innerText:price"></span>
            </div>
        </div>
        <div id="productOnSaleTemplate" data-win-control="WinJS.Binding.Template">
            <div class="product onSale">
                <span data-win-bind="innerText:name"></span>
                <span data-win-bind="innerText:price"></span>
                (On Sale!)
            </div>
        </div>

        <!-- ListView -->
        <div id="productsListView"
            data-win-control="WinJS.UI.ListView"
            data-win-options="{
                itemDataSource: ListViewDemos.products.dataSource,
                layout: { type: WinJS.UI.ListLayout }
            }">
        </div>

    In the markup above, two Template controls are declared. The first template is used when rendering a normal product and the second template is used when rendering a product which is on sale. The second template, unlike the first template, includes the text “(On Sale!)”.

    The ListView control is bound to the data source which we created in the previous section: the ListView itemDataSource property is set to the value ListViewDemos.products.dataSource. Notice that we do not set the ListView itemTemplate property. We set this property in the default.js file.

    Switching Between Templates

    All of the magic happens in the default.js file. The default.js file contains the JavaScript code used to switch templates dynamically.
    Here’s the entire contents of the default.js file:

        (function () {
            "use strict";

            var app = WinJS.Application;

            app.onactivated = function (eventObject) {
                if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) {
                    WinJS.UI.processAll().then(function () {
                        var productsListView = document.getElementById("productsListView");
                        productsListView.winControl.itemTemplate = itemTemplateFunction;
                    });
                }
            };

            function itemTemplateFunction(itemPromise) {
                return itemPromise.then(function (item) {
                    // Select either normal product template or on sale template
                    var itemTemplate = document.getElementById("productItemTemplate");
                    if (item.data.onSale) {
                        itemTemplate = document.getElementById("productOnSaleTemplate");
                    }

                    // Render selected template to DIV container
                    var container = document.createElement("div");
                    itemTemplate.winControl.render(item.data, container);
                    return container;
                });
            }

            app.start();
        })();

    In the code above, a function is assigned to the ListView itemTemplate property with the following line of code:

        productsListView.winControl.itemTemplate = itemTemplateFunction;

    The itemTemplateFunction returns a DOM element which is used for the template item. Depending on the value of the product onSale property, the DOM element is generated from either the productItemTemplate or the productOnSaleTemplate template.

    Using Binding Converters instead of Multiple Templates

    In the previous sections, I explained how you can use different templates to render normal products and on sale products. There is an alternative approach to displaying different markup for normal products and on sale products. Instead of creating two templates, you can create a single template which contains separate DIV elements for a normal product and an on sale product. The following default.html file contains a single item template and a ListView control bound to the template:

        <!-- Template -->
        <div id="productItemTemplate" data-win-control="WinJS.Binding.Template">
            <div class="product" data-win-bind="style.display: onSale ListViewDemos.displayNormalProduct">
                <span data-win-bind="innerText:name"></span>
                <span data-win-bind="innerText:price"></span>
            </div>
            <div class="product onSale" data-win-bind="style.display: onSale ListViewDemos.displayOnSaleProduct">
                <span data-win-bind="innerText:name"></span>
                <span data-win-bind="innerText:price"></span>
                (On Sale!)
            </div>
        </div>

        <!-- ListView -->
        <div id="productsListView"
            data-win-control="WinJS.UI.ListView"
            data-win-options="{
                itemDataSource: ListViewDemos.products.dataSource,
                itemTemplate: select('#productItemTemplate'),
                layout: { type: WinJS.UI.ListLayout }
            }">
        </div>

    The first DIV element is used to render a normal product, and the second DIV element is used to render an “on sale” product. Notice that both DIV elements include a data-win-bind attribute. These data-win-bind attributes are used to show the “normal” markup when a product is not on sale and the “on sale” markup when a product is on sale. They work by setting the Cascading Style Sheet display attribute to either “none” or “block”, and they take advantage of binding converters.
    The binding converters are defined in the default.js file:

        (function () {
            "use strict";

            var app = WinJS.Application;

            app.onactivated = function (eventObject) {
                if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) {
                    WinJS.UI.processAll();
                }
            };

            WinJS.Namespace.define("ListViewDemos", {
                displayNormalProduct: WinJS.Binding.converter(function (onSale) {
                    return onSale ? "none" : "block";
                }),
                displayOnSaleProduct: WinJS.Binding.converter(function (onSale) {
                    return onSale ? "block" : "none";
                })
            });

            app.start();
        })();

    The ListViewDemos.displayNormalProduct binding converter converts the value true or false to the value “none” or “block”. The ListViewDemos.displayOnSaleProduct binding converter does the opposite; it converts the value true or false to the value “block” or “none”. (Sadly, you cannot simply place a NOT operator before the onSale property in the binding expression – you need to create both converters.) The end result is that you can display different markup depending on the value of the product onSale property: either the contents of the first or the second DIV element are displayed.

    Summary

    In this blog entry, I’ve explored two approaches to displaying different markup in a ListView depending on the value of a data item property. The bulk of this blog entry was devoted to explaining how you can assign a function to the ListView itemTemplate property which returns different templates. We created both a productItemTemplate and a productOnSaleTemplate and displayed both templates with the same ListView control. We also discussed how you can create a single template and display different markup by using binding converters. The binding converters are used to set a DIV element’s display property to either “none” or “block”. We created a binding converter which displays normal products and a binding converter which displays “on sale” products.

    Read the article

  • Multi screens are not aligning properly

    - by Steve
    I am using (4) Samsung (SMB2230) screens. When I place my mouse on the first screen at the bottom right and pull it into the second screen, the mouse jumps to the middle of the left edge of the second screen. Then it does the same on the 3rd and 4th screens. The biggest jump is between the 1st and 2nd; it tightens up between 2 and 3, and is close between 3 and 4. This is a real pain because my mouse gets hung up and I have to move up and down to get it to change screens. I had (4) Dell screens before and this didn't happen. I'm using Windows 7.

    Read the article

  • SQL Server v.Next ("Denali") : How a columnstore index is not like a normal index

    - by AaronBertrand
    At the end of my Denali presentation at SQL Saturday #65 in Vancouver, a member of the audience asked, "What makes a columnstore index different from a regular nonclustered index?" At the end of a busy day, I was at a loss for an answer, and I'll explain why. First, I'll briefly explain the basic, core, high-level functionality of a columnstore index (you can read a lot more details in this white paper). Basically, instead of storing index data together on a page, it divvies up the data from each...(read more)

    Read the article

  • How to improve batching performance

    - by user4241
    Hello, I am developing a sprite-based 2D game for mobile platform(s) and I'm using OpenGL (well, actually Irrlicht) to render graphics. First I implemented sprite rendering in a simple way: every game object is rendered as a quad with its own GPU draw call, meaning that if I had 200 game objects, I made 200 draw calls per frame. Of course this was a bad choice and my game was completely CPU bound, because there is a little CPU overhead associated with every GPU draw call. The GPU stayed idle most of the time.

    Now, I thought I could improve performance by collecting objects into large batches and rendering these batches with only a few draw calls. I implemented batching (so that every game object sharing the same texture is rendered in the same batch) and thought that my problems were gone... only to find out that my frame rate was even lower than before.

    Why? Well, I have 200 (or more) game objects, and they are updated 60 times per second. Every frame I have to recalculate the new position (translation and rotation) of the vertices on the CPU (the GPU on mobile platforms does not support instancing, so I can't do it there), and doing this calculation 48,000 times per second (200*60*4, since every sprite has 4 vertices) simply seems to be too slow.

    What could I do to improve performance? All game objects are moving/rotating (almost) every frame, so I really have to recalculate vertex positions. The only optimization I could think of is a look-up table for rotations so that I wouldn't have to calculate them. Would point sprites help? Any nasty hacks? Anything else? Thanks.
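    A minimal Java sketch of the rotation look-up table idea mentioned above: precompute sin/cos once for a fixed number of angle steps, then reuse them when rebuilding the batched vertex array each frame, so each sprite costs a few multiply-adds instead of two trig calls. The class name, step count, and flat float[] vertex layout are illustrative assumptions, not code from the original post.

        // RotationLut.java - sketch: table-driven rotation for batched sprite quads.
        public final class RotationLut {
            private static final int STEPS = 256;                 // angle resolution (power of two)
            private static final float[] SIN = new float[STEPS];
            private static final float[] COS = new float[STEPS];

            static {
                for (int i = 0; i < STEPS; i++) {
                    double a = 2.0 * Math.PI * i / STEPS;
                    SIN[i] = (float) Math.sin(a);
                    COS[i] = (float) Math.cos(a);
                }
            }

            /** Writes the 4 rotated corners of one quad into out[offset..offset+7] as x,y pairs. */
            public static void writeQuad(float cx, float cy, float halfW, float halfH,
                                         int angleIndex, float[] out, int offset) {
                final float s = SIN[angleIndex & (STEPS - 1)];    // bitmask wrap, no bounds check
                final float c = COS[angleIndex & (STEPS - 1)];
                final float[] xs = { -halfW, halfW, halfW, -halfW };
                final float[] ys = { -halfH, -halfH, halfH, halfH };
                for (int i = 0; i < 4; i++) {
                    out[offset + 2 * i]     = cx + xs[i] * c - ys[i] * s;
                    out[offset + 2 * i + 1] = cy + xs[i] * s + ys[i] * c;
                }
            }
        }

    With 256 steps the two tables cost 2 KB and the angle index wraps with a bitmask; whether that beats the trig calls on a given mobile CPU is something to profile, not assume.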

    Read the article

  • How to Handle frame rates and synchronizing screen repaints

    - by David Kroukamp
    I would first off say sorry if the title is worded incorrectly. Okay, now let me give the scenario: I'm creating a 2-player fighting game. An average battle will include a map (moving/still) and 2 characters (which are rendered by redrawing a varying number of sprites one after the other). Now at the moment I have a single game loop limiting me to a set number of frames per second (using Java):

        Timer timer = new Timer(0, new AbstractAction() {
            @Override
            public void actionPerformed(ActionEvent e) {
                long beginTime; // The time when the cycle begun
                long timeDiff;  // The time it took for the cycle to execute
                int sleepTime;  // ms to sleep (< 0 if we're behind)
                int fps = 1000 / 40;

                beginTime = System.nanoTime() / 1000000;

                // execute loop to update, check collisions and draw
                gameLoop();

                // Calculate how long the cycle took
                timeDiff = System.nanoTime() / 1000000 - beginTime;

                // Calculate sleep time
                sleepTime = fps - (int) (timeDiff);
                if (sleepTime > 0) { // If sleepTime > 0 we're OK
                    ((Timer) e.getSource()).setDelay(sleepTime);
                }
            }
        });
        timer.start();

    In gameLoop() the characters are drawn to the screen (a character holds an array of images which consists of their current sprites); every gameLoop() call will change the character's current sprite to the next and loop if the end is reached. But as you can imagine, if a sprite is only 3 images in length, then calling gameLoop() 40 times per second will cause the character's movement cycle to be drawn 40/3 = 13 times. This causes a few minor anomalies in the sprites for some characters. So my question is: how would I go about delivering a set number of frames per second when I have 2 characters on screen with varying numbers of sprites?
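    One common way out (a sketch, not from the original post): decouple the animation rate from the loop rate by advancing each animation with the measured elapsed time, so a 3-image cycle runs at its own speed no matter how often gameLoop() fires. The Animation class and the 120 ms frame duration below are illustrative assumptions.

        // Time-based sprite animation: the frame index derives from elapsed time,
        // not from the number of game-loop ticks.
        public final class Animation {
            private final int frameCount;        // number of sprite images in the cycle
            private final long frameDurationMs;  // how long each image stays on screen
            private long elapsedMs;

            public Animation(int frameCount, long frameDurationMs) {
                this.frameCount = frameCount;
                this.frameDurationMs = frameDurationMs;
            }

            /** Call once per game-loop iteration with the measured delta time. */
            public void update(long deltaMs) {
                elapsedMs += deltaMs;
            }

            /** Index of the sprite image to draw this frame; loops automatically. */
            public int currentFrame() {
                return (int) ((elapsedMs / frameDurationMs) % frameCount);
            }
        }

    Inside gameLoop() each character would call anim.update(timeDiff) and draw the image at anim.currentFrame(); a 3-image walk cycle at 120 ms per frame then looks the same whether the loop runs at 40 or 60 FPS.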

    Read the article

  • Does Google submit HTML forms?

    - by Saeed Neamati
    I have a web page, say http://domain/purchase, and in this page I have a web form. A user, on submitting this form (which has validation, both client-side and server-side, and won't be submitted until the fields are filled in appropriately), is redirected to another page, where (s)he can choose other things, specify other settings, and then purchase our product. Say the second page is http://domain/options. So, a user comes to our site, visits http://domain/purchase, fills in the form, submits it, and is then redirected to the second page, http://domain/options?parameter1=value1&parameter2=value2, which contains parameters from the first page. This is very common in passing parameters between web pages (or technically, between URLs). Now I was reviewing my website, and saw that Google had indexed some of my redirected web pages and URLs, like:

    http://domain/options?parameter1=value1&parameter2=value2
    http://domain/options?parameter1=value3&parameter2=value4
    http://domain/options?parameter1=value5&parameter2=value6
    http://domain/options?parameter1=value7&parameter2=value8
    http://domain/options?parameter1=value9&parameter2=value10

    This means that Googlebot has visited our http://domain/purchase page, filled in our form, submitted it, and been redirected to the other URL, with the corresponding parameters. This is the only way that makes sense to me. Does Google really fill in forms?

    PS: All parameters are meaningful, meaning that they are not filled in arbitrarily. For example, the phone parameter in the indexed pages has correct phone numbers. How is this possible?
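    For background on why such URLs can end up in an index at all: a form submitted with method="GET" encodes its fields directly into the query string, so every submission is just an ordinary URL. The hypothetical Java snippet below shows that this kind of "form submission" is nothing more than string construction, which anything that can guess or harvest the values can reproduce; the parameter names are placeholders from the question, not a real endpoint.

        import java.net.URLEncoder;
        import java.nio.charset.StandardCharsets;

        public class QueryStringDemo {
            public static void main(String[] args) {
                // A GET form submission is equivalent to building this URL by hand.
                String url = "http://domain/options"
                        + "?parameter1=" + URLEncoder.encode("value1", StandardCharsets.UTF_8)
                        + "&parameter2=" + URLEncoder.encode("value2", StandardCharsets.UTF_8);
                System.out.println(url); // http://domain/options?parameter1=value1&parameter2=value2
            }
        }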

    Read the article

  • Oracle Flashback Technologies - Overview

    - by Sridhar_R-Oracle
    Oracle Flashback Technologies - Introduction

    In his May 29th 2014 blog, my colleague Joe Meeks introduced Oracle Maximum Availability Architecture (MAA) and discussed both planned and unplanned outages. Let’s take a closer look at unplanned outages. These can be caused by physical failures (e.g., server, storage, network, file deletion, physical corruption, site failures) or by logical failures – cases where all components and files are physically available, but data is incorrect or corrupt. These logical failures are usually caused by human errors or application logic errors. This blog series focuses on these logical errors – what causes them and how to address and recover from them using Oracle Database Flashback. In this introductory blog post, I’ll provide an overview of the Oracle Database Flashback technologies and will discuss the features in detail in future blog posts. Let’s get started.

    We are all human beings (unless a machine is reading this), and making mistakes is a part of what we do... often what we do best! We “fat finger”, we spill drinks on keyboards, unplug the wrong cables, etc. In addition, many of us, in our lives as DBAs or developers, must have observed, caused, or corrected one or more of the following unpleasant events:

    - Accidentally updated a table with wrong values !!
    - Performed a batch update that went wrong - due to logical errors in the code !!
    - Dropped a table !!

    How do DBAs typically recover from these types of errors? First, data needs to be restored and recovered to the point in time when the error occurred (incomplete or point-in-time recovery). Moreover, depending on the type of fault, it’s possible that some services – or even the entire database – would have to be taken down during the recovery process. Apart from error conditions, there are other questions that need to be addressed as part of the investigation. For example, what did the data look like in the morning, prior to the error? What were the various changes to the row(s) between two timestamps? Who performed the transaction and how can it be reversed? Oracle Database includes built-in Flashback technologies, with features that address these challenges and questions, and enable you to perform faster, easier, and more convenient recovery from logical corruptions.

    History

    Flashback Query, the first Flashback technology, was introduced in Oracle 9i. It provides a simple, powerful and completely non-disruptive mechanism for data verification and recovery from logical errors, and enables users to view the state of data at a previous point in time. Flashback technologies were further enhanced in Oracle 10g to provide fast, easy recovery at the database, table, row, and even transaction level. Oracle Database 11g introduced an innovative method to manage and query long-term historical data with Flashback Data Archive. The 11g release also introduced Flashback Transaction, which provides an easy, one-step operation to back out a transaction. Oracle Database versions 11.2.0.2 and beyond further enhanced the performance of these features. Note that all the features listed here work without requiring any kind of restore operation. In addition, Flashback features are fully supported with the new multi-tenant capabilities introduced with Oracle Database 12c.

    Flashback Features

    - Oracle Flashback Database enables point-in-time recovery of the entire database without requiring a traditional restore and recovery operation. It rewinds the entire database to a specified point in time in the past by undoing all the changes that were made since that time.
    - Oracle Flashback Table enables an entire table or a set of tables to be recovered to a point in time in the past.
    - Oracle Flashback Drop enables accidentally dropped tables and all dependent objects to be restored.
    - Oracle Flashback Query enables data to be viewed at a point in time in the past. This feature can be used to view and reconstruct data that was lost due to unintentional change(s) or deletion(s). It can also be used to build self-service error correction into applications, empowering end users to undo and correct their errors.
    - Oracle Flashback Version Query offers the ability to query the historical changes to data between two points in time or system change numbers (SCNs).
    - Oracle Flashback Transaction Query enables changes to be examined at the transaction level. This capability can be used to diagnose problems, perform analysis, audit transactions, and even revert the transaction by undoing its SQL.
    - Oracle Flashback Transaction is a procedure used to back out a transaction and its dependent transactions.

    Flashback technologies eliminate the need for a traditional restore and recovery process to fix logical corruptions or make enquiries. Using these technologies, you can recover from the error in the same amount of time it took to generate the error. All the Flashback features can be accessed either via the SQL command line or via Enterprise Manager. Most of the Flashback technologies depend on the available UNDO to retrieve older data. The following describes the various Flashback technologies, with example syntax: their purpose, dependencies, and situations where each individual technology can be used.

    Error investigation related (the purpose is to investigate what went wrong and what the values were at certain points in time):

    - Flashback Query (select .. as of SCN | Timestamp) - helps to see the value of a row/set of rows at a point in time.
    - Flashback Version Query (select .. versions between SCN | Timestamp and SCN | Timestamp) - helps determine how the value evolved between certain SCNs or between timestamps.
    - Flashback Transaction Query (select .. XID=) - helps to understand how the transaction caused the changes.

    Error correction related (the purpose is to fix the error and correct the problems):

    - Flashback Table (flashback table .. to SCN | Timestamp) - rewinds the table to a particular timestamp or SCN to reverse unwanted updates.
    - Flashback Drop (flashback table .. to before drop) - undrops or undeletes a table.
    - Flashback Database (flashback database to SCN | Restore Point) - this is the rewind button for Oracle databases. You can revert the entire database to a particular point in time. It is a fast way to perform a PITR (point-in-time recovery).
    - Flashback Transaction (DBMS_FLASHBACK.TRANSACTION_BACKOUT(XID..)) - reverses a transaction and its related transactions.

    Advanced use cases

    Flashback technology is integrated into Oracle Recovery Manager (RMAN) and Oracle Data Guard. So, apart from the basic use cases mentioned above, the following use cases are addressed using Oracle Flashback:

    - Block media recovery by RMAN - to perform block-level recovery.
    - Snapshot Standby - where the standby is temporarily converted to a read/write environment for testing, backup, or migration purposes.
    - Re-instating an old primary in a Data Guard environment - this avoids the need to restore an old backup and perform a recovery to make it a new standby.
    - Guaranteed Restore Points - to bring back the entire database to an older point in time in a guaranteed way.

    And so on. I hope this introductory overview helps you understand how Flashback features can be used to investigate and recover from logical errors. As mentioned earlier, I will take a deeper dive into some of the critical Flashback features in my upcoming blogs and address common use cases.

    Read the article

  • How to create a chained differencing disk of another differencing disk in Virtual Box?

    - by WooYek
    How do I create a differencing disk (a chained one) from a disk that is already a differencing image? I would like to have:

    - W2008 (base, immutable)
    - W2008+SQL2008 (differencing, with SQL installed) - this I can do.
    - W2008+SQL2008+SharePoint (chained differencing, with SharePoint installed on top of SQL2008)

    There's some info about it in the manual: http://www.virtualbox.org/manual/ch05.html#diffimages

    "Differencing images can be chained. If another differencing image is created for a virtual disk that already has a differencing image, then it becomes a 'grandchild' of the original parent. The first differencing image then becomes read-only as well, and write operations only go to the second-level differencing image. When reading from the virtual disk, VirtualBox needs to look into the second differencing image first, then into the first if the sector was not found, and then into the original image."

    I don't get it...

    Read the article

  • Using WinSCP with SSH server and 2 machine hops

    - by Mike
    I'm on a Windows machine using PuTTY to ssh into my school's server. From there I need to "slogin -XY machine1" and then "slogin -XY machine2". Ideally, I'd like to use WinSCP to connect and transfer files. I know I can do this by using two copies of PuTTY: one to ssh into the server and create a proxy, and a second PuTTY to connect to that proxy, log in to machine1, and create a second proxy. I can then use WinSCP to connect to the second proxy and log in to machine2... Is there a simpler way of doing this?

    Read the article

  • SSAS DMVs: useful links

    - by Davide Mauri
    From time to time it happens that I need to extract metadata information from Analysis Services DMVs in order to quickly get an overview of the entire situation and/or drill down to detail level. As a memo, I'm posting the links I use most when I need documentation on SSAS objects and data DMVs:

    SSAS: Using DMV Queries to get Cube Metadata
    http://bennyaustin.wordpress.com/2011/03/01/ssas-dmv-queries-cube-metadata/

    SSAS DMV (Dynamic Management View)
    http://dwbi1.wordpress.com/2010/01/01/ssas-dmv-dynamic-management-view/

    Use Dynamic Management Views (DMVs) to Monitor Analysis Services
    http://msdn.microsoft.com/en-us/library/hh230820.aspx

    Read the article

  • Multiple External IP Ranges on a Juniper SSG5

    - by Sam
    I have a Juniper SSG 5 firewall in a datacenter. The first interface (eth0/0) has been assigned a static IP address and has three other addresses configured for VIP NAT. I have a static route configured at the lowest priority for 0.0.0.0/0 to my hosting company's gateway. Now I need to configure a second IP block. I have the IPs assigned to the second interface (eth0/1), which is in the same security zone and virtual router as the first. However, with this interface enabled I (a) can't initiate outbound sessions (browse the internet, ping, DNS lookup, etc.) even though I can access servers behind the firewall just fine from the outside, and (b) can't ping the management IP of the firewall/gateway. I've tried everything I can think of, but I guess this is a little above my head. Could anyone point me in the right direction?

    Interfaces:
    ethernet0/0  xxx.xxx.242.4/29   Untrust  Layer3
    ethernet0/1  xxx.xxx.152.0/28   Untrust  Layer3

    Routes: http://i.stack.imgur.com/60s41.png

    Read the article

  • Perl - can't flush STDOUT or STDERR

    - by Jim Salter
    Perl 5.14 from stock Ubuntu Precise repos. Trying to write a simple wrapper to monitor progress on copying from one stream to another:

        use IO::Handle;

        while ($bufsize = read (SOURCE, $buffer, 1048576)) {
            STDERR->printflush ("Transferred $xferred of $sendsize bytes\n");
            $xferred += $bufsize;
            print TARGET $buffer;
        }

    This does not perform as expected (writing a line each time the 1M buffer is read). I end up seeing the first line (with a blank value of $xferred), and then the 7th and 8th lines (on an 8MB transfer). Been pounding my brains out on this for hours - I've read the perldocs, I've read the classic "Suffering from Buffering" article, I've tried everything from select and $|++ to IO::Handle to binmode (STDERR, ":unix") to you name it. I've also tried flushing TARGET with each line using IO::Handle (TARGET->flush). No dice. Has anybody else ever encountered this? I don't have any ideas left. Sleeping one second "fixes" the problem, but obviously I don't want to sleep a second every time I read a buffer just so my progress will output on the screen! FWIW, the problem is exactly the same whether I'm outputting to STDERR or STDOUT.

    Read the article

  • Waterfall Model (SDLC) vs. Prototyping Model

    The characters in the fable of the Tortoise and the Hare can easily be used to demonstrate the similarities and differences between the Waterfall and Prototyping software development models. This children's fable is about a race between a consistently slow-moving but steadfast turtle and an extremely fast but unreliable rabbit. After closely comparing each character's attributes in correlation with both software development models, a trend appears: the Waterfall closely resembles the Tortoise, in that the Waterfall Model is typically a slow-moving process broken up into multiple sequential steps that must be executed in a standard linear pattern. The Tortoise can be quoted several times in the story saying "Slow and steady wins the race." This is the perfect mantra for the Waterfall Model, in that this model is seen as cumbersome and slow-moving.

    Waterfall Model Phases

    Requirement Analysis & Definition: This phase focuses on defining requirements for a project that is to be developed and determining if the project is even feasible. Requirements are collected by analyzing existing systems and functionality in correlation with the needs of the business and the desires of the end users. The desired output for this phase is a list of specific requirements from the business that are to be designed and implemented in the subsequent steps. In addition, this phase is used to determine if any value will be gained by completing the project.

    System Design: This phase focuses primarily on the actual architectural design of a system, and how it will interact within itself and with other existing applications. Projects at this level should be viewed at a high level so that actual implementation details are decided in the implementation phase. However, major environmental decisions like hardware and platform choices are typically made in this phase. The basic goal of this phase is to design an application at the system level, in that classes, interfaces, and interactions are defined. Additionally, scalability, distribution, and reliability should be considered in all decisions. The desired output for this phase is a functional design document that states all of the architectural decisions that have been made in regards to the project, as well as diagrams like sequence and class diagrams.

    Software Design: This phase focuses primarily on refining the decisions found in the functional design document. Classes and interfaces are further broken down into logical modules based on the interfaces and interactions previously indicated. The output of this phase is a formal design document.

    Implementation / Coding: This phase focuses primarily on implementing the previously defined modules as units of code. These units are developed independently and are integrated as the system is put together as a whole.

    Software Integration & Verification: This phase primarily focuses on testing each of the units of code developed, as well as testing the system as a whole. There are two basic types of testing at this phase: unit tests and integration tests. Unit tests are built to test the functionality of a code unit to ensure that it performs its desired task. Integration tests test the system as a whole, because they focus on the results of combining specific units of code and validating them against expected results. The output of this phase is a test plan that includes tests with expected results and actual results.

    System Verification: This phase primarily focuses on testing the system as a whole in regards to the list of project requirements and the desired operating environment.

    Operation & Maintenance: This phase primarily focuses on handing the completed project over to the customer so that they can verify that all of their requirements have been met based on their original requirements. This phase will also validate the correctness of those requirements and determine if any changes need to be made. In addition, any problems not resolved in the previous phase will be handled in this section.

    The Waterfall Model's linear and sequential methodology does offer a project certain advantages and disadvantages.

    Advantages of the Waterfall Model
    - Simple to implement and execute, for individual projects and/or company-wide
    - Limited demand on resources
    - Large emphasis on documentation

    Disadvantages of the Waterfall Model
    - Completed phases cannot be revisited, regardless of issues that arise within a project
    - Accurate requirements are never gathered prior to the completion of the requirements phase, due to the lack of clarification of the client's desires
    - Small changes or errors that arise in applications may cause additional problems
    - The client cannot change any requirements once the requirements phase has been completed, leaving them no options for changes as their desires change
    - Excess documentation
    - Phases are cumbersome and slow-moving

    Learn more about the major processes in the Software Development Life Cycle and the Waterfall Model.

    Conversely, the Hare shares similar traits with the Prototyping software development model, in that ideas are rapidly converted to basic working examples and subsequent changes are made to quickly align the project with customers' desires as they are formulated and as the software strays from the customer's vision. The basic concept of prototyping is to eliminate the use of well-defined project requirements. Projects are allowed to grow as the customer's needs and requests grow. Projects are initially designed according to basic requirements and are refined as the requirements become more refined. This process allows customers to feel their way around the application to ensure that they are developing exactly what they want. This model also works well for determining the feasibility of certain approaches in regards to an application. Prototypes allow for quickly developing examples of implementing specific functionality based on certain techniques.

    Advantages of Prototyping
    - Active participation from users and customers
    - Allows customers to change their mind in specifying requirements
    - Customers get a better understanding of the system as it is developed
    - Earlier bug/error detection
    - Promotes communication with customers
    - The prototype could be used as the final production version
    - Reduced time needed to develop applications compared to the Waterfall method

    Disadvantages of Prototyping
    - Promotes constantly redefining project requirements, which can cause major system rewrites
    - Potential for increased complexity of a system as the scope of the system expands
    - Customers could mistake the prototype for the working version
    - Implementation compromises can increase complexity when applying updates and/or application fixes

    When companies are trying to decide between the Waterfall model and the Prototype model, they need to evaluate the benefits and disadvantages of both. Typically, smaller companies or projects with major time constraints head for more of a Prototype model approach, because it can reduce the time needed to complete the project: there is more of a focus on building the project and less on defining requirements and scope prior to its start. On the other hand, companies with well-defined requirements and time allowed to generate proper documentation should steer towards more of a Waterfall model, because they are in a position to obtain clarified requirements and to design an optimal solution prior to the start of coding.

    Read the article

  • Friday Fun: Ghosruns

    - by Asian Angel
    In this week’s game a huge ghost is on the loose and chasing after your group of humans. Can you successfully beat the Match-3 challenge on each level or will your group become this ghost’s newest friends for eternity?

    Read the article

  • Stability of beta and daily-live

    - by Prateek
    I have some related questions:

    1. For 12.04 (and in general), are the beta releases more stable/less buggy than the daily builds?
    2. If the answer to 1 is yes: if I install the beta and apt-get upgrade, will I remain at the "stability level" of the beta, or of the daily builds?
    3. At this point, is there any advantage to installing the beta over installing a daily image?

    (Background: I am debating whether to install 11.10 or 12.04 beta/daily on a new machine.)

    Read the article

  • How to create a "retro" pixel shader for transformed 2D sprites that maintains pixel fidelity?

    - by David Gouveia
    The image below shows two sprites rendered with point sampling on top of a background: the left skull has no rotation/scaling applied to it, so every pixel matches perfectly with the background; the right skull is rotated/scaled, and this results in larger pixels that are no longer axis-aligned.

    How could I develop a pixel shader that would render the transformed sprite on the right with axis-aligned pixels of the same size as the rest of the scene? This might be related to how sprite scaling was implemented in old games such as Monkey Island, because that's the effect I'm trying to achieve, but with rotation added.

    Edit: As per kaoD's suggestions, I tried to address the problem as a post-process. The easiest approach was to render to a separate render target first (downsampled to match the desired pixel size) and then upscale it when rendering a second time. It did address my requirements above. First I tried doing it Linear -> Point: there was no distortion, but the result looked blurred and lost most of the highlight colors, which in my opinion breaks the retro look I needed. The second time I tried Point -> Point: despite the distortion, I think that might be good enough for my needs, although it does look better as a still image than in motion. To demonstrate, here's a video of the effect, although YouTube filtered the pixels out of it: http://youtu.be/hqokk58KFmI

    However, I'll leave the question open for a few more days in case someone comes up with a better sampling solution that maintains the crisp look while decreasing the amount of distortion when moving.
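    The render-small-then-upscale post-process described in the edit can be sketched outside any particular engine. Below is a minimal Java2D version of the Point -> Point variant (nearest-neighbour upscale of a low-resolution buffer); the class name, buffer types, and pixelSize parameter are illustrative assumptions, not code from the original question.

        import java.awt.Graphics2D;
        import java.awt.RenderingHints;
        import java.awt.image.BufferedImage;

        public class RetroUpscale {
            /** Draws the scene into a low-res buffer, then upscales with nearest-neighbour. */
            public static BufferedImage render(int screenW, int screenH, int pixelSize) {
                int lowW = screenW / pixelSize, lowH = screenH / pixelSize;
                BufferedImage low = new BufferedImage(lowW, lowH, BufferedImage.TYPE_INT_RGB);
                Graphics2D lg = low.createGraphics();
                // ... draw rotated/scaled sprites here at low resolution ...
                lg.dispose();

                BufferedImage out = new BufferedImage(screenW, screenH, BufferedImage.TYPE_INT_RGB);
                Graphics2D og = out.createGraphics();
                og.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                        RenderingHints.VALUE_INTERPOLATION_NEAREST_NEIGHBOR);
                og.drawImage(low, 0, 0, screenW, screenH, null); // point-sampled upscale
                og.dispose();
                return out;
            }
        }

    Because everything is rasterized at the low resolution first, the result is axis-aligned "fat pixels" everywhere, at the cost of the same motion distortion the question describes.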

    Read the article

  • Altering a Column Which has a Default Constraint

    - by Dinesh Asanka
    Setting up a default column is a common task for developers. But are we naming those default constraints explicitly? In the table creation below, the column Sys_DateTime is given the default value getdate():

        CREATE TABLE SampleTable
        (ID int identity(1,1),
         Sys_DateTime Datetime DEFAULT getdate()
        )

    We can check the relevant information in the system catalogs with the following query:

        SELECT sc.name TableName,
               dc.name DefaultName,
               dc.definition,
               OBJECT_NAME(dc.parent_object_id) TableName,
               dc.is_system_named
        FROM sys.default_constraints dc
        INNER JOIN sys.columns sc
            ON dc.parent_object_id = sc.object_id
            AND dc.parent_column_id = sc.column_id

    Most of the returned columns are self-explanatory. The last column, is_system_named, identifies whether the default name was given by the system. As you know, in the above case, since we didn't provide any default name, the system generated one for us. The problem with these names is that they can differ from environment to environment. For example, if I create this table in a different database, the default name could be DF__SampleTab__Sys_D__7E6CC920.

    Now let us create another default and explicitly name it:

        CREATE TABLE SampleTable2
        (ID int identity(1,1),
         Sys_DateTime Datetime
        )

        ALTER TABLE SampleTable2
        ADD CONSTRAINT DF_sys_DateTime_Getdate DEFAULT( Getdate()) FOR Sys_DateTime

    If we run the previous query again, the last created default will have 0 for is_system_named. Now let us say I want to change the data type of the Sys_DateTime column to something else:

        ALTER TABLE SampleTable2 ALTER COLUMN Sys_DateTime Date

    This will generate the below error:

        Msg 5074, Level 16, State 1, Line 1
        The object 'DF_sys_DateTime_Getdate' is dependent on column 'Sys_DateTime'.
        Msg 4922, Level 16, State 9, Line 1
        ALTER TABLE ALTER COLUMN Sys_DateTime failed because one or more objects access this column.

    This means you need to drop the default constraint before altering the column:

        ALTER TABLE [dbo].[SampleTable2] DROP CONSTRAINT [DF_sys_DateTime_Getdate]
        ALTER TABLE SampleTable2 ALTER COLUMN Sys_DateTime Date
        ALTER TABLE [dbo].[SampleTable2] ADD CONSTRAINT [DF_sys_DateTime_Getdate] DEFAULT (getdate()) FOR [Sys_DateTime]

    If you have a system-named default constraint that can differ from environment to environment, so that you cannot drop it by name as above, you can use the below code template:

        DECLARE @defaultname VARCHAR(255)
        DECLARE @executesql VARCHAR(1000)

        SELECT @defaultname = dc.name
        FROM sys.default_constraints dc
        INNER JOIN sys.columns sc
            ON dc.parent_object_id = sc.object_id
            AND dc.parent_column_id = sc.column_id
        WHERE OBJECT_NAME (parent_object_id) = 'SampleTable'
          AND sc.name = 'Sys_DateTime'

        SET @executesql = 'ALTER TABLE SampleTable DROP CONSTRAINT ' + @defaultname
        EXEC( @executesql)
        ALTER TABLE SampleTable ALTER COLUMN Sys_DateTime Date
        ALTER TABLE [dbo].[SampleTable] ADD DEFAULT (Getdate()) FOR [Sys_DateTime]

    Read the article

  • scheduled chkdsk on Windows 7 blinking the hard disk light every 5 seconds, what can I do?

    - by Jian Lin
    PG&E (the local electricity company) came and took out the old power meters and put in new ones, by brute force and with no advance notice in our neighborhood, so my computer lost power. So I went to Windows 7's drive C and scheduled a disk check (chkdsk) for the next boot. When it boots up, it says:

        A disk check has been scheduled
        To skip disk checking, press any key within __ second(s).

    and then, after it shows "To skip disk checking, press any key within 1 second(s).", it just sits there, with no further message. The hard disk light blinks every 5 seconds. So what is to be done now? I certainly don't want to brute-force power it off again.

    Read the article

  • SOA Management in 3 minutes - Video explainer

    - by J Swaroop
    Today’s CIOs and IT executives face challenges that take valuable time away from more strategic business objectives: they have to keep their systems running 24/7, manage increasingly complex applications, and more, all as part of their SOA environment. Watch this quick 3-minute video explainer to learn how Oracle EM Management Pack Plus for SOA is engineered to deliver value right out of the box with a fully centralized management console. With a rich set of service- and system-level dashboards, administrators can view service levels for key business processes and SOA infrastructure components from a central location. Watch the 3 minute video explainer.

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-26

    - by Bob Rhubart
    Software Architecture for High Availability in the Cloud | Brian Jimerson
    Brian Jimerson looks at the paradigm shifts from machine-based architectures to cloud-based architectures when designing fault tolerance, and how enterprise applications need to be engineered to ensure the highest level of availability in the cloud.

    SOA, Cloud & Service Technology Symposium 2012 London - Special Oracle Discount
    Registration is now open for one of the premier SOA, Cloud, and Service Technology events. Once again, the Oracle community is well-represented in the session schedule. And now you can save on registration with a special Oracle discount code.

    Progress 4GL and DB to Oracle and cloud | Tom Laszewski
    "Getting from client/server based 4GLs and databases where the 4GL is tightly linked to the database to Oracle and the cloud is not easy," says cloud migration expert Tom Laszewski. "The least risky and expensive option... is to use the Progress OpenEdge DataServer for Oracle."

    Embrace 'big data' now or fall behind the competition, analyst warns | TechTarget
    TechTarget's Mark Brunelli's story says, in essence, that Big Data is not your father's Business Intelligence.

    Calculating the Size (in Bytes and MB) of a Oracle Coherence Cache | Ricardo Ferreira
    Ferreira illustrates a programmatic way to use the Oracle Coherence API to calculate the total size of a specific cache that resides in the data grid.

    WebCenter Portal Tutorial Part 7: Integrating Discussions and Link service | Yannick Ongena
    The latest chapter in Oracle ACE Yannick Ongena's ongoing series.

    How to Setup JDeveloper workspace for ADF Fusion Applications to run Business Component Tester? | Jack Desai
    Helpful technical tips from yet another member of the Oracle Fusion Middleware Architecture Team.

    Big Data for the Enterprise; Software Architecture for High Availability in the Cloud; Why Cloud Computing is a Paradigm Shift - And Why It Isn't
    This week on the OTN Solution Architect Homepage, along with an updated events list and this week's list of selected community blog posts.

    Worst Practices for Big Data | Dain Hansen
    Dain Hansen shares some insight on what NOT to do if you want to capitalize on Big Data.

    Free Virtual Developer Day - Oracle Fusion Development | Grant Ronald
    "The online conference will include seminars, hands-on lab and live chats with our technical staff including me!" says Grant Ronald. "And the best bit, it doesn't cost you a single penny. It's free and available right on your desktop."

    Penguin is Getting Ready for Oracle OpenWorld 2012 | Zeynep Koch
    Linux fan? Check out Zeynep Koch's post for a list of Linux-based sessions at Oracle OpenWorld 2012 in San Francisco.

    Amazon Web Services (AWS) Autoscaling | Frank Munz
    "Autoscaling on AWS can only be configured with lengthy commands from the command line but not from the web based AWS console," says Frank Munz. "Getting all the parameters right can be tricky." He demonstrates one easy example in this video.

    Oracle Fusion Applications Design Patterns Now Available For Developers | Ultan O'Broin
    "These Oracle Fusion Applications UX Design Patterns, or blueprints, enable Oracle applications developers and system implementers everywhere to leverage professional usability insight," says O'Broin.

    How Much Data Is Created Every Minute? [INFOGRAPHIC] | Mashable
    Explaining what the "Big" in Big Data really means -- and it's more than a little mind-boggling.

    Thought for the Day
    "Real, though miniature, Turing Tests are happening all the time, every day, whenever a person puts up with stupid computer software." — Jaron Lanier
    Source: SoftwareQuotes.com

    Read the article

  • Guide to MySQL & NoSQL, Webinar Q&A

    - by Mat Keep
    Yesterday we ran a webinar discussing the demands of next generation web services and how blending the best of relational and NoSQL technologies enables developers and architects to deliver the agility, performance and availability needed to be successful. Attendees posted a number of great questions to the MySQL developers, serving to provide additional insights into areas like auto-sharding and cross-shard JOINs, replication, performance, client libraries, etc. So I thought it would be useful to post those below, for the benefit of those unable to attend the webinar.

    Before getting to the Q&A, there are a couple of other resources that may be useful to those looking at NoSQL capabilities within MySQL:
    - On-Demand webinar (coming soon!)
    - Slides used during the webinar
    - Guide to MySQL and NoSQL whitepaper
    - MySQL Cluster demo, including NoSQL interfaces, auto-sharding, high availability, etc.

    So here is the Q&A from the event.

    Q. Where does MySQL Cluster fit into the CAP theorem?
    A. MySQL Cluster is flexible. A single Cluster will prefer consistency over availability in the presence of network partitions. A pair of Clusters can be configured to prefer availability over consistency. A full explanation can be found on the MySQL Cluster & CAP Theorem blog post.

    Q. Can you configure the number of replicas? (The slide used a replication factor of 1.)
    A. Yes. A cluster is configured by an .ini file. The option NoOfReplicas sets the number of originals and replicas: 1 = no data redundancy, 2 = one copy, etc. Usually there's no benefit in setting it > 2.

    Q. Interestingly, most (if not all) of the NoSQL databases recommend having 3 copies of data (the replication factor), with configurable quorum-based reads and writes.
    A. MySQL Cluster does not need a quorum of replicas online to provide service. Systems that require a quorum need > 2 replicas to be able to tolerate a single failure. Additionally, many NoSQL systems take liberal inspiration from the original GFS paper, which described a 3-replica configuration. MySQL Cluster avoids the need for a quorum by using a lightweight arbitrator. You can configure more than 2 replicas, but this is a tradeoff between incrementally improved availability and linearly increased cost.

    Q. Can you have cross-node-group JOINs? Wouldn't that run the risk of flooding the network?
    A. MySQL Cluster 7.2 supports cross-nodegroup joins. A full cross-join can require a large amount of data transfer, which may bottleneck on network bandwidth. However, for more selective joins, typically seen with OLTP and light analytic applications, cross-node-group joins give a great performance boost and a network bandwidth saving over having the MySQL Server perform the join.

    Q. Are the details of the benchmark available anywhere? According to my calculations it results in approx.
    350k ops/sec per processor, which is the largest number I've seen lately.
    A. The details are linked from Mikael Ronstrom's blog. The benchmark uses a benchmarking tool we call flexAsynch, which runs parallel asynchronous transactions. It involved 100-byte reads of 25 columns each. Regarding the per-processor ops/sec, MySQL Cluster is particularly efficient in terms of throughput per node. It uses lock-free, minimal-copy message passing internally, and maximizes ID cache reuse. Note also that these are in-memory tables; there is no need to read anything from disk.

    Q. Is access control (like table-level) planned to be supported for the NoSQL access mode?
    A. Currently we have not seen much need for full SQL-like access control (which has always been overkill for web apps and telco apps). So we have no plans, though especially with memcached it is certainly possible to turn on connection-level access control. But specifically table-level controls are not planned.

    Q. How is the performance of the memcached API with MySQL against memcached+MySQL or any other object cache like Ehcache with MySQL DB?
    A. With the memcache API we generally see a memcached response in less than 1 ms, and a small cluster with one memcached server can handle tens of thousands of operations per second.

    Q. Can .NET access the memcached API?
    A. Yes, just use a .NET memcache client such as the enyim or BeIT memcache libraries. (A minimal Java client sketch along the same lines appears after this Q&A.)

    Q. Is row-level locking applicable when you update a column through the memcached API?
    A. An update that comes through memcached uses a row lock and then releases it immediately. Memcached operations like "INCREMENT" are actually pushed down to the data nodes. In most cases the locks are not even held long enough for a network round trip.

    Q. Has anyone published an example using something like PHP? I am assuming that you just use the PHP memcached extension to hook into the memcached API. Is that correct?
    A. Not that I'm aware of, but absolutely you can use it with PHP or any of the other drivers.

    Q. For beginners, we need more examples.
    A. Take a look here for a fully worked example.

    Q. Can I access MySQL using Cobol (Open Cobol) or C, and if so where can I find the coding libraries etc.?
    A. There is a Cobol implementation that works well with MySQL, but I do not think it is Open Cobol. Also there is a MySQL C client library that is a standard part of every MySQL distribution.

    Q. Is there a place to go to find help when testing and/or implementing the NoSQL access?
    A. If using Cluster then you can use the [email protected] alias or post on the MySQL Cluster forum.

    Q. Are there any white papers on this?
    A. Yes - there is more detail in the MySQL Guide to NoSQL whitepaper.

    If you have further questions, please don't hesitate to use the comments below!
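    As referenced above, a minimal Java sketch of talking to a memcached interface such as the one fronting MySQL Cluster, assuming the third-party spymemcached client; the host, port, keys, and values are placeholders, and nothing here is from the webinar itself:

        import java.net.InetSocketAddress;
        import net.spy.memcached.MemcachedClient;

        public class ClusterMemcacheDemo {
            public static void main(String[] args) throws Exception {
                // Placeholder host/port for a memcached server fronting MySQL Cluster.
                MemcachedClient client =
                        new MemcachedClient(new InetSocketAddress("127.0.0.1", 11211));

                client.set("user:42", 0, "jdoe").get(); // block until the write completes
                Object value = client.get("user:42");   // typically sub-millisecond, per the Q&A
                System.out.println(value);

                client.incr("pageviews", 1);            // INCREMENT-style ops are pushed down to the data nodes
                client.shutdown();
            }
        }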

    Read the article
