Search Results

Search found 27151 results on 1087 pages for 'end to end'.


  • Microsoft Silverlight Analytics Framework - Day 2 Part 2 of MIX 2010

    - by GeekAgilistMercenary
    I went to the session on the Microsoft Silverlight Analytics Framework (MSAF) today while here at MIX 2010.  It was a great walk through the features, ideas, and what the end goal is.  Michael Scherotter did a great job of lining up the ideas, intentions, and functionality behind the framework. The framework is built around Silverlight Behaviors.  If you aren't sure what behaviors are, check out these entries from Nikhilk.net: Silverlight Behaviors, Silverlight 3 Drag Behavior, An Introduction to Behaviors, Triggers, and Actions, and of course the MSDN documentation on the matter. One of the key features of the framework is support for out-of-browser scenarios, which works perfectly with our Webtrends DX Web Services.  Another is offline scenarios, which again, we have worked toward supporting at Webtrends DC Web Services via caching and other criteria as needed.  Another feature that I was really stoked about is the Microsoft Expression Blend integration that removes the need for coding, thus simplifying the addition of analytics components based on events or other actions within a Silverlight application.  This framework also easily supports A/B testing (again, something we do quite a bit of at Webtrends with Webtrends Optimize). The last thing I really wanted to point out was the control support this framework already has built in from Telerik RadControls, the Smooth Streaming Media Element, and Microsoft Silverlight Media Framework Player 1.0.  These are implemented with behaviors and handlers exposed via MEF (the Managed Extensibility Framework). All in all, great second day, great analytics framework for Silverlight, and great presentation.  Until tomorrow, adieu. For this original entry and all sorts of other technical gibberish I write, check out my other blog Agilist Mercenary.
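
    For anyone who hasn't met Silverlight behaviors before, here is a minimal sketch of what one looks like (the class, its property, and the tracking call are hypothetical examples, not part of MSAF):

    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Interactivity;

    // Hypothetical example: a behavior that reports a click to some analytics
    // sink when attached to a Button in Expression Blend or in XAML.
    public class TrackClickBehavior : Behavior<Button>
    {
        // Name under which the click should show up in reports (illustrative only).
        public string EventName { get; set; }

        protected override void OnAttached()
        {
            base.OnAttached();
            AssociatedObject.Click += OnClick;
        }

        protected override void OnDetaching()
        {
            AssociatedObject.Click -= OnClick;
            base.OnDetaching();
        }

        private void OnClick(object sender, RoutedEventArgs e)
        {
            // A real analytics behavior would call its tracking service here.
            System.Diagnostics.Debug.WriteLine("Tracked event: " + EventName);
        }
    }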

    Read the article

  • What are the licensing issues involved in the Oracle/Apache java dispute?

    - by Chris Knight
    I've just started following with interest the soap opera involving Oracle's acquisition of Java and the loss of goodwill it seems to have generated in the open source community. Specifically, I'm now trying to get my head around the implications of Oracle's decision to refuse Apache an open source license for Harmony. My questions: 1) What is Harmony anyway? Their website states "Apache Harmony software is a modular Java runtime with class libraries and associated tools". How is this different from J2SE or J2EE? Or is Harmony akin to Android? 2) The crux of this issue is the Java Technology Compatibility Kit (or TCK), which certifies that your implementation adheres to the JSR specifications. If I understand correctly, Oracle refuses to offer free or open source license access to the TCK, preventing projects like Harmony from being released as open source. Why is this such a big deal for Apache? E.g. why can't (or don't) they release Harmony under a restricted license? 3) From this site comes the following quote: It looks like Oracle's plan is to restrict deployments of Java implementations in certain markets, particularly on mobile platforms, so that it can monetize its own Java offering in those markets without any competition. Presumably anything Oracle produced would be subject to the same restrictions it is imposing on others with respect to end-technology licensing, so how could they get a leg up on the competition? While no doubt distasteful, wouldn't other competitors such as Google or Apache be able to release competing platforms under the same license as Oracle?

    Read the article

  • Designing a Database Application with OOP

    - by Tim C
    I often develop SQL database applications using Linq, and my methodology is to build model classes to represent each table, and each table that needs inserting or updating gets a Save() method (which either does an InsertOnSubmit() or SubmitChanges(), depending on the state of the object). Often, when I need to represent a collection of records, I'll create a class that inherits from a List-like object of the atomic class. ex. public class CustomerCollection : CoreCollection<Customer> { } Recently, I was working on an application where end-users were experiencing slowness, where each of the objects needed to be saved to the database if they met a certain criteria. My Save() method was slow, presumably because I was making all kinds of round-trips to the server, and calling DataContext.SubmitChanges() after each atomic save. So, the code might have looked something like this foreach(Customer c in customerCollection) { if(c.ShouldSave()) { c.Save(); } } I worked through multiple strategies to optimize, but ultimately settled on passing a big string of data to a SQL stored procedure, where the string has all the data that represents the records I was working with - it might look something like this: CustomerID:34567;CurrentAddress:23 3rd St;CustomerID:23456;CurrentAddress:123 4th St So, SQL server parses the string, performs the logic to determine appropriateness of save, and then Inserts, Updates, or Ignores. With C#/Linq doing this work, it saved 5-10 records / s. When SQL does it, I get 100 records / s, so there is no denying the Stored Proc is more efficient; however, I hate the solution because it doesn't seem nearly as clean or safe. My real concern is that I don't have any better solutions that hold a candle to the performance of the stored proc solution. Am I doing something obviously wrong in how I'm thinking about designing database applications? Are there better ways of designing database applications?
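
    For illustration, here is a minimal sketch of the string-building side of that approach, assuming a LINQ to SQL DataContext; the MyDataContext class and the SaveCustomerBatch stored procedure are placeholder names, not the actual schema:

    using System.Text;

    // Build one delimited payload for every customer that needs saving,
    // then make a single round-trip instead of one Save() per record.
    var payload = new StringBuilder();
    foreach (Customer c in customerCollection)
    {
        if (c.ShouldSave())
        {
            payload.AppendFormat("CustomerID:{0};CurrentAddress:{1};",
                c.CustomerID, c.CurrentAddress);
        }
    }

    using (var db = new MyDataContext())
    {
        // ExecuteCommand sends the whole batch in one call; the stored procedure
        // parses the string and inserts, updates, or ignores each record.
        db.ExecuteCommand("EXEC dbo.SaveCustomerBatch {0}", payload.ToString());
    }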

    Read the article

  • Releasing an open source project without getting embarrassed

    - by Hopeful
    I've been working by myself on a fairly large open source project for quite a while and it's nearing the point where I'd like to release it. However, I'm self-taught and I don't really know anyone who could adequately review my project. A few years ago, I had released a small bit of code which pretty much got ripped apart (in a critical sense) on the forum where I released it. Even though the code worked, the criticism was accurate but brutal. It prompted me to begin searching for best practices for everything and in the end I feel that it made me a much better developer. I've gone over everything in my project so many times trying to make it perfect that I've lost count. I believe in my project and think it has the potential to help a lot of people and I feel like I've done some cool things in interesting ways with it. Still, because I'm self-taught, I can't help but wonder what gaps exist in my self-education. The way my code was ripped apart last time isn't something I'd like to repeat. I think my two biggest fears with releasing my project that I've poured countless hours into are being absolutely embarrassed because I missed some patently obvious things because of my self-education or, worse, releasing it to the sound of crickets. Is there anyone who has been in a similar situation? I'm not afraid of constructive criticism, so long as it is constructive and not just a rant on how I screwed up. I know there is a code review site on StackExchange, but it's not really set up for large projects and I don't feel like the community there is large enough yet to get good feedback if I were to post parts of my project piecemeal (I tried with one file). What can I do to give my project at least some measure of success without getting embarrassed or devastated in the process?

    Read the article

  • asp.net mvc vs angular.js model binding

    - by aw04
    So I've noticed a trend lately of .net web developers using angular.js on the client side of applications and I've become more curious as I play around with angular and compare it to how I would do things in asp.net mvc. I'll give a quick example of what really got me thinking. I recently came across a situation at work (I work in a .net environment) where I needed to create a table bound to a collection of objects that had the ability to add and remove rows/items from the collection. I had an add button that created a new object and appended a row to the end of the table, and a remove button in each row to remove a particular object/row. Using asp.net mvc, I first found myself making an ajax call to the server for each operation, updating the server side model, and refreshing part of the page to show the result in the table. This worked but I didn't really like the idea of calling the server to update the model each time, so I tried to come up with a solution to do this on the client side. It turned out to be quite a task, as I had to generate the html on add with validation and all and the correct indexing for the model binding to work. It got worse on remove, as I ended up with a crazy string replace function to recreate the indexes on each item to satisfy the binding requirements (if an item other than the last is removed, the indexes are no longer correct). Now out of curiosity, I tried to recreate this at home in angular (which I had no experience with) and it took me all of about 10 minutes with simple functions to add and remove items from the client side model. This is just one example, but it seems to me that I'm able to achieve the same results with far fewer calls to the server in angular because of the fact that it binds to a client side model. So my question is, is this a distinct advantage of using a javascript mvc framework or am I somehow under utilizing the power of asp.net mvc and am I right in thinking that these operations should be done on the client and have no business requiring calls to the server?
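
    For comparison, the server-side half of the MVC approach can at least be collapsed into a single round-trip by posting the whole collection at once; a rough sketch follows (the view model and action names are made up for illustration):

    using System.Collections.Generic;
    using System.Web.Mvc;

    public class CustomerRow
    {
        public int CustomerId { get; set; }
        public string Address { get; set; }
    }

    public class CustomersController : Controller
    {
        // One POST carries every row the user added or edited on the client.
        // The default model binder rebuilds the list as long as the posted field
        // names use sequential indexes (rows[0].Address, rows[1].Address, ...),
        // which is exactly the re-indexing headache described above.
        [HttpPost]
        public ActionResult Save(List<CustomerRow> rows)
        {
            // persist the rows here ...
            return Json(new { success = true });
        }
    }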

    Read the article

  • The problems with Avoiding Smurf Naming classes with namespaces

    - by Daniel Koverman
    I pulled the term smurf naming from here (number 21). To save anyone not familiar the trouble, Smurf naming is the act of prefixing a bunch of related classes, variables, etc. with a common prefix so you end up with "a SmurfAccountView passes a SmurfAccountDTO to the SmurfAccountController", etc. The solution I've generally heard to this is to make a smurf namespace and drop the smurf prefixes. This has generally served me well, but I'm running into two problems. I'm working with a library with a Configuration class. It could have been called WartmongerConfiguration but it's in the Wartmonger namespace, so it's just called Configuration. I likewise have a Configuration class which could be called SmurfConfiguration, but it is in the Smurf namespace so that would be redundant. There are places in my code where Smurf.Configuration appears alongside Wartmonger.Configuration and typing out fully qualified names is clunky and makes the code less readable. It would be nicer to deal with a SmurfConfiguration and (if it was my code and not a library) WartmongerConfiguration. I have a class called Service in my Smurf namespace which could have been called SmurfService. Service is a facade on top of a complex Smurf library which runs Smurf jobs. SmurfService seems like a better name because Service without the Smurf prefix is so incredibly generic. I can accept that SmurfService was already a generic, useless name and taking away smurf merely made this more apparent. But it could have been named Runner, Launcher, etc. and it would still "feel better" to me as SmurfLauncher because I don't know what a Launcher does, but I know what a SmurfLauncher does. You could argue that what a Smurf.Launcher does should be just as apparent as a Smurf.SmurfLauncher, but I could see Smurf.Launcher being some kind of class related to setup rather than a class that launches smurfs. If there is an open-and-shut way to deal with either of these, that would be great. If not, what are some common practices to mitigate their annoyance?
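
    For the first problem, a C# using-alias is the usual mitigation: both classes keep their short names inside their own namespaces, but each gets a distinct local name at the call site. A small sketch, using the namespaces described above:

    // Alias each Configuration once per file instead of fully qualifying
    // every reference to Smurf.Configuration and Wartmonger.Configuration.
    using SmurfConfiguration = Smurf.Configuration;
    using WartmongerConfiguration = Wartmonger.Configuration;

    public class ConfigurationMerger
    {
        // Hypothetical method; it only exists to show the two aliases side by side.
        public void Merge(SmurfConfiguration smurf, WartmongerConfiguration wartmonger)
        {
            // Both parameters read unambiguously without full qualification.
        }
    }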

    Read the article

  • basic beginning emacs questions - install latest version and pick appropriate UI

    - by MountainX
    I'm running the latest Kubuntu (12.04 beta 2) and I would like to run the latest emacs (currently v24). The repos are one version behind. What's the best way to install v24 or later (and avoid future version conflicts)? Also, is there any reason not to always use the GUI version of emacs if X is running? For example, could I set the GUI emacs version as the default text editor and use it to edit cron jobs (crontab -e)? I'm assuming the answer is yes, but since I haven't done that yet (my default editor is nano), I want to check if there are reasons I should leave nano as the default editor. Usually when I'm working on the command line I end up using nano. Now that I think about it, I have no idea why I keep doing that. Is there any downside to calling a GUI editor when working in an X terminal? EDIT: I briefly tested these two versions: GNU Emacs 24.0.94.1 (x86_64-pc-linux-gnu, GTK+ Version 3.3.20) and GNU Emacs 23.3.1 (x86_64-pc-linux-gnu) installed by default in Kubuntu. This post explains some of the differences between versions. Unfortunately (for me) the default installed version (23.3.1, 23.3+1-1ubuntu9) is the nox version. Package: emacs23-nox Status: install ok installed Version: 23.3+1-1ubuntu9 Replaces: emacs23, emacs23-gtk, emacs23-lucid The package with version 24 opens in GUI mode by default. That's what I prefer. Some of the version 24 changes that interest me are listed in the references below. But there appear to be a multitude of different packages and versions I could install. References: What's New In Emacs 24 (part 1) | Mastering Emacs http://www.masteringemacs.org/articles/2011/12/06/what-is-new-in-emacs-24-part-1/ "shell-mode uses pcomplete rules, with the standard completion UI. Yowzah! There's a lot of cool, new functionality hidden away in this gem of a change." EmacsWiki: Recent Changes http://www.emacswiki.org/emacs/?action=rc;showedit=0

    Read the article

  • Sponsor the Hottest .NET Community Event in Germany: dotnet Cologne 2011

    - by WeigeltRo
    The "dotnet Cologne" conference organized by the .NET user groups Bonn and Cologne has quickly become the .NET community event in Germany. So when we opened the registration for dotnet Cologne 2011 on Monday, we expected some interest. But we didn't expect the 200 "early bird" seats to be gone in less than three hours! And the registrations at normal price keep coming in, so it looks like this event will sell out even earlier than last year. In December I wrote about sponsorship opportunities at the dotnet Cologne 2011 – and why it's a good idea to be a sponsor at this particular conference. If you are interested in becoming a sponsor: We still offer a wide variety of sponsorship packages in different sizes. At our new, larger event location, we still have space for exhibition booths. Last year's exhibitors were very happy and had many interesting conversations with the attendees. And this year we planned for longer breaks between sessions, which means even more time for presenting your products. And yes, German developers understand English demos. But maybe a booth is a bit too much for you. With the Bronze package, you can make sure the attendees receive promotional material of your company in their bags – for a fraction of what you'd pay at a commercial conference. Or you could sponsor a couple of licenses of your product for the raffle at the end of the day. If you want to learn more, just send an email to Roland.Weigelt at dotnet-koelnbonn.de and I'll send you our sponsor information.

    Read the article

  • External USB Drive

    - by ErocM
    I have a server that I hooked up an external USB drive. It was formatted in windows and has files on it already. I'm new to Ubuntu so please be patient... I have two questions: Will Ubuntu see the drive since it was formatted in windows? How do I mount this drive or rather, how do I know it's seen by Ubuntu? Thanks! I did fdisk -l and this is what I have but I don't see it. It's a 1tb drive: Disk /dev/sda: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0001eb47 Device Boot Start End Blocks Id System /dev/sda1 * 2048 499711 248832 83 Linux /dev/sda2 501758 625141759 312320001 5 Extended /dev/sda5 501760 625141759 312320000 8e Linux LVM Disk /dev/sdc: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/sdc doesn't contain a valid partition table Disk /dev/sdb: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/sdb doesn't contain a valid partition table Disk /dev/mapper/ubuntu-root: 316.6 GB, 316577677312 bytes 255 heads, 63 sectors/track, 38488 cylinders, total 618315776 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/mapper/ubuntu-root doesn't contain a valid partition table Disk /dev/mapper/ubuntu-swap_1: 3217 MB, 3217031168 bytes 255 heads, 63 sectors/track, 391 cylinders, total 6283264 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 this is an external USB hard drive not a thumb drive :)

    Read the article

  • 'Binary XML' for game data?

    - by bluescrn
    I'm working on a level editing tool that saves its data as XML. This is ideal during development, as it's painless to make small changes to the data format, and it works nicely with tree-like data. The downside, though, is that the XML files are rather bloated, mostly due to duplication of tag and attribute names. Also due to numeric data taking significantly more space than using native datatypes. A small level could easily end up as 1Mb+. I want to get these sizes down significantly, especially if the system is to be used for a game on the iPhone or other devices with relatively limited memory. The optimal solution, for memory and performance, would be to convert the XML to a binary level format. But I don't want to do this. I want to keep the format fairly flexible. XML makes it very easy to add new attributes to objects, and give them a default value if an old version of the data is loaded. So I want to keep with the hierarchy of nodes, with attributes as name-value pairs. But I need to store this in a more compact format - to remove the massive duplication of tag/attribute names. Maybe also to give attributes native types, so, for example floating-point data is stored as 4 bytes per float, not as a text string. Google/Wikipedia reveal that 'binary XML' is hardly a new problem - it's been solved a number of times already. Has anyone here got experience with any of the existing systems/standards? - are any ideal for games use - with a free, lightweight and cross-platform parser/loader library (C/C++) available? Or should I reinvent this wheel myself? Or am I better off forgetting the ideal, and just compressing my raw .xml data (it should pack well with zip-like compression), and just taking the memory/performance hit on-load?
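
    As a rough illustration of the "keep the hierarchy, drop the repeated names" idea, each node can be written against a string table, with numeric attributes stored as native types; the format below is invented purely for illustration:

    using System.Collections.Generic;
    using System.IO;

    // Sketch of a compact node writer: tag/attribute names are stored once in a
    // string table and referenced by index, and numeric values are written as
    // raw 4-byte floats instead of text.
    class CompactNodeWriter
    {
        private readonly List<string> names = new List<string>();

        private ushort NameIndex(string name)
        {
            int i = names.IndexOf(name);
            if (i < 0) { names.Add(name); i = names.Count - 1; }
            return (ushort)i;
        }

        public void WriteNode(BinaryWriter w, string tag,
                              IDictionary<string, float> attributes)
        {
            w.Write(NameIndex(tag));            // tag name as 2-byte index
            w.Write((ushort)attributes.Count);
            foreach (var a in attributes)
            {
                w.Write(NameIndex(a.Key));      // attribute name as 2-byte index
                w.Write(a.Value);               // value as 4-byte float
            }
            // Child nodes and the string table itself would follow in a real format.
        }
    }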

    Read the article

  • How do I properly use String literals for loading content?

    - by Dave Voyles
    I've been using verbatim string literals for some time now, and never quite thought about using a regular string literal until I started to come across them in Microsoft provided XNA samples. With that said, I'm trying to implement a new AudioManager class from the Net Rumble sample. I have two (2) issues here: Question 1: In my code for my GameplayScreen screen I have a folder location written as the following, and it works fine: menuButton = content.Load<SoundEffect>(@"sfx/menuButton"); menuClose = content.Load<SoundEffect>(@"sfx/menuClose"); If you notice, you'll see that I'm using a verbatim string, with a forward slash "/". In the AudioManager class, they use a regular string literal, but with two backslashes "\". I understand that one is used as an escape, but why are they BACK instead of FORWARD? (See below) soundList[soundName] = game.Content.Load<SoundEffect>("audio\\wav\\"+ soundName); Question 2: I seem to be doing everything correctly in my AudioManager class, but I'm not sure of what this line means: audioFileList = audioFolder.GetFiles("*.xnb"); I suppose that the *xnb means look for everything BUT files that end in *xnb? I can't figure out what I'm doing wrong with my file locations, as the sound effects are not playing. My code is not much different from what I've linked to above. private AudioManager(Game game, DirectoryInfo audioDirectory) : base(game) { try { audioFolder = audioDirectory; audioFileList = audioFolder.GetFiles("*.mp3"); soundList = new Dictionary<string, SoundEffect>(); for (int i = 0; i < audioFileList.Length; i++) { string soundName = Path.GetFileNameWithoutExtension(audioFileList[i].Name); soundList[soundName] = game.Content.Load<SoundEffect>(@"sfx\" + soundName); soundList[soundName].Name = soundName; } // Plays this track while the GameplayScreen is active soundtrack = game.Content.Load<Song>("boomer"); } catch (NoAudioHardwareException) { // silently fall back to silence } }
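
    On Question 1, the verbatim and escaped spellings are just two ways of writing the same characters, and the content pipeline also accepts the forward-slash form used above; a quick sketch of equivalent literals (asset names taken from the post):

    // These two literals are the exact same string; only the source spelling differs.
    string a = @"sfx\menuButton";   // verbatim: backslash is taken literally
    string b = "sfx\\menuButton";   // regular: backslash must be escaped

    // The forward-slash form from GameplayScreen loads the same asset in practice.
    string c = "sfx/menuButton";

    // On Question 2, GetFiles("*.xnb") returns only files whose names end in .xnb;
    // the pattern is an include filter, not an exclusion.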

    Read the article

  • Flame Experiments Aboard the ISS Yield Surprising Results

    - by Jason Fitzpatrick
    Recent flame-based experiments aboard the International Space Station yielded results scientists simply thought couldn't happen – combustion in microgravity is a curious thing. Smithsonian magazine reports on the findings: Here on Earth, when a flame burns, it heats the surrounding atmosphere, causing the air to expand and become less dense. The pull of gravity draws colder, denser air down to the base of the flame, displacing the hot air, which rises. This convection process feeds fresh oxygen to the fire, which burns until it runs out of fuel. The upward flow of air is what gives a flame its teardrop shape and causes it to flicker. But odd things happen in space, where gravity loses its grip on solids, liquids and gases. Without gravity, hot air expands but doesn't move upward. The flame persists because of the diffusion of oxygen, with random oxygen molecules drifting into the fire. Absent the upward flow of hot air, fires in microgravity are dome-shaped or spherical—and sluggish, thanks to meager oxygen flow. "If you ignite a piece of paper in microgravity, the fire will just slowly creep along from one end to the other," says Dietrich. "Astronauts are all very excited to do our experiments because space fires really do look quite alien." Hit up the link below for the full article including how NASA is applying the findings.

    Read the article

  • Important additions to the Silverlight MaskedTextBox

    RadMaskedTextBox is one of the major controls in the Telerik Silverlight suite. It enables you to filter the user input and makes working with data much easier for the end-user. That is why in the past quarter (Q1) we put in a great effort, went through all the scenarios and user reports we had so far, and made this control as stable as possible. Now, post Q1, we have added some small but important new features to the control. Here they are: Option to get the changed value when the focus of the control is lost. Before this change each keystroke caused the ValueChanged and ValueChanging events to be raised. Now you have the option to get these events on lost focus. This also gives you the option to enable the regular expression validation of the new value on the ValueChanging event. So if you want a complex ...

    Read the article

  • SQL – Download NuoDB and Qualify for FREE Amazon Gift Cards

    - by Pinal Dave
    July has been a fantastic month and Team NuoDB has really appreciated the active participation of the SQLAuthority.com reader base. Earlier we had launched two contests with NuoDB and both of them were very much appreciated by readers. There are constant demands for more contests and team NuoDB is very much excited to support more contests. Here are the details of the contests run earlier: What ACID stands in the Database? – Contest to Win 24 Amazon Gift Cards and Joes 2 Pros 2012 Kit What is the latest Version of NuoDB? – A Quick Contest to Get Amazon Gift Cards Based on the earlier successful contests, the kind folks at NuoDB decided that they will support one more round of giveaways for SQLAuthority.com contests. However, please note that this month's contest will end in the next 48 hours. You have to take part before July 31st, 2013 11:59:00 PM PST. Here is the quick contest: You just have to go and download NuoDB. The first 10 people who download NuoDB will get USD 10 Amazon gift cards. Everyone else will be entered into a lucky draw for a USD 50 Amazon gift card. Winners will be announced in the next 24 hours. To be eligible for this contest, please download NuoDB before July 31st, 2013 11:59:00 PM PST. Bonus Round: If you have entered the contest above, you can also enter to win the latest Beginning SSRS Joes 2 Pros book. You just have to leave a comment over here with a note about how many different platforms NuoDB supports. Here are a few of the blog posts I wrote earlier on that subject: Part 1 – Install NuoDB in 90 Seconds Part 2 – Manage NuoDB Installation Part 3 – Explore NuoDB Database Part 4 – Migrate from SQL Server to NuoDB Part 5 - NuoDB and Third Party Explorer – SQuirreL SQL Client, SQL Workbench/J and DbVisualizer Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • EPM 11.1.2.2 Architecture: Essbase

    - by Marc Schumacher
    Since a lot of components exist to access or administer Essbase, there are also a couple of client tools available. End users typically use the Excel Add-In or SmartView nowadays. While the Excel Add-In talks to the Essbase server directly using various ports, SmartView connects to Essbase through Provider Services using HTTP protocol. The ability to communicate using a single port is one of the major advantages from SmartView over Excel Add-In. If you consider using Excel Add-In going forward, please make sure you are aware of the Statement of Direction for this component. The Administration Services Console, Integration Services Console and Essbase Studio are clients, which are mainly used by Essbase administrators or application designers. While Integration Services and Essbase Studio are used to setup Essbase applications by loading metadata or simply for data loads, Administration Services are utilized for all kind of Essbase administration. All clients are using only one or two ports to talk to their server counterparts, which makes them work through firewalls easily. Although clients for Provider Services (SmartView) and Administration Services (Administration Services Console) are only using a single port to communicate to their backend services, the backend services itself need the Essbase configured port range to talk to the Essbase server. Any communication to repository databases is done using JDBC connections. Essbase Studio and Integration Services are using different technologies to talk to the Essbase server, Integration Services uses CAPI, Essbase Studio uses JAPI. However, both are using the configured port range on the Essbase server to talk to Essbase. Connections to data sources are either based on ODBC (Integration Service, Essbase) or JDBC (Essbase Studio). As for all other components discussed previously, when setting up firewall rules, be aware of the fact that all services may need to talk to the external authentication sources, this is not only needed for Shared Services.

    Read the article

  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or simply put EM12c R4) is the latest update to the product. As previous versions, this release provides tons of enhancements and bug fixes, attributing to improved stability and quality. One of the areas that is most exciting and has seen tremendous growth in the last few years is that of Database as a Service. EM12c R4 provides a significant update to Database as a Service. The key themes are: Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard) Additional Storage Options for Snap Clone (includes support for Database feature CloneDB) Improved Rapid Start Kits Extensible Metering and Chargeback Miscellaneous Enhancements 1. Comprehensive Database Service Catalog Before we get deep into implementation of a service catalog, lets first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers i recommend reading are: Service Catalogs: Defining Standardized Database Service High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service [Oracle MAA] EM12c comes with an out-of-the-box service catalog and self service portal since release 1. For the customers, it provides the following benefits: Present a collection of standardized database service definitions, Define standardized pools of hardware and software for provisioning, Role based access to cater to different class of users, Automated procedures to provision the predefined database definitions, Setup chargeback plans based on service tiers and database configuration sizes, etc Starting Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability - Single Instance (SI) or Real Application Clusters (RAC) databases with multiple data guard based standby databases. Some salient points of the data guard integration: Standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites) The standby databases can be single instance, RAC, or RAC One Node databases Multiple standby databases can be provisioned, where the maximum limit is determined by the version of database software The standby databases can be in either mount or read only (requires active data guard option) mode All database versions 10g to 12c supported (as certified with EM 12c) All 3 protection modes can be used - Maximum availability, performance, security Log apply can be set to sync or async along with the required apply lag The different service levels or service tiers are popularly represented using metals - Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combinations from the table below:  Primary  Standby [1 or more]  EM 12cR4  SI  -  SI  SI  RAC -  RAC SI  RAC RAC  RON -  RON RON where RON = RAC One Node is supported via custom post-scripts in the service template A sample service catalog would look like the image below. 
Here we have defined 4 service levels, which have been deployed across 2 data centers, and have 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer would come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to setup this sample service catalog in EM12c. 2. Additional Storage Options for Snap Clone In my previous blog posts, i have described the snap clone feature in detail. Essentially, it provides a storage agnostic, self service, rapid, and space efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%) all while cloning databases in a matter of minutes. Space and Time, two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued with the dual solution approach of Hardware and Software. In the hardware approach, we connect directly to your storage appliances and perform all low level actions required to rapidly clone your databases. While in the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low level actions. Thus delivering the benefits of database thin cloning, without requiring you to drastically changing the infrastructure or IT's operating style. In release 4, we expand the scope of options supported by snap clone with the addition of database CloneDB. While CloneDB is not a new feature, it was first introduced in 11.2.0.2 patchset, it has over the years become more stable and mature. CloneDB leverages a combination of Direct NFS (or dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the snap clone feature. For more information on cloneDB, i highly recommend reading the following sources: Blog by Tim Hall: Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2 Oracle OpenWorld Presentation by Cern: Efficient Database Cloning using Direct NFS and CloneDB The advantages of the new CloneDB integration with EM12c Snap Clone are: Space and time savings Ease of setup - no additional software is required other than the Oracle database binary Works on all platforms Reduce the dependence on storage administrators Cloning process fully orchestrated by EM12c, and delivered to developers/DBAs/QA Testers via the self service portal Uses dNFS to delivers better performance, availability, and scalability over kernel NFS Complete lifecycle of the clones managed by EM12c - performance, configuration, etc 3. Improved Rapid Start Kits DBaaS deployments tend to be complex and its setup requires a series of steps. These steps are typically performed across different users and different UIs. The Rapid Start Kit provides a single command solution to setup Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS). 
One command creates all the Cloud artifacts like Roles, Administrators, Credentials, Database Profiles, PaaS Infrastructure Zone, Database Pools and Service Templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. Rapid start kit can create complex topologies involving multiple zones, pools and service templates. It also supports standby databases and use of RMAN image backups. The Rapid Start Kit in reality is a simple emcli script which takes a bunch of xml files as input and executes the complete automation in a matter of seconds. On a full rack Exadata, it took only 40 seconds to setup PDBaaS end-to-end. This kit works for both Oracle's engineered systems like Exadata, SuperCluster, etc and also on commodity hardware. One can draw parallel to the Exadata One Command script, which again takes a bunch of inputs from the administrators and then runs a simple script that configures everything from network to provisioning the DB software. Steps to use the kit: The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup It can be run from this default location or from any server which has emcli client installed For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py For Exadata, special integration is provided to reduce the number of inputs even further. The script to use for this scenario would be dbaas/setup/exadata_cloud_setup.py The database_cloud_setup.py script takes two inputs: Cloud boundary xml: This file defines the cloud topology in terms of the zones and pools along with host names, oracle home locations or container database names that would be used as infrastructure for provisioning database services. This file is optional in case of Exadata, as the boundary is well know via the Exadata system target available in EM. Input xml: This file captures inputs for users, roles, profiles, service templates, etc. Essentially, all inputs required to define the DB services and other settings of the self service portal. Once all the xml files have been prepared, invoke the script as follows for PDBaaS: emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml          The script will prompt for passwords a few times for key users like sysman, cloud admin, SSA admin, etc. Once complete, you can simply log into EM as the self service user and request for databases from the portal. More information available in the Rapid Start Kit chapter in Cloud Administration Guide.  4. Extensible Metering and Chargeback  Last but not the least, Metering and Chargeback in release 4 has been made extensible in all possible regards. The new extensibility features allow customer, partners, system integrators, etc to : Extend chargeback to any target type managed in EM Promote any metric in EM as a chargeback entity Extend list of charge items via metric or configuration extensions Model abstract entities like no. of backup requests, job executions, support requests, etc  A slew of emcli verbs have also been added that allows administrators to create, edit, delete, import/export charge plans, and assign cost centers all via the command line. More information available in the Chargeback API chapter in Cloud Administration Guide. 5. Miscellaneous Enhancements There are other miscellaneous, yet important, enhancements that are worth a mention. 
These mostly have been asked by customers like you. These are: Custom naming of DB Services Self service users can provide custom names for DB SID, DB service, schemas, and tablespaces Every custom name is validated for uniqueness in EM 'Create like' of Service Templates Now creating variants of a service template is only a click away. This would be vital when you publish service templates to represent different database sizes or service levels. Profile viewer View the details of a profile like datafile, control files, snapshot ids, export/import files, etc prior to its selection in the service template Cleanup automation - for failed and successful requests Single emcli command to cleanup all remnant artifacts of a failed request Cleanup can be performed on a per request bases or by the entire pool As an extension, you can also delete successful requests Improved delete user workflow Allows administrators to reassign cloud resources to another user or delete all of them Support for multiple tablespaces for schema as a service In addition to multiple schemas, user can also specify multiple tablespaces per request I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck! References: Cloud Management Page on OTN Cloud Administration Guide [Documentation] -- Adeesh Fulay (@adeeshf)

    Read the article

  • Creating packages in code - Workflow

    This is just a quick one prompted by a question on the SSIS Forum: how to programmatically add a precedence constraint (aka workflow) between two tasks. To keep the code simple I've actually used two Sequence containers, which are often used as anchor points for a constraint. Very often this is when you have a task that you wish to conditionally execute based on an expression. If it is the first or only task in the package you need somewhere to anchor the constraint to, so you can then set the expression on it and control the flow of execution. Anyway, back to my code sample, here's a quick screenshot of the finished article: Now for the code, which is actually pretty simple and hopefully the comments should explain exactly what is going on.
    Package package = new Package();
    package.Name = "SequenceWorkflow";
    // Add the two sequence containers to provide anchor points for the constraint
    // If you use tasks, it follows exactly the same pattern, they all derive from Executable
    Sequence sequence1 = package.Executables.Add("STOCK:Sequence") as Sequence;
    sequence1.Name = "SEQ Start";
    Sequence sequence2 = package.Executables.Add("STOCK:Sequence") as Sequence;
    sequence2.Name = "SEQ End";
    // Add the precedence constraint, here we use the package's constraint collection
    // as it hosts the two objects we want to constrain (link)
    // The default constraint is a basic On Success constraint just like in the designer
    PrecedenceConstraint constraint = package.PrecedenceConstraints.Add(sequence1, sequence2);
    // Change the settings to use a (dummy) expression only
    constraint.EvalOp = DTSPrecedenceEvalOp.Expression;
    constraint.Expression = "1 == 1";
    The complete code file is available to download below. SequenceWorkflow.cs

    Read the article

  • Demonstrate bad code to client?

    - by jtiger
    I have a new client that has asked me to do a redesign of their website, an ASP.NET Webforms application that was developed by another consultant. It seemed straight-forward (it never is) but I took a look at the code to make sure I knew what I was in for. This application was not written well. At all. It is extremely vulnerable to SQL Injection attacks, business logic is spread throughout the entire application, a lot of duplication, and dead end code that does nothing. On top of that, it keeps throwing exceptions that are being smothered, so it all appears to be running smoothly. My job is to simply update the html and css, but much of the html is being generated in business logic and would be a nightmare for me to sort everything out. My estimates on the redesign were longer than the client was aiming for, and they are asking why so long. How can I explain to my client just how bad this code is? In their mind, the application is running great and the redesign should be a quick one-off. It's my word against the previous consultant, so how can I actually give simple, concrete examples that a non-technical client would understand?

    Read the article

  • How can we plan projects realistically while accounting for support issues?

    - by Thomas Clayson
    We're having a problem at work: we're trying to schedule work so that we can assess time scales and get deadline dates. The problem is that it's difficult to plan for a project without knowing everything that's going to happen. For instance, right now we've planned all our projects through the start of December, however in that time we will have various in house and external meetings, teleconferences and extra work. It's all well and good to say that a project will take three weeks, but if there is a week's worth of interruption in that time then the date of completion will be pushed back a week. The problem is 3 fold: When we schedule projects the time scales are taken literally. If we estimate three weeks, the deadline is set for three week's time, the client is told, and there is no room for extension. Interim work and such means that we lose productive time working on the project. Sometimes clients don't have the time that we need to take to do the work, so they'll sometimes come to us and say they need a project done by the end of the month even when we think that the work will take two months - not to mention we already have work to be doing. We have a Gantt chart which we are trying to fill in with all the projects we have and we fill in timesheets, but they're not compared to the Gantt chart at all. This makes it difficult to say "Well, we scheduled 3 weeks for this project, but we've lost a week here so the deadline has to move back a week." It's also not professional to keep missing deadlines we've communicated to the client. How do other people deal with this type of situation? How do you manage the planning of projects? How much "extra" time do you schedule into a project to account for non-project work that occurs during a project? How do you deal with support issues and bugs and stuff? Things you can't account for during planning? UPDATE Lots of good answers thank you.

    Read the article

  • Blank screen after Switch User or Resume

    - by matt wilkie
    About half the time when I switch users or resume from standby, the screen goes blank (black). If I work the cursor keys I can hear the system bell when it gets to the end of the user list. I can also successfully log in, going from memory, but the screen stays black. Sometimes closing and re-opening the lid will light up the screen again. Pressing the special Function key to enable/disable the external monitor connection has no effect [Fn]-[F5],[Fn]-[F6]. If none of the previous steps work, I need to put the computer into hibernation or full power off to restore screen function. If I watch closely when switching users I think I can see the screen initially start to light up and then quickly fade to black. The computer is an Acer Aspire 3500, model ZL6, running Ubuntu 10.10 installed 2 days ago. No proprietary drivers are in use. I'll provide a list of hardware details as soon as I can figure out how to generate that (didn't there used to be an entry for hardware details under the System menu?). Possibly related questions: No resume after Hibernate or Standby When I resume from suspension - the screen is blank Switch user fails to complete successfully For what it's worth, blank after resume also used to happen occasionally when the laptop was running XP-Home, but nowhere near as often, perhaps 6 or 8 times a year. UPDATE: I found System > Administration > System Testing and ran the Monitor test. It went very very dark, but the window elements could be discerned, and the whole screen flashed (from very very dark to black). On the third repeat of that same test the screen went to full black and stayed there. Moving the mouse (via touchpad) or touching keys did not wake it up again. I had to close the lid and put the computer into hibernate, and press the power button to restore it. UPDATE2: output of lshw: http://pastebin.com/q7n8676r, lspci: http://pastebin.com/6ujzVK4r UPDATE3: sometimes I can restore the screen by flipping to console 1 with ctrl-alt-F1 and then back to graphical with ctrl-alt-F7.

    Read the article

  • How to Reach German Developers: dotnet Cologne 2011

    - by WeigeltRo
    If you want to promote tools, technologies, libraries, trainings or anything else of interest to software developers, you want to reach the right audience. Not the 9-to-5 people, but those who have the knowledge and passion that make them important multipliers. A great way to reach these people is community conferences. They are not the kind of conference that the 9-to-5 folks are "sent to" by their company, but one that the right people hear of via Twitter, Facebook, blogs or plain old word-of-mouth and choose to go to, often covering the costs for the day themselves (travel, entrance fee, hotel, taking the day off). If you want to reach German developers there is one conference that has emerged as the large .NET community conference in Germany, quickly growing beyond being just a local event: The dotnet Cologne, that will be held for the third time on May 6, 2011 and that I'm co-organizing (this interview gives you a good idea of the history). This year's dotnet Cologne 2010 with its 300 attendees was a huge success. As in the year before, the conference was sold out weeks in advance, and feedback by attendees and sponsors was positive throughout. And the list of speakers and attendees sounded like a "who's who" of the German .NET community. Whether you'd like to present a product, a service or your company: you will meet the right target audience at dotnet Cologne. We're offering a broad variety of sponsorship opportunities, ranging from being a donor for the large raffle at the end of the day (software licenses, books, training vouchers, etc.) up to having a booth and/or giving a sponsored talk about your product (not necessarily in German, English is not a problem). Among the various sponsorship levels (bronze/silver/gold/platinum) there's most likely a package that will suit your needs – and if not, we're open to suggestions. We're happy to announce that already at this point in time (with over five months to go) we have a steadily growing list of partners: Microsoft, Intel, IDesign, SubMain, Comma Soft AG, GFU Köln, and EC Software. If you want to become a sponsor for the dotnet Cologne 2011, drop me a line at Roland.Weigelt at dotnet-koelnbonn.de and I'll send you our sponsor info.

    Read the article

  • Render on other render targets starting from one already rendered on

    - by JTulip
    I have to perform a double pass convolution on a texture that is actually the color attachment of another render target, and store it in the color attachment of ANOTHER render target. This must be done multiple times, but using the same texture as the starting point. What I do now is (a bit abstracted, but what I have abstracted is guaranteed to work on its own):
    renderOnRT(firstTarget); // This is working.
    for each other RT currRT {
        glBindFramebuffer(GL_FRAMEBUFFER, currRT.frameBufferID);
        programX.use();
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, firstTarget.colorAttachmentID);
        programX.setUniform1i("colourTexture",0);
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, firstTarget.depthAttachmentID);
        programX.setUniform1i("depthTexture",1);
        glBindBuffer(GL_ARRAY_BUFFER, quadBuffID); // quadBuffID is a VBO for a screen aligned quad. It is fine.
        programX.vertexAttribPointer(POSITION_ATTRIBUTE, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
        glDrawArrays(GL_QUADS,0,4);
        programY.use();
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, currRT.colorAttachmentID); // The second pass is done on the previous pass
        programY.setUniform1i("colourTexture",0);
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, currRT.depthAttachmentID);
        programY.setUniform1i("depthTexture",1);
        glBindBuffer(GL_ARRAY_BUFFER, quadBuffID);
        programY.vertexAttribPointer(POSITION_ATTRIBUTE, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
        glDrawArrays(GL_QUADS, 0, 4);
    }
    The problem is that I end up with black textures and not the wanted result. The GLSL programs (programX, programY) work fine, already tested on single targets. Is there something stupid I am missing? Even a hint is much appreciated, thanks!

    Read the article

  • Oracle Exadata X3 announcement at Oracle Openworld

    - by Javier Puerta
    Oracle Announces Oracle Exadata X3 Database In-Memory MachineOracle Press ReleaseFourth Generation Exadata X3 Systems are Ideal for High-End OLTP, Large Data Warehouses, and Database Clouds; Eighth-Rack Configuration Offers New Low-Cost Entry Point During his opening keynote address at Oracle OpenWorld, Oracle CEO, Larry Ellison announced the Oracle Exadata X3 Database In-Memory Machine - the latest generation of its Oracle Exadata Database Machines. The Oracle Exadata X3 Database In-Memory Machine is a key component of the Oracle Cloud. Oracle Exadata X3-2 Database In-Memory Machine and Oracle Exadata X3-8 Database In-Memory Machine can store up to hundreds of Terabytes of compressed user data in Flash and RAM memory, virtually eliminating the performance overhead of reads and writes to slow disk drives, making Exadata X3 systems the ideal database platforms for the varied and unpredictable workloads of cloud computing. In order to realize the highest performance at the lowest cost, the Oracle Exadata X3 Database In-Memory Machine implements a mass memory hierarchy that automatically moves all active data into Flash and RAM memory, while keeping less active data on low-cost disks. With a new Eighth-Rack configuration, the Oracle Exadata X3-2 Database In-Memory Machine delivers a cost-effective entry point for smaller workloads, testing, development and disaster recovery systems, and is a fully redundant system that can be used with mission critical applications. Detailed info at Oracle Exadata Database Machine

    Read the article

  • C#/XNA get hardware mouse position

    - by Sunder
    I'm using C# and trying to get the hardware mouse position. The first thing I tried was the simple XNA functionality: Vector2 position = new Vector2(Mouse.GetState().X, Mouse.GetState().Y); After that I draw the mouse as well, and compared to the Windows hardware mouse, this new mouse with XNA provided coordinates is "slacking off". By that I mean that it is behind by a few frames. For example if the game is running at 600 fps, of course it will be responsive, but at 60 fps the software mouse delay is no longer acceptable. Therefore I tried using what I thought was a hardware mouse, [DllImport("user32.dll")] [return: MarshalAs(UnmanagedType.Bool)] public static extern bool GetCursorPos(out POINT lpPoint); but the result was exactly the same. I also tried getting the Windows Forms cursor, and that was a dead end as well - it worked, but with the same delay. Messing around with XNA functionality: GraphicsDeviceManager.SynchronizeWithVerticalRetrace = true/false Game.IsFixedTimeStep = true/false did change the delay time for somewhat obvious reasons, but the bottom line is that regardless it still was behind the default Windows mouse. I've seen that some games provide an option for a hardware accelerated mouse, and in others (I think) it is on by default. Can anyone give me a lead on how to achieve that?
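
    One common approach, offered here only as a sketch, is to let the operating system draw the cursor instead of drawing a sprite at Mouse.GetState() each frame, since the OS cursor is updated independently of the game's frame rate (the game class name is hypothetical):

    public class MyGame : Microsoft.Xna.Framework.Game
    {
        public MyGame()
        {
            // Show the Windows (hardware) cursor over the game window rather
            // than rendering a custom cursor sprite in Draw().
            IsMouseVisible = true;
        }
    }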

    Read the article

  • I can't upgrade to 12.10 Quantal Quetzal

    - by JamesTheAwesomeDude
    After hearing about some of the features in 12.10, I decided to upgrade my Ubuntu from 12.04 LTS to the newer 12.10. However, when I try to upgrade it, I start the update manager, like always, and I start the distro upgrade, just like I do every time there's a new version, but at the end of the "Setting new software channels" stage, it fails, saying: Could not calculate the upgrade An unresolvable problem occurred while calculating the upgrade: E: Unable to correct problems, you have held broken packages. This can be caused by: Upgrading to a pre-release version of Ubuntu Running the current pre-release version of Ubuntu Unofficial software packages not provided by Ubuntu If none of this applies, then please report this bug using the command 'ubuntu-bug ubuntu-release-upgrader-core' in a terminal. What should I do? I understand that I have some sort of "broken" package on my system, but how do I identify and remove the problem? Note: At the start of the "Setting new software channels" stage, I was presented with this: Third party sources disabled Some third party entries in your sources.list were disabled. You can re-enable them after the upgrade with the 'software-properties' tool or your package manager.

    Read the article
