Search Results

Search found 32492 results on 1300 pages for 'reporting database'.


  • Learn how Oracle storage efficiencies can help your budget

    - by jenny.gelhausen
    Mark Your Calendar! Live Webcast: Next Generation Storage Management Solutions, Wednesday, March 24th, 2010 at 9:00am PT (or your local time). Please plan to join us for this webcast, where Forrester senior analyst Andrew Reichman will discuss the pillars of storage efficiency, how to measure and improve it, and how this can help your business immediately alleviate budget pressures. Joining Mr. Reichman are Phil Stephenson, Senior Principal Product Manager at Oracle, and Matthew Baier, Oracle Product Director, who will explain the next-generation storage capabilities available in Oracle Database 11g and Oracle Exadata. Register for this March 24th live webcast today!

    Read the article

  • Oracle Supply Chain builds momentum in the Press

    - by [email protected]
    SCM coverage in early '10 was dominated by major product announcements. The release of Oracle Global Trade Management and Oracle Transportation Management 6.1 garnered ten unique articles. SearchOracle.com and Supply Chain Management Review primarily focused on the compliance aspect of the announcement while Managing Automation concentrated on the new trade management capabilities. Elsewhere, there was a lot of interest around the new 'Green Dashboard' as reported by Modern Materials Handling, Environmental Leader and TMCnet. Other SCM news included the announced integration of Oracle Hyperion Planning and Demantra S&OP as reported by Database Trends and Applications and Treasury & Risk.

    Read the article

  • Ways to use your skills as a developer to give back to the community/charities.

    - by Ryan Hayes
    Recently I came upon a community event called GiveCamp. GiveCamp is a weekend-long event where technology professionals from designers, developers and database administrators to marketers and web strategists donate their time to provide solutions for non-profit organizations. Since its inception in 2007, the GiveCamp program has provided benefits to over 150 charities, with a value of developer and designer time exceeding $1,000,000 in services! Coming from a very rural part of the country where there is a huge opportunity for charity events like this, it got me wondering. Are there other large movements like GiveCamp that are out there? GiveCamp is sponsored by Microsoft, so of course most are run through .NET user groups. Are there other flavors of it? Different types? Java/Python/other open source charity movements? If not, how do you give back?

    Read the article

  • I'm a happy camper

    - by Paul Nielsen
    I’m satisfied with SQL Server 2008. It meets my needs and I can build whatever I want in the database. There’s no feature it lacks that is blocking my development. SQL Server 2008 is what I see when I close my eyes and dream. Maybe I’m just pleased with my current projects, maybe I’m being dense, maybe I lack imagination; tell me if I’m being stupid, but I feel no compelling need for an upgrade. If Microsoft didn’t ship the new version for 3 or 5 more years, I’d be OK with that. I’m sure there...(read more)

    Read the article

  • Big Data – What is Big Data – 3 Vs of Big Data – Volume, Velocity and Variety – Day 2 of 21

    - by Pinal Dave
    Data is forever. Think about it – are you using any application, as it is, that was built 10 years ago? Are you using any piece of hardware that was built 10 years ago? The answer is almost certainly no. However, if I ask whether you are using any data that was captured 50 years ago, the answer is almost certainly yes. For example, look at the history of our nation. I am from India and we have documented history that goes back over thousands of years. Or just look at our birthday data – we are still using it today. Data never gets old; it is going to stay there forever. The applications that interpret and analyze data have changed, but the data has remained in its purest format in most cases. As organizations have grown, the data associated with them has also grown exponentially, and today there is a lot of complexity to that data. Most big organizations have data in multiple applications and in different formats. The data is also spread out so much that it is hard to categorize it with a single algorithm or logic. The mobile revolution we are experiencing right now has completely changed how we capture data and build intelligent systems. Big organizations are indeed facing challenges in keeping all their data on a platform that gives them a single consistent view of it. This unique challenge – making sense of all the data coming in from different sources and deriving useful, actionable information out of it – is the revolution the Big Data world is facing.
    Defining Big Data The 3Vs that define Big Data are Variety, Velocity and Volume.
    Volume We currently see exponential growth in data storage, as data is now much more than text. We find data in the format of videos, music and large images on our social media channels. It is very common for enterprises to have Terabytes – and even Petabytes – of storage. As the database grows, the applications and architecture built to support the data need to be re-evaluated quite often. Sometimes the same data is re-evaluated from multiple angles, and even though the original data is the same, the newly found intelligence creates an explosion of data. This big volume indeed represents Big Data.
    Velocity The data growth and social media explosion have changed how we look at data. There was a time when we used to believe that yesterday's data was recent; as a matter of fact, newspapers still follow that logic. However, news channels and radio have changed how fast we receive the news. Today, people rely on social media to keep them updated with the latest happenings. On social media, a message that is only a few seconds old (a tweet, a status update, etc.) is often already of no interest to users; they discard old messages and pay attention to recent updates. Data movement is now almost real time, and the update window has been reduced to fractions of a second. This high-velocity data represents Big Data.
    Variety Data can be stored in multiple formats – for example database, Excel, CSV, Access or, for that matter, a simple text file. Sometimes the data is not even in a traditional format as we assume; it may be in the form of video, SMS, PDF or something we might not have thought about. It is the need of the organization to arrange it and make it meaningful. It would be easy to do so if all the data were in the same format, but that is rarely the case. The real world has data in many different formats, and that is the challenge we need to overcome with Big Data. This variety of data represents Big Data.
    Big Data in Simple Words Big Data is not just about lots of data; it is actually a concept that provides an opportunity to find new insight into your existing data, as well as guidelines to capture and analyze your future data. It makes any business more agile and robust, so it can adapt to and overcome business challenges.
    Tomorrow In tomorrow’s blog post we will discuss the evolution of Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Larry Ellison and Mark Hurd on Oracle Cloud

    - by arungupta
    Oracle Cloud provides Java and Database as Platform Services and Customer Relationship Management, Human Capital Management, and Social Network as Application Services. Watch a live webcast with Larry Ellison and Mark Hurd on announcements about Oracle Cloud. Date: Wednesday, June 06, 2012. Time: 1:00 p.m. PT – 2:30 p.m. PT. Register here for the webinar. You can also attend the live event by registering here. Oracle Cloud is by invitation only at this time and you can register for access here.

    Read the article

  • SQLAuthority News – Technical Review of Learning at Koenig Solutions

    - by pinaldave
    Yesterday I finished my three-day fast-track, in-person course, End to End SQL Server Business Intelligence, at Koenig Solutions. You can read my previous article over here regarding why I am learning SQL Server. Yesterday I blogged about my experience of arriving at the training center and my induction with the center.
    The Training Days I had enrolled for three days of training, so my routine was very much the same on each of the three days. However, the content was different every day, as I was learning something new each day. Let me describe a few of the interesting details of my daily routine.
    A Single-Student Batch The best part of my training was that I was the only student in my batch. Koenig is known for smaller batches and often has single-student batches as well. I was very much delighted to know that I would have dedicated access to, and attention from, my trainer. In most of the labs I observed there were no more than 4 students at any time. Prakash and Pinal
    7:30 AM Breakfast Talk All the students gather at 7:30 in the breakfast area – the best time of the day. I was the only Indian student in the group; the other students were from the USA, Canada, Nigeria, Bhutan, Tanzania, and a few other countries. I immediately became the source of information and the reference manual. Though the distance between Delhi and Bangalore is more than 2,000 km, I was considered a local guy.
    8:30 AM Heading to the Training Center Every day, without fail, at 8:30 the van started from our accommodation to the training center. As mentioned in an earlier blog post, the distance is about 5 minutes and we were able to reach the location before 8:45. This gave us some time to settle in before our class started at 9:00 AM.
    9:00 AM Order Lunch It may sound funny that we had just had breakfast 30 minutes earlier, but the first thing everybody has to do as soon as the class starts is order lunch. There is an online training portal to order food for the day; everybody has to place their order early in the day so the food arrives on time at lunch. The options are plenty and everybody can order whatever they like.
    9:05 AM Learning Starts After deciding on lunch we started the learning. I was very fortunate to have a very experienced trainer – Prakash Chheatry. Though I had never met him before, I have heard a lot about Prakash. He is known as the topmost SQL Server trainer in India. His student list contains some of the very well-known SQL Server experts of the world and a few SQL Server “best seller” book authors. Learning continues till 1:00 PM with one tea/coffee break in between.
    1:00 PM Lunch Lunch time is again fun time. All the students get together in the afternoon and tell the stories of the world. Indeed the best part of the day besides learning new stuff.
    4:55 PM Ready to Return We stop at 4:55, as at precisely 5:00 PM the van stops by the institute to take us back to our accommodation. Trust me, it is always a seriously long day, but the amount of learning is the win of the day.
    7:30 PM Dinner Time After coming back to the accommodation I study till 7:30 and then rush for dinner. Dinner is world cuisine and the desserts are really delicious. After dinner every day I have written a blog post and retired early, as the next day is always going to be busier than the present day.
    What did I learn As I mentioned earlier, I know SQL Server fairly well. I had expressed the same in my conversation as well. This is the reason I was assigned a fairly senior trainer and we covered everything quite quickly. As I already knew quite a few things, we went pretty fast through many topics. There were a few things I wanted to learn in detail as well as practice in the labs, so we slowed down where we wanted to and rushed through the concepts where I was very comfortable. Here is the list of the things we covered in the action-packed three days: Introduction to Business Intelligence (Intro), SQL Server Analysis Services (Theory and Lab), SQL Server Integration Services (Theory and Lab), SQL Server Reporting Services (Theory and Lab), SQL Server PowerPivot (Lab), UDM (Theory), SharePoint Concepts (Theory), Power View (Demo), Business Intelligence and Security (Discussion). I was delighted that I was able to refresh lots of concepts during these three days. Thanks to my trainer and my friend who helped me have a good learning experience. I believe all the learning will help me in my growth and future career. With this I end this experience. I am planning to have another online learning experience later this month. I will blog about that experience as I begin it. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology

    Read the article

  • Application Lifecycle Management with Visual Studio 2010 – Wrox Book

    - by Guy Harwood
    After running with a somewhat disconnected set of tools (VS 2008, OnTime, SharePoint 2007) for managing our projects, we decided to make the move to Team Foundation Server 2010. With limited coverage of the product available online I went in search of a book and found this… View this book on the Wrox website. I must point out that I have only read 10 of the 26 chapters so far, mainly the ones that cover source code control, work item tracking and database projects. This enables our dev team to get familiar with it before switching project management over at a future date. Needless to say I am very impressed with the detail it provides, answering pretty much every question I had about TFS so far. I'm looking forward to digging into the sections on testing, code analysis and architecture. Highly recommended.

    Read the article

  • Microsoft Townhall, An Example for Azure and MVC

    - by Shaun
    Microsoft just released an example named Microsoft Townhall, which was built and deployed on Azure. It uses ASP.NET MVC as its website framework, with SQL Azure plus LINQ to SQL as its database and ORM framework. You can download the source code at the MSDN Code Gallery. Besides the Azure side, it might be even more useful for us to learn how they utilized ASP.NET MVC. From a very quick review I found it utilizes Enterprise Library Unity as the main IoC container for controllers, services and repositories, and customizes a lot of ModelBinders, Filters, etc. Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
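
    The post does not reproduce the Townhall wiring itself, but the IoC approach it describes usually looks something like the hedged C# sketch below; the controller factory, the registrations and all type names are illustrative assumptions, not code taken from the Townhall source.

        using System;
        using System.Web.Mvc;
        using System.Web.Routing;
        using Microsoft.Practices.Unity;

        // Hypothetical controller factory: resolves MVC controllers (and, transitively,
        // their service/repository dependencies) from a Unity container.
        public class UnityControllerFactory : DefaultControllerFactory
        {
            private readonly IUnityContainer _container;

            public UnityControllerFactory(IUnityContainer container)
            {
                _container = container;
            }

            protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
            {
                // Fall back to the default behaviour when no controller type was matched.
                return controllerType == null
                    ? base.GetControllerInstance(requestContext, controllerType)
                    : (IController)_container.Resolve(controllerType);
            }
        }

        public static class IocBootstrapper
        {
            // Called once from Application_Start in Global.asax.
            public static void Configure()
            {
                IUnityContainer container = new UnityContainer();

                // Illustrative registrations only; the real Townhall interfaces differ.
                // container.RegisterType<IQuestionRepository, SqlQuestionRepository>();

                ControllerBuilder.Current.SetControllerFactory(new UnityControllerFactory(container));
            }
        }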

    Read the article

  • Data Mining Resources

    - by Dejan Sarka
    There are many different types of analyses, each one with its own pros and cons. Relational reports have a predefined structure, and end users cannot change it. They are simple to use for end users. Reports can use real-time data and snapshots of data to show the state of a report at specific points in time. One of the drawbacks is that report authoring is limited to IT pros and advanced users. Any kind of dynamic restructuring is very limited. If real-time data is used for a report, the report has a negative impact on the performance of the source system. Processing of the reports might be slow because the data comes from relational database management systems, which are not optimized for reporting only. If you create a semantic model of your data, your end users can create ad-hoc report structures. However, the development is more complex because a developer is needed to create these semantic models. For OLAP, you typically use specialized database management systems. You get lightning speed of analyses. End users can use rich and thin clients to interactively change the structure of the report. Typically, they do it graphically. However, the development of an OLAP system is many times quite complex. It involves the preparation and maintenance of an enterprise data warehouse and OLAP cubes. In order to exploit the possibility of real-time restructuring of reports, the users must be both active and educated. The data is usually stale, as it is loaded into data warehouses and OLAP cubes with a scheduled process. With data mining, a structure is not selected in advance; it searches for the structure. As a result, data mining can give you the most valuable results because you can discover patterns you did not expect. A data mining model structure is limited only by the attributes that you use to train the model. One of the drawbacks is that a lot of knowledge is needed for a successful data mining project. End users have to understand the results. Subject matter experts and IT professionals need to understand business problem thoroughly. The development might be sometimes even more complex than the development of OLAP cubes. Each type of analysis has its own place in an enterprise system. SQL Server has tools for all kinds of analyses. However, data mining is the most advanced way of analyzing the data; this is the “I” in BI. In order to get the most out of it, you need to learn quite a lot. In this blog post, I am gathering together resources for learning, including forthcoming events. Books Multiple authors: SQL Server MVP Deep Dives – I wrote an introductory data mining chapter there. Erik Veerman, Teo Lachev and Dejan Sarka: MCTS Self-Paced Training Kit (Exam 70-448): Microsoft SQL Server 2008 - Business Intelligence Development and Maintenance – you can find a good overview of a complete BI solution, including data mining, in this book. Jamie MacLennan, ZhaoHui Tang, and Bogdan Crivat: Data Mining with Microsoft SQL Server 2008 – can’t miss this book if you want to mine your data with SQL Server tools. Michael Berry, Gordon Linoff: Mastering Data Mining: The Art and Science of Customer Relationship Management – data mining from both, business and technical perspective. Dorian Pyle: Data Preparation for Data Mining – an in-depth book about data preparation. Thomas and Ronald Wonnacott: Introductory Statistics – if you thought that you could get away without statistics, then you are not serious about data mining. 
    Jiawei Han and Micheline Kamber: Data Mining Concepts and Techniques – in-depth explanation of the most popular data mining algorithms. Michael Berry and Gordon Linoff: Data Mining Techniques – another book that explains data mining algorithms, more from a business perspective. Paolo Guidici: Applied Data Mining – a very mathematical book, only if you enjoy statistics and mathematics in general. Forthcoming presentations I am presenting two data mining-related sessions during the PASS Summit in Charlotte, NC: Wednesday, October 16th, 2013 - Fraud Detection: Notes from the Field – I am showing how to use data mining for a specific business problem. The presentation is based on real-life projects. Friday, October 18th: Excel 2013 Advanced Analytics – I am focusing on the Excel Data Mining Add-ins, and how to use them together with Power Pivot and other add-ins. This is the most you can get out of Excel. Sinergija 2013, Belgrade, Serbia Tuesday, October 22nd: Excel 2013 Analytics to the Max – another presentation focusing on the most advanced analytics you can get in Excel. SQL Rally Amsterdam, Netherlands Thursday, November 7th: Advanced Analytics in Excel 2013 – and again I am presenting about data mining in Excel. Why three different titles for the same presentation? I don’t know, I guess I forgot the name I proposed every time right after I sent the proposal. Courses Data Mining with SQL Server 2012 – I wrote a 3-day course for SolidQ. If you are interested in this course, which I could also deliver as a shorter seminar, you can contact your closest SolidQ subsidiary or, of course, me directly at [email protected] or [email protected]. This course could also complement the existing courseware portfolio of training providers, who are welcome to contact me as well. OK, now you know: no more excuses – start learning data mining and get the most out of your data.

    Read the article

  • Editing sqlcmdvariable nodes in SSDT Publish Profile files using msbuild

    - by jamiet
    Publish profile files are a new feature of SSDT database projects that enable you to package up all environment-specific properties into a single file for use at publish time; I have written about them before at Publish Profile Files in SQL Server Data Tools (SSDT) and if it wasn’t obvious from that blog post, I’m a big fan! As I have used Publish Profile files more and more I have realised that there may be times when you need to edit those Publish profile files during your build process, you may think of such an operation as a kind of pre-processor step. In my case I have a sqlcmd variable called DeployTag, it holds a value representing the current build number that later gets inserted into a table using a Post-Deployment script (that’s a technique that I wrote about in Implementing SQL Server solutions using Visual Studio 2010 Database Projects – a compendium of project experiences – search for “Putting a build number into the DB”). Here are the contents of my Publish Profile file (simplified for demo purposes) : Notice that DeployTag defaults to “UNKNOWN”. On my current project we are using msbuild scripts to control what gets built and what I want to do is take the build number from our build engine and edit the Publish profile files accordingly. Here is the pertinent portion of the the msbuild script I came up with to do that:   <ItemGroup>     <Namespaces Include="myns">       <Prefix>myns</Prefix>       <Uri>http://schemas.microsoft.com/developer/msbuild/2003</Uri>     </Namespaces>   </ItemGroup>   <Target Name="UpdateBuildNumber">     <ItemGroup>       <SSDTPublishFiles Include="$(DESTINATION)\**\$(CONFIGURATION)\**\*.publish.xml" />     </ItemGroup>     <MSBuild.ExtensionPack.Xml.XmlFile Condition="%(SSDTPublishFiles.Identity) != ''"                                        TaskAction="UpdateElement"                                        File="%(SSDTPublishFiles.Identity)"                                        Namespaces="@(Namespaces)"                                         XPath="//myns:SqlCmdVariable[@Include='DeployTag']/myns:Value"                                         InnerText="$(BuildNumber)"/>   </Target> The important bits here are the definition of the namespace http://schemas.microsoft.com/developer/msbuild/2003: and the XPath expression //myns:SqlCmdVariable[@Include='DeployTag']/myns:Value: Some extra info: I use a fantastic tool called XMLPad to discover/test XPath expressions, read more at XMLPad – a new tool in my developer utility belt MSBuild.ExtensionPack.Xml.XmlFile is a msbuild task used to edit XML files and is available from Mike Fourie’s MSBuild Extension Pack I’m using a property called $(BuildNumber) to hold the value to substitute into the file and also $(DESTINATION)\**\$(CONFIGURATION)\**\*.publish.xml to define an ItemGroup all of my Publish Profile files. Populating those properties is basic msbuild stuff and is therefore outside the scope of this blog post however if you want to learn more check out MSBuild properties & How To: Use Wildcards to Build All Files in a Directory. Hope this is useful! @Jamiet
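
    If you want to test that namespace mapping and XPath outside of msbuild first, a hedged C# sketch using System.Xml performs the equivalent edit; the file path and build-number value below are placeholders, not values from the original script.

        using System.Xml;

        class PublishProfilePatcher
        {
            static void Main()
            {
                const string path = @"C:\temp\MyDatabase.publish.xml"; // placeholder path
                const string buildNumber = "1.2.3.4";                  // placeholder value

                var doc = new XmlDocument();
                doc.Load(path);

                // Same namespace prefix mapping and XPath expression as in the msbuild script above.
                var ns = new XmlNamespaceManager(doc.NameTable);
                ns.AddNamespace("myns", "http://schemas.microsoft.com/developer/msbuild/2003");

                XmlNode valueNode = doc.SelectSingleNode(
                    "//myns:SqlCmdVariable[@Include='DeployTag']/myns:Value", ns);

                if (valueNode != null)
                {
                    valueNode.InnerText = buildNumber;
                    doc.Save(path);
                }
            }
        }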

    Read the article

  • Oracle Fusion Middleware Innovation Award Winners 2012: ADF & Fusion Development

    - by Dana Singleterry
    Oracle Fusion Middleware Innovation Awards honor customers for their cutting-edge solutions using Oracle Fusion Middleware. Winners are selected based on the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the originality of architecture. The awards were presented during Oracle OpenWorld 2012, and the following winners are for the category of ADF & Fusion Development. Micros – an OPN Platinum partner – has been working closely with Oracle product management teams in applying industry best practices in the development of their solutions. Their current application suite for the hospitality industry was built on Oracle Forms and the Oracle database running on MS Windows. The next generation of this suite is being developed and released in modules that are now based on Oracle FMW (including ADF) 11g technologies and Oracle Database 11g, all running on Oracle Linux. The primary driver was modernization, and hence Oracle ADF was selected to provide a rich UI for business processes that could be served up through traditional methods or through mobile devices globally. SOA Suite & ADF allowed for loosely-coupled services that could evolve with the needs of the business. Micros's application innovations include the use of business application portlets that have been published from ADF Faces Task Flows generated using WebCenter portlet libraries & Oracle Metadata Services (MDS), with multi-layered customizations using Oracle WebCenter Composer. PCS (Marfin Egnatia Bank of Greece) – PCS Wealth Management is a WM Software Solution which captures and automates the WM business processes, allowing Service Providers to allocate enough time and effort to Customer Service and Investment Strategies, under Advisory or Execution-Only Services. The Product is built upon the latest Web Technologies and ensures Best Practices covering all functional expectations, meeting local regulatory requirements and discovering successful opportunities for the WM Customers' Portfolios. The new unified Wealth Management system offers an unparalleled User Interface, taking full advantage of the user-friendly ADF Faces Components to a great extent, all serving Private Banking purposes. The application offers a true Account Officer Cockpit with shallow navigation, one-click access to informed decisions and perfect customer service. ADF Grids and Pivots, the Data Visualization Components, as well as the Calendar and Map Components, are cleverly used to help the user eliminate the usage of Excel, Outlook and other systems.
    PCS's application is unique in the way it leverages the ADF Faces data visualization components to create a truly attractive and insightful dashboard. PCS Wealth Management Demo Qualcomm – Qualcomm, a $17B per year company, designs and sells semiconductor products for wireless telecommunications, mobile and computing markets. In addition, Qualcomm companies provide various hardware and software products to facilitate the design, development and deployment of phones and the applications that run on them. Qualcomm’s challenge has been to not only develop and deploy new business system functions to keep pace with customer demand, but also to provide a customer collaboration capability that is sufficiently robust, easy to use, and flexible to meet emerging and future needs. Qualcomm has taken successful steps in building and deploying the customer engagement platform, leveraging various Oracle technologies including Fusion Middleware (ADF, SOA, OBIEE) and their proven ERP foundation of EBS and 11g databases. The new platform delivers a more unified and “seamless” business solution with a consistent, modern “look and feel”, all based on standard business processes which facilitate efficient collaboration between Qualcomm and its customers. The look and feel leverages ADF in innovative ways and includes hover-over navigation, custom pagination components, and skinning. Qualcomm has exposed a services layer that provides significant functionality including order-to-ship, quote-to-order, customer on-boarding and contract validation. Qualcomm's creative designs leverage Oracle's SOA Suite to integrate with Oracle EBS and disparate applications to provide a rich user interface through the use of Oracle ADF Faces Rich Client Components, providing a self-service solution to their customers.

    Read the article

  • CQRS - Benefits

    - by Dylan Smith
    Thanks to all the comments and feedback from the last post I think I have a better understanding now of the benefits of CQRS (separate from the benefits of Event Sourcing). I’m going to try and sum it up here, and point out some areas where I could still use some advice: CQRS Benefits Sounds like the primary benefit of CQRS as an architecture is it allows you to create a simpler domain model by sucking out everything related to queries. I can definitely see the benefit to this, in general the domain logic related to commands is the high-value behavior in the software, but the logic required to service the queries would add a lot of low-value “noise” to the domain model that would dilute the high-value (command) behavior – sorting, paging, filtering, pre-fetch paths, etc. Also the most appropriate domain structure for implementing commands might not be the most optimal for implementing queries. To paraphrase Greg, this usually results in a domain model that is mediocre at both, piss-poor at one, or more likely piss-poor at both commands and queries. Not only will you be able to simplify your domain model by pulling out all the query logic, but at least a handful of commands in most systems will probably be “pass-though” type commands with little to no logic that just generate events. If these can be implemented directly in the command-handler and never touch the domain model, this allows you to slim down the domain model even more. Also, if you were to do event sourcing without CQRS, you no longer have a database containing the current state (only the domain model would) which makes it difficult (or impossible) to support ad-hoc querying and/or reporting that is common in most business software. Of course CQRS provides some great scalability benefits, not only scalability but I have to assume that it provides extremely low latency for most operations, especially if you have an asynchronous event bus. I know Greg says that you get a 3x scaling (Commands, Queries, Client) of your ability to perform parallel development, but IMHO, it seems like it only provides 1.5x scaling since even without CQRS you’re going to have your client loosely coupled to your domain - which is still a great benefit to be able to realize. Questions / Concerns If all the queries against an aggregate get pulled out to the Query layer, what if the only commands for that aggregate can be handled in a “pass-through” manner with the command handler directly generating events. Is it possible to have an aggregate that isn’t modeled in the domain model? Are there any issues or downsides to this? I know in the feedback from my previous posts it was suggested that having one domain model handling both commands and queries requires implementing a lot of traversals between objects that wouldn’t be necessary if it was only servicing commands. My question is, do you include traversals in your domain model based on the needs of the code, or based on the conceptual domain model? If none of my Commands require a Customer.Orders traversal, but the conceptual domain includes the concept of a set of orders belonging to a customer – should I model that in my domain model or not? I like the idea of using the Query side of the architecture as a place to put junior devs where the risk of them screwing something up has minimal impact. But I’m not sold on the idea that you can actually outsource it. 
Like I said in one of my comments on my previous post, the code to handle a query and generate DTO’s is going to be dead simple, but the code to process events and apply them to the tables on the query side is going to require a significant amount of domain knowledge to know which events to listen for to update each of the de-normalized tables (and what changes need to be made when each event is processed). I don’t know about everybody else, but having Indian/Russian/whatever outsourced developers have to do anything that requires significant domain knowledge has never been successful in my experience. And if you need to spec out for each new query which events to listen to and what to do with each one, well that’s probably going to be just as much work to document as it would be to just implement it. Greg made the point in a comment that doing an aggregate query like “Total Sales By Customer” is going to be inefficient if you use event sourcing but not CQRS. I don’t understand why that would be the case. I imagine in that case you’d simply have a method/property on the Customer object that calculated total sales for that customer by enumerating over the Orders collection. Then the application services layer would generate DTO’s off of the Customers collection that included say the CustomerID, CustomerName, TotalSales, or whatever the case may be. As long as you use a snapshotting implementation, I don’t see why that would be anymore inefficient in a DDD+Event Sourcing implementation than in a typical DDD implementation. Like I mentioned in my last post I still have some questions about query logic that haven’t been answered yet, but before I start asking those I want to make sure I have a strong grasp on what benefits CQRS provides.  My main concern with the query logic was that I know I could just toss it all into the query side, but I was concerned that I would be losing the benefits of using CQRS in the first place if I did that.  I want to elaborate more on this though with some example situations in an upcoming post.
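
    To make the "Total Sales By Customer" example discussed above concrete, here is a minimal, hedged C# sketch of what is described – a calculated property on the aggregate plus a DTO projection in the application services layer; all class and property names are illustrative, not taken from any real CQRS codebase.

        using System.Collections.Generic;
        using System.Linq;

        // Illustrative domain types only.
        public class Order
        {
            public decimal Total { get; set; }
        }

        public class Customer
        {
            private readonly List<Order> _orders = new List<Order>();

            public int CustomerId { get; set; }
            public string Name { get; set; }
            public IList<Order> Orders { get { return _orders; } }

            // The calculated property discussed above: total sales derived by
            // enumerating the Orders collection on the aggregate.
            public decimal TotalSales { get { return _orders.Sum(o => o.Total); } }
        }

        public class CustomerSalesDto
        {
            public int CustomerId { get; set; }
            public string CustomerName { get; set; }
            public decimal TotalSales { get; set; }
        }

        public static class CustomerQueries
        {
            // Application-services-layer projection: builds DTOs straight off the
            // in-memory aggregates, as suggested in the discussion above.
            public static IEnumerable<CustomerSalesDto> TotalSalesByCustomer(IEnumerable<Customer> customers)
            {
                return customers.Select(c => new CustomerSalesDto
                {
                    CustomerId = c.CustomerId,
                    CustomerName = c.Name,
                    TotalSales = c.TotalSales
                });
            }
        }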

    Read the article

  • SSIS Snack: Data Flow Source Adapters

    - by andyleonard
    Introduction Configuring a Source Adapter in a Data Flow Task couples (binds) the Data Flow to an external schema. This has implications for dynamic data loads. "Why Can't I...?" I'm often asked a question similar to the following: "I have 17 flat files with different schemas that I want to load to the same destination database - how many Data Flow Tasks do I need?" I reply "17 different schemas? That's easy, you need 17 Data Flow Tasks." In his book Microsoft SQL Server 2005 Integration Services...(read more)

    Read the article

  • Are There Any Examples of Uncle Bob's High-Falutin' Architecture?

    - by Jordan
    I just finished watching this presentation by Uncle Bob (as well as his "Architecture" section of his "Clean Code" videos), but I'm left wondering: Are there any examples out there of applications that implement this Entity-Boundary-Interactor (or Entity-Boundary-Controller) structure? At one point I downloaded the source code to FitNesse (the acceptance testing project he mentions often as an example of not only high test coverage but good architecture, since they were able to defer the decision to not use a database until the very end), and based on a quick glance of it it appears even this project doesn't seem to fit this pattern. Are there any nontrivial examples of this architecture out in the wild, or should I not bother even looking into it and chalk it up as "it would be great if you could get there, but nobody really does"?

    Read the article

  • Android -> Ruby Server Interface -> Mongodb

    - by MRabRabbit
    I've been racking my brain about this for a few days. I'll run my scenario by you and hopefully you can help me. In my head this is how it goes: I have an Android app. I want my Android app to make (function) calls to a MongoDB database via a Ruby interface on the server. E.g. the Android app sends an HTTP GET with the function name, let's say getFriends for this user. The Ruby interface receives this request from the app, grabs a thread from a thread pool and calls the appropriate function, implemented in Ruby, against MongoDB. The Ruby interface gets the results from MongoDB and sends an HTTP POST back to the Android app. So that's how I think it works. I know about the Ruby driver for MongoDB and interacting with MongoDB from Ruby, but how do I make a Ruby back end listen for incoming messages, and should these messages be done through sockets or an HTTP interface à la Net::HTTP in Ruby?

    Read the article

  • SQL SERVER – Simple Example of Incremental Statistics – Performance improvements in SQL Server 2014 – Part 2

    - by Pinal Dave
    This is the second part of the series Incremental Statistics. Here is the index of the complete series. What is Incremental Statistics? – Performance improvements in SQL Server 2014 – Part 1 Simple Example of Incremental Statistics – Performance improvements in SQL Server 2014 – Part 2 DMV to Identify Incremental Statistics – Performance improvements in SQL Server 2014 – Part 3 In part 1 we have understood what is incremental statistics and now in this second part we will see a simple example of incremental statistics. This blog post is heavily inspired from my friend Balmukund’s must read blog post. If you have partitioned table and lots of data, this feature can be specifically very useful. Prerequisite Here are two things you must know before you start with the demonstrations. AdventureWorks – For the demonstration purpose I have installed AdventureWorks 2012 as an AdventureWorks 2014 in this demonstration. Partitions – You should know how partition works with databases. Setup Script Here is the setup script for creating Partition Function, Scheme, and the Table. We will populate the table based on the SalesOrderDetails table from AdventureWorks. -- Use Database USE AdventureWorks2014 GO -- Create Partition Function CREATE PARTITION FUNCTION IncrStatFn (INT) AS RANGE LEFT FOR VALUES (44000, 54000, 64000, 74000) GO -- Create Partition Scheme CREATE PARTITION SCHEME IncrStatSch AS PARTITION [IncrStatFn] TO ([PRIMARY], [PRIMARY], [PRIMARY], [PRIMARY], [PRIMARY]) GO -- Create Table Incremental_Statistics CREATE TABLE [IncrStatTab]( [SalesOrderID] [int] NOT NULL, [SalesOrderDetailID] [int] NOT NULL, [CarrierTrackingNumber] [nvarchar](25) NULL, [OrderQty] [smallint] NOT NULL, [ProductID] [int] NOT NULL, [SpecialOfferID] [int] NOT NULL, [UnitPrice] [money] NOT NULL, [UnitPriceDiscount] [money] NOT NULL, [ModifiedDate] [datetime] NOT NULL) ON IncrStatSch(SalesOrderID) GO -- Populate Table INSERT INTO [IncrStatTab]([SalesOrderID], [SalesOrderDetailID], [CarrierTrackingNumber], [OrderQty], [ProductID], [SpecialOfferID], [UnitPrice],   [UnitPriceDiscount], [ModifiedDate]) SELECT     [SalesOrderID], [SalesOrderDetailID], [CarrierTrackingNumber], [OrderQty], [ProductID], [SpecialOfferID], [UnitPrice],   [UnitPriceDiscount], [ModifiedDate] FROM       [Sales].[SalesOrderDetail] WHERE      SalesOrderID < 54000 GO Check Details Now we will check details in the partition table IncrStatSch. -- Check the partition SELECT * FROM sys.partitions WHERE OBJECT_ID = OBJECT_ID('IncrStatTab') GO You will notice that only a few of the partition are filled up with data and remaining all the partitions are empty. Now we will create statistics on the Table on the column SalesOrderID. However, here we will keep adding one more keyword which is INCREMENTAL = ON. Please note this is the new keyword and feature added in SQL Server 2014. It did not exist in earlier versions. -- Create Statistics CREATE STATISTICS IncrStat ON [IncrStatTab] (SalesOrderID) WITH FULLSCAN, INCREMENTAL = ON GO Now we have successfully created statistics let us check the statistical histogram of the table. Now let us once again populate the table with more data. This time the data are entered into a different partition than earlier populated partition. 
-- Populate Table INSERT INTO [IncrStatTab]([SalesOrderID], [SalesOrderDetailID], [CarrierTrackingNumber], [OrderQty], [ProductID], [SpecialOfferID], [UnitPrice],   [UnitPriceDiscount], [ModifiedDate]) SELECT     [SalesOrderID], [SalesOrderDetailID], [CarrierTrackingNumber], [OrderQty], [ProductID], [SpecialOfferID], [UnitPrice],   [UnitPriceDiscount], [ModifiedDate] FROM       [Sales].[SalesOrderDetail] WHERE      SalesOrderID > 54000 GO Let us check the status of the partition once again with following script. -- Check the partition SELECT * FROM sys.partitions WHERE OBJECT_ID = OBJECT_ID('IncrStatTab') GO Statistics Update Now here has the new feature come into action. Previously, if we have to update the statistics, we will have to FULLSCAN the entire table irrespective of which partition got the data. However, in SQL Server 2014 we can just specify which partition we want to update in terms of Statistics. Here is the script for the same. -- Update Statistics Manually UPDATE STATISTICS IncrStatTab (IncrStat) WITH RESAMPLE ON PARTITIONS(3, 4) GO Now let us check the statistics once again. -- Show Statistics DBCC SHOW_STATISTICS('IncrStatTab', IncrStat) WITH HISTOGRAM GO Upon examining statistics histogram, you will notice that now the distribution has changed and there is way more rows in the histogram. Summary The new feature of Incremental Statistics is indeed a boon for the scenario where there are partitions and statistics needs to be updated frequently on the partitions. In earlier version to update statistics one has to do FULLSCAN on the entire table which was wasting too many resources. With the new feature in SQL Server 2014, now only those partitions which are significantly changed can be specified in the script to update statistics. Cleanup You can clean up the database by executing following scripts. -- Clean up DROP TABLE [IncrStatTab] DROP PARTITION SCHEME [IncrStatSch] DROP PARTITION FUNCTION [IncrStatFn] GO Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SQL Statistics, Statistics

    Read the article

  • Comparing a saved movement with other movement with Kinect

    - by Ewerton
    I need to develop an application where a user (a physiotherapist) will perform a movement in front of the Kinect; I'll write the movement data to the database, and then a patient will try to imitate this motion. The system will calculate the similarity between the recorded movement and the one being executed. My first idea is, during recording (every 5 seconds, for example), to store the positions (x, y, z) of the points and then compare them at execution time (by the patient). I know that this approach is too simple, because I imagine that in people of different sizes the skeleton is recognized differently, so the comparison is not reliable. My question is about the best way to compare a saved motion with a movement executed on the fly. PS: Sorry for my English.
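
    As a sketch of the naive approach the question describes – sampling joint positions and comparing them point by point – something like the following hedged C# fragment could be a starting point; the frame type, the assumption that frames are already normalized (e.g. relative to the hip center), and the distance metric are illustrative choices, not a claim about the best method.

        using System;
        using System.Collections.Generic;

        // One captured sample: a joint name mapped to its (x, y, z) position.
        public class SkeletonFrame
        {
            public Dictionary<string, Tuple<double, double, double>> Joints =
                new Dictionary<string, Tuple<double, double, double>>();
        }

        public static class MovementComparer
        {
            // Average Euclidean distance between corresponding joints of two equally
            // sampled sequences; lower means more similar. Frames are assumed to be
            // normalized (e.g. positions relative to the hip center) so that body size
            // and position in the room matter less.
            public static double AverageDistance(IList<SkeletonFrame> recorded, IList<SkeletonFrame> executed)
            {
                int frames = Math.Min(recorded.Count, executed.Count);
                double total = 0;
                int comparisons = 0;

                for (int i = 0; i < frames; i++)
                {
                    foreach (var joint in recorded[i].Joints)
                    {
                        Tuple<double, double, double> other;
                        if (!executed[i].Joints.TryGetValue(joint.Key, out other)) continue;

                        double dx = joint.Value.Item1 - other.Item1;
                        double dy = joint.Value.Item2 - other.Item2;
                        double dz = joint.Value.Item3 - other.Item3;
                        total += Math.Sqrt(dx * dx + dy * dy + dz * dz);
                        comparisons++;
                    }
                }
                return comparisons == 0 ? double.MaxValue : total / comparisons;
            }
        }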

    Read the article

  • How big can my SharePoint 2010 installation be?

    - by Sahil Malik
    3 years ago, I had published “How big can my SharePoint 2007 installation be?” Well, SharePoint 2010 has significant under-the-covers improvements. So, how big can your SharePoint 2010 installation be? There are three kinds of limits you should know about: hard limits that cannot be exceeded by design; configurable limits that are, well, configurable – but the default values are set for a pretty good reason, so if you need to tweak, plan and understand before you tweak; and soft limits, which you can exceed, but it is not recommended that you do. Before you read any of the limits, read these two important disclaimers: 1. The limit depends on what you’re doing, so don’t take the below as gospel – the reality depends on your situation. 2. There are many additional considerations in planning your SharePoint solution scalability and performance, besides just the below. So with those in mind, here goes.
    Hard Limits
    - Zones per web app: 5
    - RBS NAS performance: time to first byte of any response from NAS must be less than 20 milliseconds
    - List row size: 8000 bytes, driven by how SP stores list items internally
    - Max file size: 2GB (default is 50MB, configurable). RBS does not increase this limit.
    - Search metadata properties: 10,000 per item crawled (pretty damn high, you’ll never need to worry about it)
    - Max # of concurrent in-memory enterprise content types: 5000 per web server, per tenant
    - Max # of external system connections: 500 per web server
    - PerformancePoint services using Excel Services as a data source: no single query can fetch more than 1 million Excel cells
    - Office Web Apps rendering: one doc per second, per CPU core, per application server, limited to a maximum of 8 cores
    Configurable Limits
    - Row size limit: 6, configurable via the SPWebApplication.MaxListItemRowStorage property
    - List view lookup: 8 join operations per query
    - Max number of list items that a single operation can process at one time in normal hours: 5000, configurable via SPWebApplication.MaxItemsPerThrottledOperation. Also you get a warning at 3000, which is configurable via SPWebApplication.MaxItemsPerThrottledOperationWarningLevel. In addition, throttle overrides can be requested, throttle overrides can be disabled, and time windows can be set when the throttle is disabled.
    - Max number of list items for administrators that a single operation can process at one time in normal hours: 20000, configurable via SPWebApplication.MaxItemsPerThrottledOperationOverride
    - Enumerating subsites: 2000
    - Word and PowerPoint co-authoring simultaneous editors: 10 (hard limit is 99)
    - # of web parts on a page: 25
    - Search crawl DBs per search service app: 10
    - Items per crawl DB: 25 million
    - Search keywords: 200 per site collection. There is a max limit of 5000, which can then be modified by editing the web.config/client.config.
    - Concurrent # of workflows on a content DB: 15. Workflows running in the timer service are not counted in this limit; further workflows are queued. Can be configured via the Set-SPFarmConfig PowerShell commandlet.
    - Number of events picked up by the workflow timer job and delivered to workflows: 100. You can increase this limit by running additional instances of the workflow timer service.
    - Visio Services file size: 50MB
    - Visio web drawing recalculation timeout: 120 seconds, configurable via the Set-SPVisioPerformance PowerShell commandlet
    - Visio Services minimum and maximum cache age for data-connected diagrams: 0 to 24 hours, default is 60 minutes, configurable via the Set-SPVisioPerformance PowerShell commandlet
    Soft Limits
    - Content databases: 300 per web app
    - Application pools: 10 per web server
    - Managed paths: 20 per web app
    - Content database size: 200GB per content DB
    - Size of 1 site collection: 100GB
    - # of sites in a site collection: 250,000
    - Documents in a library: 30 million, with nesting. Depends heavily on type, usage and size of documents.
    - Items: 30 million. Depends heavily on usage of items.
    - SPGroups one SPUser can be in: 5000
    - Users in a site collection: 2 million, depends on UI, nesting, containers and underlying user store
    - AD principals in an SPGroup: 5000
    - SPGroups in a site collection: 10000
    - Search service instances: 20
    - Indexed items in search: 100 million
    - Crawl log entries: 100 million
    - Search alerts: 1 million per search application
    - Search crawled properties: 1/2 million
    - URL removals in search: 100 removals per operation
    - User profiles: 2 million per service application
    - Social tags: 500 million per social database
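
    Several of the configurable throttling limits above are exposed as properties on SPWebApplication. As a hedged sketch (the URL is a placeholder and the values are simply the defaults quoted above), adjusting them from the server object model could look like this:

        using System;
        using Microsoft.SharePoint.Administration;

        class ThrottleConfig
        {
            static void Main()
            {
                // Placeholder URL for an existing web application.
                SPWebApplication webApp = SPWebApplication.Lookup(new Uri("http://intranet.contoso.com"));

                // The list-view threshold values discussed above (defaults shown).
                webApp.MaxItemsPerThrottledOperation = 5000;
                webApp.MaxItemsPerThrottledOperationWarningLevel = 3000;
                webApp.MaxItemsPerThrottledOperationOverride = 20000;

                webApp.Update();   // persist the changes to the configuration database
            }
        }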

    Read the article

  • Mark Hurd on Oracle's Strategy to Be the Best

    - by Tuula Fai
    Mark Hurd, President of Oracle, energized a packed audience this Monday morning at OpenWorld with his keynote outlining Oracle’s four-pillar strategy: Be the leader at every level of the technology stack—applications, middleware, database, operating system, virtual machine, servers, and storage Vertically integrate these levels into differentiated solutions Offer Fusion, the next generation of applications, which are modular and can run in the cloud, on-premise, or both (hybrid) Deliver this technology portfolio through industry lenses to help Oracle customers solve their problems while innovating and becoming more efficient. Hurd’s message resonated throughout Monday’s Customer Experience (CX) sessions as we learned about Oracle’s investment in integrating its best-of-breed CX solutions to deliver an end-to-end suite that addresses every part of the customer lifecycle. For example, in the area of customer service, Oracle is developing enhancements to help contact center agents: Better understand customer needs through social listening tools that are integrated with knowledge management Empower themselves with internal collaboration and mobility tools Adapt to customer needs by engaging them through chat during a service or commerce interaction so they can deliver a great customer experience while transforming from a cost- into a profit center.

    Read the article

  • 'Development dashboard' web application

    - by espais
    Hi all, I am not sure if something like this exists in that it is ready out of the box. I currently have some web space that I use for various projects, and I would like to setup an area for some friends and I to develop web applications together. My ideal setup would be to create a folder, say, webdev.domain.com. We could all go to this domain, login, and then be able to setup new applications, pick which language will be used, setup database tables, allow HTML based file uploading, and create sub-folders to basically have a test bed for the applications. In retrospect, it seems like I'm describing a limited version of cpanel. I could come up with something in Drupal I'm sure, but I don't want to have to really spend time configuring much. Like I said, I want to install it and have minimal configuration. Does something like this exist (preferably in open-source)?

    Read the article

  • design pattern advice: graph -> computation

    - by csetzkorn
    I have a domain model, persisted in a database, which represents a graph. A graph consists of nodes (e.g. NodeTypeA, NodeTypeB) which are connected via branches; the two generic elements (nodes and branches) will have properties. A graph will be sent to a computation engine. To perform computations the engine has to be initialised like so (simplified pseudo code): Engine engine = new Engine(); Object id1 = engine.AddNodeTypeA(TypeA.Property1, TypeA.Property2, …, TypeA.PropertyN); Object id2 = engine.AddNodeTypeB(TypeB.Property1, TypeB.Property2, …, TypeB.PropertyN); engine.AddBranch(id1, id2); Finally the computation is performed like this: engine.DoSomeComputation(); I am just wondering if there are any relevant design patterns out there which would help achieve the above using good design principles. I hope this makes sense. Any feedback would be very much appreciated.
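
    As a hedged sketch of the initialization flow described above (all type and member names are invented for illustration, and the property-passing is simplified to a dictionary), a small translator that walks the persisted graph and drives the engine might look like this in C#:

        using System.Collections.Generic;

        // Illustrative stand-ins for the persisted graph model.
        public class Node
        {
            public int Id { get; set; }
            public string NodeType { get; set; }   // e.g. "A" or "B"
            public Dictionary<string, double> Properties = new Dictionary<string, double>();
        }

        public class Branch
        {
            public int FromNodeId { get; set; }
            public int ToNodeId { get; set; }
        }

        public class Graph
        {
            public List<Node> Nodes = new List<Node>();
            public List<Branch> Branches = new List<Branch>();
        }

        // Minimal stand-in for the computation engine's interface as used below.
        public class Engine
        {
            public object AddNodeTypeA(Dictionary<string, double> props) { return new object(); }
            public object AddNodeTypeB(Dictionary<string, double> props) { return new object(); }
            public void AddBranch(object from, object to) { }
            public void DoSomeComputation() { }
        }

        public class GraphEngineTranslator
        {
            // Walks the graph once, registers every node with the engine, remembers the
            // engine-assigned handles, then wires the branches - the same sequence as
            // the pseudo code in the post.
            public void Initialize(Graph graph, Engine engine)
            {
                var handles = new Dictionary<int, object>();

                foreach (Node node in graph.Nodes)
                {
                    handles[node.Id] = node.NodeType == "A"
                        ? engine.AddNodeTypeA(node.Properties)
                        : engine.AddNodeTypeB(node.Properties);
                }

                foreach (Branch branch in graph.Branches)
                {
                    engine.AddBranch(handles[branch.FromNodeId], handles[branch.ToNodeId]);
                }

                engine.DoSomeComputation();
            }
        }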

    Read the article

  • A new SQL, a new Analysis Services, a new Workshop! #ssas #sql2012

    - by Marco Russo (SQLBI)
    One week ago Microsoft SQL Server 2012 finally debuted with a virtual launch event and you can find many intro sessions there (20 minutes each). There is a lot of new content available if you want to learn more about SQL 2012 and in this blog post I’d like to provide a few link to sessions, documents, bits and courses that are available now or very soon. First of all, the release of Analysis Services 2012 has finally released PowerPivot 2012 (many of us called it PowerPivot v2 before this official name) and also the new Data Mining Add-in for Microsoft Office 2010, now available also for Excel 64bit! And, of course, don’t miss the Microsoft SQL Server 2012 Feature Pack, there are a lot of upgrades for both DBAs and developers. I just discovered there is a new LocalDB version of SQL Express that can run in user mode without any setup. Is this the end of SQL CE? But now, back to Analysis Services: if you want some tutorial on Tabular, the Microsoft Virtual Academy has a whole track dedicated to Analysis Services 2012 but you will probably be interested also in the one about Reporting Services 2012. If you think that virtual is good but it’s not enough, there are plenty of conferences in the coming months – these are just those where I and Alberto will deliver some SSAS Tabular presentations: SQLBits X, London, March 29-31, 2012: if you are in London or want a good reason to go, this is the most important SQL Server event in Europe this year, no doubts about it. And not only because of the high number of attendees, but also because there is an impressive number of speakers (excluding me, of course) coming from all over the world. This is an event second only to PASS Summit in Seattle so there are no good reasons to not attend it. Microsoft SQL Server & Business Intelligence Conference 2012, Milan, March 28-29, 2012: this is an Italian conference so the language might be a barrier, but many of us also speak English and the food is good! Just a few seats still available. TechEd North America, Orlando, June 11-14, 2012: you know, this is a big event and it contains everything – if you want to spend a whole day learning the SSAS Tabular model with me and Alberto, don’t miss our pre-conference day “Using BISM Tabular in Microsoft SQL Server Analysis Services 2012” (be careful, it is on June 10, a nice study-Sunday!). TechEd Europe, Amsterdam, June 26-29, 2012: the European version of TechEd provides almost the same content and you don’t have to go overseas. We also run the same pre-conference day “Using BISM Tabular in Microsoft SQL Server Analysis Services 2012” (in this case, it is on June 25, that’s a regular Monday). I and Alberto will also speak at some user group meeting around Europe during… well, we’re going to travel a lot in the next months. In fact, if you want to get a complete training on SSAS Tabular, you should spend two days with us in one of our SSAS Tabular Workshop! We prepared a 2-day seminar, a very intense one, that start from the simple tabular modeling and cover architecture, DAX, query, advanced modeling, security, deployment, optimization, monitoring, relationships with PowerPivot and Multidimensional… Really, there are a lot of stuffs here! 
We announced the first dates in Europe and also an online edition optimized for America’s time zone: Apr 16-17, 2012 – Amsterdam, Netherlands Apr 26-27, 2012 – Copenhagen, Denmark May 7-8, 2012 – Online for America’s time zone May 14-15, 2012 – Brussels, Belgium May 21-22, 2012 – Oslo, Norway May 24-25, 2012 – Stockholm, Sweden May 28-29, 2012 – London, United Kingdom May 31-Jun 1, 2012 – Milan, Italy (Italian language) Also Chris Webb will join us in this workshop and in every date you can find who is the speaker on the web site. The course is based on our upcoming book, almost 600 pages (!) about SSAS Tabular, an incredible effort that will be available very soon in a preview (rough cuts from O’Reilly) and will be on the shelf in May. I will provide a link to order it as soon as we have one! And if you think that this is not enough… you’re right! Do you know what is the only thing you can do to optimize your Tabular model? Optimize your DAX code. Learning DAX is easy, mastering DAX requires some knowledge… and our DAX Advanced Workshop will provide exactly the required content. Public classes will be available later this year, by now we just deliver it on demand.

    Read the article

  • A way for an Upstart event to be sent whenever ecryptfs homedir mounted/unmounted?

    - by David Olivier
    I have an encrypted homedir (ecryptfs) and I'm wanting to run a private mysql daemon with the database files in my homedir. The daemon should be started whenever the homedir is mounted, and stopped before the homedir is unmounted. It seems I have to write an Upstart script, which doesn't seem too hard; the problem is triggering it. Is there already any Upstart event that is sent on these occasions? Or must I insert an "initctl emit" somewhere? Where? It seems the encrypted homedir is mounted whenever I either open my GUI session or ssh to my account. Is there a common place in these two processes where I might insert code? (I don't want to patch and compile any C code, just insert maybe a few lines somewere.) David

    Read the article

  • Applicability of the Joel Test to web development companies

    - by dreftymac
    QUESTION: How can you re-write the questions of the Joel test to apply to web developers? 1. Do you use source control? (source control for all aspects of your app, including configuration, database and user-based settings?) 2. Can you make a build in one step? (can you deploy a site from staging to prod in 1 step?) ... 10. Do you have testers? (how do you test AJAX and CSS?) BACKGROUND: This is for people who work in a shop that does some web development but also uses some off-the-shelf tools like Drupal and Wordpress, but doing custom development on top of that. RELATED LINKS: http://www.joelonsoftware.com/articles/fog0000000043.html What do you think about the Joel Test?

    Read the article
