Search Results

Search found 43200 results on 1728 pages for 'large object pattern'.

Page 411/1728 | < Previous Page | 407 408 409 410 411 412 413 414 415 416 417 418  | Next Page >

  • Silverlight Cream for November 20, 2011 - 2 -- #1170

    - by Dave Campbell
    In this Issue: Michael Washington, Oliver Fuh, Jeremy Likness, Derik Whittaker, Jesse Liberty, Jeff Blankenburg(-2-), and Michael Crump. Above the Fold: Silverlight: "Handling Extremely Large Data Sets in Silverlight" Jeremy Likness WP7: "31 Days of Mango | Day #8: Contacts API" Jeff Blankenburg LightSwitch: "LightSwitch Chat Application Using A Data Source Extension" Michael Washington Shoutouts: Michael Palermo's latest Desert Mountain Developers is up Michael Washington's latest Visual Studio #LightSwitch Daily is up Check out Shawn Wildermuth's take on the AppStore and WP7 in general: 40,000 Apps - What Does It Mean? Be sure to check out Jesse Liberty & Paul Betts new book: Programming Reactive Extensions and LINQ, I've just had a little time to look at mine, but don't let the size fool you... this is the good stuff! From SilverlightCream.com: LightSwitch Chat Application Using A Data Source Extension In his latest LightSwitch post, Michael Washington gives up code that will enable two people using the same LightSwitch app to chat. Great detailed tutorial as usual! Handling AdControl Fetching Exception WindowsPhoneGeek turns the blog reigns over to Oliver Fuh for this post about using the AdControl in your WP7 app and handling a common exception you get with the Microsoft AdControl Handling Extremely Large Data Sets in Silverlight In this excerpt from his book, Jeremy Likness discusses reading *LARGE* data sets with Silverlight using 3 different patterns: OData, WCF RIA Services, and MVVM. Using MVVM with the AutoCompleteTextBox in Silverlight 4 Derik Whittaker takes a break from WinRT to discuss the Silverlight 4 AutoCompleteTextBox and MVVM ... including a custom Behavior to allow the backing property to be updated and a command to trigger background searches Yet Another Podcast #52–Peter Torr on Windows Phone Multitasking Jesse Liberty scored Peter Torr on his Latest Yet Another Podcast .. talking about Multitasking on Windows Phone including background agents, the backstack, and other Mango features 31 Days of Mango | Day #8: Contacts API Jeff Blankenburg's Day 8 is about a new namespace on WP7: Microsoft.Phone.UserData ... now giving us the ability to treat the user's contact list like a local database 31 Days of Mango | Day #9: Calendar API On Day 9 in his series, Jeff Blankenburg revisits the Microsoft.Phone.UserData namespace and looks at another set of data: the calendar Want to Decompile Silverlight XAP files? Try JustDecompile Beta! Michael Crump has a post up about the new free developer productivity tool from Telerik that provides assembly browsing and decompiling: JustDecompile ... Just download it! Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • JavaScript Intellisense Improvements with VS 2010

    This is the twentieth in a series of blog posts I'm doing on the upcoming VS 2010 and .NET 4 release. Today's blog post covers some of the nice improvements coming to JavaScript IntelliSense in VS 2010 and the free Visual Web Developer 2010 Express. You'll find with VS 2010 that JavaScript IntelliSense loads much faster for large script files and large libraries, and that it now provides statement completion support for more advanced scenarios compared to previous versions.

    Read the article

  • SQL SERVER – Denali Feature – Zoom Query Editor

    - by pinaldave
    The next version of SQL Server, ‘Denali’, comes with a very neat feature that can be used during presentations and group discussions, or by people who prefer large fonts. I have increased the zoom to 400 percent, which is why the fonts appear very large; you can adjust the zoom level to whatever is convenient for you. One more reason to go for the next version of SQL Server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • A Plea for Plain English

    - by Tony Davis
    The English language has, within a lifetime, emerged as the ubiquitous 'international language' of scientific, political and technical communication. On the one hand, learning a single, common language, International English, has made it much easier to participate in and adopt new technologies; on the other hand it must be exasperating to have to use English at international conferences, or on community sites, when your own language has a long tradition of scientific and technical usage. It is also hard to master the subtleties of using a foreign language to explain advanced ideas. This requires English speakers to be more considerate in their writing. Even if you’re used to speaking English, you may be brought up short by this sort of verbiage… "Business Intelligence delivering actionable insights is becoming more critical in the enterprise, and these insights require large data volumes for trending and forecasting" It takes some imagination to appreciate the added hassle in working out what it means, when English is a language you only use at work. Try, just to get a vague feel for it, using Google Translate to translate it from English to Chinese and back again. "Providing actionable business intelligence point of view is becoming more and more and more business critical, and requires that these insights and projected trends in large amounts of data" Not easy eh? If you normally use a different language, you will need to pause for thought before finally working out that it really means … "Every Business Intelligence solution must be able to help companies to make decisions. In order to detect current trends, and accurately predict future ones, we need to analyze large volumes of data" Surely, it is simple politeness for English speakers to stop peppering their writing with a twisted vocabulary that renders it inaccessible to everyone else. It isn’t just the problem of writers who use long words to give added dignity to their prose. It is the use of Colloquial English. This changes and evolves at a dizzying rate, adding new terms and idioms almost daily; it is almost a new and separate language. By contrast, ‘International English', is gradually evolving separately, at its own, more sedate, pace. As such, all native English speakers need to make an effort to learn, and use it, switching from casual colloquial patter into a simpler form of communication that can be widely understood by different cultures, even if it gives you less credibility on the street. Simple-Talk is based, at least in part, on the idea that technical articles can be written simply and clearly in a form of English that can be easily understood internationally, and that they can be written, with a little editorial help, by anyone, and read by anyone, regardless of their native language. Cheers, Tony.

    Read the article

  • Friday Fun: Snow Crusher

    - by Asian Angel
    It has probably been a long week whether you have already returned to work or are finishing up the last of your vacation time. If you are in need of some stress relief, then we have the perfect game for you. This week you get to be totally fiendish and use a monster size snowball to destroy as many cars as possible at the local snow lodge. Snow Crusher The object of the game is simple…create as large of a monster snowball as you can and then send it down over the side of the mountain to destroy the cars at the snow lodge. You can choose from three different sizes of monster snowballs to create. We chose the “Snowflake Size” for our reign of destruction. Once you have chosen a monster snowball size, all that is left to do is select the control method that works best for you. As soon as you select the control method, your monster snowball creation will automatically begin. Keep in mind that the faster your snowball goes the harder it can become to steer if you make sudden movements… At the top you can watch your progress towards the drop-off point and the green boxes highlighted at the bottom indicate how large of an item (such as trees or boulders) your snowball can roll over and add to the total mass. Snowball speed is shown in the lower right corner. Time to roll! As soon as the first green box is lit up you can start adding small trees to your snowball’s mass. You will want to avoid larger items as you go because they will penalize your score, slow you down, and reduce the size of your snowball! Halfway to the drop-off point and our snowball is now able to grab up larger trees. If you have not hit any large items along the way, your snowball will definitely be moving along at a good rate by now. When you reach the end of the mass building area, your snowball will pop out into the open and get ready to drop off over the side of the mountain. Go snowball go! Yes! Thirteen cars crushed and ready for the scrap yard… If the “Snowflake Size” snowball can do this, just think what the “Avalanche Size” can do with three minutes of time to build up mass! Have fun with those monster snowballs! Play Snow Crusher

    Read the article

  • World Record Performance on PeopleSoft Enterprise Financials Benchmark on SPARC T4-2

    - by Brian
    Oracle's SPARC T4-2 server achieved World Record performance on Oracle's PeopleSoft Enterprise Financials 9.1, executing 20 million journal lines in 8.92 minutes on Oracle Database 11g Release 2 running on Oracle Solaris 11. This is the first result published on this version of the benchmark. The SPARC T4-2 server was able to process 20 million general ledger journal edit and post batch jobs in 8.92 minutes on this benchmark, which reflects a large customer environment that utilizes a back-end database of nearly 500 GB. This benchmark demonstrates that the SPARC T4-2 server with PeopleSoft Financials 9.1 can easily process 100 million journal lines in less than 1 hour. The SPARC T4-2 server delivered more than 146 MB/sec of IO throughput with Oracle Database 11g running on Oracle Solaris 11.
    Performance Landscape: Results are presented for PeopleSoft Financials Benchmark 9.1. Results obtained with PeopleSoft Financials Benchmark 9.1 are not comparable to the previous version of the benchmark, PeopleSoft Financials Benchmark 9.0, due to a significant change in the data model; version 9.1 supports only batch.
    PeopleSoft Financials Benchmark, Version 9.1:
    - SPARC T4-2 (2 x SPARC T4, 2.85 GHz): Batch 8.92 min
    Results from PeopleSoft Financials Benchmark 9.0:
    - SPARC Enterprise M4000 (Web/App), SPARC Enterprise M5000 (DB): Batch 33.09 min, Batch with Online 34.72 min
    - SPARC T3-1 (Web/App), SPARC Enterprise M5000 (DB): Batch 35.82 min, Batch with Online 37.01 min
    Configuration Summary:
    Hardware Configuration: 1 x SPARC T4-2 server, 2 x SPARC T4 processors, 2.85 GHz, 128 GB memory
    Storage Configuration: 1 x Sun Storage F5100 Flash Array (for database and redo logs), 2 x Sun Storage 2540-M2 arrays and 2 x Sun Storage 2501-M2 arrays (for backup)
    Software Configuration: Oracle Solaris 11 11/11 SRU 7.5; Oracle Database 11g Release 2 (11.2.0.3); PeopleSoft Financials 9.1 Feature Pack 2; PeopleSoft Supply Chain Management 9.1 Feature Pack 2; PeopleSoft PeopleTools 8.52 latest patch - 8.52.03; Oracle WebLogic Server 10.3.5; Java Platform, Standard Edition Development Kit 6 Update 32
    Benchmark Description: The PeopleSoft Enterprise Financials 9.1 benchmark emulates a large enterprise that processes and validates a large number of financial journal transactions before posting the journal entry to the ledger. The validation process certifies that the journal entries are accurate, ensuring that ChartFields values are valid, debits and credits equal out, and inter/intra-units are balanced. Once validated, the entries are processed, ensuring that each journal line posts to the correct target ledger, and then the journal status is changed to posted. In this benchmark, the Journal Edit & Post is set up to edit and post both Inter-Unit and Regular multi-currency journals. The benchmark processes 20 million journal lines using AppEngine for edits and Cobol for post processes.
    See Also: Oracle PeopleSoft Benchmark White Papers (oracle.com), SPARC T4-2 Server (oracle.com), OTN PeopleSoft Financial Management, OTN Oracle Solaris, OTN Oracle Database 11g Release 2 Enterprise Edition
    Disclosure Statement: Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 1 October 2012.
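    As a quick arithmetic check of the "100 million journal lines in less than 1 hour" claim above (assuming roughly linear scaling, which is an extrapolation and not part of the published result):
      100,000,000 / 20,000,000 x 8.92 minutes ≈ 44.6 minutes, comfortably under one hour.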

    Read the article

  • Implementing an Interceptor Using NHibernate’s Built In Dynamic Proxy Generator

    - by Ricardo Peres
    NHibernate 3.2 came with an included proxy generator, which means there is no longer the need – or the possibility, for that matter – to choose Castle DynamicProxy, LinFu or Spring. This is actually a good thing, because it means one less assembly to deploy. Apparently, this generator was based, at least partially, on LinFu. As there are not many tutorials out there demonstrating its usage, here's one demonstrating one of the most requested features: implementing INotifyPropertyChanged. This interceptor, of course, will still feature all of the NHibernate functionality that you are used to, such as lazy loading. We will start by implementing an NHibernate interceptor, by inheriting from the base class NHibernate.EmptyInterceptor. This class does not do anything by itself, but it allows us to plug in behavior by overriding some of its methods, in this case, Instantiate:

      public class NotifyPropertyChangedInterceptor : EmptyInterceptor
      {
          private ISession session = null;

          private static readonly ProxyFactory factory = new ProxyFactory();

          public override void SetSession(ISession session)
          {
              this.session = session;
              base.SetSession(session);
          }

          public override Object Instantiate(String clazz, EntityMode entityMode, Object id)
          {
              Type entityType = Type.GetType(clazz);
              IProxy proxy = factory.CreateProxy(entityType, new _NotifyPropertyChangedInterceptor(), typeof(INotifyPropertyChanged)) as IProxy;

              _NotifyPropertyChangedInterceptor interceptor = proxy.Interceptor as _NotifyPropertyChangedInterceptor;
              interceptor.Proxy = this.session.SessionFactory.GetClassMetadata(entityType).Instantiate(id, entityMode);

              this.session.SessionFactory.GetClassMetadata(entityType).SetIdentifier(proxy, id, entityMode);

              return (proxy);
          }
      }

    Then we need a class that implements the NHibernate dynamic proxy behavior; let's place it inside our interceptor, because it will only need to be used there:

      class _NotifyPropertyChangedInterceptor : NHibernate.Proxy.DynamicProxy.IInterceptor
      {
          private PropertyChangedEventHandler changed = delegate { };

          public Object Proxy
          {
              get;
              set;
          }

          #region IInterceptor Members

          public Object Intercept(InvocationInfo info)
          {
              Boolean isSetter = info.TargetMethod.Name.StartsWith("set_") == true;
              Object result = null;

              if (info.TargetMethod.Name == "add_PropertyChanged")
              {
                  PropertyChangedEventHandler propertyChangedEventHandler = info.Arguments[0] as PropertyChangedEventHandler;
                  this.changed += propertyChangedEventHandler;
              }
              else if (info.TargetMethod.Name == "remove_PropertyChanged")
              {
                  PropertyChangedEventHandler propertyChangedEventHandler = info.Arguments[0] as PropertyChangedEventHandler;
                  this.changed -= propertyChangedEventHandler;
              }
              else
              {
                  result = info.TargetMethod.Invoke(this.Proxy, info.Arguments);
              }

              if (isSetter == true)
              {
                  String propertyName = info.TargetMethod.Name.Substring("set_".Length);
                  this.changed(this.Proxy, new PropertyChangedEventArgs(propertyName));
              }

              return (result);
          }

          #endregion
      }

    What this does for every interceptable method (those that are either virtual or come from INotifyPropertyChanged) is:
    - For the methods that came from the INotifyPropertyChanged interface, add_PropertyChanged and remove_PropertyChanged (yes, events are methods), we add an implementation that adds or removes the event handlers to the delegate which we declared as changed;
    - For all the others, we direct them to the place where they are actually implemented, which is the Proxy field;
    - If the call is setting a property, it fires the PropertyChanged event afterwards.
    In order to use this, we need to add the interceptor to the Configuration before building the ISessionFactory:

      using (ISessionFactory factory = cfg.SetInterceptor(new NotifyPropertyChangedInterceptor()).BuildSessionFactory())
      {
          using (ISession session = factory.OpenSession())
          using (ITransaction tx = session.BeginTransaction())
          {
              Customer customer = session.Get<Customer>(100); //some id
              INotifyPropertyChanged inpc = customer as INotifyPropertyChanged;
              inpc.PropertyChanged += delegate(Object sender, PropertyChangedEventArgs e)
              {
                  //fired when a property changes
              };
              customer.Address = "some other address"; //will raise PropertyChanged
              customer.RecentOrders.ToList(); //will trigger the lazy loading
          }
      }

    Any problems, questions, do drop me a line!
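    For context, here is a minimal sketch (not from the original post) of an entity that would work with this interceptor; the Customer and Order types and their members are hypothetical. All members are declared virtual so the generated proxy can intercept the property setters and raise PropertyChanged:

      // Hypothetical entity types for illustration only.
      // Virtual members are required so the NHibernate proxy can intercept them.
      public class Customer
      {
          public virtual int Id { get; set; }
          public virtual string Address { get; set; }
          public virtual IList<Order> RecentOrders { get; set; } // IList<T> from System.Collections.Generic
      }

      public class Order
      {
          public virtual int Id { get; set; }
      }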

    Read the article

  • Do Not Optimize Without Measuring

    - by Alois Kraus
    Recently I had to do some performance work which included reading a lot of code. It is fascinating what ideas people come up with to solve a problem. Especially when there is no problem. When you look at other people's code you will not be able to tell if it is well performing or not by reading it. You need to execute it with some sort of tracing or, even better, under a profiler. The first rule of the performance club is not to think and then optimize, but to measure, think and then optimize. The second rule is to do this in a loop, to prevent bad things from slipping into your code base for too long. If you skip the measure step for some reason and optimize directly, it is like changing the wave function in quantum mechanics. This has no observable effect in our world since it represents only a probability distribution of all possible values. In quantum mechanics you need to let the wave function collapse to a single value. A collapsed wave function has therefore not many but one distinct value. This is what we physicists call a measurement. If you optimize your application without measuring it you are just changing the probability distribution of your potential performance values. Which performance your application actually has is still unknown. You only know that it will be within a specific range with a certain probability. As usual there are unlikely values within your distribution, like a startup time of 20 minutes which should only happen once in 100,000 years. 100,000 years are a very short time when the first customer tries your heavily distributed networking application over a slow WIFI network… What is the point of this? Every programmer/architect has a mental performance model in his head. A model always has a set of explicit preconditions and a lot more implicit assumptions baked into it. When the model is good it will help you to think of good designs, but it can also be the source of problems. In real world systems not all assumptions of your performance model (implicit or explicit) hold true any longer. The only way to connect your performance model and the real world is to measure it. In the WIFI example the model assumed a low latency, high bandwidth LAN connection. When this assumption became wrong the system had a drastic change in startup time. Let's look at an example. Let's assume we want to cache some expensive UI resource like font objects. For this undertaking we create a Cache class with the UI themes we want to support. Since fonts are expensive objects, we create them on demand the first time the theme is requested. A simple example of a Theme cache might look like this:

      using System;
      using System.Collections.Generic;
      using System.Drawing;

      struct Theme
      {
          public Color Color;
          public Font Font;
      }

      static class ThemeCache
      {
          static Dictionary<string, Theme> _Cache = new Dictionary<string, Theme>
          {
              { "Default", new Theme { Color = Color.AliceBlue } },
              { "Theme12", new Theme { Color = Color.Aqua } },
          };

          public static Theme Get(string theme)
          {
              Theme cached = _Cache[theme];
              if (cached.Font == null)
              {
                  Console.WriteLine("Creating new font");
                  cached.Font = new Font("Arial", 8);
              }
              return cached;
          }
      }

      class Program
      {
          static void Main(string[] args)
          {
              Theme item = ThemeCache.Get("Theme12");
              item = ThemeCache.Get("Theme12");
          }
      }

    This cache creates font objects only once, since on the first retrieval of the Theme object the font is added to it. When we let the application run it should print "Creating new font" only once. Right? Wrong! The vigilant readers have spotted the issue already. The creator of this cache class wanted to get maximum performance, so he decided that the Theme object should be a value type (struct) to not put too much pressure on the garbage collector. The code

      Theme cached = _Cache[theme];
      if (cached.Font == null)
      {
          Console.WriteLine("Creating new font");
          cached.Font = new Font("Arial", 8);
      }

    works with a copy of the value stored in the dictionary. This means we mutate a copy of the Theme object and return it to our caller. But the original Theme object in the dictionary will always have null for the Font field! The solution is to change the declaration of struct Theme to class Theme, or to update the theme object in the dictionary. Our cache as it currently stands is actually a non-caching cache. The funny thing was that I found this out with a profiler by looking at which objects were finalized: I found way too many font objects being finalized. After a bit of debugging I found the allocation source for Font objects was this cache. Since this cache was there for years, it means that:
    - the cache was never needed, since I found no perf issue due to the creation of font objects;
    - the cache was never profiled to see if it brought any performance gain;
    - to make the cache beneficial, it needs to be accessed much more often.
    That was the story of the non-caching cache. Next time I will write something about measuring.
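    As a follow-up, here is a minimal sketch of the fix the author describes (writing the mutated copy back into the dictionary; changing struct Theme to class Theme works as well). This is an illustration of that suggestion, not code from the original post:

      public static Theme Get(string theme)
      {
          Theme cached = _Cache[theme];
          if (cached.Font == null)
          {
              Console.WriteLine("Creating new font");
              cached.Font = new Font("Arial", 8);
              _Cache[theme] = cached; // store the mutated copy back, so the font really is created only once
          }
          return cached;
      }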

    Read the article

  • GDD-BR 2010 [2F] Storage, Bigquery and Prediction APIs

    GDD-BR 2010 [2F] Storage, Bigquery and Prediction APIs Speaker: Patrick Chanezon Track: Cloud Computing Time slot: F [15:30 - 16:15] Room: 2 Level: 101 Google is expanding our storage products by introducing Google Storage for Developers. It offers a RESTful API for storing and accessing data at Google. Developers can take advantage of the performance and reliability of Google's storage infrastructure, as well as the advanced security and sharing capabilities. We will demonstrate key functionality of the product as well as customer use cases. Google relies heavily on data analysis and has developed many tools to understand large datasets. Two of these tools are now available on a limited sign-up basis to developers: (1) BigQuery: interactive analysis of very large data sets and (2) Prediction API: make informed predictions from your data. We will demonstrate their use and give instructions on how to get access.

    Read the article

  • Demystified - BI in SharePoint 2010

    - by Sahil Malik
    Ad:: SharePoint 2007 Training in .NET 3.5 technologies (more information). Frequently, my clients ask me if there is a good guide on deciphering the seemingly daunting choice of products from Microsoft when it comes to business intelligence offerings in a SharePoint 2010 world. These are all described in detail in my book, but here is a one (well maybe two) page executive overview. Microsoft Excel: Yes, Microsoft Excel! Your favorite and most commonly used in the world database. No it isn’t a database in technical pure definitions, but this is the most commonly used ‘database’ in the world. You will find many business users craft up very compelling excel sheets with tonnes of logic inside them. Good for: Quick Ad-Hoc reports. Excel 64 bit allows the possibility of very large datasheets (Also see 32 bit vs 64 bit Office, and PowerPivot Add-In below). Audience: End business user can build such solutions. Related technologies: PowerPivot, Excel Services Microsoft Excel with PowerPivot Add-In: The powerpivot add-in is an extension to Excel that adds support for large-scale data. Think of this as Excel with the ability to deal with very large amounts of data. It has an in-memory data store as an option for Analysis services. Good for: Ad-hoc reporting and logic with very large amounts of data. Audience: End business user can build such solutions. Related technologies: Excel, and Excel Services Excel Services: Excel Services is a Microsoft SharePoint Server 2010 shared service that brings the power of Excel to SharePoint Server by providing server-side calculation and browser-based rendering of Excel workbooks. Thus, excel sheets can be created by end users, and published to SharePoint server – which are then rendered right through the browser in read-only or parameterized-read-only modes. They can also be accessed by other software via SOAP or REST based APIs. Good for: Sharing excel sheets with a larger number of people, while maintaining control/version control etc. Sharing logic embedded in excel sheets with other software across the organization via REST/SOAP interfaces Audience: End business users can build such solutions once your tech staff has setup excel services on a SharePoint server instance. Programmers can write software consuming functionality/complex formulae contained in your sheets. Related technologies: PerformancePoint Services, Excel, and PowerPivot. Visio Services: Visio Services is a shared service on the Microsoft SharePoint Server 2010 platform that allows users to share and view Visio diagrams that may or may not have data connected to them. Connected data can update these diagrams allowing a visual/graphical view into the data. The diagrams are viewable through the browser. They are rendered in silverlight, but will automatically down-convert to .png formats. Good for: Showing data as diagrams, live updating. Comes with a developer story. Audience: End business users can build such solutions once your tech staff has setup visio services on a SharePoint server instance. Developers can enhance the visualizations Related Technologies: Visio Services can be used to render workflow visualizations in SP2010 Reporting Services: SQL Server reporting services can integrate with SharePoint, allowing you to store reports and data sources in SharePoint document libraries, and render these reports and associated functionality such as subscriptions through a SharePoint site. 
In SharePoint 2010, you can also write reports against SharePoint lists (access services uses this technique). Good for: Showing complex reports running in a industry standard data store, such as SQL server. Audience: This is definitely developer land. Don’t expect end users to craft up reports, unless a report model has previously been published. Related Technologies: PerformancePoint Services PerformancePoint Services: PerformancePoint Services in SharePoint 2010 is now fully integrated with SharePoint, and comes with features that can either be used in the BI center site definition, or on their own as activated features in existing site collections. PerformancePoint services allows you to build reports and dashboards that target a variety of back-end datasources including: SQL Server reporting services, SQL Server analysis services, SharePoint lists, excel services, simple tables, etc. Using these you have the ability to create dashboards, scorecards/kpis, and simple reports. You can also create reports targeting hierarchical multidimensional data sources. The visual decomposition tree is a new report type that lets you quickly breakdown multi-dimensional data. Good for: Mostly everything :), except your wallet – it’s not free! But this is the most comprehensive offering. If you have SharePoint server, forget everything and go with performance point. Audience: Developers need to setup the back-end sources, manageability story. DBAs need to setup datawarehouses with cubes. Moderately sophisticated business users, or developers can craft up reports using dashboard designer which is a click-once App that deploys with PerformancePoint Related Technologies: Excel services, reporting services, etc.   Other relevant technologies to know about: Business Connectivity Services: Allows for consumption of external data in SharePoint as columns or external lists. This can be paired with one or more of the above BI offerings allowing insight into such data. Access Services: Allows the representation/publishing of an access database as a SharePoint 2010 site, leveraging many SharePoint features. Reporting services is used by Access services. Secure Store Service: The SP2010 Secure store service is a replacement for the SP2007 single sign on feature. This acts as a credential policeman providing credentials to various applications running with SharePoint. BCS, PerformancePoint Services, Excel Services, and many other apps use the SSS (Secure Store Service) for credential control. Comment on the article ....

    Read the article

  • Addressing a variable in VB

    - by Jeff
    Why doesn't Visual Basic.NET have an address-of operator like C#? In C#, one can write: int i = 123; int* addr = &i; But VB has no equivalent counterpart. It seems like it should be important. UPDATE: Since there's some interest, I'm copying my response to Strilanc below. The case I ran into didn't necessitate pointers by any means, but I was trying to troubleshoot a failing unit test, and there was some confusion over whether or not an object being used at one point in the stack was the same object as an object several methods away.
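    For the underlying problem (telling whether two references point at the same instance), no address-of operator is needed in either language; Object.ReferenceEquals works in both C# and VB.NET. A minimal C# sketch with hypothetical variable names is below (note also that the pointer snippet above would require an unsafe context to compile):

      using System;

      class ReferenceCheckDemo
      {
          static void Main()
          {
              object a = new object();
              object b = a;            // same instance as a
              object c = new object(); // a different instance

              Console.WriteLine(Object.ReferenceEquals(a, b)); // True: both variables refer to the same object
              Console.WriteLine(Object.ReferenceEquals(a, c)); // False: a different object, however similar it looks
          }
      }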

    Read the article

  • Ubuntu 12.04 / 12.10 Randomly Freezing - nVidia?

    - by Alix Axel
    My Ubuntu install frequently freezes, sometimes showing a black screen (not very common anymore - in my latest installs), some other times the mouse and keyboard just fail to move and respond (not even Ctrl + Alt + F1 works) and some other times I'm able to move the mouse with a huge delay (2-5 seconds) but I'm not able to do/click anything. I have a pretty strong feeling that this problem is related to my graphic card drivers because: after hard reset, I usually get error reports about X.org / jockey it's common for artifacts to appear during loading / shutdown / whenever, for instance: pattern filled with £ during log off ugly-colored squared pattern during boot windows that are partially moved (i.e.: only the top half) Firefox renderings that leave the bottom ~30% of the page black These artifacts appear right before the system freezes. I've installed Ubuntu 12.04 LTS and after several failed attempts to get my dual monitor setup to work properly I tried installing the new 12.10 version, hoping that this new version would have this problem solved... Unfortunatly, that was not the case, so I reverted to Ubuntu 12.04. I've tried all the drivers in the Additional Drivers application (even the experimental ones), I've also tried the nvidia-current package from the PPA repository ubuntu-x-swat/x-updates as well as the nouveau OSS driver. Nothing (except no driver at all with a 640*480 resolution) at all seems stable. Here is the info of my graphic card: alix@alix-E500:~$ lspci | grep VGA 01:00.0 VGA compatible controller: NVIDIA Corporation G86 [GeForce 8400M G] (rev a1) alix@alix-E500:~$ sudo lshw -C video [sudo] password for alix: *-display description: VGA compatible controller product: G86 [GeForce 8400M G] vendor: NVIDIA Corporation physical id: 0 bus info: pci@0000:01:00.0 version: a1 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vga_controller bus_master cap_list rom configuration: driver=nouveau latency=0 resources: irq:16 memory:fd000000-fdffffff memory:d0000000-dfffffff memory:fa000000-fbffffff ioport:cc00(size=128) memory:fe0e0000-fe0fffff Right now, I don't even have my 22" monitor connected as I can't even get my laptop display to work properly and without freezes. I've searched, read and tried all that I could (over several fresh reinstalls) to fix the problem, but so far, no solution has proven definitive. I'm sorry I can't precise which symptom maps to each driver but I've been trying to solve this one on my own without logging what I'm doing, perhaps someone here will be able to point me to a certain-fix solution, if not I'll keep updating this question as I go along. Please let me know if any more info is needed to pinpoint the exact problem. Trying out NVIDIA accelerated graphics driver (version 173). The scrolling, minimizing / maximizing windows takes between 2 and 5 seconds to finalize. Context menus also pop up very slowly and the typing seems delayed by ~1 second. No critical issues so far. Firefox rendering of the Save Edits button is consistently messed up (random black lines in the top). Trying out NVIDIA accelerated graphics driver (version current) [Recommended]. All the delays mentioned above and the buggy rendering of the Save Edits button are gone, but I'm noticing that the whole screen flashes black for a couple of microseconds and while I was writing this test for the first time, the bottom 30% of the screen went black and I couldn't do anything (not even Ctrl + Alt + F1 would work). Had to force a hard reset. 
Also, the system hanged a little for a couple of seconds with the fade out of the "Restart" menu. Trying out NVIDIA accelerated graphics driver (*experimental*beta) (version experimental-304). Same symptoms as before, it crashed once while I was trying to install Chromium and again after a hard reset when I was trying to remove the driver. The bottom of the screen did not went black and I could move my mouse both times. Ctrl + Alt + F1 didn't work. The ugly-colored pattern also showed up during the second boot. Trying out NVIDIA accelerated graphics driver (*experimental*beta) (version experimental-307). The system crashed as soon as I clicked something. Had to do a fresh re-install. Trying out Nouveau: Accelerated Open Source driver for nVidia cards. Artifacts still show up during boot but other than that this one seems stable. As soon as I connected my second monitor, the responsiveness dropped a lot, animations and video are somewhat slow. I'm gonna try this solution http://askubuntu.com/a/98871/9018 later on.

    Read the article

  • Looking for enterprise web application design inspiration [closed]

    - by Farshid
    I've checked many websites for inspiration about what the look and feel of a serious enterprise web application should be. But all the ones I saw were designed to be used by individual users, not serious corporate users. The designs I saw were mostly happy-colored and looked like they had been developed by a small team of eager, young, passionate developers. What I'm looking for are showcases of serious web app designs (e.g. web apps developed by large corporations) that are built to be used by a large number of corporate users. Can you suggest sources for this kind of inspiration?

    Read the article

  • Day 4 - Game Sprites In Action

    - by dapostolov
    Yesterday I drew an image on the screen. Most exciting, but... I spent more time blogging about it than actually coding. So this next little while I'm going to streamline my game and research and simply post key notes. Quick notes on the last session: the most important thing I wanted to point out were the following methods:
      spriteBatch.Begin(SpriteBlendMode.AlphaBlend);
      spriteBatch.Draw(sprite, position, Color.White);
      spriteBatch.End();
    The spriteBatch object is used to draw Textures, and a 2D texture is called a Sprite. A texture is generally an image, which is called an Asset in XNA. The Draw method in Game1.cs is looped (until exit) and utilises the spriteBatch object to draw a Scene. To begin drawing a Scene you call the Begin method, to end a Scene you call the End method, and to place an image on the Scene you call the Draw method. The simplest implementation of the Draw method is:
      spriteBatch.Draw(sprite, position, Color.White);
    1) sprite - the 2D texture you loaded to draw
    2) position - the 2D vector, a set of x & y coordinates
    3) Color.White - the tint to apply to the texture; in this case, white light = nothing, nada, no tint.
    Game Sprites In Action! Today, I played around with Draw methods to get comfortable with their "quirks". The following is an example of the above Draw method, but with more parameters available for us to use. Let's investigate!
      spriteBatch.Draw(sprite, position2, null, Color.White, MathHelper.ToRadians(45.0f), new Vector2(sprite.Width / 2, sprite.Height / 2), 1.0F, SpriteEffects.None, 0.0F);
    The parameters (in order):
    1) sprite - the texture to display
    2) position2 - the position on the screen / scene; this can also be a rectangle
    3) null - the portion of the image to display within an image; null = display the full image; this is generally used for animation strips / grids (more on this below)
    4) Color.White - texture tinting; White = no tint
    5) MathHelper.ToRadians(45.0f) - rotation of the object, in this case 45 degrees; it rotates from the set plotting point
    6) new Vector2(0,0) - the plotting point, in this case the top left corner; the image will rotate from the top left of the texture; in the code above, the point is set to the middle of the image
    7) 1.0f - image scaling (1x)
    8) SpriteEffects.None - you can flip the image horizontally or vertically
    9) 0.0f - the z index of the image. 0 = closer, 1 behind?
    And playing around with different combinations I was able to come up with the following whacky display:
    Checking off Yesterday's Intention List:
    - learn game development terminology (in progress) - We learned sprite, scene, texture, and asset.
    - how to place and position (rotate) a static image on the screen (completed) - The thing to note was, it was in radians and I found a cool helper method to convert degrees into radians. Also, the image rotates from its specified point.
    - how to layer static images on the screen (completed) - I couldn't seem to get the zIndex working, but one thing's for sure, the order you draw the images in also determines how they are rendered on the screen.
    - understand image scaling (in progress) - I'm not sure I have this fully covered, but for the most part plug a number into the scaling field and the image grows / shrinks accordingly.
    - can we reuse images? (completed) - yes, I loaded one image and plotted the bugger all over the screen.
    - understand how framerate is handled in XNA (in progress) - I hacked together some code to display the framerate each second. A framerate of 60 appears to be the standard. Interesting to note, the GameTime object does provide you with some cool timing capabilities, such as... is the game running slow? Need to investigate this down the road.
    - how to display text, basic shapes, and colors on the screen (in progress) - I got text rendered on the screen, and I understand containing rectangles. However, I didn't display "shapes" & "colors".
    - how to interact with an image (collision of user input?) (todo)
    - how to animate an image and understand basic animation techniques (in progress) - I was able to create a strip animation of numbers ranging from 1 - 4; each block was 40 x 40 pixels for a total strip size of 160 x 40. Using the portion (source Rectangle) parameter, I limited the display to each section at varying intervals. It was interesting to note my first implementation animated at rocket speed. I then tried to create a smoother animation by limiting the redraw capacity, which seemed to work. I guess a little more research will have to be put into this for animating characters / scenes. A sketch of this technique appears below.
    - how to detect colliding images or screen edges (todo) - but the rectangle object can detect collisions I believe.
    - how to manipulate the image, let's say colors, stretching (in progress) - I haven't figured out how to modify a specific color to be another color, but the tinting parameter definitely could be used. As for stretching, use the rectangle object as the positioning and the image will stretch to fit!
    - how to focus on a segment of an image... like only displaying a frame on a film reel (completed) - as per basic animation techniques
    - what's the best way to manage images (compression, storage, location, prevent artwork theft, etc.) (todo)
    Tomorrow's Intention: Tomorrow I am going to take a stab at rendering a game menu and from there I'm going to investigate how I can improve upon the code and techniques. Intention List:
    - Render a menu, fancy or not
    - Show the mouse cursor
    - Hook up click event
    - A basic animation of some sort
    - Investigate image / menu techniques
    D.
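    To make the source-rectangle animation idea above concrete, here is a minimal, hedged sketch of how a 160 x 40 strip of four 40 x 40 frames might be advanced on a timer in XNA. The content name, frame timing, and field names are assumptions for illustration, not code from the original post; the members are assumed to live inside the standard Game1 class from the XNA template, where spriteBatch and Content are already set up:

      Texture2D numberStrip;            // a 160x40 strip holding four 40x40 frames
      int currentFrame = 0;             // index of the 40x40 frame currently shown
      double frameTimer = 0;            // seconds accumulated since the last frame change
      const double FrameLength = 0.25;  // assumed: advance one frame every quarter second

      protected override void LoadContent()
      {
          spriteBatch = new SpriteBatch(GraphicsDevice);
          numberStrip = Content.Load<Texture2D>("numbers"); // hypothetical asset name
      }

      protected override void Update(GameTime gameTime)
      {
          frameTimer += gameTime.ElapsedGameTime.TotalSeconds;
          if (frameTimer >= FrameLength)
          {
              frameTimer -= FrameLength;
              currentFrame = (currentFrame + 1) % 4; // loop over the four frames
          }
          base.Update(gameTime);
      }

      protected override void Draw(GameTime gameTime)
      {
          GraphicsDevice.Clear(Color.CornflowerBlue);

          // The source rectangle is the "portion of the image" parameter described above:
          // it selects one 40x40 cell out of the 160x40 strip.
          Rectangle source = new Rectangle(currentFrame * 40, 0, 40, 40);

          spriteBatch.Begin();
          spriteBatch.Draw(numberStrip, new Vector2(100, 100), source, Color.White);
          spriteBatch.End();

          base.Draw(gameTime);
      }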

    Read the article

  • How should I log time spent on multiple tasks?

    - by xenoterracide
    In Joel's blog on evidence-based scheduling he suggests making estimates based on the smallest unit of work and logging extra work back to the original task. The problem I'm now experiencing is that I'll have to create object A, with a subtask for method A which creates object B, and test all of the above. I create tasks for each of these, which seems to be resulting in ok-ish estimates (I need practice), but when I go to log work I find that I worked on 4 tasks at once, because I tweak method A, find a bug in the test, and refactor object B all while coding it. How should I go about logging this work? Should I say I spent, for example, 2 hours on each of the 4 tasks I worked on in the 8-hour day?

    Read the article

  • Flash game size and distribution between asset types

    - by EyeSeeEm
    I am currently developing a Flash game and am planning on a large amount of graphics and audio assets, which has led to some questions about Flash game size. By looking at some of the popular games on NG, there seem to be many in the 5-10 MB and a few in the 10-25 MB range. I was wondering if anyone knew of other notable large-scale games and what their sizes were, and if there have been any cases of games being disadvantaged because of their size. What is a common distribution of game size between code, graphics and audio? I know this can vary strongly, but I would like to hear your thoughts on an average case for a high-quality game.

    Read the article

  • How does key-based caching work?

    - by Dominic Santos
    I recently read an article on the 37Signals blog and I'm left wondering how it is that they get the cache key. It's all well and good having a cache key that includes the object's timestamp (this means that when you update the object the cache will be invalidated); but how do you then use the cache key in a template without causing a DB hit for the very object that you are trying to fetch from the cache. Specifically, how does this affect One to Many relations where you are rendering a Post's Comments for example. Example in Django: {% for comment in post.comments.all %} {% cache comment.pk comment.modified %} <p>{{ post.body }}</p> {% endcache %} {% endfor %} Is caching in Rails different to just requests to memcached for example (I know that they convert your cache key to something different). Do they also cache the cache key?
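    To make the key-based idea concrete in a framework-neutral way, here is a hedged C# sketch of the concept (not how Rails or Django actually implement it; the Post type and key format are hypothetical): the timestamp is baked into the key, so updating the record produces a new key and the stale entry is simply never requested again. Note that building the key still requires the post's id and modified time to be in memory, which is why the parent rows are typically fetched once and only the expensive per-fragment rendering and child queries are skipped.

      using System;
      using System.Runtime.Caching;

      class Post // hypothetical entity; only what the example needs
      {
          public int Id;
          public DateTime Modified;
          public string Body;
      }

      static class FragmentCache
      {
          static readonly MemoryCache cache = MemoryCache.Default;

          // The key embeds the last-modified timestamp, so an update "invalidates"
          // the old entry simply because that key is never asked for again.
          static string KeyFor(Post post)
          {
              return string.Format("post/{0}-{1}", post.Id, post.Modified.Ticks);
          }

          public static string RenderedBody(Post post)
          {
              string key = KeyFor(post);
              string cached = cache.Get(key) as string;
              if (cached == null)
              {
                  cached = "<p>" + post.Body + "</p>"; // the expensive rendering / child queries would go here
                  cache.Set(key, cached, DateTimeOffset.Now.AddHours(1));
              }
              return cached;
          }
      }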

    Read the article

  • Interview with Ronald Bradford about MySQL Connect

    - by Keith Larson
    Ronald Bradford, an Oracle ACE Director, has been busy with database consulting and book writing (Effective MySQL) while traveling and speaking around the world in support of MySQL. I was able to take some of his time to get an interview on his thoughts about the MySQL Connect conference. Keith Larson: What were your thoughts when you heard that Oracle was going to provide the community the MySQL Conference? Ronald Bradford: Oracle has already been providing various different local community events including OTN Tech Days and MySQL community days. These are great for local regions both in the US and abroad. In previous years there has been an increase of content at Oracle Open World, however that benefits the Oracle community far more than the MySQL community. It is good to see that Oracle is realizing the benefit of providing a large-scale dedicated event for the MySQL community that includes speakers from the MySQL development teams, invested companies in the ecosystem and other community evangelists. I fully expect a successful event and look forward to hopefully seeing MySQL Connect at the upcoming Brazil and Japan OOW conferences and perhaps an event on the East Coast. Keith Larson: Since you are part of the content committee, what did you think of the submissions that were received during the call for papers? Ronald Bradford: There was a large number of quality submissions relative to the number of available presentation sessions. As in previous years as a committee member for the annual MySQL conference, there is always a large variety of common cornerstone MySQL features as well as new products and upcoming companies sharing their MySQL experiences. All of the usual major players in the ecosystem will be presenting at MySQL Connect, including Facebook, Twitter, Yahoo, Continuent, Percona, Tokutek, Sphinx and Amazon to name a few. This ensures the event will have a large number of quality speakers, and a difficult time choosing what to attend. Keith Larson: What sessions do you look forward to attending? Ronald Bradford: As with most quality conferences you can only be in one place at one time, so with multiple tracks per session it is always difficult to decide. The continued work and success with MySQL Cluster is covered in a number of sessions I am sure will be popular. The features that interest me the most are around the optimizer, where there are several sessions on new features, and the importance of backups; there are three presentations in this area to choose from. Keith Larson: Are you going to cover any of the content in your books at your MySQL Connect sessions? Ronald Bradford: I will be giving two presentations at MySQL Connect. The first will include the techniques available for creating better indexes, where I will be touching on some aspects of the first Effective MySQL book on Optimizing SQL Statements. In my second presentation, drawn from experiences of managing 500+ AWS MySQL instances, I will be touching on areas including SQL tuning, backup and recovery, and scale-out with replication. These are the key topics of the initial books in the Effective MySQL series, which focus on performance, scalability and business continuity. The books, however, cover a far greater amount of detail than can be presented in a 1 hour session. Keith Larson: What features of MySQL 5.6 do you look forward to the most? Ronald Bradford: I am very impressed with the optimizer trace feature.
    The ability to see exposed information is invaluable, not just for MySQL 5.6, but also to apply information discerned for optimizing SQL statements in earlier versions of MySQL. Not everybody understands that it is easy to deploy a MySQL 5.6 slave into an existing topology running an older version of MySQL for evaluation of many new features. You can use the new mysqlbinlog streaming feature for duplicating master binary logs on an older version with a MySQL 5.6 slave. The improvements in instrumentation in the Performance Schema are exciting. However, as covered in my upcoming Replication Techniques in Depth title, which will be available for sale at MySQL Connect, there are numerous replication features, some long overdue, which provide significant management benefits: crash-safe slaves, Global Transaction Identifiers (GTIDs) and checksums, just to mention a few. Keith Larson: You have been to numerous conferences, what would you recommend for people at the conference? Ronald Bradford: Make the time to meet and introduce yourself to the speakers that cover the topics that most interest you. The MySQL ecosystem has a very strong community. The relationships you build with presenters, developers and architects in MySQL can be invaluable, however they are created over time. Get to know these people, interact with them over time. This is the opportunity to learn more than just the content from a 1 hour session. Keith Larson: Any additional tips for handling the long hours? Ronald Bradford: Conferences can be hard, especially with all the post-event drinking. This is a two day event and I am sure it will include additional events on Friday and Saturday night, so come well prepared, and leave work behind. Take the time to learn something new. You can always catch up on sleep later. Keith Larson: Thank you so much for taking some time to do this. I look forward to seeing you at the MySQL Connect conference. Please stay tuned here for more updates on MySQL.

    Read the article

  • What are the main programmers web sites in other countries?

    - by David
    Looking at the demographics of sites like this one, there seems to be a very heavy US weighting of users. I know there have been discussions about using English as a programmer, but there are some large countries with large programmer bases that must be using localized sites. In countries like Brazil and China, in particular, what sites do programmers go to for discussions? Do a lot of the threads link up to English-speaking sites like these? In France they obviously prefer their own language; what do the equivalent sites look like? I don't want to turn this into an English-language discussion like that other one; it's more a general trans-cultural programming curiosity.

    Read the article

  • Sub routing in a SPA site

    - by Anders
    I have a SPA site that I'm working on, and I have a requirement that you can have subroutes for a page view model. I'm currently using this 'pattern' for the site:
      MyApp.FooViewModel = MyApp.define({
          meta: {
              query: MyApp.Core.Contracts.Queries.FooQuery,
              title: "Foo"
          },
          init: function (queryResult) { },
          prototype: { }
      });
    In the master view model I have a route table:
      this.navigation(new MyApp.RoutesViewModel({
          Home: { model: MyApp.HomeViewModel, route: String.empty },
          Foo: { model: MyApp.FooViewModel }
      }));
    The meta object defines which query should populate the top-level view model when it's invoked through sammyjs. This is all fine, but it does not support sub-routing. My plan is to change the meta object so that it can (optionally, of course) look like this:
      meta: {
          query: MyApp.Core.Contracts.Queries.FooQuery,
          title: "Foo",
          route: { barId: MyApp.BarViewModel }
      }
    When sammyjs detects a barId in the query string, the BarViewModel will be executed and populated through its own meta object. Is this a good design?

    Read the article

  • What do you do when you realize your job requires you to do something out of your depth?

    - by Billy ONeal
    For a large software project recently, I was really out of my depth. And I did actually know this; and that the only reason I was employed was mostly a lack of other qualified candidates. The job was to build a large application on top of PHP/MySQL, a system I had little experience with. (I did advise the employer of this beforehand -- I've been spoiled by C# ASP.NET/MVC and MSSQL Server) The main reason I applied was location, location, location -- on campus jobs which actually have any programming component are relatively rare. For almost a year and a half I've slogged through this, and I think I can say I know (at least somewhat) what I'm doing now. I've made some mistakes, torn out some hair, and moved on. (I'm still working on this system nowadays, but I no longer feel completely lost) In the future though, I'd like to keep my personal and professional self a little healthier than what occurred in this case. So I'm curious -- what's the best way to handle a situation like this?

    Read the article

  • Design review for application facing memory issues

    - by Mr Moose
    I apologise in advance for the length of this post, but I want to paint an accurate picture of the problems my app is facing and then pose some questions below; I am trying to address some self inflicted design pain that is now leading to my application crashing due to out of memory errors. An abridged description of the problem domain is as follows; The application takes in a “dataset” that consists of numerous text files containing related data An individual text file within the dataset usually contains approx 20 “headers” that contain metadata about the data it contains. It also contains a large tab delimited section containing data that is related to data in one of the other text files contained within the dataset. The number of columns per file is very variable from 2 to 256+ columns. The original application was written to allow users to load a dataset, map certain columns of each of the files which basically indicating key information on the files to show how they are related as well as identify a few expected column names. Once this is done, a validation process takes place to enforce various rules and ensure that all the relationships between the files are valid. Once that is done, the data is imported into a SQL Server database. The database design is an EAV (Entity-Attribute-Value) model used to cater for the variable columns per file. I know EAV has its detractors, but in this case, I feel it was a reasonable choice given the disparate data and variable number of columns submitted in each dataset. The memory problem Given the fact the combined size of all text files was at most about 5 megs, and in an effort to reduce the database transaction time, it was decided to read ALL the data from files into memory and then perform the following; perform all the validation whilst the data was in memory relate it using an object model Start DB transaction and write the key columns row by row, noting the Id of the written row (all tables in the database utilise identity columns), then the Id of the newly written row is applied to all related data Once all related data had been updated with the key information to which it relates, these records are written using SqlBulkCopy. Due to our EAV model, we essentially have; x columns by y rows to write, where x can by 256+ and rows are often into the tens of thousands. Once all the data is written without error (can take several minutes for large datasets), Commit the transaction. The problem now comes from the fact we are now receiving individual files containing over 30 megs of data. In a dataset, we can receive any number of files. We’ve started seen datasets of around 100 megs coming in and I expect it is only going to get bigger from here on in. With files of this size, data can’t even be read into memory without the app falling over, let alone be validated and imported. I anticipate having to modify large chunks of the code to allow validation to occur by parsing files line by line and am not exactly decided on how to handle the import and transactions. Potential improvements I’ve wondered about using GUIDs to relate the data rather than relying on identity fields. This would allow data to be related prior to writing to the database. This would certainly increase the storage required though. Especially in an EAV design. Would you think this is a reasonable thing to try, or do I simply persist with identity fields (natural keys can’t be trusted to be unique across all submitters). 
    Use of staging tables to get data into the database, and only performing the transaction to copy data from the staging area to the actual destination tables. Questions: For systems like this that import large quantities of data, how do you go about keeping transactions small? I've kept them as small as possible in the current design, but they are still active for several minutes and write hundreds of thousands of records in one transaction. Is there a better solution? The tab-delimited data section is read into a DataTable to be viewed in a grid. I don't need the full functionality of a DataTable, so I suspect it is overkill. Is there any way to turn off various features of DataTables to make them more lightweight? Are there any other obvious things you would do in this situation to minimise the memory footprint of the application described above? Thanks for your kind attention.
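    To make the GUID idea above concrete, here is a minimal, hedged sketch of how one of the text files could be streamed line by line while client-generated GUIDs relate rows before any database write, with the batch then pushed via SqlBulkCopy. The file layout, column names, connection string and table name are assumptions for illustration only:

      using System;
      using System.Data;
      using System.Data.SqlClient;
      using System.IO;

      class StreamingImportSketch
      {
          static void Main()
          {
              // Hypothetical staging table: RowId, DatasetId, Attribute, Value
              var staging = new DataTable();
              staging.Columns.Add("RowId", typeof(Guid));
              staging.Columns.Add("DatasetId", typeof(Guid));
              staging.Columns.Add("Attribute", typeof(string));
              staging.Columns.Add("Value", typeof(string));

              Guid datasetId = Guid.NewGuid(); // relates every row of this file, known before any DB work

              // Stream the file instead of loading it all into memory.
              using (var reader = new StreamReader(@"C:\data\example.txt"))
              {
                  string[] headers = (reader.ReadLine() ?? string.Empty).Split('\t');
                  string line;
                  while ((line = reader.ReadLine()) != null)
                  {
                      string[] values = line.Split('\t');
                      Guid rowId = Guid.NewGuid(); // key assigned up front, no identity round-trip needed
                      for (int i = 0; i < headers.Length && i < values.Length; i++)
                      {
                          staging.Rows.Add(rowId, datasetId, headers[i], values[i]);
                      }
                      // In a real import the batch would be flushed periodically to keep memory flat.
                  }
              }

              using (var connection = new SqlConnection("Server=.;Database=Import;Integrated Security=true"))
              {
                  connection.Open();
                  using (var bulkCopy = new SqlBulkCopy(connection))
                  {
                      bulkCopy.DestinationTableName = "StagingEav"; // hypothetical staging table
                      bulkCopy.WriteToServer(staging);
                  }
              }
          }
      }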

    Read the article
