Search Results

Search found 36925 results on 1477 pages for 'large xml document'.


  • BIWA Wednesday TechCast Series - Opposition to Data Warehouse Initiatives

    - by jenny.gelhausen
    BIWA Wednesday TechCast Series - 19th Event! Opposition to Data Warehouse Initiatives Please join us for this webcast on Wednesday, March 24, 12 noon Eastern or check your local area's time Webcast is open to clients, prospects and partners. No matter how good your technology and technical skills, organizational issues can derail a data warehousing or BI project. Therefore BIWA presents a vital topic that crosses product boundaries: organizational resistance to data warehouse initiatives - how to recognize it and what to do about it. Many a DW/BI professional has been surprised by organizational resistance to DW/BI initiatives. Yet real organizational imperatives may be behind this apparently irrational behavior. Based on in-depth interviews with IT professionals, industry consultants, and power users, our speaker Bruce Jenks will present his research findings about what drives organizational resistance to data warehouse initiatives. The talk will cover specific behaviors that can signal organizational resistance to a data warehouse program and what organizations have done to address such resistance. Presenter: Bruce Jenks of Dun and Bradstreet Bruce Jenks has over 20 years of experience in data warehousing and business intelligence, much of it as a consultant to large organizations spanning the US. Bruce's data warehousing clients have included firms such as Sprint, Gallo Wines, Southern California Edison, The Gap, and Safeway. He started his data warehousing career at Metaphor Computers, a pioneering DW/BI firm from which a number of industry luminaries sprang including Ralph Kimball (author of The Data Warehouse Toolkit). Bruce continued his data warehousing career at HP, Stanford University and other firms. Bruce is currently completing his doctorate in business administration at Golden Gate University, and today's material arises from his doctoral research. He is also a principal consultant for Dun and Bradstreet. Audio Dial-In: 866 682 4770 Audio Meeting ID: 1683901 Audio Meeting Passcode: 334451 Web Conference: Please register at https://www1.gotomeeting.com/register/807185273 After you register you will be provided with a link to the TechCast. Invitation to Speakers: All BIWA members and Oracle professionals (experts, end users, managers, DBAs, developers, data analysts, ISVs, partners, etc.) may submit abstracts for 45 minute technical webcasts to our Oracle BIWA (IOUG SIG) Community. Submit your BIWA TechCast abstract today! BIWA is a worldwide forum with over 2000 members who are business intelligence, warehousing and analytics professionals. BIWA presents information, experiences and best practices in successfully deploying Oracle Database-centric BI, Data Warehousing, and Analytics products, features and Options--the Oracle Database "BIWA" platform. Attendance Information & Replays at the BIWA website: oraclebiwa.org

    Read the article

  • Big Data – Buzz Words: What is NoSQL – Day 5 of 21

    - by Pinal Dave
    In yesterday’s blog post we explored the basic architecture of Big Data. In this article we will take a quick look at one of the four most important buzz words that go around Big Data – NoSQL. What is NoSQL? NoSQL stands for Not Relational SQL or Not Only SQL. Lots of people think that NoSQL means there is No SQL, which is not true – they both sound the same but the meaning is totally different. NoSQL does use SQL but it uses more than SQL to achieve its goal. As per Wikipedia’s NoSQL Database Definition – “A NoSQL database provides a mechanism for storage and retrieval of data that uses looser consistency models than traditional relational databases.” Why use NoSQL? A traditional relational database usually deals with predictable structured data. But as the world has moved forward with unstructured data, we often see the limitations of the traditional relational database in dealing with it. For example, nowadays we have data in the form of SMS messages, wave files, photos and video. It is a bit difficult to manage this data using a traditional relational database. I often see people using a BLOB field to store such data. A BLOB can store the data, but when we have to retrieve or process it, the BLOB is extremely slow at handling unstructured data. A NoSQL database is the type of database that can handle the unstructured, unorganized and unpredictable data that our business needs. Along with the support for unstructured data, the other advantages of a NoSQL database are high performance and high availability. Eventual Consistency Additionally, note that a NoSQL database may not provide 100% ACID (Atomicity, Consistency, Isolation, Durability) compliance.  Though a NoSQL database does not support ACID, it provides eventual consistency. That means that over a period of time all updates can be expected to propagate through the system and the data will be consistent. Taxonomy Taxonomy is the practice of classifying things or concepts and the principles behind that classification. The NoSQL taxonomy supports column stores, document stores, key-value stores, and graph databases. We will discuss the taxonomy in detail in later blog posts. Here are a few examples from each NoSQL category. Column: Hbase, Cassandra, Accumulo Document: MongoDB, Couchbase, Raven Key-value: Dynamo, Riak, Azure, Redis, Cache, GT.m Graph: Neo4J, Allegro, Virtuoso, Bigdata As of now there are over 150 NoSQL databases and you can read about them all at this single link. Tomorrow In tomorrow’s blog post we will discuss the buzz word – Hadoop. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
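
    As a rough illustration of the "document" idea described above (this is not code from the post, and the Newtonsoft.Json package and the field names are assumptions for the sketch):

        // A minimal sketch (not from the post) of why document stores suit unstructured data:
        // two records of the same logical kind can carry different fields, and each whole
        // record is stored and retrieved as a single self-describing JSON document.
        using System;
        using Newtonsoft.Json;   // assumption: Newtonsoft.Json package is available

        class DocumentStoreSketch
        {
            static void Main()
            {
                var sms = new { Type = "SMS", From = "+1555010", Text = "Running late" };
                var photo = new { Type = "Photo", FileName = "beach.jpg", Width = 1024, Height = 768 };

                // No shared schema is required; each object becomes one JSON document.
                Console.WriteLine(JsonConvert.SerializeObject(sms));
                Console.WriteLine(JsonConvert.SerializeObject(photo));
            }
        }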

    Read the article

  • An XEvent a Day (6 of 31) – Targets Week – asynchronous_file_target

    - by Jonathan Kehayias
    Yesterday’s post, Targets Week - ring_buffer , looked at the ring_buffer Target in Extended Events and how it outputs the raw Event data in an XML document.  Today I’m going to go over the details of the other Target in Extended Events that captures raw Event data, the asynchronous_file_target. What is the asynchronous_file_target? The asynchronous_file_target holds the raw format Event data in a proprietary binary file format that persists beyond server restarts and can be provided to another...(read more)

    Read the article

  • How to perform feature upgrade in SharePoint2010 part2

    - by ybbest
    In my last post, I showed you how to perform a feature upgrade and upgraded my feature from 0.0.0.0 to 1.0.0.1. In this post, I’d like to continue on this topic and upgrade the feature again. For the first version of my solution, I deploy a document library with a custom document set content type and then upgrade the solution so I index the application number column. Now, I will create a new version of the solution so that it will remove the threshold of the list. You can download the solution here. Once you extract your solution, the first version is in the original folder. In order to deploy the original solution, you need to run the sitecreation.ps1 in the script folder. The version 1.1 will be in the Upgrade folder and version 1.2 will be in the Upgrade2 folder. You need to make the following changes to the existing solution. 1. Modify the ApplicationLibrary.Template.xml as highlighted below: 2. Add the following code to the feature event receiver. public override void FeatureUpgrading(SPFeatureReceiverProperties properties, string upgradeActionName, System.Collections.Generic.IDictionary<string, string> parameters) { base.FeatureUpgrading(properties, upgradeActionName, parameters); SPWeb web = GetFeatureWeb(properties); SPList applicationLibrary = web.Lists.TryGetList(ApplicationLibraryNamesConstant.ApplicationLibraryName); switch (upgradeActionName) { case "IndexApplicationNumber": if (applicationLibrary != null) { SPField queueField = applicationLibrary.Fields["ApplicationNumber"]; queueField.Indexed = true; queueField.Update(); } break; case "RemoveListThreshold": applicationLibrary.EnableThrottling = false; applicationLibrary.Update(); break; } } 3. Package your solution and run the feature upgrade PowerShell script. $wspFolder ="v1.2" $scriptPath=Split-Path $myInvocation.MyCommand.Path $siteUrl = "http://ybbest" $featureToCheckGuid="1b9d84cd-227d-45f1-92d4-a43008aa8fe7" $requiredFeatureVersion="1.0.0.1" $siteUrlOfFeatureToBeChecked="http://ybbest" AppendLog "Starting Solution UpgradeSolutionAndFeatures.ps1" Magenta & "$scriptPath\UpgradeSolutionAndFeatures.ps1" $siteUrl $wspFolder $featureToCheckGuid $requiredFeatureVersion $siteUrlOfFeatureToBeChecked Write-Host AppendLog "All features updated" "Green" References: Feature upgrade.

    Read the article

  • LinqToXML removing empty xmlns attributes & adding attributes like xmlns:xsi, xsi:schemaLocation

    - by Rohit Gupta
    Suppose you need to generate the following XML: 1: <GenevaLoader xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 2: xsi:schemaLocation="http://www.advent.com/SchemaRevLevel401/Geneva masterschema.xsd" 3: xmlns="http://www.advent.com/SchemaRevLevel401/Geneva"> 4: <PriceRecords> 5: <PriceRecord> 6: </PriceRecord> 7: </PriceRecords> 8: </GenevaLoader> Normally you would write the following C# code to accomplish this: 1: const string ns = "http://www.advent.com/SchemaRevLevel401/Geneva"; 2: XNamespace xnsp = ns; 3: XNamespace xsi = XNamespace.Get("http://www.w3.org/2001/XMLSchema-instance"); 4:  5: XElement root = new XElement( xnsp + "GenevaLoader", 6: new XAttribute(XNamespace.Xmlns + "xsi", xsi.NamespaceName), 7: new XAttribute( xsi + "schemaLocation", "http://www.advent.com/SchemaRevLevel401/Geneva masterschema.xsd")); 8:  9: XElement priceRecords = new XElement("PriceRecords"); 10: root.Add(priceRecords); 11:  12: for(int i = 0; i < 3; i++) 13: { 14: XElement price = new XElement("PriceRecord"); 15: priceRecords.Add(price); 16: } 17:  18: root.Save("geneva.xml"); The problem with this approach is that it adds an additional empty xmlns attribute on the “PriceRecords” element, like so: 1: <GenevaLoader xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.advent.com/SchemaRevLevel401/Geneva masterschema.xsd" xmlns="http://www.advent.com/SchemaRevLevel401/Geneva"> 2: <PriceRecords xmlns=""> 3: <PriceRecord> 4: </PriceRecord> 5: </PriceRecords> 6: </GenevaLoader> The solution is to add the xmlns namespace in code to each child and grandchild element of the root element, like so: 1: XElement priceRecords = new XElement( xnsp + "PriceRecords"); 2: root.Add(priceRecords); 3:  4: for(int i = 0; i < 3; i++) 5: { 6: XElement price = new XElement(xnsp + "PriceRecord"); 7: priceRecords.Add(price); 8: }

    Read the article

  • NHibernate Tools

    - by Ricardo Peres
    Felice Pollano is the author of two great new tools for working with NHibernate: NH Workbench: an IDE for writing HQL queries against a model db2hbm: generation of .hbm.xml files from a database (currently only SQL Server, more to come) I suggest you give them a try and give Felice your feedback!

    Read the article

  • Show raw Text Code from a URL with CodePaste.NET

    - by Rick Strahl
    I introduced CodePaste.NET more than 2 years ago. In case you haven't checked it out it's a code-sharing site where you can post some code, assign a title and syntax scheme to it and then share it with others via a short URL. The idea is super simple and it's not the first time this has been done, but it's focused on Microsoft languages and caters to that crowd. Show your own code from the Web There's another feature that I tweeted about recently that's been there for some time, but is not used very much: CodePaste.NET has the ability to show raw text based code from a URL on the Web in syntax colored format for any of the formats provided. I use this all the time with code links to my Subversion repository which only displays code as plain text. Using CodePaste.NET allows me to show syntax colored versions of the same code. For example I can go from this URL: http://www.west-wind.com:8080/svn/WestwindWebToolkit/trunk/Westwind.Utilities/SupportClasses/PropertyBag.cs To a nicely colored source code view at this Url: http://codepaste.net/ShowUrl?url=http%3A%2F%2Fwww.west-wind.com%3A8080%2Fsvn%2FWestwindWebToolkit%2Ftrunk%2FWestwind.Utilities%2FSupportClasses%2FPropertyBag.cs&Language=C%23 which looks like this:   Use the Form or access URLs directly To get there navigate to the Web Code icon on the CodePaste.NET site, paste your original URL and select a language to display: The form creates a link shown above which has two query string parameters: url - The URL for the raw text on the Web language - The code language used for syntax highlighting Note that parameters must be URL encoded to work, especially the # in C#, because otherwise the # will be interpreted by the browser as a hash tag to jump to in the target URL. The URL must be Web accessible so that CodePaste can download it and then apply the syntax coloring. It doesn't work with localhost urls for example. The code must be returned in plain text - HTML based text doesn't work. Hope some of you find this a useful feature. Enjoy… © Rick Strahl, West Wind Technologies, 2005-2011. Posted in .NET
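
    As a small sketch of building such a link in code (this is not from the post; only the two query string parameters it describes are assumed), Uri.EscapeDataString takes care of the encoding issue mentioned above:

        // A minimal sketch (not from the post) of composing a CodePaste.NET ShowUrl link.
        // Uri.EscapeDataString encodes characters like '#' in "C#" so the browser does not
        // treat them as a fragment marker.
        using System;

        class ShowUrlLinkBuilder
        {
            static string BuildShowUrl(string rawCodeUrl, string language)
            {
                return "http://codepaste.net/ShowUrl" +
                       "?url=" + Uri.EscapeDataString(rawCodeUrl) +
                       "&Language=" + Uri.EscapeDataString(language);
            }

            static void Main()
            {
                Console.WriteLine(BuildShowUrl(
                    "http://www.west-wind.com:8080/svn/WestwindWebToolkit/trunk/Westwind.Utilities/SupportClasses/PropertyBag.cs",
                    "C#"));
            }
        }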

    Read the article

  • AS11 Oracle B2B Sync Support - Series 1

    - by sinkarbabu.kirubanithi
    Synchronous message support has been enabled in Oracle B2B 11G. This helps customers send a business message and receive the corresponding business response synchronously. We would like to keep this blog entry as a three-part series: the first one carries Oracle B2B configuration details, followed by 'how it can be consumed and utilized in an enterprise' using a composites-backed model. The last one will talk about more sophisticated seeded support built on the Oracle B2B platform (note: the last one is still in the description phase and its ETA hasn't been finalized yet). Details: In an effort to enable synchronous processing in Oracle B2B, we provided a platform using the existing 'callout' mechanism. In this case, we expect the 'callout' attached to the agreement to deliver the incoming business message (inbound) to the back-end application, get the corresponding business response from the back-end, and deliver it to Oracle B2B as its output. The output of the 'callout' is processed as an outbound message and attached as the response to the inbound message. Requirements to enable Sync Support: Outbound side: Outbound Agreement - to send business message request Inbound Agreement - to receive business message response Inbound side: Inbound Agreement - to receive business message request Outbound Agreement - to send business message response Agreement Level Callout - to deliver the inbound request to the back-end and get the corresponding business response This feature is supported only for HTTP-based transport to exchange messages with Trading Partners. One may initiate the outbound message (enqueue) using any of the available Transports in Oracle B2B. Configuration: Outbound side: Please add "syncresponse=true" as an "Additional Transport Header" parameter in the remote Trading Partner's HTTP delivery channel configuration. This enables Oracle B2B to process the HTTP response as an inbound message and deliver it to the back-end application. All other configuration related to Agreement and Document setup remains the same. Inbound side: There is no change in Agreement and Document setup. To enable "Sync Support", you need to build a 'callout' that takes responsibility for delivering the inbound message to the back-end, getting the corresponding business response from the back-end, and attaching it as its output. Oracle B2B treats the output of the 'callout' as the outbound message and delivers it to the Trading Partner as a synchronous HTTP response. Requests that need to be processed synchronously should be received by the "syncreceiver" (http://:/b2b/syncreceiver) endpoint in Oracle B2B. Exception Handling: Existing Oracle B2B exception handling applies to this use case as well. Here's the sample callout, SampleSyncCallout.java We will follow up with the second part, which talks about the 'SOA composites'-backed model to design the "Sync Support" use case from back-end to Trading Partners; stay tuned.

    Read the article

  • An XEvent a Day (7 of 31) – Targets Week – bucketizers

    - by Jonathan Kehayias
    Yesterday’s post, Targets Week - asynchronous_file_target , looked at the asynchronous_file_target Target in Extended Events and how it outputs the raw Event data in an XML document. Continuing with Targets week today, we’ll look at the bucketizer targets in Extended Events which can be used to group Events based on the Event data that is being returned. What is the bucketizer? The bucketizer performs grouping of Events as they are processed by the target into buckets based on the Event data and...(read more)

    Read the article

  • Don’t Delay - Apply the New 12.1.3 Procurement Rollup Patch NOW!

    - by user793553
    A new critical rollup patch (RUP) has just been released by Development for our 12.1.3 Procurement customers.  This new Patch 14254641:R12.PRC_PF.B contains important fixes for Purchasing, Internet Supplier Portal (iSupplier), Sourcing  and iProcurement (Web).  Go to My Oracle Support and enter Document ID 1468883.1 in the Knowledge Base search. This note contains information on who should apply the patch, how to apply the patch, critical fixes and important new features.

    Read the article

  • Oracle EBS R12.1.1 system09.dbf file corruption Bug

    - by longchun.zhu
    After Installing or Upgrading Perform the following steps after installing or upgrading to Release 12.1.1 and before allowing users to access the system. Manually fix database dbf file If you installed 12.1.1 with a startCD of 12.1.1.9 or earlier (see Oracle Applications Release Notes, Release 12.1.1, My Oracle Support Document 798258.1), you must run the following SQL commands to fix a particular corrupted dbf file: $ sqlplus /nolog SQL> connect / as sysdba SQL> alter database datafile '[full path of system09.dbf]' resize 1000M; SQL> alter database datafile '[full path of system09.dbf]' resize 1500M;

    Read the article

  • Video on Architecture and Code Quality using Visual Studio 2012 – interview with Marcel de Vries and Terje Sandstrom by Adam Cogan

    - by terje
    Find the video HERE. Adam Cogan did a great Web TV interview with Marcel de Vries and myself on the topics of architecture and code quality.  It was real fun participating in this session.  Although we know each other from the MVP ALM community, Marcel, Adam and I haven't worked together before. It was very interesting to see how we agreed on so many terms, and how alike we were thinking.  The basics of ensuring you have a good architecture and how you could document it is one thing.  Also, the same agreement on the importance of having a high quality code base, and how we used the Visual Studio 2012 tools, and some others (NDepend for example) to measure and ensure that the code quality was where it should be.  As the tools, methods and thinking popped up during the interview it was a lot of "Hey !  I do that too!".  The tools are not only for "after the fact" work, but we use them during the coding.  That way the tools become an integrated part of our coding work, and help us to find issues we may have overlooked.  The video has a bunch of call outs, pinpointing important things to remember. These are also listed on the corresponding web page. I haven't seen that touch before, but really liked this way of doing it – it makes it much easier to spot the highlights.  Titus Maclaren and Raj Dhatt from SSW have done a terrific job producing this video.  And thanks to Lei Xu for doing the camera and recording job.  Thanks guys ! Also, if you are at TechEd Amsterdam 2012, go and listen to Adam Cogan in his session on "A modern architecture review: Using the new code review tools" Friday 29th, 10.15-11.30 and Marcel de Vries' session on "Intellitrace, what is it and how can I use it to my benefit" Wednesday 27th, 5-6.15 The highlights point out some important practices.  I'll elaborate on a few of them here: Add instructions on how to compile the solution.  You do this by adding a text file with instructions to the solution, and keep it under source control.  These instructions should contain what is needed on top of a standard install of Visual Studio.  I do a lot of code reviews, and more often than not, I am not even able to compile the program, because they have used some tool or library that needs to be installed.  The same applies to any new developer who enters into the team, so do this to increase your productivity when the team changes, or a team member switches computers. Don't forget to document what you have to configure on the computer, the IIS being a common one. The more automated you can make this, the better.  Use NuGet to pull down libraries. When the text document gets more than say, half a page, with a bunch of different things to do, convert it into a powershell script instead.  The metrics warning levels.  These are very conservatively set by Microsoft.  You rarely see anything but green, and besides, you should have color scales for each of the metrics.  I have a blog post describing a more appropriate set of levels, based on both research work and industry "best practices".  The essential limits are: Cyclomatic complexity and coupling:  Higher numbers are worse On method levels: Green:  From 0 to 10 Yellow:  From 10 to 20  (some say 15).   Acceptable, but have a look to see if there is something unneeded here. Red: From 20 to 40:   Action required, get these down. Bleeding Red: Above 40   This is the real red alert.  Immediate action!  (My invention, as people have asked what do I do when I have cyclomatic complexity of 150.  
The only answer I could think of was: RUN! ) Maintainability index:  Lower numbers are worse, scale from 0 to 100. On method levels: Green:  60 to 100 Yellow:  40 – 60.    You will always have methods here too, accept the higher ones, take a look at those who are down to the lower limit.  Check up against the other metrics. Red:  20 – 40:  Action required, fix these. Bleeding red:  Below 20.  Immediate action required. When doing metrics analysis, you should leave the generated code out.  You do this by adding attributes; unfortunately Microsoft has "forgotten" to add these to all their stuff, so you might have to add them to some of the code.  In most cases it can be done so that it is not overwritten by a new round of code generation.  Take a look at my blog post here for details on how to do that. Class level metrics might also be useful, at least for coupling and maintenance.  But it is much more difficult to set any fixed limits on those.  Any metric aggregations at a higher level tend to be pretty useless, as the number of methods varies quite a bit, and there is little science on what number of methods can be regarded as good or bad.  NDepend has a recommendation, but they say it may vary too.  And in these days of data binding, the number might be pretty high, as properties count as methods.  However, if you take the worst case situations, classes with more than 20 methods are suspicious, and coupling and cyclomatic complexity go red above 20, so any classes with more than 20x20 = 400 for these measures should be checked over. In the video we mention the SOLID principles, coined by "Uncle Bob" (Robert Martin). One of them, the Dependency Inversion principle, we discuss in the video.  It is important to note that this principle is NOT about whether you should use a Dependency Inversion Container or not, it is about how you design the interfaces and interactions between your classes.  The Dependency Inversion Container is just one technique which is based on this principle, but whose main purpose is to isolate things you would like to change at runtime, for example if you implement a plug-in architecture.  Overuse of a Dependency Inversion Container is, however, NOT a good thing.  It should be used for a purpose and not as a general DI solution.  The general DI thinking, however, is useful far beyond the DIC.   You should always "program to an abstraction", and not to the concrete implementation.  We also talk a bit about the GRASP patterns, a term coined by Craig Larman in his book Applying UML and Patterns. GRASP patterns stand for General Responsibility Assignment Software Patterns and describe fundamental principles of object design and responsibility assignment.  What I find great with these patterns is that they are another way to focus on the responsibility of a class.  One of the things I have most often found broken in software designs is that classes lack responsibility, and as a result there are a lot of classes mucking around in the internals of other classes.  We also discuss the term "Code Smells".  This term was invented by Kent Beck and Martin Fowler when they worked on Fowler's "Refactoring" book. A code smell is a set of "bad" coding practices, which are the drivers behind a corresponding set of refactorings.  Here is a good list of the smells, and their corresponding refactor patterns. See also this.
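
    As a rough illustration of marking code as generated (this is not from the video or the post, the class and tool names are placeholders, and whether your metrics tooling skips members marked this way is worth verifying for your Visual Studio version):

        // A minimal sketch, assuming the analysis/metrics tooling honors GeneratedCodeAttribute.
        // Putting the attribute in a separate partial-class file means a re-run of the code
        // generator will not overwrite it; the attribute applies to the whole partial type.
        using System.CodeDom.Compiler;

        [GeneratedCode("MyCodeGenerator", "1.0")]   // tool name and version are placeholders
        public partial class OrdersDataSet
        {
            // generated members live in the other partial file of this type
        }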

    Read the article

  • Dynamically creating a Generic Type at Runtime

    - by Rick Strahl
    I learned something new today. Not uncommon, but it's a core .NET runtime feature I simply did not know although I know I've run into this issue a few times and worked around it in other ways. Today there was no working around it and a few folks on Twitter pointed me in the right direction. The question I ran into is: How do I create a type instance of a generic type when I have dynamically acquired the type at runtime? Yup it's not something that you do every day, but when you're writing code that parses objects dynamically at runtime it comes up from time to time. In my case it's in the bowels of a custom JSON parser. After some thought triggered by a comment today I realized it would be fairly easy to implement two-way Dictionary parsing for most concrete dictionary types. I could use a custom Dictionary serialization format that serializes as an array of key/value objects. Basically I can use a custom type (that matches the JSON signature) to hold my parsed dictionary data and then add it to the actual dictionary when parsing is complete. Generic Types at Runtime One issue that came up in the process was how to figure out what type the Dictionary<K,V> generic parameters take. Reflection actually makes it fairly easy to figure out generic types at runtime with code like this: if (arrayType.GetInterface("IDictionary") != null) { if (arrayType.IsGenericType) { var keyType = arrayType.GetGenericArguments()[0]; var valueType = arrayType.GetGenericArguments()[1]; … } } The GetArrayType method gets passed a type instance that is the array or array-like object that is rendered in JSON as an array (which includes IList, IDictionary, IDataReader and a few others). In my case the type passed would be something like Dictionary<string, CustomerEntity>. So I know what the parent container class type is. Based on the container type it's then possible to use GetGenericArguments() to retrieve all the generic types in sequential order of definition (i.e. string, CustomerEntity). That's the easy part. Creating a Generic Type and Providing Generic Parameters at RunTime The next problem is how do I get a concrete type instance for the generic type? I know the type name and I have a type instance, but it's generic, so how do I get a type reference to keyvalue<K,V> that is specific to the keyType and valueType above? Here are a couple of things that come to mind but that don't work (and yes I tried that unsuccessfully first): Type elementType = typeof(keyvalue<keyType, valueType>); Type elementType = typeof(keyvalue<typeof(keyType), typeof(valueType)>); The problem is that this explicit syntax expects a type literal not some dynamic runtime value, so both of the above won't even compile. It turns out the way to create a generic type at runtime is using a fancy bit of syntax that until today I was completely unaware of: Type elementType = typeof(keyvalue<,>).MakeGenericType(keyType, valueType); The key is the typeof(keyvalue<,>) bit, which looks weird at best. It works however and produces a non-generic type reference. You can see the difference between the full generic type and the non-typed (?) generic type in the debugger: The nonGenericType doesn't show any type specialization, while the elementType type shows the string, CustomerEntity (truncated above) in the type name. Once the full type reference exists (elementType) it's then easy to create an instance. 
In my case the parser parses through the JSON and when it completes parsing the value/object it creates a new keyvalue<T,V> instance. Now that I know the element type that's pretty trivial with: // Objects start out null until we find the opening tag resultObject = Activator.CreateInstance(elementType); Here the result object is picked up by the JSON array parser which creates an instance of the child object (keyvalue<K,V>) and then parses and assigns values from the JSON document using the type's key/value property signature. Internally the parser then takes each individually parsed item and adds it to a List<keyvalue<K,V>> of items. Parsing through a Generic type when you only have Runtime Type Information When parsing of the JSON array is done, the List needs to be turned into a de facto Dictionary<K,V>. This should be easy since I know that I'm dealing with an IDictionary, and I know the generic types for the key and value. The problem is again though that this needs to happen at runtime which would mean using several Convert.ChangeType() calls in the code to dynamically cast at runtime. Yuk. In the end I decided the easier and probably only slightly slower way to do this is to use the dynamic type to collect the items and assign them to avoid all the dynamic casting madness: else if (IsIDictionary) { IDictionary dict = Activator.CreateInstance(arrayType) as IDictionary; foreach (dynamic item in items) { dict.Add(item.key, item.value); } return dict; } This code creates an instance of the generic dictionary type first, then loops through all of my custom keyvalue<K,V> items and assigns them to the actual dictionary. By using dynamic here I can sidestep all the explicit type conversions that would be required in the three highlighted areas (not to mention that this nested method doesn't have access to the dictionary item generic types here). Static <- -> Dynamic Dynamic casting in a static language like C# is a bitch to say the least. This is one of the few times when I've cursed static typing and the arcane syntax that's required to coax types into the right format. It works but it's pretty nasty code. If it weren't for dynamic that last bit of code would have been pretty ugly as well, with a bunch of Convert.ChangeType() calls littering the code. Fortunately this type of type convulsion is rather rare and reserved for system level code. It's not every day that you create a string to object parser after all :-) © Rick Strahl, West Wind Technologies, 2005-2011. Posted in .NET, CSharp
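
    As a small, self-contained sketch of the technique described above (not the post's parser code; the key and value types are simple stand-ins), closing Dictionary<,> over types discovered at runtime and instantiating it:

        // A minimal sketch of the MakeGenericType technique, not the post's actual parser code.
        // Key and value types are discovered at runtime, the open generic Dictionary<,> is
        // closed over them, and an instance is created via Activator.
        using System;
        using System.Collections;

        class GenericTypeAtRuntime
        {
            static void Main()
            {
                Type keyType = typeof(string);   // stand-ins for types found via reflection
                Type valueType = typeof(int);

                Type closedDictType = typeof(System.Collections.Generic.Dictionary<,>)
                                          .MakeGenericType(keyType, valueType);

                // The non-generic IDictionary interface lets us add entries without
                // knowing K and V at compile time.
                IDictionary dict = (IDictionary)Activator.CreateInstance(closedDictType);
                dict.Add("answer", 42);

                Console.WriteLine(closedDictType.FullName);   // closed Dictionary`2 type name
                Console.WriteLine(dict["answer"]);            // 42
            }
        }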

    Read the article

  • Change Tracking

    - by Ricardo Peres
    You may recall my last post on Change Data Capture. This time I am going to talk about another option for tracking changes to tables on SQL Server: Change Tracking. The main differences between the two are: Change Tracking works with SQL Server 2008 Express Change Tracking does not require SQL Server Agent to be running Change Tracking does not keep the old values in case of an UPDATE or DELETE Change Data Capture uses an asynchronous process, so there is no overhead on each operation Change Data Capture requires more storage and processing Here's some code that illustrates its usage: -- for demonstrative purposes, table Post of database Blog only contains two columns, PostId and Title -- enable change tracking for database Blog, for 2 days ALTER DATABASE Blog SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON); -- enable change tracking for table Post ALTER TABLE Post ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON); -- see current records on table Post SELECT * FROM Post SELECT * FROM sys.sysobjects WHERE name = 'Post' SELECT * FROM sys.sysdatabases WHERE name = 'Blog' -- confirm that table Post and database Blog are being change tracked SELECT * FROM sys.change_tracking_tables SELECT * FROM sys.change_tracking_databases -- see current version for table Post SELECT p.PostId, p.Title, c.SYS_CHANGE_VERSION, c.SYS_CHANGE_CONTEXT FROM Post AS p CROSS APPLY CHANGETABLE(VERSION Post, (PostId), (p.PostId)) AS c; -- update post UPDATE Post SET Title = 'First Post Title Changed' WHERE Title = 'First Post Title'; -- see current version for table Post SELECT p.PostId, p.Title, c.SYS_CHANGE_VERSION, c.SYS_CHANGE_CONTEXT FROM Post AS p CROSS APPLY CHANGETABLE(VERSION Post, (PostId), (p.PostId)) AS c; -- see changes since version 0 (initial) SELECT p.Title, c.PostId, SYS_CHANGE_VERSION, SYS_CHANGE_OPERATION, SYS_CHANGE_COLUMNS, SYS_CHANGE_CONTEXT FROM CHANGETABLE(CHANGES Post, 0) AS c LEFT OUTER JOIN Post AS p ON p.PostId = c.PostId; -- is column Title of table Post changed since version 0? SELECT CHANGE_TRACKING_IS_COLUMN_IN_MASK(COLUMNPROPERTY(OBJECT_ID('Post'), 'Title', 'ColumnId'), (SELECT SYS_CHANGE_COLUMNS FROM CHANGETABLE(CHANGES Post, 0) AS c)) -- get current version SELECT CHANGE_TRACKING_CURRENT_VERSION() -- disable change tracking for table Post ALTER TABLE Post DISABLE CHANGE_TRACKING; -- disable change tracking for database Blog ALTER DATABASE Blog SET CHANGE_TRACKING = OFF; You can read about the differences between the two options here. Choose the one that best suits your needs!
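
    As a rough sketch of how an application could poll for changes from C# (this is not from the post; the connection string and the PostId column type are assumptions, and the Blog database / Post table are the ones configured above):

        // A minimal sketch, assuming the Blog database and Post table set up above.
        // The application remembers the last sync version and asks CHANGETABLE for
        // everything that changed since then.
        using System;
        using System.Data.SqlClient;

        class ChangeTrackingPoller
        {
            static void Main()
            {
                long lastSyncVersion = 0;   // in a real app, persist this between runs

                using (var conn = new SqlConnection("Server=.;Database=Blog;Integrated Security=true"))
                {
                    conn.Open();

                    var cmd = new SqlCommand(
                        @"SELECT c.PostId, c.SYS_CHANGE_OPERATION, c.SYS_CHANGE_VERSION
                          FROM CHANGETABLE(CHANGES Post, @lastVersion) AS c", conn);
                    cmd.Parameters.AddWithValue("@lastVersion", lastSyncVersion);

                    using (SqlDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine("PostId {0}: {1} (version {2})",
                                reader.GetInt32(0),    // assumes PostId is an int
                                reader.GetString(1),   // I / U / D
                                reader.GetInt64(2));
                        }
                    }
                }
            }
        }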

    Read the article

  • How to make your File Adapter pick only one file at a time from a location

    - by anirudh.pucha(at)oracle.com
    In SOA 11g, you use the File Adapter to read files from a given location. With this read operation it picks up all the files at once. You may want to configure the File Adapter so that it picks up one file at a time from the given location, at the given polling interval. Solution: Set the "SingleThreadModel" and "MaxRaiseSize" properties for your file adapter. Edit the adapter's .jca file and add the following properties: <property name="SingleThreadModel" value="true"/> <property name="MaxRaiseSize" value="1"/> You can also set these properties through JDeveloper, by opening composite.xml, selecting the adapter, and then changing the properties through the properties panel.

    Read the article

  • Fixing a SkyDrive Sync Disaster

    - by Rick Strahl
    For a few months I've been using SkyDrive to handle some basic synching tasks for a number of folders of mine. Specifically I've been dumping a few of my development folders into sky drive so I have a live running backup. It had been working just fine until about a week ago when something went awry. Badly! The idea is that the SkyDrive should sync files, but somewhere in its sync relationship it appears that SkyDrive got confused and assumed it needed to sync back older files to my local machine from the SkyDrive server. So rather than syncing my newer files to the server SkyDrive was pushing older files back to me. Because SkyDrive is so slow actually updating data it's not unusual for SkyDrive to be far behind in syncing and apparently some files were out of date by several months. Of course this is insidious because I didn't notice it for quite some time. I'd been happily working away on my files when a few days ago I noted a bunch of files with -RasXps (my machine name) popping up in various folders. At first I thought my Git repository was giving me a fit, but eventually realized that SkyDrive was actually pushing old files into my monitored folders. To be fair SkyDrive did make backups of the existing files, but by the time I caught it there were literally a few thousand files scattered on my machine that were now updated with old files from online. Here's what some of this looks like: If you look at the directory list you see a bunch of files with a -RasXps postfix appended to them. Those are the files that SkyDrive replaced and backed up on my machine. As you can see the backed up files are actually newer than the ones it pulled from the online SkyDrive. Unless I modified the files after they were updated they all were older than the existing local files. Not exactly how I imagined my synching would work. At first I started cleaning up this mess manually. In most cases the obvious solution was to simply delete the original file and replace with the -RasXps file, but not in all files. Some scrutiny was required and besides being a pain in the ass to rename files, quite frequently I had to dig out Beyond Compare to compare a few files where it wasn't quite clear what's wrong. I quickly realized that doing this by hand would be too hard for the large number of files that got hosed. Hacking together a small .NET Utility So, I figured the easiest way to tackle this is to write a small utility app that shows me all the mangled files that have backups, allows me to compare them and then quickly select and update them, removing the -RasXps file after choosing one of the two files. What I ended up with was a quick and dirty WinForms app that allows me to pick a root folder, and then shows all the -MachineName files: I start by picking a base folder and a template to search for - typically the -MachineName. Clicking Go brings up a list of all files in that folder and its subdirectories.  The list also displays the dates for the saved (-MachineName) file and the current file on disk, along with highlighting for the newer of the two. I can right click on any file and get a context menu pop up to open the folder in Explorer, or open Beyond Compare and view the two files to compare differences which I found very helpful for a number of files where I had modified the files after SkyDrive had updated to an old one. Typically these would be the green files (of which there were thankfully few). 
To 'fix' files I can select any number of files in the list, then use one of the three buttons on the right to apply an operation. I can use the Saved file - that is, the backup file that SkyDrive created with the -MachineName extension (-RasXps above). Or I can use the current file, which is the file with the right name on disk right now, and delete the -MachineName file. Or on some occasions I can just opt to delete both of them. For some files like binaries it's often easier to just delete them and rebuild than to choose. For the most part the process involves accepting the pink files, and checking the few green files to see if any modifications were made since the file was updated incorrectly by SkyDrive. For me luckily those are few in number. Anyways, I thought I'd share this utility in case anybody else runs into this issue. I've included the VS2012 solution and all the source code so you can see how it works and you can tweak it as needed. The .NET 4.5 binaries are also included if you can't compile. Be warned though!  This rough code is provided as is and makes no guarantees or claims about file safety. All three of the action buttons on the form will delete data. It's a very rough utility and there are no safeguards that ask nicely before deleting files. I highly recommend you make a backup before you have at it. This tool is very narrow in focus, but it might also work with other sync issues from other vendors. I seem to remember that I had similar issues with SugarSync at some point and it too created the -MachineName style files on sync conflicts. Hope this helps somebody out so you can avoid wasting the better part of a full work day on this… Resources: Download the Source Code and Binaries for SkyDrive Rescue. © Rick Strahl, West Wind Technologies, 2005-2013. Posted in Windows, .NET
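
    As a rough sketch of the file scan such a tool performs (this is not the author's actual utility code; the folder path and machine name are placeholders, and only the "-MachineName" naming convention described above is assumed):

        // A minimal sketch (not the utility's code) of finding SkyDrive conflict backups and
        // comparing their timestamps against the files that were overwritten.
        using System;
        using System.IO;

        class ConflictFileScanner
        {
            static void Main()
            {
                string root = @"C:\projects";      // folder to scan (placeholder)
                string machineName = "RasXps";     // suffix SkyDrive appended (placeholder)

                foreach (string saved in Directory.EnumerateFiles(root, "*-" + machineName + "*",
                                                                  SearchOption.AllDirectories))
                {
                    // Reconstruct the name of the file SkyDrive overwrote: strip "-MachineName".
                    string current = saved.Replace("-" + machineName, "");
                    if (!File.Exists(current))
                        continue;

                    DateTime savedTime = File.GetLastWriteTime(saved);
                    DateTime currentTime = File.GetLastWriteTime(current);

                    Console.WriteLine("{0}\n  saved:   {1}\n  current: {2}  ({3} is newer)",
                        current, savedTime, currentTime,
                        savedTime > currentTime ? "saved" : "current");
                }
            }
        }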

    Read the article

  • A small, intra-app Object to String Serializer

    - by Rick Strahl
    On a few occasions I've needed a very compact serializer for small and simple, flat object serialization, typically for storage in Cookies or a FormsAuthentication ticket in ASP.NET. XML and JSON serialization are too verbose for those scenarios so a simple property serializer that strings together the values was needed. Originally I did this by hand, but here is a class that automates the process.
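
    As a rough sketch of the idea (this is not the actual class from the post; the separator character and class names are placeholders, and it assumes flat objects whose property values don't contain the separator):

        // A minimal sketch, not the post's class: reflect over an object's public properties
        // and join their values with a delimiter, producing a string compact enough for a
        // cookie or a FormsAuthentication ticket.
        using System;
        using System.Linq;

        class PropertyStringSerializer
        {
            const string Separator = "|";   // assumes values never contain the separator

            public static string Serialize(object instance)
            {
                var values = instance.GetType()
                                     .GetProperties()
                                     .Select(p => Convert.ToString(p.GetValue(instance, null)));
                return string.Join(Separator, values);
            }
        }

        class UserInfo
        {
            public string Name { get; set; }
            public int Level { get; set; }
        }

        class Program
        {
            static void Main()
            {
                var user = new UserInfo { Name = "rick", Level = 3 };
                Console.WriteLine(PropertyStringSerializer.Serialize(user));   // rick|3
            }
        }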

    Read the article

  • apt-get fails to upgrade, install, remove etc

    - by Kieran Peat
    I upgraded from 11.10 to 12.04, had no issues that I noticed. Recently tried to install something via software center, but it was throwing errors. Changed to trying to sudo apt-get install instead but again no luck. I've genuinely tried as much as I know to fix this, but I can't so I figured I'd ask here. I've done sudo apt-get update successfully but sudo apt-get upgrade failed with... You might want to run ‘apt-get -f install’ to correct these. The following packages have unmet dependencies. ia32-libs-multiarch:i386 : Depends: libqtcore4:i386 but it is not installed libqt4-dbus:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-declarative:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-designer:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-network:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-opengl:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-qt3support:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-script:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-scripttools:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-sql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-sql-mysql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-svg:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-test:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-xml:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqt4-xmlpatterns:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqtgui4:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not installed libqtwebkit4:i386 : Depends: libqtcore4:i386 (>= 4:4.8.0~) but it is not installed libssl1.0.0 : Breaks: libssl1.0.0:i386 (!= 1.0.1-4ubuntu5.2) but 1.0.0e-2ubuntu4.6 is installed libssl1.0.0:i386 : Breaks: libssl1.0.0 (!= 1.0.0e-2ubuntu4.6) but 1.0.1-4ubuntu5.2 is installed E: Unmet dependencies. Try using -f. Using sudo apt-get -f install... The following packages were automatically installed and are no longer required: libgtkmm-2.4-1c2a libgtkhtml3.14-19 libglade2-0 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: libqtcore4:i386 libssl1.0.0:i386 The following NEW packages will be installed libqtcore4:i386 The following packages will be upgraded: libssl1.0.0:i386 1 upgraded, 1 newly installed, 0 to remove and 33 not upgraded. 20 not fully installed or removed. Need to get 0 B/3,063 kB of archives. After this operation, 9,044 kB of additional disk space will be used. Do you want to continue [Y/n]? y E: Internal Error, No file name for libssl1.0.0 I've tried sudo apt-get remove libssl1.0.0 and sudo apt-get remove libssl1.0.0:i386 Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies. 
ia32-libs-multiarch:i386 : Depends: libqtcore4:i386 but it is not going to be installed Depends: libssl1.0.0:i386 but it is not going to be installed libcurl3:i386 : Depends: libssl1.0.0:i386 (>= 1.0.0) but it is not going to be installed libqt4-dbus:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-declarative:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-designer:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-network:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-opengl:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-qt3support:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-script:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-scripttools:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-sql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-sql-mysql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-svg:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-test:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-xml:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-xmlpatterns:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqtgui4:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqtwebkit4:i386 : Depends: libqtcore4:i386 (>= 4:4.8.0~) but it is not going to be installed libsasl2-modules:i386 : Depends: libssl1.0.0:i386 (>= 1.0.0) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). I've also tried sudo apt-get dist-upgrade, sudo apt-get autoremove etc without any luck. I also tried to download the .deb and use dpkg -i, but that failed and did not fully understand the method to be honest. Edit This is in response to the comments ref: sudo apt-get install -f doesn't fix broken packages. And now? 
sudo dpkg --configure -a --force-all dpkg: error processing libssl1.0.0 (--configure): libssl1.0.0:amd64 1.0.1-4ubuntu5.2 cannot be configured because libssl1.0.0:i386 is in a different version (1.0.0e-2ubuntu4.6) dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2) dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386') dpkg: error processing libssl1.0.0:i386 (--configure): libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different 
version (1.0.1-4ubuntu5.2)
dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386')
dpkg: error processing libssl1.0.0:i386 (--configure):
 libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2)
dpkg: also configuring `libssl1.0.0:i386' (required by `ia32-libs-multiarch:i386')
dpkg: error processing libssl1.0.0:i386 (--configure):
 libssl1.0.0:i386 1.0.0e-2ubuntu4.6 cannot be configured because libssl1.0.0:amd64 is in a different version (1.0.1-4ubuntu5.2)
...
dpkg: too many errors, stopping
Errors were encountered while processing:
 libssl1.0.0
 libssl1.0.0:i386
 ...
 libssl1.0.0:i386
Processing was halted because there were too many errors.

Ref: Package manager doesn't work anymore

moving /var/lib/dpkg/info/libssl..

kieran@kieran-EX58-UD3R:~$ sudo mv /var/lib/dpkg/info/libssl1.0.0:i386.postinst /var/lib/dpkg/info/libssl1.0.0:i386.postinst.bad
kieran@kieran-EX58-UD3R:~$ sudo mv /var/lib/dpkg/info/libssl1.0.0:amd64.postinst /var/lib/dpkg/info/libssl1.0.0:amd64.postinst.bad
kieran@kieran-EX58-UD3R:~$ sudo apt-get --reinstall install libssl
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package libssl is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or is only available from another source
E: Package 'libssl' has no installation candidate
kieran@kieran-EX58-UD3R:~$ sudo apt-get --reinstall install libssl1.0.0
Reading package lists... Done
Building dependency tree
Reading state information... Done
You might want to run 'apt-get -f install' to correct these:
The following packages have unmet dependencies.
 ia32-libs-multiarch:i386 : Depends: libqtcore4:i386 but it is not going to be installed
 libqt4-dbus:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed
 libqt4-declarative:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed
 libqt4-designer:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed
 libqt4-network:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed
 libqt4-opengl:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed
 libqt4-qt3support:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed
 libqt4-script:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed
 libqt4-scripttools:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed
 libqt4-sql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed
 libqt4-sql-mysql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed
 libqt4-svg:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed
 libqt4-test:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed
 libqt4-xml:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed
 libqt4-xmlpatterns:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed
 libqtgui4:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed
 libqtwebkit4:i386 : Depends: libqtcore4:i386 (>= 4:4.8.0~) but it is not going to be installed
 libssl1.0.0 : Breaks: libssl1.0.0:i386 (!= 1.0.1-4ubuntu5.2) but 1.0.0e-2ubuntu4.6 is to be installed
 libssl1.0.0:i386 : Breaks: libssl1.0.0 (!= 1.0.0e-2ubuntu4.6) but 1.0.1-4ubuntu5.2 is to be installed
E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
kieran@kieran-EX58-UD3R:~$ sudo apt-get -f install
Reading package lists... Done
Building dependency tree
Reading state information... Done
Correcting dependencies... Done
The following packages were automatically installed and are no longer required:
 libgtkmm-2.4-1c2a libgtkhtml3.14-19 libglade2-0
Use 'apt-get autoremove' to remove them.
The following extra packages will be installed:
 libqtcore4:i386 libssl1.0.0:i386
The following NEW packages will be installed
 libqtcore4:i386
The following packages will be upgraded:
 libssl1.0.0:i386
1 upgraded, 1 newly installed, 0 to remove and 58 not upgraded.
20 not fully installed or removed.
Need to get 3,063 kB of archives.
After this operation, 9,044 kB of additional disk space will be used.
Do you want to continue [Y/n]? y
Get:1 http://gb.archive.ubuntu.com/ubuntu/ precise-updates/main libssl1.0.0 i386 1.0.1-4ubuntu5.2 [1,002 kB]
Get:2 http://gb.archive.ubuntu.com/ubuntu/ precise-updates/main libqtcore4 i386 4:4.8.1-0ubuntu4.1 [2,061 kB]
Fetched 3,063 kB in 4s (731 kB/s)
E: Internal Error, No file name for libssl1.0.0

ref: libssl Dependencies

removing libssl1.0.0:i386

kieran@kieran-EX58-UD3R:~$ sudo apt-get remove libssl1.0.0:i386
Reading package lists... Done
Building dependency tree
Reading state information... Done
You might want to run 'apt-get -f install' to correct these:
The following packages have unmet dependencies.
 ia32-libs-multiarch:i386 : Depends: libqtcore4:i386 but it is not going to be installed
                            Depends: libssl1.0.0:i386 but it is not going to be installed
 libcurl3:i386 : Depends: libssl1.0.0:i386 (>= 1.0.0) but it is not going to be installed
 libqt4-dbus:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed
 ... (the same libqt4-* dependency errors as above) ...
 libqtgui4:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed
 libqtwebkit4:i386 : Depends: libqtcore4:i386 (>= 4:4.8.0~) but it is not going to be installed
 libsasl2-modules:i386 : Depends: libssl1.0.0:i386 (>= 1.0.0) but it is not going to be installed
E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    Read the article

  • Don’t miss the Receiving Webcast on November 20th

    - by user793553
This one-hour session is recommended for technical and functional users who want to learn about Receiving transactions and their debugging techniques. TOPICS WILL INCLUDE: Using generic diagnostic scripts. How to read debug logs in Receiving. Data flow for various document types (PO, RMA, ISO, IOT) to help debug issues. The Receiving Transaction Processor. Generic datafixes. See Doc ID 1456150.1 to sign up now!

    Read the article

  • Dynamic Code for type casting Generic Types 'generically' in C#

    - by Rick Strahl
C# is a strongly typed language, and while that's a fundamental feature of the language there are more and more situations where dynamic types make a lot of sense. I've written quite a bit about how I use dynamic for creating new type extensions: Dynamic Types and DynamicObject References in C#; Creating a dynamic, extensible C# Expando Object; and Creating a dynamic DataReader for dynamic Property Access. Today I want to point out an example of a much simpler usage for dynamic that I use occasionally to get around potential static typing issues in C# code, especially those concerning generic types.

TypeCasting Generics
Generic types have been around since .NET 2.0. I've run into a number of situations in the past - especially with generic types that don't implement specific interfaces that can be cast to - where I've been unable to properly cast an object when it's passed to a method or assigned to a property. Granted, often this can be a sign of bad design, but in at least some situations the code that needs to be integrated is not under my control, so I have to make do with what's available, or the parent object is too complex or intermingled to be easily refactored to a new usage scenario. Here's an example that I ran into in my own RazorHosting library - so I have really no excuse, but I also don't see another clean way around it in this case.

A Generic Example
Imagine I've implemented a generic type like this:

public class RazorEngine<TBaseTemplateType>
    where TBaseTemplateType : RazorTemplateBase, new()

You can now happily instantiate new generic versions of this type with custom template bases, or even a non-generic version which is implemented like this:

public class RazorEngine : RazorEngine<RazorTemplateBase>
{
    public RazorEngine() : base() { }
}

To instantiate one:

var engine = new RazorEngine<MyCustomRazorTemplate>();

Now imagine that the template class receives a reference to the engine when it's instantiated. This code is fired as part of the Engine pipeline when it gets ready to execute the template. It instantiates the template and assigns itself to the template:

var template = new TBaseTemplateType() { Engine = this };

The problem here is that possibly many variations of RazorEngine<T> can be passed. I can have RazorTemplateBase, RazorFolderHostTemplateBase, CustomRazorTemplateBase etc. as generic parameters, and the Engine property has to reflect that somehow. So, how would I cast that? My first inclination was to use an interface on the engine class and then cast to the interface. Generally that works, but unfortunately here the engine class is generic and has a few members that require the template type in the member signatures. So while I certainly can implement an interface:

public interface IRazorEngine<TBaseTemplateType>

it doesn't really help for passing this generically templated object to the template class - I still can't cast it if multiple differently typed versions of the generic type could be passed. I have the exact same issue in that I can't specify a 'generic' generic parameter, since there's no underlying base type that's common. In light of this I decided on using object and the following syntax for the property (and the same would be true for a method parameter):

public class RazorTemplateBase : MarshalByRefObject, IDisposable
{
    public object Engine { get; set; }
}

Now because the Engine property is a non-typed object, when I need to do something with this value, I still have no way to cast it explicitly.
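To make the problem concrete, here is a small hypothetical sketch (not from the original post) using the types defined above; the cast compiles but fails at runtime, because two different closings of the same generic class are unrelated types:

// Hypothetical illustration only, assuming MyCustomRazorTemplate derives from RazorTemplateBase.
object engine = new RazorEngine<MyCustomRazorTemplate>();

// Throws InvalidCastException at runtime: a RazorEngine<MyCustomRazorTemplate> is not a
// RazorEngine<RazorTemplateBase>, even though the template type itself derives from
// RazorTemplateBase - generic classes are not covariant in C#.
var typed = (RazorEngine<RazorTemplateBase>)engine;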
What I really would need is:

public RazorEngine<> Engine { get; set; }

but that's not possible.

Dynamic to the Rescue
Luckily, with the dynamic type this sort of thing can be mitigated fairly easily. For example, here's a method that uses the Engine property and uses the well-known class interface by simply casting the plain object reference to dynamic and then firing away on the properties and methods of the base template class that are common to all templates:

/// <summary>
/// Allows rendering a dynamic template from a string template,
/// passing in a model. This is like rendering a partial,
/// but providing the input as a string.
/// </summary>
public virtual string RenderTemplate(string template, object model)
{
    if (template == null)
        return string.Empty;

    // if there's no template markup
    if (!template.Contains("@"))
        return template;

    // use dynamic to get around generic type casting
    dynamic engine = Engine;
    string result = engine.RenderTemplate(template, model);
    if (result == null)
        throw new ApplicationException("RenderTemplate failed: " + engine.ErrorMessage);

    return result;
}

Prior to .NET 4.0 I would have had to use Reflection for this sort of thing, which would have been a heck of a lot more verbose (a rough sketch of the reflection equivalent appears at the end of this post), but dynamic makes this so much easier and cleaner, and in this case at least the overhead is negligible since it's a single dynamic operation on an otherwise very complex operation call.

Dynamic as a Bailout
Sometimes this sort of thing reeks of a design flaw, and I agree that in hindsight this could have been designed differently. But as is often the case, this particular scenario wasn't planned for originally, and removing the generic signatures from the base type would break a ton of other code in the framework. Given the existing fairly complex engine design, refactoring an interface to remove generic types just to make this particular code work would have been overkill. Instead dynamic provides a nice, simple and relatively clean solution. Now if there were many other places where this occurs I would probably consider reworking the code to make this cleaner, but given this isolated instance and relatively low-profile operation, use of dynamic seems a valid choice for me. This solution really works anywhere you might end up with an inheritance structure that doesn't have a common base or interface that is sufficient. In the example above I know what I'm getting, but there's no common base type that I can cast to. All that said, it's a good idea to think about use of dynamic before you rush in. In many situations there are alternatives that can still work with static typing. Dynamic definitely has some overhead compared to direct static access of objects, so if possible we should stick to static typing. In the example above the application already uses dynamics extensively for dynamic page templating and passing models around, so introducing dynamics here has very little additional overhead. The operation itself also fires off a fairly resource-heavy operation, where the overhead of a couple of dynamic member accesses is not a performance issue. So, what's your experience with dynamic as a bailout mechanism?
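For comparison, here is that reflection sketch - a hypothetical illustration of the pre-.NET 4.0 approach, not code from the original library, assuming the engine exposes the same RenderTemplate method and ErrorMessage property used above:

// Reflection-based equivalent of the dynamic call shown earlier (illustrative sketch only).
object engine = Engine;
var renderMethod = engine.GetType().GetMethod("RenderTemplate", new[] { typeof(string), typeof(object) });
string result = (string)renderMethod.Invoke(engine, new object[] { template, model });

if (result == null)
{
    var errorProp = engine.GetType().GetProperty("ErrorMessage");
    throw new ApplicationException("RenderTemplate failed: " + errorProp.GetValue(engine, null));
}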
© Rick Strahl, West Wind Technologies, 2005-2012. Posted in CSharp.

    Read the article

  • Is there an SO API which can fetch all Questions & Answers for a particular Keyword

    - by user4203
I am looking for an API that can fetch all the Questions & Answers from SO and other Stack Exchange sites for a particular "keyword". Later, using XML-RPC, these questions would be posted as blog posts and their answers as the posts' answers. I'm just wondering whether this is possible with an API. One of my friends suggested that we should scrape the pages, but I don't want screen scraping; instead I am looking for API requests that can handle this.
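As a rough illustration (not part of the original question), a keyword search against the public Stack Exchange API might look something like the C# sketch below. The endpoint, version number, and parameter names reflect the 2.x API and are assumptions here, not something the question cites; the API returns gzip-compressed JSON.

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

class StackExchangeSearchSketch
{
    static async Task Main()
    {
        // The Stack Exchange API gzip-compresses every response, so enable automatic decompression.
        var handler = new HttpClientHandler { AutomaticDecompression = DecompressionMethods.GZip };
        using (var client = new HttpClient(handler))
        {
            string keyword = Uri.EscapeDataString("xml");   // hypothetical keyword
            string url = "https://api.stackexchange.com/2.3/search" +
                         "?order=desc&sort=activity&site=stackoverflow&intitle=" + keyword;

            // Returns a JSON document whose "items" array holds the matching questions.
            string json = await client.GetStringAsync(url);
            Console.WriteLine(json);
        }
    }
}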

    Read the article

  • The Benefits of Smart Grid Business Software

    - by Sylvie MacKenzie, PMP
Smart Grid Background

What Are Smart Grids?
Smart Grids use computer hardware and software, sensors, controls, and telecommunications equipment and services to: Link customers to information that helps them manage consumption and use electricity wisely. Enable customers to respond to utility notices in ways that help minimize the duration of overloads, bottlenecks, and outages. Provide utilities with information that helps them improve performance and control costs.

What Is Driving Smart Grid Development?

Environmental Impact
Smart Grid development is picking up speed because of the widespread interest in reducing the negative impact that energy use has on the environment. Smart Grids use technology to drive efficiencies in transmission, distribution, and consumption. As a result, utilities can serve customers’ power needs with fewer generating plants, fewer transmission and distribution assets, and lower overall generation. With the possible exception of wind farm sprawl, landscape preservation is one obvious benefit. And because most generation today results in greenhouse gas emissions, Smart Grids reduce air pollution and the potential for global climate change. Smart Grids also more easily accommodate the technical difficulties of integrating intermittent renewable resources like wind and solar into the grid, providing further greenhouse gas reductions.

Costs
The ability to defer the cost of plant and grid expansion is a major benefit to both utilities and customers. Utilities do not need to use as many internal resources for traditional infrastructure project planning and management. Large T&D infrastructure expansion costs are not passed on to customers. Smart Grids will not eliminate capital expansion, of course. Transmission corridors to connect renewable generation with customers will require major near-term expenditures. Additionally, in the future, electricity to satisfy the needs of population growth and additional applications will exceed the capacity reductions available through the Smart Grid. At that point, expansion will resume—but with greater overall T&D efficiency based on demand response, load control, and many other Smart Grid technologies and business processes. Energy efficiency is a second area of Smart Grid cost saving of particular relevance to customers. The timely and detailed information Smart Grids provide encourages customers to limit waste, adopt energy-efficient building codes and standards, and invest in energy-efficient appliances. Efficiency may or may not lower customer bills because customer efficiency savings may be offset by higher costs in generation fuels or carbon taxes. It is clear, however, that bills will be lower with efficiency than without it.

Utility Operations
Smart Grids can serve as the central focus of utility initiatives to improve business processes. Many utilities have long “wish lists” of projects and applications they would like to fund in order to improve customer service or ease staff’s burden of repetitious work, but they have difficulty cost-justifying the changes, especially in the short term. Adding Smart Grid benefits to the cost/benefit analysis frequently tips the scales in favor of the change and can also significantly reduce payback periods. Mobile workforce applications and asset management applications work together to deploy assets and then to maintain, repair, and replace them. Many additional benefits result—for instance, increased productivity and fuel savings from better routing.
Similarly, customer portals that provide customers with near-real-time information can also encourage online payments, thus lowering billing costs. Utilities can and should include these cost and service improvements in the list of Smart Grid benefits. What Is Smart Grid Business Software? Smart Grid business software gathers data from a Smart Grid and uses it improve a utility’s business processes. Smart Grid business software also helps utilities provide relevant information to customers who can then use it to reduce their own consumption and improve their environmental profiles. Smart Grid Business Software Minimizes the Impact of Peak Demand Utilities must size their assets to accommodate their highest peak demand. The higher the peak rises above base demand: The more assets a utility must build that are used only for brief periods—an inefficient use of capital. The higher the utility’s risk profile rises given the uncertainties surrounding the time needed for permitting, building, and recouping costs. The higher the costs for utilities to purchase supply, because generators can charge more for contracts and spot supply during high-demand periods. Smart Grids enable a variety of programs that reduce peak demand, including: Time-of-use pricing and critical peak pricing—programs that charge customers more when they consume electricity during peak periods. Pilot projects indicate that these programs are successful in flattening peaks, thus ensuring better use of existing T&D and generation assets. Direct load control, which lets utilities reduce or eliminate electricity flow to customer equipment (such as air conditioners). Contracts govern the terms and conditions of these turn-offs. Indirect load control, which signals customers to reduce the use of on-premises equipment for contractually agreed-on time periods. Smart Grid business software enables utilities to impose penalties on customers who do not comply with their contracts. Smart Grids also help utilities manage peaks with existing assets by enabling: Real-time asset monitoring and control. In this application, advanced sensors safely enable dynamic capacity load limits, ensuring that all grid assets can be used to their maximum capacity during peak demand periods. Real-time asset monitoring and control applications also detect the location of excessive losses and pinpoint need for mitigation and asset replacements. As a result, utilities reduce outage risk and guard against excess capacity or “over-build”. Better peak demand analysis. As a result: Distribution planners can better size equipment (e.g. transformers) to avoid over-building. Operations engineers can identify and resolve bottlenecks and other inefficiencies that may cause or exacerbate peaks. As above, the result is a reduction in the tendency to over-build. Supply managers can more closely match procurement with delivery. As a result, they can fine-tune supply portfolios, reducing the tendency to over-contract for peak supply and reducing the need to resort to spot market purchases during high peaks. Smart Grids can help lower the cost of remaining peaks by: Standardizing interconnections for new distributed resources (such as electricity storage devices). Placing the interconnections where needed to support anticipated grid congestion. Smart Grid Business Software Lowers the Cost of Field Services By processing Smart Grid data through their business software, utilities can reduce such field costs as: Vegetation management. 
Smart Grids can pinpoint momentary interruptions and tree-caused outages. Spatial mash-up tools leverage GIS models of tree growth for targeted vegetation management. This reduces the cost of unnecessary tree trimming. Service vehicle fuel. Many utility service calls are “false alarms.” Checking meter status before dispatching crews prevents many unnecessary “truck rolls.” Similarly, crews use far less fuel when Smart Grid sensors can pinpoint a problem and mobile workforce applications can then route them directly to it. Smart Grid Business Software Ensures Regulatory Compliance Smart Grids can ensure compliance with private contracts and with regional, national, or international requirements by: Monitoring fulfillment of contract terms. Utilities can use one-hour interval meters to ensure that interruptible (“non-core”) customers actually reduce or eliminate deliveries as required. They can use the information to levy fines against contract violators. Monitoring regulations imposed on customers, such as maximum use during specific time periods. Using accurate time-stamped event history derived from intelligent devices distributed throughout the smart grid to monitor and report reliability statistics and risk compliance. Automating business processes and activities that ensure compliance with security and reliability measures (e.g. NERC-CIP 2-9). Grid Business Software Strengthens Utilities’ Connection to Customers While Reducing Customer Service Costs During outages, Smart Grid business software can: Identify outages more quickly. Software uses sensors to pinpoint outages and nested outage locations. They also permit utilities to ensure outage resolution at every meter location. Size outages more accurately, permitting utilities to dispatch crews that have the skills needed, in appropriate numbers. Provide updates on outage location and expected duration. This information helps call centers inform customers about the timing of service restoration. Smart Grids also facilitates display of outage maps for customer and public-service use. Smart Grids can significantly reduce the cost to: Connect and disconnect customers. Meters capable of remote disconnect can virtually eliminate the costs of field crews and vehicles previously required to change service from the old to the new residents of a metered property or disconnect customers for nonpayment. Resolve reports of voltage fluctuation. Smart Grids gather and report voltage and power quality data from meters and grid sensors, enabling utilities to pinpoint reported problems or resolve them before customers complain. Detect and resolve non-technical losses (e.g. theft). Smart Grids can identify illegal attempts to reconnect meters or to use electricity in supposedly vacant premises. They can also detect theft by comparing flows through delivery assets with billed consumption. Smart Grids also facilitate outreach to customers. By monitoring and analyzing consumption over time, utilities can: Identify customers with unusually high usage and contact them before they receive a bill. They can also suggest conservation techniques that might help to limit consumption. This can head off “high bill” complaints to the contact center. Note that such “high usage” or “additional charges apply because you are out of range” notices—frequently via text messaging—are already common among mobile phone providers. Help customers identify appropriate bill payment alternatives (budget billing, prepayment, etc.). 
Help customers find and reduce causes of over-consumption. There’s no waiting for bills in the mail before they even understand there is a problem. Utilities benefit not just through improved customer relations but also through limiting the size of bills from customers who might struggle to pay them. Where permitted, Smart Grids can open the doors to such new utility service offerings as: Monitoring properties. Landlords reduce costs of vacant properties when utilities notify them of unexpected energy or water consumption. Utilities can perform similar services for owners of vacation properties or the adult children of aging parents. Monitoring equipment. Power-use patterns can reveal a need for equipment maintenance. Smart Grids permit utilities to alert owners or managers to a need for maintenance or replacement. Facilitating home and small-business networks. Smart Grids can provide a gateway to equipment networks that automate control or let owners access equipment remotely. They also facilitate net metering, offering some utilities a path toward involvement in small-scale solar or wind generation. Prepayment plans that do not need special meters. Smart Grid Business Software Helps Customers Control Energy Costs There is no end to the ways Smart Grids help both small and large customers control energy costs. For instance: Multi-premises customers appreciate having all meters read on the same day so that they can more easily compare consumption at various sites. Customers in competitive regions can match their consumption profile (detailed via Smart Grid data) with specific offerings from competitive suppliers. Customers seeing inexplicable consumption patterns and power quality problems may investigate further. The result can be discovery of electrical problems that can be resolved through rewiring or maintenance—before more serious fires or accidents happen. Smart Grid Business Software Facilitates Use of Renewables Generation from wind and solar resources is a popular alternative to fossil fuel generation, which emits greenhouse gases. Wind and solar generation may also increase energy security in regions that currently import fossil fuel for use in generation. Utilities face many technical issues as they attempt to integrate intermittent resource generation into traditional grids, which traditionally handle only fully dispatchable generation. Smart Grid business software helps solves many of these issues by: Detecting sudden drops in production from renewables-generated electricity (wind and solar) and automatically triggering electricity storage and smart appliance response to compensate as needed. Supporting industry-standard distributed generation interconnection processes to reduce interconnection costs and avoid adding renewable supplies to locations already subject to grid congestion. Facilitating modeling and monitoring of locally generated supply from renewables and thus helping to maximize their use. Increasing the efficiency of “net metering” (through which utilities can use electricity generated by customers) by: Providing data for analysis. Integrating the production and consumption aspects of customer accounts. During non-peak periods, such techniques enable utilities to increase the percent of renewable generation in their supply mix. During peak periods, Smart Grid business software controls circuit reconfiguration to maximize available capacity. Conclusion Utility missions are changing. Yesterday, they focused on delivery of reasonably priced energy and water. 
Tomorrow, their missions will expand to encompass sustainable use and environmental improvement. Smart Grids are key to helping utilities achieve this expanded mission. But they come at a relatively high price. Utilities will need to invest heavily in new hardware, software, business process development, and staff training. Customer investments in home area networks and smart appliances will be large. Learning to change the energy and water consumption habits of a lifetime could ultimately prove an even more formidable task. Smart Grid business software can ease the cost and difficulties inherent in a needed transition to a more flexible, reliable, responsive electricity grid. Justifying its implementation, however, requires a full understanding of the benefits it brings—benefits that can ultimately help customers, utilities, communities, and the world address global issues like energy security and climate change while minimizing costs and maximizing customer convenience. This white paper is available for download here. For further information about Oracle's Primavera Solutions for Utilities, please read our Utilities e-book.

    Read the article

  • VS 2010 Debugger Improvements (BreakPoints, DataTips, Import/Export)

    - by ScottGu
This is the twenty-first in a series of blog posts I’m doing on the VS 2010 and .NET 4 release.  Today’s blog post covers a few of the nice usability improvements coming with the VS 2010 debugger.  The VS 2010 debugger has a ton of great new capabilities.  Features like Intellitrace (aka historical debugging), the new parallel/multithreaded debugging capabilities, and dump debugging support typically get a ton of (well deserved) buzz and attention when people talk about the debugging improvements with this release.  I’ll be doing blog posts in the future that demonstrate how to take advantage of them as well.  With today’s post, though, I thought I’d start off by covering a few small, but nice, debugger usability improvements that were also included with the VS 2010 release, and which I think you’ll find useful. Breakpoint Labels VS 2010 includes new support for better managing debugger breakpoints.  One particularly useful feature is called “Breakpoint Labels” – it enables much better grouping and filtering of breakpoints within a project or across a solution.  With previous releases of Visual Studio you had to manage each debugger breakpoint as a separate item. Managing each breakpoint separately can be a pain with large projects and for cases when you want to maintain “logical groups” of breakpoints that you turn on/off depending on what you are debugging.  Using the new VS 2010 “breakpoint labeling” feature you can now name these “groups” of breakpoints and manage them as a unit. Grouping Multiple Breakpoints Together using a Label Below is a screen-shot of the breakpoints window within Visual Studio 2010.  This lists all of the breakpoints defined within my solution (which in this case is the ASP.NET MVC 2 code base): The first and last breakpoints in the list above break into the debugger when a Controller instance is created or released by the ASP.NET MVC Framework. Using VS 2010, I can now select these two breakpoints, right-click, and then select the new “Edit labels…” menu command to give them a common label/name (making them easier to find and manage): Below is the dialog that appears when I select the “Edit labels” command.  We can use it to create a new string label for our breakpoints or select an existing one we have already defined.  In this case we’ll create a new label called “Lifetime Management” to describe what these two breakpoints cover: When we press the OK button our two selected breakpoints will be grouped under the newly created “Lifetime Management” label: Filtering/Sorting Breakpoints by Label We can use the “Search” combobox to quickly filter/sort breakpoints by label.  Below we are only showing those breakpoints with the “Lifetime Management” label: Toggling Breakpoints On/Off by Label We can also toggle sets of breakpoints on/off by label group.  We can simply filter by the label group, do a Ctrl-A to select all the breakpoints, and then enable/disable all of them with a single click: Importing/Exporting Breakpoints VS 2010 now supports importing/exporting breakpoints to XML files – which you can then pass off to another developer, attach to a bug report, or simply re-load later.  To export only a subset of breakpoints, you can filter by a particular label and then click the “Export breakpoint” button in the Breakpoints window: Above I’ve filtered my breakpoint list to only export two particular breakpoints (specific to a bug that I’m chasing down).  
I can export these breakpoints to an XML file and then attach it to a bug report or email – which will enable another developer to easily setup the debugger in the correct state to investigate it on a separate machine.  Pinned DataTips Visual Studio 2010 also includes some nice new “DataTip pinning” features that enable you to better see and track variable and expression values when in the debugger.  Simply hover over a variable or expression within the debugger to expose its DataTip (which is a tooltip that displays its value)  – and then click the new “pin” button on it to make the DataTip always visible: You can “pin” any number of DataTips you want onto the screen.  In addition to pinning top-level variables, you can also drill into the sub-properties on variables and pin them as well.  Below I’ve “pinned” three variables: “category”, “Request.RawUrl” and “Request.LogonUserIdentity.Name”.  Note that these last two variable are sub-properties of the “Request” object.   Associating Comments with Pinned DataTips Hovering over a pinned DataTip exposes some additional UI within the debugger: Clicking the comment button at the bottom of this UI expands the DataTip - and allows you to optionally add a comment with it: This makes it really easy to attach and track debugging notes: Pinned DataTips are usable across both Debug Sessions and Visual Studio Sessions Pinned DataTips can be used across multiple debugger sessions.  This means that if you stop the debugger, make a code change, and then recompile and start a new debug session - any pinned DataTips will still be there, along with any comments you associate with them.  Pinned DataTips can also be used across multiple Visual Studio sessions.  This means that if you close your project, shutdown Visual Studio, and then later open the project up again – any pinned DataTips will still be there, along with any comments you associate with them. See the Value from Last Debug Session (Great Code Editor Feature) How many times have you ever stopped the debugger only to go back to your code and say: $#@! – what was the value of that variable again??? One of the nice things about pinned DataTips is that they keep track of their “last value from debug session” – and you can look these values up within the VB/C# code editor even when the debugger is no longer running.  DataTips are by default hidden when you are in the code editor and the debugger isn’t running.  On the left-hand margin of the code editor, though, you’ll find a push-pin for each pinned DataTip that you’ve previously setup: Hovering your mouse over a pinned DataTip will cause it to display on the screen.  Below you can see what happens when I hover over the first pin in the editor - it displays our debug session’s last values for the “Request” object DataTip along with the comment we associated with them: This makes it much easier to keep track of state and conditions as you toggle between code editing mode and debugging mode on your projects. Importing/Exporting Pinned DataTips As I mentioned earlier in this post, pinned DataTips are by default saved across Visual Studio sessions (you don’t need to do anything to enable this). VS 2010 also now supports importing/exporting pinned DataTips to XML files – which you can then pass off to other developers, attach to a bug report, or simply re-load later. Combined with the new support for importing/exporting breakpoints, this makes it much easier for multiple developers to share debugger configurations and collaborate across debug sessions. 
Summary Visual Studio 2010 includes a bunch of great new debugger features – both big and small.  Today’s post shared some of the nice debugger usability improvements. All of the features above are supported with the Visual Studio 2010 Professional edition (the Pinned DataTip features are also supported in the free Visual Studio 2010 Express Editions).  I’ll be covering some of the “big big” new debugging features like Intellitrace, parallel/multithreaded debugging, and dump file analysis in future blog posts.  Hope this helps, Scott. P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

< Previous Page | 595 596 597 598 599 600 601 602 603 604 605 606  | Next Page >