Search Results

Search found 20933 results on 838 pages for 'jeff post'.

Page 256/838

  • San Francisco DotNetNuke User's Group

    If you are anywhere in the San Francisco Bay or Silicon Valley area this post is for you. Others are welcome, but you might find the drive a little long depending on where you are. On 3/23/2010 we are going to be holding our first DotNetNuke Users Group...(read more)

    Read the article

  • Pack of resources in one big file with XNA

    - by Cristian
    Is it possible to pack all the little .xnb files into one big file? Given the level of abstraction of the XNA Framework, I thought this would come out of the box, but I can't find any well-integrated solution. So far the best candidate is XnaZip, but in addition to having to compile the resources in a post-build event, and a little trouble porting the game to Xbox, I would have to rename all the references to resources I have already implemented.
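    One possible approach (a rough sketch, not from the question and not how XnaZip works): concatenate the .xnb payloads into a single pack file with a small index, then load them through a ContentManager subclass that overrides OpenStream. The pack format and the PackedContentManager name below are hypothetical.

        // Hypothetical pack-file reader: the packer is assumed to write an entry count,
        // then (assetName, offset, length) records, followed by the raw .xnb bytes.
        using System;
        using System.Collections.Generic;
        using System.IO;
        using Microsoft.Xna.Framework.Content;

        public class PackedContentManager : ContentManager
        {
            private readonly string packPath;
            private readonly Dictionary<string, long[]> index = new Dictionary<string, long[]>();

            public PackedContentManager(IServiceProvider services, string packPath) : base(services)
            {
                this.packPath = packPath;
                using (var reader = new BinaryReader(File.OpenRead(packPath)))
                {
                    int count = reader.ReadInt32();            // number of packed assets
                    for (int i = 0; i < count; i++)
                    {
                        string name = reader.ReadString();     // e.g. "Textures/Hero"
                        long offset = reader.ReadInt64();
                        long length = reader.ReadInt64();
                        index[name] = new[] { offset, length };
                    }
                }
            }

            protected override Stream OpenStream(string assetName)
            {
                long[] entry;
                if (!index.TryGetValue(assetName, out entry))
                    return base.OpenStream(assetName);         // fall back to loose .xnb files

                var data = new byte[entry[1]];
                using (var file = File.OpenRead(packPath))
                {
                    file.Seek(entry[0], SeekOrigin.Begin);
                    file.Read(data, 0, data.Length);           // sketch only; a robust version loops until all bytes are read
                }
                return new MemoryStream(data);
            }
        }

    Existing Content.Load<T>("asset") calls would then work unchanged once the game uses this manager, which avoids renaming existing resource references.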

    Read the article

  • Setting up a mail server to send mail from IP

    - by Manishearth
    I have a fixed LAN IP, but no domain name. I'd like to be able to send emails within the LAN, and receive mail sent to my IP (a user needs to be able to send an email to [email protected]). I've tried the stuff in this post -- the "easy" one gives me an IMAP error in SquirrelMail (IMAP is open and listening, but not working), and the "hard" one seems to be outdated. Is it possible to set up an email server (preferably on 12.04) without having a domain name?

    Read the article

  • Selectively Exposing Functionality in .Net

    - by David V. Corbin
    Any developer should be aware of the principles of encapsulation, cross-tier isolation, and cross-functional separation of concerns. However, it seems that few take the time to consider the adage of "minimal yet complete"1 when developing software.

    Consider the exposure of "business objects" to the user interface. Some common situations occur: accessing a given element requires a compound set of calls that do not "make sense" to the user interface, or more information than absolutely required is exposed to the user interface. It would be much cleaner if a custom interface were provided that exposed exactly (and only) the information that is required by the consumer. Achieving this using conventional techniques would require the creation (and maintenance!) of custom classes to filter and transpose the information into the ideal format. The ROI of this approach can be very difficult to ascertain, and as a result it is often ignored completely.

    There is another approach, which is largely made practical by virtue of the Action and Func delegates. From a caller's point of view, the following two samples can be used interchangeably:

        interface ISomeInterface
        {
            void SampleMethod1(string param);
            string SampleMethod2(string param);
        }

        class ISomeInterface
        {
            public Action<string> SampleMethod1 { get; }
            public Func<string, string> SampleMethod2 { get; }
        }

    The capabilities this simple change enables are significant (and remember, it does not change the syntax at the call site):
    The delegates can be initialized to directly call the proper method of any target class.
    The delegates can be dynamically updated based on the current state.
    The "interface" can NOT be cast to the concrete class (which often exposes more functionality).

    By limiting the interface to the exact functionality required, this pattern reduces the surface area, which will typically result in lower development, testing and maintenance costs. We are currently in the process of posting a project on CodePlex which illustrates this (and many other) techniques which have proven helpful in creating robust yet flexible solutions that are highly efficient2 and maintainable. This post will be updated as soon as the project is published.

    1) Credit: Scott Meyers, Effective C++, Addison-Wesley 1992
    2) For those who read my previous post on performance, it should be noted that the use of delegates is on the same order of magnitude (actually a tiny amount faster) as conventional interfaces.
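    As a minimal sketch (hypothetical class names, not from the post) of the first point above, the delegate-backed type can be wired to any concrete target at construction time; callers keep the same call-site syntax but never see, and cannot cast to, the target class:

        // Hypothetical target and facade illustrating delegate-based exposure.
        using System;

        public class OrderService
        {
            public void Submit(string orderId) { Console.WriteLine("Submitted " + orderId); }
            public string Describe(string orderId) { return "Details for " + orderId; }
            public void Delete(string orderId) { /* intentionally NOT exposed below */ }
        }

        public class OrderFacade
        {
            public Action<string> SampleMethod1 { get; private set; }
            public Func<string, string> SampleMethod2 { get; private set; }

            public OrderFacade(OrderService target)
            {
                // Point the delegates directly at the target's methods; only these two
                // operations are reachable through the facade.
                SampleMethod1 = target.Submit;
                SampleMethod2 = target.Describe;
            }
        }

        class Demo
        {
            static void Main()
            {
                var facade = new OrderFacade(new OrderService());
                facade.SampleMethod1("order-42");                    // same syntax as an interface call
                Console.WriteLine(facade.SampleMethod2("order-42"));
            }
        }

    Because consumers only ever hold the facade, re-pointing a delegate at a different target (or at a stub in tests) requires no change to calling code.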

    Read the article

  • DotNetNuke 5.4 Released

    Another month, another release of DotNetNuke! Check out version 5.4.0, which was just released a few hours ago. Joe Brinkman has a full blog post about the release here. The two biggest things are some features that were added into DNN 5.3 Professional Edition...(read more)

    Read the article

  • SQL Server v.Next (Denali) : Another SSMS bug that should be fixed

    - by AaronBertrand
    Sorry to call this out in a separate post (I talked about a bunch of SSMS Connect items the other day), but Aaron Nelson ( blog | twitter ) jogged my memory today about an issue that has gone unfixed for years: the custom coloring for Registered Servers is neither consistent nor global. For one of my servers, I've chosen a red color to show in the status bar. Let's pretend this is a production server, and I want the red to remind me to use caution. I can set this up by right-clicking a Registered...(read more)

    Read the article

  • PowerPivot, Parent/Child and Unary Operators

    - by AlbertoFerrari
    Following my last post about parent/child hierarchies in PowerPivot, I worked a bit more to implement a very useful feature of parent/child hierarchies in SSAS which is obviously missing in PowerPivot, i.e. unary operators. A unary operator is simply the aggregation function that needs to be used to aggregate values of children over their parent. Unary operators are very useful in accounting, where you might have incomes and expenses in the same hierarchy and, at the total level, you want to subtract...(read more)
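    As a rough illustration of the idea (plain C# rather than DAX, and not taken from the post), a unary operator can be modelled as a per-child sign that is applied while rolling values up the parent/child hierarchy:

        // Conceptual sketch: '+' children add to their parent's total, '-' children subtract,
        // so incomes and expenses can share one hierarchy and still net out at the total level.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Account
        {
            public string Name;
            public char UnaryOperator = '+';   // aggregation sign relative to the parent
            public decimal Amount;             // leaf value; non-leaf nodes derive their value
            public List<Account> Children = new List<Account>();

            public decimal Total()
            {
                if (Children.Count == 0) return Amount;
                return Children.Sum(c => (c.UnaryOperator == '-' ? -1 : 1) * c.Total());
            }
        }

        class Demo
        {
            static void Main()
            {
                var profit = new Account { Name = "Profit" };
                profit.Children.Add(new Account { Name = "Income",   Amount = 1000m });
                profit.Children.Add(new Account { Name = "Expenses", Amount = 400m, UnaryOperator = '-' });
                Console.WriteLine(profit.Total());   // prints 600
            }
        }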

    Read the article

  • How To - Securing a JAX-WS with OWSM Message Protection Policy in JDeveloper - 11g

    - by Prakash Yamuna
    As promised in this post, here is a How-To that describes how to secure a simple HelloWorld JAX-WS with OWSM message protection policy and test it with SOAP UI. The How-To reuses the picture I posted earlier about the relationship and interplay b/w Keystore, Credential store, jps-config.xml, etc. One of the other more frequent requests I hear from folks within Oracle and from customers is how to test OWSM with SOAP UI. SOAP UI in general works very well as a testing tool for web services secured with wss10 policies.

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I'm wrapping up a bit of the work we've been doing on data movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I'd share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting.

    Summary: For those who don't like to read detailed posts or don't have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x, and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards).

    Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central, as it is physically closest to us) and do not represent intra-cloud results (we have performed intra-cloud tests and the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes… this will be detailed separately).

    We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here.

    The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results.

    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is included in the original post.

    We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn't worth the trouble and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here.

    Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size you will end up with a "negative optimization" due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and as is supported by the raw data provided in the linked worksheet) the charts and discussion below ignore source file sizes less than 1MB.

    [Chart omitted from this excerpt.] The first chart illustrates some interesting points about the results:
    When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size).
    For some of the moderately-sized source files, small blocks (256KB) are best.
    As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs).
    Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant.
    The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.

    [Chart omitted from this excerpt.] The second chart is another view of the same data as the prior chart, just with the axes changed (the x-axis represents file size and the plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size, but it also highlights the benefits of some of the other block sizes at different source file sizes.

    The last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here, other than that this view of the data highlights the negative effects of poorly choosing a block size for smaller files.

    Summary: What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.

    Related Resources:
    Source code for upload test application
    Source code for random file generator
    OData feed of raw data from non-optimized transfer tests (Experiment Metadata, Experiment Datasets, 2KB Uploads, 32KB Uploads, 64KB Uploads, 128KB Uploads, 256KB Uploads, 512KB Uploads, 1MB Uploads, 5MB Uploads, 10MB Uploads, 25MB Uploads, 50MB Uploads, 100MB Uploads, 250MB Uploads, 500MB Uploads, 750MB Uploads, 1GB Uploads, Raw Data)
    OData feeds of raw data from blocked/parallelized transfer tests (Experiment Metadata, Experiment Datasets, Raw Data, 256KB Blocks, 512KB Blocks, 1MB Blocks, 2MB Blocks, 4MB Blocks)
    Excel worksheet showing summarizations and comparisons
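    For readers who want the shape of the technique without opening the linked source, here is a condensed, hypothetical sketch (not the author's code) of a blocked, parallel upload using the 1.x StorageClient API the post mentions (PutBlock/PutBlockList) and Parallel.For; retries, streaming reads, and timing are omitted:

        using System;
        using System.IO;
        using System.Security.Cryptography;
        using System.Threading.Tasks;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        class BlockedUploader
        {
            static void Upload(string connectionString, string containerName, string filePath, int blockSize)
            {
                var blob = CloudStorageAccount.Parse(connectionString)
                                              .CreateCloudBlobClient()
                                              .GetContainerReference(containerName)
                                              .GetBlockBlobReference(Path.GetFileName(filePath));

                byte[] file = File.ReadAllBytes(filePath);                  // fine for a sketch; stream in real code
                int blockCount = (file.Length + blockSize - 1) / blockSize;
                var blockIds = new string[blockCount];

                Parallel.For(0, blockCount, i =>
                {
                    int offset = i * blockSize;
                    int length = Math.Min(blockSize, file.Length - offset);
                    var chunk = new byte[length];
                    Buffer.BlockCopy(file, offset, chunk, 0, length);

                    string blockId = Convert.ToBase64String(BitConverter.GetBytes(i));   // equal-length base64 ids
                    string md5;
                    using (var hasher = MD5.Create())
                        md5 = Convert.ToBase64String(hasher.ComputeHash(chunk));          // server verifies each block

                    blob.PutBlock(blockId, new MemoryStream(chunk), md5);
                    blockIds[i] = blockId;
                });

                blob.PutBlockList(blockIds);                                // commit the blocks as a single blob
            }
        }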

    Read the article

  • An XEvent a Day (26 of 31) – Configuring Session Options

    - by Jonathan Kehayias
    There are 7 session-level options that can be configured in Extended Events that affect the way an Event Session operates. These options can impact performance and should be considered when configuring an Event Session. I have made use of a few of these periodically throughout this month's blog posts, and in today's blog post I'll cover each of the options separately and provide further information about their usage. Mike Wachal from the Extended Events team at Microsoft talked...(read more)

    Read the article

  • User connection management in Reporting Services configuration

    - by Testas
    IT professionals will use Reporting Services Configuration Manager to perform post-installation tasks for SQL Server Reporting Services. Introduced in SQL Server 2005, Reporting Services Configuration Manager provides an intuitive interface for tasks such as specifying the report server database and the Report Manager URL; indeed, one of the first post-installation tasks that should be performed is backing up the encryption keys that are used to protect the sensitive information within the RDL files. Many of the options that are selected within Reporting Services Configuration Manager are written to a number of configuration files, including the rsreportserver.config file located in the C:\Program Files\Microsoft SQL Server\Report Server InstanceName\Reporting Services\ReportServer folder. When opening this file you will notice that there are more configuration settings within the rsreportserver.config file than are available through the Reporting Services Configuration Manager interface. As a result, there are additional configuration options that can be defined within this file.

    A customer was having a problem performing stress tests against a new Report Server that would be going live for an enterprise reporting system. One aspect of the stress test was to fire 50 connections from a single user account. When performing the stress test, an error reported that the maximum number of active requests had been exceeded. Within rsreportserver.config, there is a key that is added to the file:

    <Add Key="MaxActiveReqForOneUser" Value="20"/>

    Changing the value from 20 to 50 accommodated the needs of the stress test; however, a wider question should be asked pertaining to this setting when implementing Reporting Services in a production environment. Within an intranet environment, the default setting is appropriate when network bandwidth is high, users are known, and demand for reports is particularly high from a group of users. However, when deploying a Reporting Server solution to an extranet, or the internet, you may want to consider reducing this setting to reduce the scope of connections that can be acquired by a single user and avoid placing unnecessary pressure on the report server.

    I do hope that Reporting Services Configuration Manager evolves to include an advanced page with an intuitive interface to change configuration settings such as MaxActiveReqForOneUser, and also to configure rendering and data extensions and define secure connection levels to the report server. All these options can be configured within the rsreportserver.config file, and these are settings that customers would like to see in Reporting Services Configuration Manager in the future. If you think that the SQL community would benefit from this addition, you can vote on it at Microsoft Connect: https://connect.microsoft.com/SQLServer/feedback/details/565575/extending-reporting-services-configuration-manager-rscm

    Read the article

  • Using RTL languages with MS Office in Wine 1.4

    - by saeed hardan
    I've installed MS Office 2007 in Ubuntu 12.04 using Wine 1.4 with no problems, and it works fine with the English Language. However, I need to use it to work with Arabic and Hebrew, and it doesn't work when I switch to a Hebrew or Arabic keyboard. The typing gets reversed. I saw an earlier post for something similar, but it is closed and I think it was for the earlier Wine 1.3. Supposedly Wine 1.4 has added RTL -- is there a way to get it working?

    Read the article

  • Archbeat Link-O-Rama Top 10 Facebook Faves for October 20-26, 2013

    - by OTN ArchBeat
    What are the 4,460 fans of the OTN ArchBeat Facebook Page talking about? The list below represents the Top 10 most popular articles, blog posts, and other content from across the community.

    Enterprise Grade Deployment Considerations for Oracle Identity Manager AD Connector | Firdaus Fraz: Oracle Fusion Middleware solution architect Firdaus Fraz provides best-practice recommendations for setting up an enterprise deployment environment for the OIM connector for Microsoft Active Directory.
    A Roadmap for SOA Development and Delivery | Mark Nelson: Do you know the way to S-O-A? Mark Nelson does. His latest blog post, part of an ongoing series, will help to keep you from getting lost along the way.
    The road ahead for WebLogic 12c | Edwin Biemond: Oracle ACE Edwin Biemond shares his thoughts on announced new features in Oracle WebLogic 12.1.3 & 12.1.4 and compares those upcoming releases to Oracle WebLogic 12.1.2.
    Oracle GoldenGate 12c - New Release, New Features | Michael Rainey: Rittman Mead's Michael Rainey takes you on a guided tour through the GoldenGate 12c features that "are relevant to data warehouse and data migration work we typically see in the business intelligence world."
    Reproducing WebLogic Stuck Threads with ADF CreateInsert Operation and ORDER BY Clause | Andrejus Baranovskis: Another post from Oracle ACE Director Andrejus Baranovskis on dealing with WebLogic Stuck Threads. This one includes a test case application you can download.
    The Impact of SaaS - The Times They Are A-Changin' | Floyd Teter: Oracle ACE Director Floyd Teter shares some truly interesting insight gained in conversations with three Fortune 500 CIOs.
    Configure Oracle Identity Manager AD/LDAP Authentication | Arda Eralp: A step-by-step how-to from a member of the Fusion Middleware Applications Consultancy team.
    Java-Powered Robot Named NAO Wows Crowds | Tori Wieldt: Tori Wieldt interviews a robot and a human.
    Updated ODI Statement of Direction | Robert Schweighardt: Heads up, Oracle Data Integrator fans! A new product statement of direction document is available, offering "an overview of the strategic product plans for Oracle's data integration products for bulk data movement and transformation, specifically Oracle Data Integrator (ODI) and Oracle Warehouse Builder (OWB)."
    Oracle BI Apps 11.1.1.7.1 - GoldenGate Integration - Part 2: Setup and Configuration | Michael Rainey: Michael Rainey continues his series with another technical article for you GoldenGate fans.

    Thought for the Day: "Intuition will tell the thinking mind where to look next." — Jonas Salk, American medical researcher and virologist (October 28, 1914 – June 23, 1995). Source: brainyquote.com

    Read the article

  • adding custom SSIS transformation to visual studio toolbox fails

    - by ryangaraygay
    Just very recently I encountered an issue in deploying a custom SSIS component assembly which turns out to be a relative "no-brainer" error, if only the clues were more straightforward. Basically, after deploying the assembly I could not find my component listed in the "SSIS Data Flow Items" tab list. It turns out that the assembly containing the component was simply missing references or referenced the incorrect assemblies. I have outlined the steps I took that guided me in the right direction in this blog post of mine: adding custom SSIS transformation to visual studio toolbox fails

    Read the article

  • Will IE9 have a place in the heart of users?

    - by anirudha
    In an advertisement for IE9, MSFT compares two products: their own IE9 and Chrome 6. I know 6 is not the current version [9 is], but I have no objection, because the ads may have been made when 6 was the current version and an RC or beta of 9 was in their hands. On the IE9 test-drive website they show many ads to convince the user that IE9 performs better and that Chrome and Firefox do not. They do not compare with Firefox, because lately Firefox has not been in the news and search trends the way it was before its RC release, when many users were googling for it. I myself found IE9 performs more smoothly than Chrome. But what does MSFT do after IE9? Nothing. They wait for IE10 rather than shipping updates the way Google Chrome and Firefox do. Does IE9 have anything new for developers, even something small? They tell you trivial or useless things every time they prepare the next version; it matters to them, not to you, because the new things they add are often useless for developers.

    I don't have any bad feeling toward IE, but I like to review things as honestly as I can, so let me share my experience with IE and other browsers like Chrome and Firefox. IE9 still has no plugins the way the others do: Firefox has Firebug, a great utility that is the best option for developers to debug their code. The IE9 developer tools are good, but you cannot customize them, and there is no ready-made customization available, whereas in Firefox many people build customizations for Firebug, for example FirePicker for picking colors, Firebug Autocomplete for an IntelliSense-like feature when writing JavaScript in the console panel, Pixel Perfect, FireQuery, SitePoint Reference, and many other great examples we all love to use. Firefox also makes many other things customizable, such as themes and the UI. Customization means users and developers can build more things themselves, and more contribution makes better software, so Firefox is great because customization is a strength in both Firefox and Chrome. If you read developers' posts on MSDN about what's new in the IE9 developer tools, you feel they are joking once you see what Firefox and Chrome offer. In Firefox a plugin can do a great deal, but in IE you are stuck with the IE9 developer tools and no other option, whereas in Firefox you can use Firebug and many other utilities to make development easier, faster, and better.

    If you look at the Firefox page on mozilla.org, its taglines are high performance, easy customization, and advanced security. You can argue about performance, but there is no comparison with IE, because IE has only performance and nothing else, while Firefox has all three things that make a product loved. The third thing I really value is security. Long ago, IE6 was not hack-proof and was easily compromised, while Firefox was secure. I found that many websites would install software on a client's computer without the user knowing about it, and it would track everything. Sometimes it hijacked the homepage and set its own website as the homepage. Sometimes, when you tried to go to any website, it would send you to its own site first. The problem I'm describing is not from long ago; it was around late 2008, when Firefox was already much better than IE6. If you have had a bad experience with any of this software, share it with us; I would like to hear your voice. While IE is still not worth using, Firefox is a good option for both users and developers. I don't know why anyone makes the next version of IE; IE still has time to go away from the web.

    Firefox is not as dismissive as IE; Mozilla still believes in user feedback, and Chrome also opens the door for feedback on Google Chrome. But what has been built into IE based on user feedback? Nothing. They still try to teach you what they have made instead of thinking about what users need. If you spend a few hours with Firefox and Chrome, you see what matters. What do you get when you use IE versus another browser like Google Chrome or Firefox? As a user, IE gives you nothing except more marketing talk, and the next version of IE risks becoming the next IE6 for the web. In Google Chrome you find plugins, add-ons, and customization to make the experience better, but in IE9 you can't customize anything, not even the default themes. Firefox already has a great list of plugins and add-ons to improve the experience of the web, but IE9 has nothing. This means IE9 is not for users, and others like Chrome and Firefox give you a much better experience than IE.

    The next consideration after users is developers. All developers want smooth development that saves their time. Posts on IE9 show a list of improvements in the IE9 developer tools, but is one developer tool enough for web development? Developers need more utilities to solve different kinds of puzzles, which IE9 never provides, whereas in Firefox you have utilities for tasks both small and large. In Chrome you have the same experience, but IE9 never provides plugins or utilities to make our work faster; they even become a new headache for developers, because IE does not ship updates as quickly as the others. In Firefox and Chrome, when a bug is reported it is fixed quickly and distributed in the next release very soon, but with IE you wait a long time: IE9 and IE8 had no official update release between them.

    My conclusion is that there is no reason to use IE or adopt version 9; it is really not for developers or users, whether newbies or experts. I feel it is my responsibility to warn you about IE. And are you sure there is no profit motive behind IE9? If not, why do they forget Luna [Windows XP] users? Because those systems are old, and they want to force users to give them money by purchasing a new version of the OS; that is part of why they market the software this way. Compare that with what Firefox and Chrome aim for: Mozilla's mission is to promote openness, innovation and opportunity on the web, and Chrome's mission we all see whenever we use it. IE9, on the other hand, is promoted because they want to add something to the next version of Windows; if somebody likes IE9 [even if only impressed by the ads they see or the posts they read], they will purchase Windows as soon as possible.

    You may feel that I am against IE9 and in favor of Chrome and Firefox, and you would be right; I dislike IE from the heart, not just on paper. You will likely reach the same conclusion once you have tried the three major products I described here: Chrome, Firefox, and IE. Don't simply believe the blogs, posts, or articles provided by the merchant's or vendor's website. Open your eyes, read what they say, and ask whether it is really true; if you are confused, compare with something else. No one describes a product as honestly as a user who actually uses it, rather than the people who build its features. Always keep your eyes open, don't just believe, use your mind, and find the truth. Thanks for reading my post; goodbye and take care.

    Read the article

  • Can anybody recommend C#/XAML Windows Store Development aggregator sites?

    - by Clay Shannon
    I used to have a couple of sites bookmarked that were Windows development article/blog post aggregators. I can't recall what they were called. What I want to do now is to keep up with all relevant C#/XAML "Windows Store" app development info, whether it be blog posts, new "Metro"-specific channel 9 videos, etc., without spending lots of time surfing about. Can anybody recommend any "C#/XAML Windows Store new information aggregators"?

    Read the article

  • Useful links for InfoPath form development

    - by ybbest
    PowerShell:
    http://msdn.microsoft.com/en-us/library/ms442691.aspx
    http://technet.microsoft.com/en-us/library/ee806878.aspx
    http://sharepoint.microsoft.com/blogs/zach/Lists/Posts/Post.aspx?ID=7

    InfoPath:
    http://msdn.microsoft.com/en-us/library/aa943232.aspx
    http://blah.winsmarts.com/2008-8-Deploying_InfoPath_2007_Forms_to_Forms_Server_-and-ndash_Properly.aspx
    http://msdn.microsoft.com/en-us/library/microsoft.office.infopath.server.administration.xsnfeaturereceiver.aspx
    http://www.articlesbase.com/web-hosting-articles/sharepoint-hosting-3-ways-to-deploy-infopath-form-templates-to-sharepoint-2605874.html
    http://jopx.blogspot.com/2006/08/infopath-2007-solving-xsn-can-not-be.html

    Read the article

  • ArchBeat Link-o-Rama for 2012-08-28

    - by Bob Rhubart
    You may be tempted by IaaS, but you should PaaS on that or your database cloud journey will be a short one: "The better option [to IaaS] is to rationalize the deployment stack so that VMs are needed only for exceptional cases," says B. R. Clouse. "By settling on a standard operating system and patch level, you create an infrastructure that potentially all of your databases can share. Now, the building block will be database instances or possibly schemas within databases. These components are the platforms on which you will deploy workloads, hence this is known as Platform as a Service (PaaS)."
    'Shadow IT' can be the cloud's best friend | David Linthicum: "I do not advocate that IT give up control and allow business units to adopt any old technology they want," says InfoWorld cloud computing blogger David Linthicum. "However, IT needs to face reality: For the past three decades or so, corporate IT has been slow on the uptake around the use of productive new technologies." Do you agree?
    9 ways cloud will impact IT employment | ZDNet: ZDNet blogger Joe McKendrick condenses information from a recent report on how cloud computing will impact IT jobs. Number one on the list: new categories of jobs arising from cloud computing, which include "private cloud developers and administrators, departmental liaisons, integration specialists, cloud architects, and compliance specialists." Yeah, that's right, cloud architects. For more on cloud architects, including what you need to up your game to thrive in the cloud, check out "The Role of the Cloud Architect" on the OTN ArchBeat Podcast.
    Decisions, Decisions: The art, science, and politics of technology selection: "When the time comes for a solution architect to make the final decision about the technologies, standards, and other elements that are to be incorporated into a particular project, what factors weigh most heavily on that decision? It comes as no surprise that among the architects I contacted, business needs top the list."
    Managing Oracle Exalogic Elastic Cloud with Oracle Enterprise Manager Ops Center: Anand Akela's byline is on this post, but "Dr. Jürgen Fleischer, Oracle Enterprise Manager Ops Center Engineering" appears at the end of the post, so it's anybody's guess as to who wrote this thing. But the content includes a complete listing of the Exalogic 2.0.1 Tea Break Snippets series written by a member of the Exalogic team who goes by the name "The Old Toxophilist." So maybe the best thing to do here is ignore the names and focus on the very useful content.
    Boost your infrastructure with Coherence into the Cloud | Nino Guarnacci: Nino Guarnacci describes a use case that involved managing a variety of data caches that process complex queries and parallel computational operations, in order to maintain the caches in a consistent state on different server instances.

    Thought for the Day: "No one hates software more than software developers." — Jeff Atwood. Source: SoftwareQuotes

    Read the article

  • Exception Handling And Other Contentious Political Topics

    - by Justin Jones
    So about three years ago, around the time of my last blog post, I promised a friend I would write this post. Keeping promises is a good thing, and this is my first step towards easing back into regular blogging. I fully expect him to return from Pennsylvania to buy me a beer over this. However, it's been an… ahem… eventful three years or so, and blogging, unfortunately, got pushed to the back burner on my priority list, along with a few other career-minded activities. Now that the personal drama of the past three years is more or less resolved, it's time to put a few things back on the front burner.

    What I consider to be proper exception handling practices is relatively well known these days. There are plenty of blog posts out there already on this topic which more or less echo my opinions; I'll try to include a few links at the bottom of the post. Several years ago I had an argument with a co-worker who posited that exceptions should be caught at every level and logged. This might seem like sanity on the surface, but the resulting error log looked something like this:

    Error: System.SomeException Followed by small stack trace.
    Error: System.SomeException Followed by slightly bigger stack trace.
    Error: System.SomeException Followed by slightly bigger stack trace.
    Error: System.SomeException Followed by slightly bigger stack trace.
    Error: System.SomeException Followed by slightly bigger stack trace.
    Error: System.SomeException Followed by slightly bigger stack trace.
    Error: System.SomeException Followed by slightly bigger stack trace.
    Error: System.SomeException Followed by slightly bigger stack trace.

    These were all the same exception. The problem with this approach is that the error log, if you run any kind of analytics on it, becomes skewed depending on how far up the stack trace your exception was thrown. To mitigate this problem, we came up with the concept of the "PreLoggedException". Basically, we would log the exception at the very top level and subsequently throw the exception back up the stack encapsulated in this pre-logged type, which our logging system knew to ignore. Now the error log looked like this:

    Error: System.SomeException Followed by small stack trace.

    Much cleaner, right? Well, there's still a problem. When your exception happens in production and you go about trying to figure out what happened, you've lost more or less all context for where and how this exception was thrown, because all you really know is what method it was thrown in, but really nothing about who was calling the method or why. What gives you this clue is the entire stack trace, which we're losing here. I believe that was further mitigated by having the logging system pull a system stack trace and add it to the log entry, but what you're actually getting is the stack for how you got to the logging code. You're still losing context about the actual error. Not to mention you're executing a whole slew of catch blocks which are sloooooooowwwww……… In other words, we started with a bad idea and kept band-aiding it until it didn't suck quite so bad. When I argued for not catching exceptions at every level but rather catching them following a certain set of rules, my co-worker warned me "do yourself a favor, never express that view in any future interviews." I suppose this is my ultimate dismissal of that advice, but I'm not too worried.

    My approach for exception handling follows three basic rules. Only catch an exception if:
    1. You can do something about it.
    2. You can add useful information to it.
    3. You're at an application boundary.

    Here's what that means:

    1. Only catch an exception if you can do something about it. We'll start with a trivial example of a login system that uses a file. Please, never actually do this in production code; it's just a concocted example. So if our code goes to open a file and the file isn't there, we get a FileNotFound exception. If the calling code doesn't know what to do with this, it should bubble up. However, if we know how to create the file from scratch we can create the file and continue on our merry way. When you run into situations like this though, what should really run through your head is "How can I avoid handling an exception at all?" In this case, it's a trivial matter to simply check for the existence of the file before trying to open it. If we detect that the file isn't there, we can accomplish the same thing without having to handle it in a catch block.

    2. Only catch an exception if you can add useful information to it. Continuing with the poorly thought out file-based login system we contrived in part 1, if the code calls a Login(…) method and the FileNotFound exception is thrown higher up the stack, the code that calls Login must account for a FileNotFound exception. This is kind of counterintuitive because the calling code should not need to know the internals of the Login method, and the data file is an implementation detail. What makes more sense, assuming that we didn't implement any of the good advice from step 1, is for Login to catch the FileNotFound exception and wrap it in a new exception. For argument's sake we'll say LoginSystemFailureException. (Sorry, couldn't think of anything better at the moment.) This gives us two stack traces, preserving the original stack trace in the inner exception, and is also much more informative to the calling code.

    3. Only catch an exception if you're at an application boundary. At some point we have to catch all the exceptions, even the ones we don't know what to do with. WinForms, ASP.NET, and most other UI technologies have some kind of built-in mechanism for catching unhandled exceptions without fatally terminating the application. It's still a good idea to somehow gracefully exit the application in this case if possible though, because you can no longer be sure what state your application is in, but nothing annoys a user more than an application just exploding. These unhandled exceptions need to be logged, and this is a good place to catch them. Ideally you never want this option to be exercised, but code as though it will be. When you log these exceptions, give them a "Fatal" status (e.g. Log4Net) and make sure these bugs get handled in your next release.

    That's it in a nutshell. If you do it right, each exception will only get logged once and with the largest stack trace possible, which will make those 2am emergency severity 1 debugging sessions much shorter and less frustrating. Here are a few people who also have interesting things to say on this topic:
    http://blogs.msdn.com/b/ericlippert/archive/2008/09/10/vexing-exceptions.aspx
    http://www.codeproject.com/Articles/9538/Exception-Handling-Best-Practices-in-NET
    I know there's more but I can't find them at the moment.
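    A small sketch of rule 2 (hypothetical types, following the post's file-based login example): Login cannot fix a missing data file, but it can add context by wrapping the exception, so callers see a login-level failure while the original stack trace survives in InnerException:

        using System;
        using System.IO;

        public class LoginSystemFailureException : Exception
        {
            public LoginSystemFailureException(string message, Exception inner) : base(message, inner) { }
        }

        public class LoginService
        {
            private readonly string userFilePath;

            public LoginService(string userFilePath) { this.userFilePath = userFilePath; }

            public bool Login(string userName, string password)
            {
                try
                {
                    foreach (var line in File.ReadLines(userFilePath))      // throws FileNotFoundException if the file is absent
                    {
                        var parts = line.Split(':');
                        if (parts[0] == userName && parts[1] == password) return true;
                    }
                    return false;
                }
                catch (FileNotFoundException ex)
                {
                    // Rule 2: wrap with useful, caller-level information and keep the original as InnerException.
                    throw new LoginSystemFailureException("The login data store is unavailable.", ex);
                }
            }
        }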

    Read the article

  • MOSS 2007 WSP Retraction 'Error'

    - by juanlarios
    This one is a quick post, but I thought I would post this information as I could not find anything that helped me with this specific scenario. Please read the entire article before taking action, as there are some irreversible or very troublesome routes I caution about!

    Problem: I had a client trying to retract a WSP from Central Admin, and it would eventually go to an 'Error' state. I could not retract it, and after looking at event logs I figured it was a problem with security. I tried several accounts, checked the databases to see if there was some issue with read-only databases, and nothing was working.

    Solution: Delete the solution from Central Admin! Yes, I said it. With stsadm, just delete the solution from Central Admin using this command:

    "C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\BIN\STSADM.exe" -o deletesolution -name "yoursolution.wsp"

    What has just happened is that Central Admin does not know about the WSP anymore, but the feature and any deployed files are still on the server. For whatever reason SharePoint was not able to retract the files as it normally does. Now you can do one of two things: you can add the solution again to Central Admin and deploy over top of the deployed files so it overrides them, or simply clean up the files manually. I re-added the solution through stsadm, but then deployed through stsadm using the -force option in the command. This overrides the existing files on the server. If you deploy through Central Admin it will tell you that you need the -force option, which is not offered as part of the UI in Central Admin. Use the following command:

    "C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\BIN\STSADM.exe" -o deploysolution -name "YourSolution.wsp" -immediate -allowgacdeployment -force

    Just to make sure everything was good, I retracted the solution again, and it worked! Then I deleted the solution from Central Admin altogether. Then I checked the server and noticed all the files that were deployed with the WSP were cleaned up properly. I then re-added the new WSP the client was looking to install (an updated WSP).

    Conclusion: I have no idea why it was not able to retract, but I have seen this several times. I don't know if it has to do with the security of certain accounts. Although it's annoying at times, it is fairly easy to fix if you have good instructions. Hope it helps you out!

    ***WORD OF CAUTION - if you clean up the files manually you might want to uninstall the features through STSADM commands, as SharePoint might still recognize the features that were deployed as part of the WSP. You might not want to get into the mess of deleting files that are still part of activated or installed Features. This is why I suggest doing what I did.

    Read the article

  • Cannot get 3D OpenGL support in VMware guests, how can I fix this?

    - by jjapol
    I have been working at this problem for 2 days now. I cannot for the life of me enable 3D support in VMware 9 guests. My specifications are:
    Hardware: Dell Latitude E5520 laptop.
    Processor: Intel i7-2620M CPU @ 2.70GHz × 4.
    Memory: 8GB.
    Video: Intel Sandybridge Mobile x86/MMX/SSE2.
    OS: Ubuntu 12.04.1 LTS, 32 bit.
    VMware Workstation: 9.0.1 build-894247.
    Glxgears functions fine; the frame rate is ~60fps.
    VMware guest: Windows 7.

    Starting the Windows 7 guest in VMware throws the following errors: "No 3D support is available from the host." and "Hardware graphics acceleration is not available." I've read through this VMware forum thread, but again the hardware in the post is different (nVidia). I've followed the instructions at this Ask Ubuntu post as closely as possible, as the question is nearly the same as mine although my hardware is different. Answer 1, regarding setting mks.gl.allowBlacklistedDrivers = TRUE; in my vmx configuration file, causes the VM to crash when it starts. The second answer I followed as closely as possible:
    I uninstalled VMware,
    did sudo apt-get install build-essential linux-headers-$(uname -r) at a terminal,
    added the PPA https://launchpad.net/~glasen/+archive/intel-driver,
    then at a terminal did sudo apt-get update && sudo apt-get upgrade -y,
    and reinstalled VMware, with the same results: no 3D in guests.

    I'm getting the feeling that something is awry with the Sandy Bridge driver, but I can't seem to come up with any solutions. Has anyone out there run across this problem also? By the way, the operation of the likes of SolidWorks and AutoCAD within a Windows 7 guest does appear to be improved in VMware 9 vs VMware 8, in spite of the fact that 3D support is lacking in the Windows 7 guest. I'd also add that my glxinfo output was nearly identical to the glxinfo file posted at askubuntu.com/questions/181829/…. I had a total of seven minor differences per a comparison using Meld.

    Read the article
